# PEN: a low energy test of lepton universality (PoS(HQL 2016)042)

D. Pocanic, L.P. Alonzi, V.A. Baranov, W. Bertl, M. Bychkov, Y.M. Bystritsky, E. Frlez, C.J. Glaser, V.A. Kalinnikov, N.V. Khomutov, A.S. Korenchenko, S.M. Korenchenko, M. Korolija, T. Kozlowski, N.P. Kravchuk, N.A. Kuchinsky, M.C. Lehman, D. Mzhavia, A. Palladino, P. Robmann, A.M. Rozhdestvensky, I. Supek, P. Truoel, A. Van der Schaaf, E.P. Velicheva, M.G. Vitz, V.P. Volnykh

Abstract: Allowed charged $\pi$ meson decays are characterized by simple dynamics, few available decay channels, mainly into leptons, and extremely well controlled radiative and loop corrections. In that sense, pion decays represent a veritable triumph of the standard model (SM) of elementary particles and interactions. This relative theoretical simplicity makes charged pion decays a sensitive means for testing the underlying symmetries and the universality of weak fermion couplings, as well as for studying pion structure and chiral dynamics. Even after considerable recent improvements, experimental precision is lagging far behind that of the theoretical description for pion decays. We review the current state of experimental study of the pion electronic decay $\pi^+ \to e^+\nu_e(\gamma)$, or $\pi_{e2(\gamma)}$, where the $(\gamma)$ indicates inclusion and explicit treatment of radiative decay events. We briefly review the limits on non-SM processes arising from the present level of experimental precision in $\pi_{e2(\gamma)}$ decays. Focusing on the PEN experiment at the Paul Scherrer Institute (PSI), Switzerland, we examine the prospects for further improvement in the near term.
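The sensitivity quoted in the abstract rests on the helicity suppression of the electronic mode. As a quick orientation only, the short sketch below evaluates the standard tree-level expression for the ratio of the two leptonic widths using rounded PDG masses; neither the formula nor the numbers are taken from this contribution, and the radiatively corrected SM value quoted in the comment is the commonly cited literature figure.

```python
# Tree-level SM estimate of R = Gamma(pi -> e nu) / Gamma(pi -> mu nu).
# Standard textbook formula with rounded PDG masses in MeV; values are
# quoted from the general literature, not from the PEN contribution itself.
m_e, m_mu, m_pi = 0.511, 105.658, 139.570

R_tree = (m_e / m_mu) ** 2 * ((m_pi**2 - m_e**2) / (m_pi**2 - m_mu**2)) ** 2
print(f"R (tree level) = {R_tree:.4e}")  # ~1.28e-4: the helicity suppression

# Radiative corrections lower this by a few percent, to the often-quoted
# SM value of roughly 1.235e-4, which pi_e2 experiments such as PEN test.
```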
## Plots from Rivet analyses

### Z+jets at 13 TeV (ATLAS_2015_CONF_2015_041_MU)
ATLAS-CONF-2015-041
Preliminary measurements of the cross section for the production of a $Z$ boson in association with jets in pp collisions at $\sqrt{s} = 13$\,TeV are presented, using data corresponding to an integrated luminosity of $85\,\text{pb}^{-1}$ collected by the ATLAS experiment at the Large Hadron Collider. The cross sections are measured for events containing a $Z$ boson decaying to electrons or muons and produced in association with up to four jets in the kinematical range of $p_\text{T} > 30$\,GeV and $|y| < 2.5$. NB--Use the plugin names ATLAS_2015_CONF_2015_041_EL or ATLAS_2015_CONF_2015_041_MU to specify the lepton channel.

### Z+jets at 13 TeV (ATLAS_2015_CONF_2015_041_EL)
ATLAS-CONF-2015-041
Preliminary measurements of the cross section for the production of a $Z$ boson in association with jets in pp collisions at $\sqrt{s} = 13$\,TeV are presented, using data corresponding to an integrated luminosity of $85\,\text{pb}^{-1}$ collected by the ATLAS experiment at the Large Hadron Collider. The cross sections are measured for events containing a $Z$ boson decaying to electrons or muons and produced in association with up to four jets in the kinematical range of $p_\text{T} > 30$\,GeV and $|y| < 2.5$. NB--Use the plugin names ATLAS_2015_CONF_2015_041_EL or ATLAS_2015_CONF_2015_041_MU to specify the lepton channel.

### W + jets (ATLAS_2014_I1319490)
Spires | Eur.Phys.J. C75 (2015) 82 | doi:10.1140/epjc/s10052-015-3262-7 | arXiv:1409.8639 [hep-ex]
Measurements of cross sections for the production of a $W$ boson in association with jets in proton–proton collisions at $\sqrt{s} = 7$ TeV with the ATLAS experiment at the Large Hadron Collider. With an integrated luminosity of 4.6 $\text{fb}^{-1}$, this data set allows for an exploration of a large kinematic range, including jet production up to a transverse momentum of 1 TeV and multiplicities up to seven associated jets. The production cross sections for $W$ bosons are measured in both the electron and muon decay channels. Differential cross sections for many observables are also presented, including measurements of jet observables such as the rapidities and the transverse momenta, as well as measurements of event observables such as the scalar sums of the transverse momenta of the jets. The default routine will pick up the electron decay channel of the $W$ boson and compare it to the combined (muon and electron channel) data. Individual channels (for data) are available as well; use ATLAS_2014_I1319490_EL and ATLAS_2014_I1319490_MU to specify the decay channel directly.

### W + jets (ATLAS_2014_I1319490_EL)
Spires | Eur.Phys.J. C75 (2015) 82 | doi:10.1140/epjc/s10052-015-3262-7 | arXiv:1409.8639 [hep-ex]
Measurements of cross sections for the production of a $W$ boson in association with jets in proton–proton collisions at $\sqrt{s} = 7$ TeV with the ATLAS experiment at the Large Hadron Collider. With an integrated luminosity of 4.6 $\text{fb}^{-1}$, this data set allows for an exploration of a large kinematic range, including jet production up to a transverse momentum of 1 TeV and multiplicities up to seven associated jets. The production cross sections for $W$ bosons are measured in both the electron and muon decay channels. Differential cross sections for many observables are also presented, including measurements of jet observables such as the rapidities and the transverse momenta, as well as measurements of event observables such as the scalar sums of the transverse momenta of the jets. The default routine will pick up the electron decay channel of the $W$ boson and compare it to the combined (muon and electron channel) data. Individual channels (for data) are available as well; use ATLAS_2014_I1319490_EL and ATLAS_2014_I1319490_MU to specify the decay channel directly.

### W + jets (ATLAS_2014_I1319490_MU)
Spires | Eur.Phys.J. C75 (2015) 82 | doi:10.1140/epjc/s10052-015-3262-7 | arXiv:1409.8639 [hep-ex]
Measurements of cross sections for the production of a $W$ boson in association with jets in proton–proton collisions at $\sqrt{s} = 7$ TeV with the ATLAS experiment at the Large Hadron Collider. With an integrated luminosity of 4.6 $\text{fb}^{-1}$, this data set allows for an exploration of a large kinematic range, including jet production up to a transverse momentum of 1 TeV and multiplicities up to seven associated jets. The production cross sections for $W$ bosons are measured in both the electron and muon decay channels. Differential cross sections for many observables are also presented, including measurements of jet observables such as the rapidities and the transverse momenta, as well as measurements of event observables such as the scalar sums of the transverse momenta of the jets. The default routine will pick up the electron decay channel of the $W$ boson and compare it to the combined (muon and electron channel) data. Individual channels (for data) are available as well; use ATLAS_2014_I1319490_EL and ATLAS_2014_I1319490_MU to specify the decay channel directly.

### Distributions sensitive to the underlying event in inclusive Z-boson production at 7 TeV (ATLAS_2014_I1315949)
Spires | Eur.Phys.J. C74 (2014) 3195 | doi:10.1140/epjc/s10052-014-3195-6 | arXiv:1409.3433 [hep-ex]
Charged-particle distributions sensitive to the properties of the underlying event are measured for an inclusive sample of events containing a $Z$-boson, decaying to an electron or muon pair. The measurement is based on data collected using the ATLAS detector at the LHC in proton–proton collisions at a centre-of-mass energy of 7 TeV with an integrated luminosity of 4.6 $\text{fb}^{-1}$. Distributions of the charged particle multiplicity and of the charged particle transverse momentum are measured in regions of azimuthal angle defined with respect to the $Z$-boson direction.

### Ratios of $V$+jets observables between $W$ and $Z$ events, electron channel (ATLAS_2014_I1312627_EL)
Spires | Eur.Phys.J. C74 (2014) 3168 | doi:10.1140/epjc/s10052-014-3168-9 | arXiv:1408.6510 [hep-ex]
Measurements of the ratio of the production cross sections for $W$ and $Z$ bosons in association with jets in proton–proton collisions at $\sqrt{s} = 7$ TeV with the ATLAS experiment at the Large Hadron Collider. The measurement is based on the entire 2011 dataset, corresponding to an integrated luminosity of 4.6 fb$^{-1}$. Inclusive and differential cross-section ratios for massive vector bosons decaying to electrons and muons are measured in association with jets with transverse momentum $p_\text{T} > 30$ GeV and jet rapidity $|y| < 4.4$. The default routine will pick up the electron decay channel of the heavy bosons and compare it to the combined (muon and electron channel) data.
Individual channels (for data) are available as well; use ATLAS_2014_I1312627_EL and ATLAS_2014_I1312627_MU to specify the decay channel directly. NB #1: The "x01" Scatter2D objects are constructed from the ratio of "x02" to "x03" Histo1D objects. If several output yoda files are merged with yodamerge, the merged "x01" objects will become meaningless. New "x01" Scatter2Ds can easily be constructed in a postprocessing step from the merged "x02" (numerator) and "x03" (denominator) objects. NB #2: Special care ought to be taken when evaluating theoretical uncertainties due to potential cancellations/correlations between numerator and denominator.

### Ratios of $V$+jets observables between $W$ and $Z$ events (ATLAS_2014_I1312627)
Spires | Eur.Phys.J. C74 (2014) 3168 | doi:10.1140/epjc/s10052-014-3168-9 | arXiv:1408.6510 [hep-ex]
Measurements of the ratio of the production cross sections for $W$ and $Z$ bosons in association with jets in proton–proton collisions at $\sqrt{s} = 7$ TeV with the ATLAS experiment at the Large Hadron Collider. The measurement is based on the entire 2011 dataset, corresponding to an integrated luminosity of 4.6 fb$^{-1}$. Inclusive and differential cross-section ratios for massive vector bosons decaying to electrons and muons are measured in association with jets with transverse momentum $p_\text{T} > 30$ GeV and jet rapidity $|y| < 4.4$. The default routine will pick up the electron decay channel of the heavy bosons and compare it to the combined (muon and electron channel) data. Individual channels (for data) are available as well; use ATLAS_2014_I1312627_EL and ATLAS_2014_I1312627_MU to specify the decay channel directly. NB #1: The "x01" Scatter2D objects are constructed from the ratio of "x02" to "x03" Histo1D objects. If several output yoda files are merged with yodamerge, the merged "x01" objects will become meaningless. New "x01" Scatter2Ds can easily be constructed in a postprocessing step from the merged "x02" (numerator) and "x03" (denominator) objects (a minimal sketch of this reconstruction is given at the end of this listing). NB #2: Special care ought to be taken when evaluating theoretical uncertainties due to potential cancellations/correlations between numerator and denominator.

### Ratios of $V$+jets observables between $W$ and $Z$ events, muon channel (ATLAS_2014_I1312627_MU)
Spires | Eur.Phys.J. C74 (2014) 3168 | doi:10.1140/epjc/s10052-014-3168-9 | arXiv:1408.6510 [hep-ex]
Measurements of the ratio of the production cross sections for $W$ and $Z$ bosons in association with jets in proton–proton collisions at $\sqrt{s} = 7$ TeV with the ATLAS experiment at the Large Hadron Collider. The measurement is based on the entire 2011 dataset, corresponding to an integrated luminosity of 4.6 fb$^{-1}$. Inclusive and differential cross-section ratios for massive vector bosons decaying to electrons and muons are measured in association with jets with transverse momentum $p_\text{T} > 30$ GeV and jet rapidity $|y| < 4.4$. The default routine will pick up the electron decay channel of the heavy bosons and compare it to the combined (muon and electron channel) data. Individual channels (for data) are available as well; use ATLAS_2014_I1312627_EL and ATLAS_2014_I1312627_MU to specify the decay channel directly. NB #1: The "x01" Scatter2D objects are constructed from the ratio of "x02" to "x03" Histo1D objects. If several output yoda files are merged with yodamerge, the merged "x01" objects will become meaningless. New "x01" Scatter2Ds can easily be constructed in a postprocessing step from the merged "x02" (numerator) and "x03" (denominator) objects. NB #2: Special care ought to be taken when evaluating theoretical uncertainties due to potential cancellations/correlations between numerator and denominator.

### Measurement of Z boson in association with b-jets at 7 TeV in ATLAS (electron channel) (ATLAS_2014_I1306294)
Spires | arXiv:1407.3643 [hep-ex] | JHEP 1410 (2014) 141
Measurements of differential production cross-sections of a $Z$ boson in association with $b$-jets in $pp$ collisions at $\sqrt{s}=7$ TeV are reported. The data analysed correspond to an integrated luminosity of 4.6 fb$^{-1}$ recorded with the ATLAS detector at the Large Hadron Collider. Particle-level cross-sections are determined for events with a $Z$ boson decaying into an electron or muon pair, and containing $b$-jets. For events with at least one $b$-jet, the cross-section is presented as a function of the $Z$ boson transverse momentum and rapidity, together with the inclusive $b$-jet cross-section as a function of $b$-jet transverse momentum, rapidity and angular separations between the $b$-jet and the $Z$ boson. For events with at least two $b$-jets, the cross-section is determined as a function of the invariant mass and angular separation of the two highest transverse momentum $b$-jets, and as a function of the $Z$ boson transverse momentum and rapidity. Results are compared to leading-order and next-to-leading-order perturbative QCD calculations. This Rivet module implements the event selection for Z decaying into electrons. If you want to use muonic events, please refer to ATLAS_2014_I1306294_MU.

### Measurement of $Z/\gamma^*$ boson $p_T$ at $\sqrt{s} = 7\,\text{TeV}$ (ATLAS_2014_I1300647)
Spires | JHEP09(2014)145 | arXiv:1406.3660 [hep-ex]
A measurement of the $Z/\gamma^*$ transverse momentum spectrum using ATLAS proton-proton collision data at a center of mass energy of $\sqrt{s} = 7\,\text{TeV}$ at the LHC. The measurement is performed in both the $Z/\gamma^* \rightarrow ee$ and $Z/\gamma^* \rightarrow \mu \mu$ channels.

### Measurement of the low-mass Drell-Yan differential cross section at 7 TeV (ATLAS_2014_I1288706)
Spires | JHEP06(2014)112 | doi:10.1007/JHEP06(2014)112 | arXiv:1404.1212 [hep-ex]
Measurements of the differential cross section for the process $Z/\gamma^\ast \rightarrow \ell\ell$ ($\ell = e, \mu$) as a function of dilepton invariant mass in pp collisions at $\sqrt{s} = 7$ TeV. The measurement is performed in the $e$ and $\mu$ channels for invariant masses between 26 GeV and 66 GeV in the fiducial region $p_\text{T}^\text{leading} > 15$ GeV, $p_\text{T}^\text{subleading} > 12$ GeV, $|\eta| < 2.4$ using an integrated luminosity of 1.6 $\text{fb}^{-1}$. The analysis is extended to invariant masses as low as 12 GeV in the muon channel within a fiducial region of $p_\text{T}^\text{leading} > 9$ GeV, $p_\text{T}^\text{subleading} > 6$ GeV, $|\eta| < 2.4$ with 35 $\text{pb}^{-1}$.

### Measurements of electroweak production of dijets + $Z$ boson, and distributions sensitive to vector boson fusion (ATLAS_2014_I1279489)
Spires | JHEP 1404 (2014) 031 | arXiv:1401.7610 [hep-ex]
Measurements of differential distributions for inclusive $Z$-boson-plus-dijet production are performed in five fiducial regions, each with different sensitivity to the electroweak contribution. Measured distributions include the differential cross section as a function of the dijet invariant mass, the differential cross section as a function of the dijet rapidity separation, and the differential cross section as a function of the number of jets in the rapidity interval bounded by the two leading jets. Other measurements include the jet veto efficiency as a function of the dijet invariant mass and rapidity separation, the normalized transverse momentum balance cut efficiency, and the average number of jets falling into the rapidity interval bounded by the two leading jets, as a function of dijet invariant mass and dijet rapidity separation.

### $Z$ + jets in $pp$ at 7 TeV (ATLAS_2013_I1230812)
Spires | arXiv:1304.7098 [hep-ex] | J. High Energy Phys. 07 (2013) 032
Measurements of the production of jets of particles in association with a $Z$ boson in $pp$ collisions at $\sqrt{s} = 7$ TeV are presented, using data corresponding to an integrated luminosity of 4.6/fb collected by the ATLAS experiment at the Large Hadron Collider. Inclusive and differential jet cross sections in $Z$ events, with the $Z$ decaying into electron or muon pairs, are measured for jets with transverse momentum $p_T > 30$ GeV and rapidity $|y| < 4.4$. This Rivet module implements the event selection for the weighted combination of both decay channels and uses the data from that combination (as in the paper plots). However, for simplification of its usage it only requires events with the electronic final state (the muonic final state will be ignored). This allows it to be used with either pure electronic samples or mixed electron/muon events. If you want to use it with a pure muon sample, please refer to ATLAS_2013_I1230812_MU.

### W + b production at 7 TeV (ATLAS_2013_I1219109)
Spires | JHEP 1306 (2013) 084 | doi:10.1007/JHEP06(2013)084 | arXiv:1302.2929 [hep-ex]
Measurements of the W+b-jets ($W+b+X$ and $W+b\bar{b}+X$) production cross-section in proton-proton collisions at a centre-of-mass energy of 7 TeV at the LHC. These results are based on data corresponding to an integrated luminosity of 4.6 fb$^{-1}$, collected with the ATLAS detector. Cross-sections are presented as a function of jet multiplicity and of the transverse momentum of the leading b-jet for both the combined muon and electron decay modes of the W boson. The default routine will consider the electron decay channel of the W boson. Use ATLAS_2013_I1217863_W_EL and ATLAS_2013_I1217863_W_MU to specify the decay channel directly.

### kT splitting scales in W->lv events (ATLAS_2013_I1217867)
Spires | arXiv:1302.1415 [hep-ex]
Cluster splitting scales are measured in events containing W bosons decaying to electrons or muons. The measurement comprises the four hardest splitting scales in a kT cluster sequence of the hadronic activity accompanying the W boson, and ratios of these splitting scales.

### W + DPI at 7 TeV (ATLAS_2013_I1216670)
Spires | New J.Phys. 15 (2013) 033038 | doi:10.1088/1367-2630/15/3/033038 | arXiv:1301.6872 [hep-ex]
The production of $W$ bosons in association with two jets in proton-proton collisions at a centre-of-mass energy of $\sqrt{s} = 7$ TeV has been analysed for the presence of double-parton interactions using data corresponding to an integrated luminosity of 36/pb, collected with the ATLAS detector at the LHC.
### Measurement of the $W^+ W^-$ production cross-section at 7 TeV (ATLAS_2013_I1190187)
Spires | arXiv:1210.2979 [hep-ex]
Measurement of the fiducial cross section for $W^+ W^-$ production in proton-proton collisions at a centre-of-mass energy of 7 TeV is presented, using data corresponding to an integrated luminosity of 4.6/fb collected by the ATLAS experiment at the Large Hadron Collider. The cross section is measured in the leptonic decay channels, using electron+MET and muon+MET $W$ decays. $W \to \tau$ processes with the tau decaying into electron + MET or muon + MET are also included in the measurement. The fiducial region contains dressed leptons in restricted $p_T$ and $\eta$ ranges. The selection has specific requirements for each production channel. A measurement of the normalized fiducial cross section as a function of the leading lepton transverse momentum is also presented.

### Measurement of angular correlations in Drell-Yan lepton pairs to probe $Z/\gamma^*$ boson transverse momentum (ATLAS_2012_I1204784)
Spires | arXiv:1211.6899 [hep-ex]
A measurement of angular correlations in Drell-Yan lepton pairs via the $\phi^*$ observable is presented. This variable probes the same physics as the $Z/\gamma^*$ boson transverse momentum with a better experimental resolution. The $Z/\gamma^* \to ee$ and $Z/\gamma^* \to \mu \mu$ decays produced in proton--proton collisions at a centre-of-mass energy of $\sqrt{s} = 7\;\text{TeV}$ are used. Normalised differential cross sections as a function of $\phi^*$ are measured separately for electron and muon decay channels. The cross-section is also measured double differentially as a function of $\phi^*$ for three independent bins of the $Z$ boson rapidity.

### Measurement of the $ZZ(*)$ production cross-section in $pp$ collisions at 7 TeV with ATLAS (ATLAS_2012_I1203852)
Spires | arXiv:1211.6096 [hep-ex]
Measurement of the fiducial cross section for $ZZ(*)$ production in proton-proton collisions at a centre-of-mass energy of 7 TeV is presented, using data corresponding to an integrated luminosity of 4.6/fb collected by the ATLAS experiment at the Large Hadron Collider. The cross-section is measured using processes with two $Z$ bosons decaying to electrons or muons, or with one $Z$ boson decaying to electrons or muons and a second $Z$ boson decaying to neutrinos. The fiducial region contains dressed leptons in restricted $p_T$ and $\eta$ ranges. The selection has specific requirements for both production processes. A measurement of the normalized fiducial cross-section as a function of the $ZZ$ invariant mass, leading $Z$ $p_T$ and angle of the two leptons coming from the leading $Z$ is also presented for both signal processes.

### $W$+jets production at 7 TeV (ATLAS_2012_I1083318)
Spires | arXiv:1201.1276 [hep-ex]
Differential cross-sections of properties of the four leading jets in $W$+jets production, using the full 2010 dataset of 36 pb$^{-1}$. Observables include jet multiplicities, $p_T$, $H_T$, angular distances, and others. All observables are available using jets with $p_T > 30$ and $p_T > 20$ GeV.

### $WZ$ fiducial cross-section at 7 TeV in ATLAS (ATLAS_2011_I954993)
Spires | Phys.Lett. B709 (2012) 341-357 | arXiv:1111.5570
This is a measurement of $WZ$ production in 1.02 fb$^{-1}$ of $pp$ collision data at $\sqrt{s} = 7$ TeV collected by the ATLAS experiment in 2011. Doubly leptonic decay events are selected with electrons, muons and missing transverse momentum in the final state. The measurement of the combined fiducial cross section for the $WZ$ bosons decaying directly into electrons and muons is performed.

### $Z$+jets in $pp$ at 7 TeV (ATLAS_2011_I945498)
Spires | arXiv:1111.2690v1 [hep-ex] | CERN-PH-EP-2011-162
Production of jets in association with a $Z/\gamma^*$ boson in proton--proton collisions at $\sqrt{s} = 7$ TeV with the ATLAS detector. The analysis includes the full 2010 data set, collected with a low rate of multiple proton--proton collisions in the accelerator, corresponding to an integrated luminosity of 36 pb$^{-1}$. Inclusive jet cross sections in $Z/\gamma^*$ events, with $Z/\gamma^*$ decaying into electron or muon pairs, are measured for jets with transverse momentum $p_T > 30$ GeV and jet rapidity $|y| < 4.4$.

### W inclusive cross sections at 7 TeV (ATLAS_2011_I928289_W)
Spires
The production cross sections of the inclusive Drell-Yan process $W^\pm \rightarrow \ell\nu$ ($\ell = e, µ$) are measured in proton-proton collisions at $\sqrt{s} = 7$ TeV with the ATLAS detector. The cross sections are evaluated differentially as a function of the $W$ boson rapidity based on an integrated luminosity of about 35 $\text{pb}^{-1}$ collected in 2010. The cross sections are measured separately for $W^+$ and $W^-$ production, and then used to construct the $W$ charge asymmetry as well.

### Z inclusive cross sections at 7 TeV (ATLAS_2011_I928289_Z)
Spires
The production cross sections of the inclusive Drell-Yan process $Z/\gamma^\ast \rightarrow \ell\ell$ ($\ell = e, µ$) are measured in proton-proton collisions at $\sqrt{s} = 7$ TeV with the ATLAS detector. The cross sections are evaluated differentially as a function of the $Z$ boson rapidity based on an integrated luminosity of about 35 $\text{pb}^{-1}$ collected in 2010.

### Measurement of the W pT with electrons and muons at 7 TeV (ATLAS_2011_I925932)
Spires | arXiv:1108.6308v1 [hep-ex]
The $W$ $p_T$ at $\sqrt{s} = 7$\;TeV is measured using the $W\to e \, \nu_e$ and $W\to \mu \, \nu_\mu$ decay channels. The dressed lepton kinematics are calculated from the sum of the post-FSR lepton momentum and the momenta of all photons radiated in a cone around the lepton, while the bare definition uses the lepton kinematics after all QED radiation.

### Measurement of the Z pT with electrons and muons at 7 TeV (ATLAS_2011_S9131140)
Spires | arXiv:1107.2381 [hep-ex]
The $Z$ $p_T$ at $\sqrt{s} = 7$\;TeV is measured using electron and muon $Z$ decay channels. The dressed lepton definition uses photons clustered in a cone around the charged leptons, while the bare lepton definition uses the post-FSR charged leptons only in the $Z$ reconstruction. The data used in the bare leptons calculation are based on a forward application of a PHOTOS-based energy loss correction and are hence not quite model-independent.

### Muon charge asymmetry in W events at 7 TeV in ATLAS (ATLAS_2011_S9002537)
Spires | arXiv:1103.2929
Measurement of the muon charge asymmetry from W bosons produced in proton-proton collisions at a centre-of-mass energy of 7 TeV with ATLAS. The asymmetry is measured in the $W \to \mu\nu$ decay mode as a function of the muon pseudorapidity using a data sample corresponding to a total integrated luminosity of 31 pb$^{-1}$.

### $W$ + jets jet multiplicities and $p_\perp$ (ATLAS_2010_S8919674)
Spires | arXiv:1012.5382 [hep-ex]
Cross sections, in both the electron and muon decay modes of the W boson, are presented as a function of jet multiplicity and of the transverse momentum of the leading and next-to-leading jets in the event. Measurements are also presented of the ratio of cross sections for inclusive jet multiplicities. The results, based on an integrated luminosity of 1.3 pb$^{-1}$, have been corrected for all known detector effects and are quoted in a limited and well-defined range of jet and lepton kinematics.

### Jet multiplicity and differential cross-sections of $Z$+jets events in $pp$ at $\sqrt{s} = 7$ TeV (CMS_2015_I1310737)
Spires | Phys.Rev. D91 (2015) 052008 | http://dx.doi.org/10.1103/PhysRevD.91.052008 | http://arxiv.org/abs/arXiv:1408.3104 | http://inspirehep.net/record/1310737
Measurements of differential cross sections are presented for the production of a Z boson and at least one hadronic jet in proton-proton collisions at $\sqrt{s}=7$ TeV, recorded by the CMS detector, using a data sample corresponding to an integrated luminosity of 4.9 $\text{fb}^{-1}$. The jet multiplicity distribution is measured for up to six jets. The differential cross sections are measured as a function of jet transverse momentum and pseudorapidity for the four highest transverse momentum jets. The distribution of the scalar sum of jet transverse momenta is also measured as a function of the jet multiplicity. The measurements are compared with theoretical predictions at leading and next-to-leading order in perturbative QCD. Cuts: the first two leading electrons or muons with $p_T > 20$ GeV and $|\eta| < 2.4$; dilepton invariant mass in the [71,111] GeV range; jets with $p_T > 30$ GeV and $|\eta| < 2.4$; $\Delta R(\text{lepton, jet}) > 0.5$.

### Differential cross-section of $W$ bosons + jets in $pp$ collisions at $\sqrt{s}=7$ TeV (CMS_2014_I1303894)
Spires | Phys. Lett. B741 (2014) 12-37 | https://inspirehep.net/record/1303894 | http://arxiv.org/abs/1406.7533
A study of jet production in association with $W$ bosons has been performed, in events with the $W$ decaying to a muon. Jets are required to have $p_T > 30$ GeV and $|\eta| < 2.4$. Muons are required to have $p_T > 25$ GeV and $|\eta| < 2.1$. Jets are only considered if they are separated from the muon by $\Delta R > 0.5$. Muons are dressed with photons in a cone of $0.1$ around the muon.

### Study of observables sensitive to double parton scattering in $W + 2$ jets process in $pp$ collisions at $\sqrt{s} = 7$ TeV (CMS_2013_I1272853)
Spires | CMS-FSQ-12-028 | CERN-PH-EP-2013-224 | arXiv:1312.5729 | Submitted to JHEP
Double parton scattering is investigated in proton-proton collisions at $\sqrt{s} = 7$ TeV where the final state includes a $W$ boson, which decays into a muon and a neutrino, and two jets. The data sample corresponds to an integrated luminosity of 5 inverse femtobarns, collected with the CMS detector at the LHC.

### Rapidity distributions in exclusive $Z$ + jet and $\gamma$ + jet events in $pp$ collisions at $\sqrt{s} = 7$ TeV (CMS_2013_I1258128)
Spires | arXiv:1310.3082 | https://twiki.cern.ch/twiki/bin/view/CMSPublic/PhysicsResultsSMP12004 | Submitted to Phys. Rev. Lett.
Rapidity distributions are presented for events containing either a $Z$ boson or a photon in association with a single jet in proton-proton collisions produced at the CERN LHC. The data, collected with the CMS detector at $\sqrt{s} = 7$ TeV, correspond to an integrated luminosity of 5.0/fb. The individual rapidity distributions of the boson and the jet are consistent within 5\% with expectations from perturbative QCD. However, QCD predictions for the sum and the difference in rapidities of the two final-state objects show significant discrepancies with CMS data. In particular, next-to-leading-order QCD calculations, and two Monte Carlo event generators using different methods to merge matrix-element partons with evolved parton showers, appear inconsistent with the data as well as with each other.

### Cross-section and angular correlations in $Z$ boson with $b$-hadrons events at $\sqrt{s} = 7$ TeV (CMS_2013_I1256943)
Spires | doi:10.1007/JHEP12(2013)039 | arXiv:1310.1349 | CERN-PH-EP-2013-153 | CMS-EWK-11-015
A study of proton-proton collisions in which two $b$-hadrons are produced in association with a $Z$ boson is reported. The collisions were recorded at a centre-of-mass energy of 7 TeV with the CMS detector at the LHC, for an integrated luminosity of 5.2/fb. The $b$-hadrons are identified by means of displaced secondary vertices, without the use of reconstructed jets, permitting the study of $b$-hadron pair production at small angular separation. Differential cross sections are presented as a function of the angular separation of the $b$-hadrons and the $Z$ boson. In addition, inclusive measurements are presented. For both the inclusive and differential studies, different ranges of $Z$ boson momentum are considered.

### CMS jet mass measurement in $W$ + jet events (CMS_2013_I1224539_WJET)
Spires | arXiv:1303.4811
Measurements of the mass spectra of the jets in dijet and $W/Z$+jet events from proton--proton collisions at a centre-of-mass energy of 7 TeV using a data sample with an integrated luminosity of 5 fb$^{-1}$. The jets are reconstructed using both the anti-$k_T$ algorithm with $R=0.7$ (AK7) and the Cambridge-Aachen algorithm with $R=0.8$ (CA8) and $R=1.2$ (CA12), with several grooming techniques applied (ungroomed, filtered, pruned and trimmed). See the text of the paper for more details. For the dijet events the distributions are presented as a function of the mean mass of the two leading jets in bins of the mean $p_\perp$ of the two jets.

### CMS jet mass measurement in $Z$ + jet events (CMS_2013_I1224539_ZJET)
Spires | arXiv:1303.4811
Measurements of the mass spectra of the jets in dijet and $W/Z$+jet events from proton--proton collisions at a centre-of-mass energy of 7 TeV using a data sample with an integrated luminosity of 5 fb$^{-1}$. The jets are reconstructed using both the anti-$k_T$ algorithm with $R=0.7$ (AK7) and the Cambridge-Aachen algorithm with $R=0.8$ (CA8) and $R=1.2$ (CA12), with several grooming techniques applied (ungroomed, filtered, pruned and trimmed). See the text of the paper for more details. For the dijet events the distributions are presented as a function of the mean mass of the two leading jets in bins of the mean $p_\perp$ of the two jets.

### Azimuthal correlations and event shapes in $Z$ + jets in $pp$ collisions at 7 TeV (CMS_2013_I1209721)
Spires | http://cms.cern.ch/iCMS/analysisadmin/cadi?ancode=EWK-11-021 | https://cds.cern.ch/record/1503578 | http://inspirehep.net/record/1209721 | arXiv:1301.1646 [hep-ex] (http://arxiv.org/abs/arXiv:1301.1646) | Submitted to Phys. Lett. B
Measurements are presented of event shapes and azimuthal correlations in the inclusive production of a Z boson in association with jets in proton-proton collisions. The data correspond to an integrated luminosity of 5.0/fb, collected with the CMS detector at the CERN LHC at $\sqrt{s} = 7$\;TeV. These data are used to test perturbative QCD predictions and to evaluate a substantial background to most physics channels. Studies, performed as a function of jet multiplicity for inclusive $Z$ boson production and for $Z$ bosons with transverse momenta greater than 150\;GeV, are compared to predictions from Monte Carlo event generators that include leading-order multiparton matrix-element (with up to four hard partons in the final state) and next-to-leading-order simulations of Z + 1-jet events. The results are corrected for detector effects, and can therefore be used as input to improve models for describing these processes.

### Forward-backward asymmetry A_FB in Drell-Yan lepton pairs at sqrt(s) = 7 TeV (CMS_2013_I1122847)
Spires | Phys. Lett. B 718 (2013) 752 | doi:10.1016/j.physletb.2012.10.082 | http://arxiv.org/abs/1207.3973
This analysis measures the forward-backward asymmetry $A_{FB}$ in Drell-Yan events at a center-of-mass energy of 7 TeV. Both the individual and combined electron and muon pair channels are analyzed. In four rapidity regions, $A_{FB}$ is given as a function of the lepton mass. The data, recorded with the CMS detector, correspond to an integrated luminosity of $5\,\textrm{fb}^{-1}$.

### Measurement of the underlying event activity in the Drell-Yan process at a centre-of-mass energy of 7 TeV (CMS_2012_I1107658)
Spires | CMS-QCD-11-012 | CERN-PH-EP-2012-085 | arXiv:1204.1411 [hep-ex]
A measurement of the underlying event activity using Drell-Yan events in the muonic final state. The production of charged particles with pseudorapidity $|\eta| < 2$ and transverse momentum $p_\perp > 0.5\,\text{GeV}/c$ is studied in the towards, transverse and away regions with respect to the direction of the di-muon system. The UE activity is measured in terms of a particle density and an energy density. The particle density is computed as the average number of primary charged particles per unit pseudorapidity and per unit azimuth. The energy density is expressed in terms of the average of the scalar sum of the transverse momenta of primary charged particles per unit pseudorapidity and azimuth. The ratio of the energy and particle densities is also reported in the 3 regions. The UE activity is studied as a function of the invariant mass of the muon pair ($M_{\mu\mu}$), limiting the ISR contribution by requiring the transverse momentum of the muon pair to satisfy $p_\perp(\mu\mu) < 5\,\text{GeV}/c$. The $p_\perp(\mu\mu)$ dependence is studied for events having $M_{\mu\mu}$ in the window of 81--101 GeV/$c$. The normalized charged particle multiplicity and the $p_\perp$ spectrum of the charged particles in the three regions are also reported for events having $M_{\mu\mu}$ in the window of 81--101 GeV/$c$. Multiplicity and $p_\perp$ spectra in the transverse region are also reported for events having $p_\perp(\mu\mu) < 5\,\text{GeV}/c$.

### Measurement of differential $Z/\gamma^*$ $p_T$ and y (CMS_2012_I941555)
Spires | Phys.Rev. D85 (2012) 032002 | arXiv:1110.4973 | CMS-EWK-10-010 | CERN-PH-EP-2011-169
Cross section as a function of $p_T$ and $y$ of the Z boson decaying into muons in pp collisions at $\sqrt{s} = 7$ TeV. The $p_T$ and $y$ cross sections are measured for $60 < m_{\mu\mu} < 120$ GeV. The $p_T$ cross section is measured for lepton $p_T > 20$ GeV and $|\eta| < 2.1$, while the $y$ cross section is extrapolated to all lepton $p_T$ and $\eta$. This measurement was performed using 36 pb$^{-1}$ of data collected during 2010 with the CMS detector at the LHC.

### Differential cross-sections of $\mathrm{Z}/\gamma^* \to e^{+}e^{-}$ vs rapidity and $\phi^*$ (LHCB_2012_I1208102)
Spires | J. High Energy Phys. 02 (2013) 106 | doi:10.1007/JHEP02(2013)106 | arXiv:1212.4620 [hep-ex]
Measurement of the $pp \to \mathrm{Z}^0$ cross-section in the $\mathrm{Z}/\gamma^* \to e^{+}e^{-}$ mode at $\sqrt{s} = 7$ TeV. Daughter electrons are required to have $p_T > 20$ GeV/$c$, $2 < \eta < 4.5$, and the dielectron invariant mass in the range 60--120 GeV/$c^2$. The cross-section is given as a function of the $Z$ rapidity and an angular variable ($\phi^*$) closely related to the $Z$ transverse momentum (derived from the lepton pseudorapidity and azimuthal angle differences). For event generators implementing cross-section QCD corrections only at LO, the distributions are normalized to the cross-section measured in data, $76.0 \pm 0.8 \pm 2.0 \pm 2.6 \pm 0.4$ pb, where the first uncertainty is statistical, the second is systematic, the third is due to the luminosity uncertainty and the fourth to FSR corrections.

### Monte Carlo validation observables for $W[e \, \nu]$ + jets production (MC_WJETS)
Monte Carlo validation observables for $W[e \, \nu]$ + jets production.

### Monte Carlo validation observables for $W$ polarisation (MC_WPOL)
Observables sensitive to the polarisation of the W boson: A0, ... A7, fR, fL, f0, separately for W+ and W-.

### Monte Carlo validation observables for $W^+[e^+ \, \nu]W^-[\mu^- \, \nu]$ + jets production (MC_WWJETS)
In addition to the typical jet observables this analysis contains observables related to properties of the WW-pair momentum, correlations between the WW, properties of the W bosons, properties of the leptons, correlations between the opposite charge leptons and correlations with jets.

### Monte Carlo validation observables for $Z[e^+ \, e^-]$ production (MC_ZINC)
Monte Carlo validation observables for $Z[e^+ \, e^-]$ production.

### Monte Carlo validation observables for $Z[e^+ \, e^-]$ + jets production (MC_ZJETS)
Available observables are the $p_T$ of jets 1--4, jet multiplicity, $\Delta\eta(Z, \text{jet1})$, $\Delta R(\text{jet2}, \text{jet3})$, differential jet rates 0->1, 1->2, 2->3, 3->4, and integrated 0--4 jet rates.

### Monte Carlo validation observables for $Z[e^+ \, e^-]Z[\mu^+ \, \mu^-]$ + jets production (MC_ZZJETS)
In addition to the typical jet observables this analysis contains observables related to properties of the ZZ-pair momentum, correlations between the ZZ, properties of the Z bosons, properties of the leptons, correlations between the opposite charge leptons and correlations with jets.
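The NB #1 items above describe rebuilding the "x01" ratio scatters after a yodamerge step. The following Python sketch illustrates one way such post-processing could look; it is not part of the analysis code. It assumes the YODA 1.x Python interface (yoda.read, yoda.mkScatter, property-style point accessors, Scatter2D.addPoint) and the d??-x02/x03-y?? path naming of ATLAS_2014_I1312627, both of which should be checked against your own installation and merged file, and it uses naive uncorrelated error propagation, so NB #2 about numerator/denominator correlations still applies.

```python
import yoda

# Read the merged file produced by yodamerge (file name is illustrative).
aos = yoda.read("merged.yoda")

num = aos["/ATLAS_2014_I1312627/d01-x02-y01"]   # numerator Histo1D
den = aos["/ATLAS_2014_I1312627/d01-x03-y01"]   # denominator Histo1D

# Convert both histograms to scatters and divide them point by point to
# recover a meaningful "x01" ratio object after merging.
s_num = yoda.mkScatter(num)
s_den = yoda.mkScatter(den)

ratio = yoda.Scatter2D("/ATLAS_2014_I1312627/d01-x01-y01")
for pn, pd in zip(s_num.points, s_den.points):
    if pd.y == 0:
        continue  # skip bins with an empty denominator
    r = pn.y / pd.y
    # Naive, uncorrelated error propagation (see NB #2 on correlations).
    rel = 0.0
    if pn.y != 0:
        rel = ((max(pn.yErrs) / pn.y) ** 2 + (max(pd.yErrs) / pd.y) ** 2) ** 0.5
    err = abs(r) * rel
    ratio.addPoint(pn.x, r, pn.xErrs, (err, err))

yoda.write([ratio], "ratios_rebuilt.yoda")
```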
# Level Theta S images

My Virtual Forest project is still running strong and generates tons of spherical images (currently ~50GB). However, the post on which the camera sits is not perfectly level. The Theta S camera normally compensates for this using an internal gyroscope which detects the pitch and roll of the camera. Yet, when downloading images directly from the camera, no adjustments are made and the pitch and roll data are merely recorded in the EXIF data of the image. As such, I wrote a small bash script which rectifies (levels the horizon of) Theta S spherical images using this internal EXIF data. This is an alternative implementation to the THETA EXIF Library by Regen. I use his cute Lama test images for reference. All credit for the funky images goes to Regen. Below is the quick install guide to using my script. I hope it helps speed up people's Theta S workflow.

## Install

Download, fork or copy-paste the script from my github repository to your machine and make it executable.

$ chmod +x theta_rectify.sh

## Use

$ theta_rectify.sh image.jpg

The above command will rectify the image.jpg file and output a new file called image_rectified.jpg. Visual comparison between my results and those of Regen's python script shows good correspondence.

## Requirements

The script depends on a running copy of exiftool, imagemagick and POVRay. These tools are commonly available in most Linux distros, and can be installed on OSX using tools such as homebrew. I lack a MS Windows system, but the script should be easily adjusted to cover similar functionality.
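For readers who prefer to stay in Python, the core of what the script does can be sketched as follows: read the camera's pitch and roll from the EXIF data, then rotate the equirectangular pixels on the sphere by the opposite angles. This is only a rough sketch under stated assumptions, not the author's script: it assumes exiftool is installed, that the Theta stores its orientation in the PosePitchDegrees and PoseRollDegrees maker-note tags, and that a simple roll-then-pitch rotation matches the camera's convention (signs, order and axes may need adjusting for real images; EXIF metadata is not copied to the output).

```python
import subprocess
import numpy as np
from PIL import Image

def read_pose(path):
    """Read pitch/roll (degrees) from the Theta's maker notes via exiftool."""
    out = subprocess.check_output(
        ["exiftool", "-s3", "-PosePitchDegrees", "-PoseRollDegrees", path], text=True
    )
    pitch, roll = (float(v) for v in out.split())
    return pitch, roll

def rectify(path, out_path):
    pitch, roll = read_pose(path)
    img = np.asarray(Image.open(path))
    h, w = img.shape[:2]

    # Output pixel grid -> spherical angles (equirectangular projection).
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi       # -pi .. pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi       # +pi/2 .. -pi/2
    lon, lat = np.meshgrid(lon, lat)

    # Unit direction vector for every output pixel.
    v = np.stack([np.cos(lat) * np.cos(lon),
                  np.cos(lat) * np.sin(lon),
                  np.sin(lat)], axis=-1)

    # Undo the camera tilt: rotate by -roll about x, then -pitch about y.
    # (Axis and sign conventions here are assumptions to verify.)
    def rot_x(a): c, s = np.cos(a), np.sin(a); return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    def rot_y(a): c, s = np.cos(a), np.sin(a); return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    R = rot_y(np.radians(-pitch)) @ rot_x(np.radians(-roll))
    v = v @ R.T

    # Back to source pixel coordinates, nearest-neighbour lookup.
    src_lon = np.arctan2(v[..., 1], v[..., 0])
    src_lat = np.arcsin(np.clip(v[..., 2], -1, 1))
    col = ((src_lon + np.pi) / (2 * np.pi) * w).astype(int) % w
    row = np.clip(((np.pi / 2 - src_lat) / np.pi * h).astype(int), 0, h - 1)
    Image.fromarray(img[row, col]).save(out_path)

rectify("image.jpg", "image_rectified.jpg")
```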
Themed Section: COVID-19 | Volume 25, Issue 5, P699-708, May 2022

# Quantifying the Effect of Public Activity Intervention Policies on COVID-19 Pandemic Containment Using Epidemiologic Data From 145 Countries

Author footnote: Jichao Sun, Yefeng Zheng, Wenhua Liang, Zifeng Yang, and Zhiqi Zeng contributed equally to this work.

Published: December 07, 2021

## Highlights

• The evidence regarding the effectiveness of different public activity intervention policies on coronavirus disease 2019 (COVID-19) containment is currently inconsistent.
• Earlier implementation, longer duration, and greater strictness of intervention policies at the early, but not the middle, stage were associated with reduced COVID-19 infections.
• A novel counterfactual estimator proved to be valid and reliable in estimating the quantitative effects of policy intervention on COVID-19 containment.

## Abstract

### Objectives
Most countries have adopted public activity intervention policies to control the coronavirus disease 2019 (COVID-19) pandemic. Nevertheless, empirical evidence of the effectiveness of different interventions on the containment of the epidemic was inconsistent.

### Methods
We retrieved time-series intervention policy data for 145 countries from the Oxford COVID-19 Government Response Tracker from December 31, 2019, to July 1, 2020, which included 8 containment and closure policies. We investigated the association of timeliness, stringency, and duration of intervention with cumulative infections per million population on July 1, 2020. We introduced a novel counterfactual estimator to estimate the effects of these interventions on the COVID-19 time-varying reproduction number (Rt).

### Results
There is some evidence that earlier implementation, longer duration, and greater strictness of intervention policies at the early, but not the middle, stage were associated with reduced COVID-19 infections. The counterfactual model proved to have controlled for unobserved time-varying confounders and established a valid causal relationship between policy intervention and Rt reduction. The average intervention effect revealed that all interventions significantly decreased Rt after their implementation. Rt decreased by 30% (22%-41%) within 25 to 32 days after policy intervention. Among the 8 interventions, school closing, workplace closing, and public events cancellation demonstrated the strongest and most consistent evidence of associations.

### Conclusions
Our study provides more reliable evidence of the quantitative effects of policy interventions on the COVID-19 epidemic and suggests that stricter public activity interventions should be implemented at the early stage of the epidemic for improved containment.

## Introduction

As of July 2021, the severe acute respiratory syndrome coronavirus 2, causing the coronavirus disease 2019 (COVID-19), is still spreading globally (Coronavirus COVID-19 global cases by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University (JHU)). Nonpharmaceutical interventions appear to be an important means of reducing virus transmission until effective treatment regimens or mass immunizations are available (Fong M.W., Gao H., Wong J.Y., et al. Nonpharmaceutical measures for pandemic influenza in nonhealthcare settings-social distancing measures; Ryu S., Gao H., Wong J.Y., et al. Nonpharmaceutical measures for pandemic influenza in nonhealthcare settings-international travel-related measures).
These intervention strategies include swift surveillance, quarantine, and physical distancing measures such as school and workplace closing, internal and external travel restrictions, and stay at home requirements. • Chu D.K. • Akl E.A. • Duda S. • et al. Physical distancing, face masks, and eye protection to prevent person-to-person transmission of SARS-CoV-2 and COVID-19: a systematic review and meta-analysis. • Nussbaumer-Streit B. • Mayr V. • Dobrescu A. • et al. Quarantine alone or in combination with other public health measures to control COVID-19: a rapid review. • Davies N.G. • Kucharski A.J. • Eggo R.M. • Gimma A. • Edmunds W.J. Centre for the Mathematical Modelling of Infectious Diseases COVID-19 Working Group. Effects of non-pharmaceutical interventions on COVID-19 cases, deaths, and demand for hospital services in the UK: a modelling study. • Flaxman S. • Mishra S. • Gandy A. • et al. Estimating the effects of non-pharmaceutical interventions on COVID-19 in Europe. Almost all countries have adopted a series of containment and closure policies at different time points, and some seem to have curbed the virus transmission with varying degrees of success. • Chowdhury R. • Heng K. • Shawon M.S.R. • et al. Dynamic interventions to control COVID-19 pandemic: a multivariate prediction modelling study comparing 16 worldwide countries. • Jarvis C.I. • Van Zandvoort K. • Gimma A. • et al. Quantifying the impact of physical distance measures on the transmission of COVID-19 in the UK. • Ngonghala C.N. • Iboi E. • Eikenberry S. • et al. Mathematical assessment of the impact of non-pharmaceutical interventions on curtailing the 2019 novel coronavirus. Nevertheless, the quantitative evidence on the effectiveness of different intervention policies has been inconsistent. Most early studies applied modeling assumptions using data within a single country to examine the effectiveness of interventions. • Davies N.G. • Kucharski A.J. • Eggo R.M. • Gimma A. • Edmunds W.J. Centre for the Mathematical Modelling of Infectious Diseases COVID-19 Working Group. Effects of non-pharmaceutical interventions on COVID-19 cases, deaths, and demand for hospital services in the UK: a modelling study. , • Jarvis C.I. • Van Zandvoort K. • Gimma A. • et al. Quantifying the impact of physical distance measures on the transmission of COVID-19 in the UK. • Ngonghala C.N. • Iboi E. • Eikenberry S. • et al. Mathematical assessment of the impact of non-pharmaceutical interventions on curtailing the 2019 novel coronavirus. Milne GJ, Xie S. The Effectiveness of Social Distancing in Mitigating COVID-19 Spread: a modelling analysis. Preprint. Posted online March 23, 2020. medRxiv 2020.03.20.20040055. https://doi.org/10.1101/2020.03.20.20040055. • Lai S. • Ruktanonchai N.W. • Zhou L. • et al. Effect of non-pharmaceutical interventions to contain COVID-19 in China. • Matrajt L. • Leung T. Evaluating the effectiveness of social distancing interventions to delay or flatten the epidemic curve of coronavirus disease. For example, Lai et al • Lai S. • Ruktanonchai N.W. • Zhou L. • et al. Effect of non-pharmaceutical interventions to contain COVID-19 in China. developed a simulation framework using daily travel networks across China. They estimated that without nonpharmaceutical interventions, the COVID-19 cases would likely have increased 67-fold. 
Another research group developed an age-structured susceptible-exposed-infectious-removed model with data from medium-sized cities in the United States, suggesting that interventions that started earlier in the epidemic delayed the epidemic curve and interventions that started later flattened the epidemic curve. • Matrajt L. • Leung T. Evaluating the effectiveness of social distancing interventions to delay or flatten the epidemic curve of coronavirus disease. Nevertheless, those results were derived from mathematical assumptions under presumptive scenarios and thus could not be verified. Data from individual countries suffered from its intrinsic incapability in quantifying and comparing the effects of different interventions. By now, evidence from comparative analysis using data from multiple countries is still inconsistent. The major issue in analyzing the causal relationships is that there exist unobserved confounders such as different testing capacities over time and heterogeneity across countries. Previous empirical studies leveraging straightforward statistical methods failed to address this problem. • Liu Y. • Morgenstern C. • Kelly J. • Lowe R. • Jit M. CMMID COVID-19 Working Group The impact of non-pharmaceutical interventions on SARS-CoV-2 transmission across 130 countries and territories. , • Islam N. • Sharp S.J. • Chowell G. • et al. Physical distancing interventions and incidence of coronavirus disease 2019: natural experiment in 149 countries. By early July 2020, with most Asian and European countries reaching the late stage of the first epidemic wave and the availability of detailed intervention information, we were able to retrospectively scrutinize the effects of interventions using more sophisticated statistical methods. In this study, we performed a comprehensive analysis of the effectiveness of the timeliness, stringency, and duration of 8 public intervention policies on COVID-19 containment using data from worldwide countries. Moreover, we introduced a novel counterfactual estimator based on the time-series COVID-19 epidemic data to quantify the effects of different interventions with less bias. ## Methods ### Data Sources and Selection The daily confirmed cases for COVID-19 of each country were retrieved from https://ourworldindata.org from December 31, 2019, to July 1, 2020. The data were collected and reported by the health authority of each country. The country-based time-series data for the containment and closure policies were retrieved from the Oxford COVID-19 Government Response Tracker (https://github.com/OxCGRT/covid-policy-tracker) during the same time period. Details of the data collection and annotation have been described in a working article. • Hale T. • Noam A. • Beatriz K. • et al. Variation in government responses to COVID-19. Blavatnik School of Government Working Paper. In brief, a group of policy and government experts routinely collected information on public policies worldwide, including containment and closure interventions, and economic and healthcare supports. The policies of our interest were containment and closure interventions including 8 regimens, namely, school closing, workplace closing, public events cancellation, restrictions on gatherings, public transport closing, stay at home requirements, restrictions on internal movement, and international travel controlling. Each of the 8 interventions was recorded on an ordinal scale representing the level of strictness of the policy. 
Take workplace closing for example, 0 represents no measures; 1 represents recommending closing; 2 represents requiring closing for some sectors or categories of workers; and 3 represents requiring closing (or work from home) for all-but-essential workplaces. We selected countries with a time series longer than 90 days and the number of cumulative infections > 100 to reduce uncertainties. We coded each of the 8 interventions into 3 independent variables (Start-Date, Stringency, and Duration). Start-Date was recorded as the days of intervention commencement relative to the first 100 cases occurrence in that country, which served as an indicator for timely response to the pandemic. We selected the date of the first 100 cases instead of the first case as the start point because the detection of the first case was subject to more randomness that might introduce substantial noise in the following analysis. Stringency was measured as the average level of strictness across all days during a certain epidemic phase. Duration was measured as the number of days under intervention divided by the number of total days during a certain epidemic phase. ### Outcomes The first outcome was the estimated cumulative infections per million population for each country on July 1, 2020. This outcome was used to investigate the correlations with different intervention variables. We chose July 1, 2020, as the endpoint because most Asian and European countries had reached the end of their first epidemic waves by late June or early July 2020 (see Appendix Fig. 1 in Supplemental Materials found at https://doi.org/10.1016/j.jval.2021.10.007). During this period, some governments lifted policy restrictions and the public loosened precautions, which in certain cases led to the resurgence of cases and deaths subsequently. Then, the governments reimposed policies in a policy seesaw as the epidemic waxed and waned. • Hale T. • Angrist N. • Goldszmidt R. • et al. A global panel database of pandemic policies (Oxford COVID-19 Government Response Tracker). , • Looi M.K. COVID-19: is a second wave hitting Europe?. The lift and reimposition of policies and resurgence of the epidemic would affect the estimation of intervention effects. We aim to study the effect of policies within a single epidemic wave, with the consideration of reducing confounding and uncertainty. Nevertheless, there is no precise and consistent cutoff date for all countries. Therefore, we performed additional sensitivity analyses, by altering the cutoff date 1 month earlier (June 1, 2020) or later (August 1, 2020), to test the robustness of the results. The number of reported cases represents a straightforward metric reflecting the severity of the pandemic and has been widely used in previous articles for comparison between countries. • Chaudhry R. • Dranitsaris G. • Mubashir T. • Bartoszko J. • Riazi S. A country level analysis measuring the impact of government actions, country preparedness and socioeconomic factors on COVID-19 mortality and related health outcomes. , • Zweig S.A. • Zapf A.J. • Xu H. • et al. Impact of public health and social measures on the COVID-19 pandemic in the United States and other countries: descriptive analysis. Nevertheless, the metric has limitations because the testing and reporting strategies are different across countries. In the current study, we chose the number of true infections rather than the number of reported cases as the metric for comparison. 
Given that the true infections were not known, we referred to the age-structured susceptible-exposed-infectious-removed model (https://github.com/mrc-ide/squire) developed by Imperial College London to estimate true infections. This Imperial College London model is among the most widely used approaches for estimations of true infections and has been recommended by Our World in Data for studying policy effects. In brief, the model fit data on confirmed deaths by using an estimated infection fatality rate to “back-calculate” how many infections would have occurred over the previous weeks to produce that number of deaths. It also accounted for mobility and testing rates data by country if available under a range of assumptions and epidemiological knowledge to generate a less biased infection estimate. The second outcome was the time-varying reproduction number (Rt) for each country on each day. Rt is a measurement that represents the mean number of secondary cases that were infected by 1 index case. We used the median Rt estimates from the widely used EpiForecasts model (https://epiforecasts.io). The process of estimation was based on confirmed cases and deaths while accounting for uncertainties of the incubation period, the infection-to-confirmation delays, and the infection-to-death delays. The method of calculating Rt has been detailed in Cori et al. • Cori A. • Ferguson N.M. • Fraser C. • Cauchemez S. A new framework and software to estimate time-varying reproduction numbers during epidemics. In brief, the transmission rate of COVID-19 can be estimated by the ratio between new infections or deaths at time t and the infectious people at time t - d where d is the previous infection-to-confirmation delays or the infection-to-death delays as appropriate. The missing data (ie, confirmed cases or policy intervention) in the middle of time series were linearly interpolated using nonmissing observations. ### Correlation Analysis We conducted Spearman rank correlation analysis between Start-Date variable of each intervention and cumulative infection numbers using paired data from countries, to test whether the delayed policy implementation was associated with more infected cases. We chose Spearman rank correlation because the Start-Date variable (also the Stringency and Duration variables) has a highly skewed distribution. The correlation coefficient ranging from −1 to 1 is a statistical measure of the strength of a monotonic relationship between 2 variables. We also draw boxplots of the outcome by tertiles of Start-Date to examine the monotonic relationship. Because countries hit by the epidemic later may implement interventions timelier in view of the outbreaks in other countries, we performed an additional partial correlation analysis that was adjusted for the absolute date of first case occurrence. We investigated the Spearman rank correlations of cumulative infection numbers with Stringency and Duration of interventions, separately at the early and middle stages of the epidemic. We adopted an approach proposed in a previous article Batista M. Estimation of the final size of coronavirus epidemic by the logistic model. Preprint. Posted online February 28, 2020. medRxiv 2020.02.16.20023606. https://doi.org/10.1101/2020.02.16.20023606. that divided the progression curve of COVID-19 into the early slow growth phase, the middle fast growth phase, and the late steady phase, using a data-driven phenomenological logistic model. 
The slope of the curve represents the rate of epidemic growth and is used to divide the different phases. Examples of the fitted curves and phase cutoffs are shown in Appendix Fig. 2 in Supplemental Materials found at https://doi.org/10.1016/j.jval.2021.10.007. For sensitivity analysis, we alternatively defined the early phase as the first month after the first confirmed case occurrence in each country.

### Counterfactual Effect Estimates

First, we compared the trends in Rt before and after the implementation of each intervention for descriptive purposes. We calculated the mean value (95% confidence intervals [CIs]) of Rt for all countries on different days relative to intervention implementation. As a summary measure, we defined the commencement date of "any intervention" as the median of the available initiation dates of the different interventions, thus generating a new variable named "any intervention." This simple "averaging method" provided a direct way for us to inspect the effect of interventions on Rt.

We introduced a new counterfactual estimator to infer causal relationships between interventions and Rt using time-series cross-sectional data (see Appendix Fig. 3 in Supplemental Materials found at https://doi.org/10.1016/j.jval.2021.10.007). A counterfactual estimator compares the observed outcomes with those one would expect if the intervention had not been implemented. In brief, the counterfactual estimator first constructs a model using time-series observations in the preintervention period, then takes observations under intervention as missing data and directly estimates their counterfactuals. This method has been detailed in previous articles and shown to provide more reliable causal effects than conventional linear 2-way fixed-effects models when the intervention effect is heterogeneous among units or there exist unobserved time-varying confounders.
Liu L, Wang Y, Xu Y. A practical guide to counterfactual estimators for causal inference with time-series cross-sectional data. Preprint. Posted online April 10, 2020. SSRN 3555463. https://doi.org/10.2139/ssrn.3555463.
Therefore, the improved model was able to take into account the influences of time and country (unit) factors (such as population and gross domestic product) on outcomes. The original article
Liu L, Wang Y, Xu Y. A practical guide to counterfactual estimators for causal inference with time-series cross-sectional data. Preprint. Posted online April 10, 2020. SSRN 3555463. https://doi.org/10.2139/ssrn.3555463.
provided 3 sets of counterfactual estimators, and we chose the improved interactive 2-way fixed-effects model because of its ability to deal with time-varying confounders, as suggested by the authors. The model is as follows: for any i = 1, 2, …, N and t = 1, 2, …, T,

$$Y_{it} = \delta_{it} D_{it} + X_{it}'\beta + \lambda_i' f_t + \alpha_i + \xi_t + \varepsilon_{it}$$

where $Y_{it}$ is the outcome (Rt) for country i at time t; $D_{it}$ is an intervention indicator that equals 1 if country i is under intervention at time t and 0 otherwise; $\delta_{it}$ is the intervention effect on country i at time t; $X_{it}$ is a (p × 1) vector of exogenous covariates; $\beta$ is a (p × 1) vector of unknown parameters; $f_t$ is an (r × 1) vector of unobserved common factors; and $\lambda_i$ is an (r × 1) vector of unknown factor loadings. Intuitively, the factors can be understood as time-varying trends that affect each country differently, and the factor loadings capture their heterogeneous impacts arising from each country's unobserved characteristics.
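Before unpacking the terms of this specification, the core counterfactual logic (fit the outcome model on preintervention observations, predict the "no-intervention" path, and average the observed-minus-predicted gaps) can be sketched as follows. For brevity the sketch uses a plain additive 2-way fixed-effects outcome model on simulated data rather than the interactive fixed-effects estimator of Liu et al; it illustrates the estimation logic only, not the exact model used in this study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated panel: Rt for 20 countries over 60 days; half adopt an intervention
# (true effect -0.4 on Rt), half never do. All numbers are hypothetical.
rng = np.random.default_rng(1)
rows = []
for i in range(20):
    start = rng.integers(20, 40) if i % 2 == 0 else 10_000   # never-treated half
    alpha = rng.normal(0, 0.2)                               # country fixed effect
    for t in range(60):
        d = int(t >= start)
        rt = 2.0 + alpha - 0.01 * t - 0.4 * d + rng.normal(0, 0.1)
        rows.append((f"c{i}", t, d, rt))
df = pd.DataFrame(rows, columns=["country", "day", "D", "Rt"])

# 1) Fit the outcome model on untreated (preintervention) observations only.
fe = smf.ols("Rt ~ C(country) + C(day)", data=df[df.D == 0]).fit()

# 2) Predict the counterfactual (no-intervention) Rt for the treated observations.
treated = df[df.D == 1].copy()
treated["Rt_hat"] = fe.predict(treated)

# 3) The average observed-minus-counterfactual gap approximates the average intervention effect.
print("estimated average effect on Rt:", round((treated.Rt - treated.Rt_hat).mean(), 3))
```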
In this specification, the interactive component $\lambda_i' f_t$ implicitly captures the effects of unobserved time-varying confounders and the effects of other policies, through which the influence of other policies is eliminated or controlled when estimating $\delta_{it}$. $\alpha_i$ and $\xi_t$ are additive country and time fixed effects, respectively, and $\varepsilon_{it}$ represents unobserved idiosyncratic shocks for country i at time t, with 0 mean. The primary causal quantity of interest is the average intervention effect, which approximates the estimated effects of the intervention on the outcome after policy implementation over time. Details about the calculation of the average intervention effect are given in Appendix Method 1 in Supplemental Materials found at https://doi.org/10.1016/j.jval.2021.10.007.

## Results

Overall, time-series data on the intervention policies were retrieved for 178 countries on July 1, 2020, from the Oxford COVID-19 Government Response Tracker. After exclusions, a total of 145 countries were included in the current study. The number of countries by continent is as follows: 36 in Europe, 36 in Asia, 47 in Africa, 13 in North America, 11 in South America, and 2 in Oceania. Cumulative infections per million population on July 1 ranged from 46 (Burundi) to 212 154 (Peru), with a median value of 9867 (interquartile range 2655-30 581).

### Association of Intervention Start-Date With Cumulative Infections Per Million Population

Correlations between the Start-Date of the 8 interventions and cumulative infections per million population are shown in Table 1. The Start-Dates of all 8 interventions were significantly and positively associated with the outcome, suggesting that the later an intervention was commenced, the more infected cases would be expected in that country. The Start-Dates of public events cancellation (correlation coefficient [r] = 0.45), school closing (r = 0.43), and international travel controls (r = 0.43) showed the most pronounced associations. Boxplots of the outcome by tertiles of Start-Date are displayed in Figure 1.

Table 1. Correlations of the Start-Date for 8 interventions with cumulative infections per million population on July 1, 2020.

| Intervention | Correlation coefficient | 95% CI |
| --- | --- | --- |
| School closing | 0.43 | 0.28-0.55 |
| Workplace closing | 0.28 | 0.12-0.42 |
| Public events cancellation | 0.45 | 0.31-0.57 |
| Restrictions on gatherings | 0.32 | 0.17-0.46 |
| Public transport closing | 0.22 | 0.06-0.37 |
| Stay at home requirements | 0.27 | 0.12-0.42 |
| Restrictions on internal movement | 0.27 | 0.12-0.42 |
| International travel controls | 0.43 | 0.29-0.55 |

Note. The correlation coefficients were calculated using Spearman rank correlation analysis. Start-Date for each intervention was the number of days of intervention initiation relative to the date of the first cumulative 100 cases. CI indicates confidence interval.

The distributions of cumulative infections by tertiles of Start-Date for the 8 interventions are presented in Figure 1. A similar monotonically increasing trend of cumulative infections across Start-Date tertiles was observed for all interventions. Countries in tertile 3 demonstrated notably more infections and wider distributions than those in tertiles 1 and 2. Additional partial correlation analysis adjusted for the absolute date of first case occurrence did not change the results substantially (see Appendix Table 1 in Supplemental Materials found at https://doi.org/10.1016/j.jval.2021.10.007).
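For reference, the type of calculation behind Table 1 can be reproduced in a few lines. The data below are simulated, and the bootstrap is one common way of obtaining a CI for a Spearman coefficient; the article does not state how its CIs were derived.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 145                                                        # countries
start_date = rng.integers(-30, 60, n)                          # days relative to first 100 cases (simulated)
infections = np.exp(0.02 * start_date + rng.normal(8, 1, n))   # per-million, right-skewed (simulated)

rho, p = stats.spearmanr(start_date, infections)
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")

# Bootstrap 95% CI for the rank correlation coefficient.
boot = [stats.spearmanr(start_date[idx], infections[idx])[0]
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
print("95% CI:", np.percentile(boot, [2.5, 97.5]).round(2))
```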
Sensitivity analysis changing the definition of the Start-Date variable to days relative to the occurrence of the first 10 cases showed that the positive associations persisted in most cases, although some were not significant (see Appendix Table 2 in Supplemental Materials found at https://doi.org/10.1016/j.jval.2021.10.007).

### Association of Intervention Stringency and Duration With Cumulative Infections Per Million Population

The associations of intervention Stringency and Duration with cumulative infections per million population at different epidemic phases are presented in Table 2 and Appendix Table 3 in Supplemental Materials found at https://doi.org/10.1016/j.jval.2021.10.007. Most of the Stringency and Duration variables for the 8 interventions in the early phase (slow growth period) were negatively correlated with the outcome, with some of the correlations reaching significance (Table 2). Nevertheless, during the middle phase (the fast growth phase), Stringency and Duration were mostly positively correlated with the outcome. The average duration of the early phase across all countries was 61 days. We conducted a further analysis calculating Stringency and Duration in the first month and the second month of the epidemic, respectively. Similar results were found: the Stringency and Duration variables in the first month were mostly negatively, whereas those in the second month were mostly positively, associated with cumulative infections (see Appendix Table 3 in Supplemental Materials found at https://doi.org/10.1016/j.jval.2021.10.007).

Table 2. Correlations of the Stringency and Duration for 8 interventions with cumulative infections per million population on July 1, 2020.

| Interventions | Stringency: correlation coefficient | Stringency: 95% CI | Duration: correlation coefficient | Duration: 95% CI |
| --- | --- | --- | --- | --- |
| **Early phase** | | | | |
| School closing | −0.14 | −0.30 to 0.02 | −0.15 | −0.31 to 0.01 |
| Workplace closing | −0.05 | −0.11 to 0.21 | −0.02 | −0.18 to 0.15 |
| Public events cancellation | −0.16 | −0.31 to 0.00 | −0.18 | −0.33 to −0.02 |
| Restrictions on gatherings | −0.02 | −0.18 to 0.14 | −0.10 | −0.26 to 0.06 |
| Public transport closing | 0.04 | −0.13 to 0.20 | 0.04 | −0.13 to 0.20 |
| Stay at home requirements | −0.05 | −0.12 to 0.21 | −0.01 | −0.18 to 0.15 |
| Restrictions on internal movement | −0.03 | −0.14 to 0.19 | −0.02 | −0.14 to 0.18 |
| International travel controls | −0.18 | −0.33 to −0.02 | −0.23 | −0.38 to −0.07 |
| **Middle phase** | | | | |
| School closing | 0.11 | −0.07 to 0.27 | 0.17 | 0.00 to 0.34 |
| Workplace closing | 0.34 | 0.17 to 0.48 | 0.34 | 0.18 to 0.48 |
| Public events cancellation | 0.15 | −0.02 to 0.31 | 0.15 | −0.02 to 0.32 |
| Restrictions on gatherings | 0.34 | 0.18 to 0.48 | 0.22 | 0.05 to 0.38 |
| Public transport closing | 0.15 | −0.02 to 0.32 | 0.25 | 0.08 to 0.41 |
| Stay at home requirements | 0.18 | 0.01 to 0.34 | 0.24 | 0.07 to 0.39 |
| Restrictions on internal movement | 0.26 | 0.10 to 0.42 | 0.33 | 0.17 to 0.47 |
| International travel controls | −0.03 | −0.20 to 0.15 | 0.03 | −0.15 to 0.20 |

Note. The correlation coefficients were calculated using Spearman correlation analysis, separately in the early phase and the middle phase. CI indicates confidence interval.

The results suggested some evidence that longer and stricter implementation of some interventions in the very early phase, but not the middle phase, was associated with reductions in infected cases at the end.

### Rt Before and After the Interventions

The COVID-19 Rt before and after the implementation of the 8 interventions is presented in Figure 2. Overall, a similar consistent pattern was observed for all interventions: Rt decreased slowly before the intervention, decreased rapidly within 7 to 14 days after the intervention, and the decreasing trend attenuated afterward.
Rt converged to around 1 in approximately 30 days after the intervention. Overall, the average Rt decreased by 6.7% (95% CI 4.8-12.4) at 7 days and by 17.0% (95% CI 7.8-29.1) at 14 days after any of the interventions (see Appendix Fig. 4 in Supplemental Materials found at https://doi.org/10.1016/j.jval.2021.10.007).

### Counterfactual Estimates for the Effects of Interventions on Rt

With counterfactual estimators, the average effects of the different interventions over time are presented in Figure 3, and the average values across all postintervention periods are presented in Table 3. All interventions give average estimates significantly < 0, among which the estimate for international travel controls is marginally significant. The results of the test for no pretrend are shown in Appendix Figure 5 in Supplemental Materials found at https://doi.org/10.1016/j.jval.2021.10.007. All 8 interventions passed the equivalence test, suggesting the model successfully controlled for the effects of time-varying confounders and other interventions. In most cases, the average effect estimates surround zero in the preintervention period and decrease rapidly to below zero in 7 to 14 days after the intervention, reaching their minimum values (ranging from −0.52 to −0.08 with a median of −0.30) in 25 to 32 days (Fig. 3). This corresponds to a maximum 22% to 41% reduction in Rt. Among the 8 interventions, school closing, workplace closing, and public events cancellation demonstrated the strongest and most consistent evidence of associations.

Table 3. Counterfactual estimates for the average effects of 8 interventions on Rt.

| Intervention | AIE on Rt | 95% CI | P value |
| --- | --- | --- | --- |
| School closing | −0.29 | −0.40 to −0.19 | 5.28E-08 |
| Workplace closing | −0.29 | −0.38 to −0.20 | 5.28E-11 |
| Public events cancellation | −0.39 | −0.52 to −0.27 | 7.12E-10 |
| Restrictions on gatherings | −0.24 | −0.35 to −0.14 | 7.63E-06 |
| Public transport closing | −0.11 | −0.20 to −0.03 | 3.55E-03 |
| Stay at home requirements | −0.17 | −0.25 to −0.08 | 1.02E-04 |
| Restrictions on internal movement | −0.21 | −0.28 to −0.14 | 7.58E-09 |
| International travel controls | −0.20 | −0.39 to −0.02 | 3.16E-02 |

Note. The data show the average effects of intervention policies on Rt for all countries and across postintervention periods. AIE indicates average intervention effect; CI, confidence interval.

### Robustness Analyses by Altering the Endpoint Date

Appendix Tables 4 to 7 in Supplemental Materials found at https://doi.org/10.1016/j.jval.2021.10.007 provide the robustness analyses for the correlation results between intervention variables and cumulative infections, and Appendix Tables 8 and 9 in Supplemental Materials found at https://doi.org/10.1016/j.jval.2021.10.007 provide the robustness analyses for the counterfactual effect estimates on Rt, altering the cutoff date to 1 month earlier (June 1, 2020) or later (August 1, 2020). The results demonstrate that the abovementioned findings are roughly unchanged, although the estimates differ to some extent.

## Discussion

In this study, we found some evidence that earlier implementation, longer durations, and greater strictness of containment policies at the early stage, but not the middle stage, were associated with reduced infections of COVID-19. With a novel counterfactual estimator, we were able to control for unobserved time-varying confounders, generating more reliable causal relationships. Our results showed that the government intervention policies were associated with a 22% to 41% reduction in COVID-19 transmission in approximately 25 to 32 days after their implementation.
### Comparison With Previous Studies The findings from our work align with those from previous studies, • Chowdhury R. • Heng K. • Shawon M.S.R. • et al. Dynamic interventions to control COVID-19 pandemic: a multivariate prediction modelling study comparing 16 worldwide countries. • Matrajt L. • Leung T. Evaluating the effectiveness of social distancing interventions to delay or flatten the epidemic curve of coronavirus disease. , • Islam N. • Sharp S.J. • Chowell G. • et al. Physical distancing interventions and incidence of coronavirus disease 2019: natural experiment in 149 countries. , • Kraemer M.U.G. • Yang C.H. • Gutierrez B. • et al. The effect of human mobility and control measures on the COVID-19 epidemic in China. • Viner R.M. • Russell S.J. • Croker H. • et al. School closure and management practices during coronavirus outbreaks including COVID-19: a rapid systematic review. • Wong C.K.H. • Wong J.Y.H. • Tang E.H.M. • Au C.H. • Lau K.T.K. • Wai A.K.C. Effects of national containment measures on decelerating the increase in daily new cases of COVID-19 in 54 countries and 4 epicenters of pandemic: Comparative Observational Study. • Cowling B.J. • Ali S.T. • Ng T.W.Y. • et al. Impact assessment of non-pharmaceutical interventions against coronavirus disease 2019 and influenza in Hong Kong: an observational study. • Tian H. • Liu Y. • Li Y. • et al. An investigation of transmission control measures during the first 50 days of the COVID-19 epidemic in China. except that previous results mostly depended on modeling assumptions under presumptive scenarios or used data within a single country. Only a few studies assessed the impact of intervention policies for different countries using comparative methods. • Liu Y. • Morgenstern C. • Kelly J. • Lowe R. • Jit M. CMMID COVID-19 Working Group The impact of non-pharmaceutical interventions on SARS-CoV-2 transmission across 130 countries and territories. , • Islam N. • Sharp S.J. • Chowell G. • et al. Physical distancing interventions and incidence of coronavirus disease 2019: natural experiment in 149 countries. , • Wong C.K.H. • Wong J.Y.H. • Tang E.H.M. • Au C.H. • Lau K.T.K. • Wai A.K.C. Effects of national containment measures on decelerating the increase in daily new cases of COVID-19 in 54 countries and 4 epicenters of pandemic: Comparative Observational Study. Nevertheless, these studies mainly depended on straightforward statistical methods, simply relating intervention policies to COVID-19 growth rate or Rt directly, which failed to account for time-varying confounders that affected the effect estimates. One study comparing the COVID-19 curve trends before and after interventions using data from 54 countries suggested that stay at home orders, curfews, and lockdowns curbed the increase in daily new case to < 5% within a month. • Wong C.K.H. • Wong J.Y.H. • Tang E.H.M. • Au C.H. • Lau K.T.K. • Wai A.K.C. Effects of national containment measures on decelerating the increase in daily new cases of COVID-19 in 54 countries and 4 epicenters of pandemic: Comparative Observational Study. Another study including 149 countries leveraged a simple meta-analysis method and synthesized the incidence rate ratios of COVID-19 before and after the implementation of physical distancing, concluding that physical distancing was associated with a 13% reduction in COVID-19 incidence. • Islam N. • Sharp S.J. • Chowell G. • et al. Physical distancing interventions and incidence of coronavirus disease 2019: natural experiment in 149 countries. 
This study had less focus on the timeliness, strictness, and durations of interventions and was thus not able to conclude causal relationships. To the best of our knowledge, our study is the first study that addressed the issue of confounding using a novel counterfactual estimator based on an interactive 2-way fixed-effects model. The results from our study provided more reliable evidence and could better assist policy making. ### Interpretation of Our Findings We found that the early implementation of all containment policies was associated with reduced infection cases. This finding was as expected and in concert with most previous studies. • Matrajt L. • Leung T. Evaluating the effectiveness of social distancing interventions to delay or flatten the epidemic curve of coronavirus disease. , • Ainslie K.E.C. • Walters C.E. • Fu H. • et al. Evidence of initial success for China exiting COVID-19 social distancing policy after achieving containment. • Lee V.J. • Chiew C.J. • Khong W.X. Interrupting transmission of COVID-19: lessons from containment efforts in Singapore. • Koo J.R. • Cook A.R. • Park M. • et al. Interventions to mitigate early spread of SARS-CoV-2 in Singapore: a modelling study [published correction appears in Lancet Infect Dis. 2020;20(5):e79]. Alongside this finding, we also found some evidence that the higher stringency and longer duration of some containment policies at the early or slow growth stage were correlated with reduced infection cases. Nevertheless, results from the middle or fast growth stage suggested evidence of positive associations. This is a novel finding that previous studies did not address. The positive associations between Stringency and Duration of intervention in the middle stage and total infections were probably attributed to reverse causality, which means that some countries strengthened and prolonged the interventions in face of more severe situations. Recently, the new variants of severe acute respiratory syndrome coronavirus 2, especially the Delta variant that is more contagious than the original strain, spread rapidly in some countries such as India and the United States. Evidence regarding whether government intervention policies are effective on containing the new variants is scarce because most countries have loosened restrictions on public activities. Nevertheless, a recent regional outbreak of the Delta variant attacked Guangzhou and Shenzhen in China from May 2021 to June 2021. The local governments immediately enforced strict control measures since the identification of the first new case, including public events cancellation, unnecessary workplace closing, and contact tracing. The regional outbreak was successfully controlled within a month, preventing virus spillover and large-scale spreading. • Zhang M. • Xiao J. • Deng A. • et al. Transmission dynamics of an outbreak of the COVID-19 Delta variant B.1.617.2 - Guangdong Province, China, May-June 2021. On the contrary, countries that did not implement strict containment measures at the very beginning are experiencing an uncontrollable domestic outbreak. Salvatore M, Bhattacharyya R, Purkayastha S, et al. Resurgence of SARS-CoV-2 in India: potential role of the B.1.617.2 (Delta) variant and delayed interventions. Preprint. Posted online June 30, 2021. medRxiv 2021.06.23.21259405. https://doi.org/10.1101/2021.06.23.21259405. This provides us some preliminary evidence that the early and stringent interventions are effective in controlling the outbreak of new variants. 
Given the strong transmissibility of the new variants, governments should enforce aggressive control measures as early as possible, even though the growth rate might be very slow in their countries in the early period. It may be too late to remedy the situation once the fast growth stage has been reached.

The descriptive results from Figure 2 show that Rt demonstrated a decreasing trend before the intervention, which suggests that, apart from the 8 interventions, other unobserved factors such as public self-protective measures also had an effect on transmission reduction. Nevertheless, the preintervention decreasing trend disappeared in Figure 3 with our counterfactual estimator, suggesting that our methods had successfully eliminated the effects of unobserved confounding factors and generated less biased effect estimates. Notably, we observed the strongest and most consistent effects for school closing, workplace closing, and public events cancellation. All 3 are mandatory containment policies and are more likely to take effect because it is easier to close public facilities. Quantitatively, we found that most interventions took effect in reducing Rt rapidly, about 7 to 14 days after implementation. The effects strengthened over time to a maximum of around a 30% reduction in Rt at 25 to 32 days. These estimates were similar to those of a previous study,
• Lauer S.A.
• Grantz K.H.
• Bi Q.
• et al.
The incubation period of coronavirus Disease 2019 (COVID-19) from publicly reported confirmed cases: estimation and application.
except that our results provided the effect trends over time.

### Limitations

Our study does have several limitations. First, the coding of intervention variables from the Oxford COVID-19 Government Response Tracker relied on government announcements. Nevertheless, announcements did not guarantee mandatory implementation, and people's adherence varied because of cultural and legal system differences. Second, because of the relatively small sample size in terms of the number of countries, not all correlation analyses are significant, especially for Stringency and Duration; hence, those findings need to be interpreted with caution. Third, in addition to the public containment and closure policies, other personal protection strategies, including wearing masks, quarantine, and hand hygiene, also played an important role in epidemic mitigation. Those strategies were not the focus of our current study and have been addressed in previous research.
• Chu D.K.
• Akl E.A.
• Duda S.
• et al.
Physical distancing, face masks, and eye protection to prevent person-to-person transmission of SARS-CoV-2 and COVID-19: a systematic review and meta-analysis.
,
• Nussbaumer-Streit B.
• Mayr V.
• Dobrescu A.
• et al.
Quarantine alone or in combination with other public health measures to control COVID-19: a rapid review.
,
• Hellewell J.
• Abbott S.
• Gimma A.
• et al.
Feasibility of controlling COVID-19 outbreaks by isolation of cases and contacts [published correction appears in Lancet Glob Health. 2020;9(5):e597].
• Ferretti L.
• Wymant C.
• Kendall M.
• et al.
Quantifying SARS-CoV-2 transmission suggests epidemic control with digital contact tracing.
• Meng S.
• Wu Y.J.
• et al.
Epidemiology, causes, clinical manifestation and diagnosis, prevention and control of coronavirus disease (COVID-19) during the early outbreak period: a scoping review.
Moreover, some intervention policies were often introduced in close temporal sequence, which makes it difficult to untangle their individual effects.
Although our counterfactual approach appears to have largely controlled for the effects of time-varying confounders and other policies, statistical models dealing with confounders might not be perfect. Hence, results for interventions that are temporally correlated need to be interpreted with caution. Fourth, a large proportion of confirmed cases and deaths has been recorded from nursing homes, including both residents and care workers.
• Abrams H.R.
• Loomer L.
• Gandhi A.
• Grabowski D.C.
Characteristics of U.S. Nursing homes with COVID-19 cases.
Nevertheless, data regarding the fraction of cases and deaths emerging from nursing homes were not available for most countries. We were not able to investigate the effects of the general policy interventions on nursing home epidemics at this point.

## Conclusions

Using epidemiological data from 145 countries, we found some evidence that earlier, stricter, and longer implementation of containment policies at the early stage was associated with a reduction in infected cases. Moreover, the novel counterfactual estimator proved able to generate more reliable intervention effect estimates. Our results provide evidence on the quantitative effects of different policy interventions over time. These findings have important implications for governments enacting or lifting containment policies in fighting the current and future waves of the COVID-19 outbreak. Future studies should focus on how adding and removing intervention policies affects the transmission of the virus, especially new variants such as the Delta variant, to support decision making on lifting containment policies.

## Article and Author Information

Author Contributions:
Concept and design: Sun, Zheng, Liang, Yang, Li, Luo, He, Zhong
Acquisition of data: Sun, Luo
Analysis and interpretation of data: Sun, Zheng, Liang, Yang, Zeng, Li, Luo
Drafting of the manuscript: Sun, Zheng, Zeng, Luo
Critical revision of the paper for important intellectual content: Sun, Zheng, Liang, Yang, Zeng, Li, Luo, Alexander Ng, He
Statistical analysis: Sun, Luo
Administrative, technical, or logistic support: Alexander Ng, Zhong
Supervision: Alexander Ng, He, Zhong

Conflict of Interest Disclosures: The authors reported no conflicts of interest.

Funding/Support: This work was funded by the Key-Area Research and Development Program of Guangdong Province, China (No. 2018B010111001).

## Supplemental Material

• Supplementary information
Appendix Method 1. Calculation of the average intervention effects (AIEs). Appendix Table 1. Partial correlations (adjusted for the date of the first case) between the Start-Date (days relative to date of the first 100 cases occurrence) for 8 interventions and cumulative infections per million population on July 1, 2020. Appendix Table 2. Correlations between the Start-Date (days relative to date of the first 10 cases occurrence) for 8 interventions and cumulative infections per million population on July 1, 2020. Appendix Table 3. Correlations between the Stringency and Duration for 8 interventions (for the first month and second month) and cumulative infections per million population on July 1, 2020. Appendix Table 4. Robustness analysis: correlations between the Start-Date (relative to date of the first 100 cases occurrence) for 8 interventions and cumulative infections per million population on June 1, 2020. Appendix Table 5.
Robustness analysis: correlations between the Start-Date (relative to date of the first 100 cases occurrence) for 8 interventions and cumulative infections per million population on August 1, 2020. Appendix Table 6. Robustness analysis: correlations between the Stringency and Duration for 8 interventions (for the early phase and middle phase) and cumulative infections per million population on June 1, 2020. Appendix Table 7. Robustness analysis: correlations between the Stringency and Duration for 8 interventions (for the early phase and middle phase) and cumulative infections per million population on August 1, 2020. Appendix Table 8. Robustness analysis: counterfactual estimates for the average effects of 8 interventions on Rt through June 1, 2020. Appendix Table 9. Robustness analysis: counterfactual estimates for the average effects of 8 interventions on Rt through August 1, 2020. Appendix Figure 1. Country examples of trajectories for COVID-19 pandemic. Appendix Figure 2. Illustrations of COVID-19 growth curve with different phases. Appendix Figure 3. An illustration of time-series cross-sectional data for School closing. Appendix Figure 4. Rt before and after the implementation of any intervention. Appendix Figure 5. Tests for no pretrend analysis. ## References 1. Coronavirus COVID-19 global cases by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University (JHU). (Available at:) https://coronavirus.jhu.edu/map.html Date accessed: July 10, 2021 • Fong M.W. • Gao H. • Wong J.Y. • et al. Nonpharmaceutical measures for pandemic influenza in nonhealthcare settings-social distancing measures. Emerg Infect Dis. 2020; 26: 976-984 • Ryu S. • Gao H. • Wong J.Y. • et al. Nonpharmaceutical measures for pandemic influenza in nonhealthcare settings-international travel-related measures. Emerg Infect Dis. 2020; 26: 961-966 • Chu D.K. • Akl E.A. • Duda S. • et al. Physical distancing, face masks, and eye protection to prevent person-to-person transmission of SARS-CoV-2 and COVID-19: a systematic review and meta-analysis. Lancet. 2020; 395: 1973-1987 • Nussbaumer-Streit B. • Mayr V. • Dobrescu A. • et al. Quarantine alone or in combination with other public health measures to control COVID-19: a rapid review. Cochrane Database Syst Rev. 2020; 9CD013574 • Davies N.G. • Kucharski A.J. • Eggo R.M. • Gimma A. • Edmunds W.J. Centre for the Mathematical Modelling of Infectious Diseases COVID-19 Working Group. Effects of non-pharmaceutical interventions on COVID-19 cases, deaths, and demand for hospital services in the UK: a modelling study. Lancet Public Health. 2020; 5: e375-e385 • Flaxman S. • Mishra S. • Gandy A. • et al. Estimating the effects of non-pharmaceutical interventions on COVID-19 in Europe. Nature. 2020; 584: 257-261 • Chowdhury R. • Heng K. • Shawon M.S.R. • et al. Dynamic interventions to control COVID-19 pandemic: a multivariate prediction modelling study comparing 16 worldwide countries. Eur J Epidemiol. 2020; 35: 389-399 • Jarvis C.I. • Van Zandvoort K. • Gimma A. • et al. Quantifying the impact of physical distance measures on the transmission of COVID-19 in the UK. BMC Med. 2020; 18: 124 • Ngonghala C.N. • Iboi E. • Eikenberry S. • et al. Mathematical assessment of the impact of non-pharmaceutical interventions on curtailing the 2019 novel coronavirus. Math Biosci. 2020; 325108364 2. Milne GJ, Xie S. The Effectiveness of Social Distancing in Mitigating COVID-19 Spread: a modelling analysis. Preprint. Posted online March 23, 2020. medRxiv 2020.03.20.20040055. 
https://doi.org/10.1101/2020.03.20.20040055. • Lai S. • Ruktanonchai N.W. • Zhou L. • et al. Effect of non-pharmaceutical interventions to contain COVID-19 in China. Nature. 2020; 585: 410-413 • Matrajt L. • Leung T. Evaluating the effectiveness of social distancing interventions to delay or flatten the epidemic curve of coronavirus disease. Emerg Infect Dis. 2020; 26: 1740-1748 • Liu Y. • Morgenstern C. • Kelly J. • Lowe R. • Jit M. • CMMID COVID-19 Working Group The impact of non-pharmaceutical interventions on SARS-CoV-2 transmission across 130 countries and territories. BMC Med. 2021; 19: 40 • Islam N. • Sharp S.J. • Chowell G. • et al. Physical distancing interventions and incidence of coronavirus disease 2019: natural experiment in 149 countries. BMJ. 2020; 370: m2743 • Hale T. • Noam A. • Beatriz K. • et al. Variation in government responses to COVID-19. Blavatnik School of Government Working Paper. • Hale T. • Angrist N. • Goldszmidt R. • et al. A global panel database of pandemic policies (Oxford COVID-19 Government Response Tracker). Nat Hum Behav. 2021; 5: 529-538 • Looi M.K. COVID-19: is a second wave hitting Europe?. BMJ. 2020; 371: m4113 • Chaudhry R. • Dranitsaris G. • Mubashir T. • Bartoszko J. • Riazi S. A country level analysis measuring the impact of government actions, country preparedness and socioeconomic factors on COVID-19 mortality and related health outcomes. EClinicalMedicine. 2020; 25100464 • Zweig S.A. • Zapf A.J. • Xu H. • et al. Impact of public health and social measures on the COVID-19 pandemic in the United States and other countries: descriptive analysis. JMIR Public Health Surveill. 2021; 7e27917 • Cori A. • Ferguson N.M. • Fraser C. • Cauchemez S. A new framework and software to estimate time-varying reproduction numbers during epidemics. Am J Epidemiol. 2013; 178: 1505-1512 3. Batista M. Estimation of the final size of coronavirus epidemic by the logistic model. Preprint. Posted online February 28, 2020. medRxiv 2020.02.16.20023606. https://doi.org/10.1101/2020.02.16.20023606. 4. Liu L, Wang Y, Xu Y. A practical guide to counterfactual estimators for causal inference with time-series cross-sectional data. Preprint. Posted online April 10, 2020. SSRN 3555463. https://doi.org/10.2139/ssrn.3555463. • Kraemer M.U.G. • Yang C.H. • Gutierrez B. • et al. The effect of human mobility and control measures on the COVID-19 epidemic in China. Science. 2020; 368: 493-497 • Viner R.M. • Russell S.J. • Croker H. • et al. School closure and management practices during coronavirus outbreaks including COVID-19: a rapid systematic review. Lancet Child Adolesc. 2020; 4: 397-404 • Wong C.K.H. • Wong J.Y.H. • Tang E.H.M. • Au C.H. • Lau K.T.K. • Wai A.K.C. Effects of national containment measures on decelerating the increase in daily new cases of COVID-19 in 54 countries and 4 epicenters of pandemic: Comparative Observational Study. J Med Internet Res. 2020; 22e19904 • Cowling B.J. • Ali S.T. • Ng T.W.Y. • et al. Impact assessment of non-pharmaceutical interventions against coronavirus disease 2019 and influenza in Hong Kong: an observational study. Lancet Public Health. 2020; 5: e279-e288 • Tian H. • Liu Y. • Li Y. • et al. An investigation of transmission control measures during the first 50 days of the COVID-19 epidemic in China. Science. 2020; 368: 638-642 • Ainslie K.E.C. • Walters C.E. • Fu H. • et al. Evidence of initial success for China exiting COVID-19 social distancing policy after achieving containment. Wellcome Open Res. 2020; 5: 8 • Lee V.J. • Chiew C.J. 
• Khong W.X. Interrupting transmission of COVID-19: lessons from containment efforts in Singapore. J Travel Med. 2020; 27taaa039 • Koo J.R. • Cook A.R. • Park M. • et al. Interventions to mitigate early spread of SARS-CoV-2 in Singapore: a modelling study [published correction appears in Lancet Infect Dis. 2020;20(5):e79]. Lancet Infect Dis. 2020; 20: 678-688 • Zhang M. • Xiao J. • Deng A. • et al. Transmission dynamics of an outbreak of the COVID-19 Delta variant B.1.617.2 - Guangdong Province, China, May-June 2021. China CDC Wkly. 2021; 3: 584-586 5. Salvatore M, Bhattacharyya R, Purkayastha S, et al. Resurgence of SARS-CoV-2 in India: potential role of the B.1.617.2 (Delta) variant and delayed interventions. Preprint. Posted online June 30, 2021. medRxiv 2021.06.23.21259405. https://doi.org/10.1101/2021.06.23.21259405. • Lauer S.A. • Grantz K.H. • Bi Q. • et al. The incubation period of coronavirus Disease 2019 (COVID-19) from publicly reported confirmed cases: estimation and application. Ann Intern Med. 2020; 172: 577-582 • Hellewell J. • Abbott S. • Gimma A. • et al. Feasibility of controlling COVID-19 outbreaks by isolation of cases and contacts [published correction appears in Lancet Glob Health. 2020;9(5):e597]. Lancet Glob Health. 2020; 8: e488-e496 • Ferretti L. • Wymant C. • Kendall M. • et al. Quantifying SARS-CoV-2 transmission suggests epidemic control with digital contact tracing. Science. 2020; 368eabb6936 • Meng S. • Wu Y.J. • et al. Epidemiology, causes, clinical manifestation and diagnosis, prevention and control of coronavirus disease (COVID-19) during the early outbreak period: a scoping review. Infect Dis Pover. 2020; 9: 29 • Abrams H.R. • Loomer L. • Gandhi A. • Grabowski D.C. Characteristics of U.S. Nursing homes with COVID-19 cases. J Am Geriatr Soc. 2020; 68: 1653-1656
# EmbeddingModel¶ class ampligraph.latent_features.EmbeddingModel(k=100, eta=2, epochs=100, batches_count=100, seed=0, embedding_model_params={}, optimizer='adam', optimizer_params={'lr': 0.0005}, loss='nll', loss_params={}, regularizer=None, regularizer_params={}, initializer='xavier', initializer_params={'uniform': False}, verbose=False) Abstract class for embedding models AmpliGraph neural knowledge graph embeddings models extend this class and its core methods. Methods __init__([k, eta, epochs, batches_count, …]) Initialize an EmbeddingModel fit(X[, early_stopping, early_stopping_params]) Train an EmbeddingModel (with optional early stopping). get_embeddings(entities[, embedding_type]) Get the embeddings of entities or relations. predict(X[, from_idx]) Predict the scores of triples using a trained embedding model. _fn(e_s, e_p, e_o) The scoring function of the model. _initialize_parameters() Initialize parameters of the model. _get_model_loss(dataset_iterator) Get the current loss including loss due to regularization. get_embedding_model_params(output_dict) Save the model parameters in the dictionary. restore_model_params(in_dict) Load the model parameters from the input dictionary. _save_trained_params() After model fitting, save all the trained parameters in trained_model_params in some order. _load_model_from_trained_params() Load the model from trained params. _initialize_early_stopping() Initializes and creates evaluation graph for early stopping. _perform_early_stopping_test(epoch) Performs regular validation checks and stop early if the criteria is achieved. configure_evaluation_protocol([config]) Set the configuration for evaluation set_filter_for_eval() Configures to use filter _initialize_eval_graph([mode]) Initialize the evaluation graph. end_evaluation() End the evaluation and close the Tensorflow session. __init__(k=100, eta=2, epochs=100, batches_count=100, seed=0, embedding_model_params={}, optimizer='adam', optimizer_params={'lr': 0.0005}, loss='nll', loss_params={}, regularizer=None, regularizer_params={}, initializer='xavier', initializer_params={'uniform': False}, verbose=False) Initialize an EmbeddingModel Also creates a new Tensorflow session for training. Parameters: k (int) – Embedding space dimensionality. eta (int) – The number of negatives that must be generated at runtime during training for each positive. epochs (int) – The iterations of the training loop. batches_count (int) – The number of batches in which the training set must be split during the training loop. seed (int) – The seed used by the internal random numbers generator. embedding_model_params (dict) – Model-specific hyperparams, passed to the model as a dictionary. Refer to model-specific documentation for details. optimizer (string) – The optimizer used to minimize the loss function. Choose between ‘sgd’, ‘adagrad’, ‘adam’, ‘momentum’. optimizer_params (dict) – Arguments specific to the optimizer, passed as a dictionary. Supported keys: ’lr’ (float): learning rate (used by all the optimizers). Default: 0.1. ’momentum’ (float): learning momentum (only used when optimizer=momentum). Default: 0.9. Example: optimizer_params={'lr': 0.01} loss (string) – The type of loss function to use during training. pairwise the model will use pairwise margin-based loss function. nll the model will use negative loss likelihood. absolute_margin the model will use absolute margin likelihood. self_adversarial the model will use adversarial sampling loss function. multiclass_nll the model will use multiclass nll loss. 
Switch to multiclass loss defined in [aC15] by passing 'corrupt_sides' as ['s','o'] to embedding_model_params. To use loss defined in [KBK17] pass 'corrupt_sides' as 'o' to embedding_model_params.

loss_params (dict) – Dictionary of loss-specific hyperparameters. See loss functions documentation for additional details. Example: loss_params={'margin': 1} if loss='pairwise'.

regularizer (string) – The regularization strategy to use with the loss function.
None: the model will not use any regularizer (default)
LP: the model will use L1, L2 or L3 based on the value of regularizer_params['p'] (see below).

regularizer_params (dict) – Dictionary of regularizer-specific hyperparameters. See the regularizers documentation for additional details. Example: regularizer_params={'lambda': 1e-5, 'p': 2} if regularizer='LP'.

initializer (string) – The type of initializer to use.
normal: The embeddings will be initialized from a normal distribution
uniform: The embeddings will be initialized from a uniform distribution
xavier: The embeddings will be initialized using xavier strategy (default)

initializer_params (dict) – Dictionary of initializer-specific hyperparameters. See the initializer documentation for additional details. Example: initializer_params={'mean': 0, 'std': 0.001} if initializer='normal'.

verbose (bool) – Verbose mode.

fit(X, early_stopping=False, early_stopping_params={})

Train an EmbeddingModel (with optional early stopping). The model is trained on a training set X using the training protocol described in [TWR+16].

Parameters:
X (ndarray (shape [n, 3]) or object of AmpligraphDatasetAdapter) – Numpy array of training triples OR handle of Dataset adapter which would help retrieve data.
early_stopping (bool) – Flag to enable early stopping (default: False)
early_stopping_params (dictionary) – Dictionary of hyperparameters for the early stopping heuristics. The following string keys are supported:
'x_valid': ndarray (shape [n, 3]) or object of AmpligraphDatasetAdapter : Numpy array of validation triples OR handle of Dataset adapter which would help retrieve data.
'criteria': string : criteria for early stopping: 'hits10', 'hits3', 'hits1' or 'mrr' (default).
'x_filter': ndarray, shape [n, 3] : Positive triples to use as filter if a 'filtered' early stopping criteria is desired (i.e. filtered-MRR if 'criteria': 'mrr'). Note this will affect training time (no filter by default). If the filter has already been set in the adapter, pass True.
'burn_in': int : Number of epochs to pass before kicking in early stopping (default: 100).
'check_interval': int : Early stopping interval after burn-in (default: 10).
'stop_interval': int : Stop if criteria is performing worse over n consecutive checks (default: 3).
'corruption_entities': List of entities to be used for corruptions. If 'all', it uses all entities (default: 'all').
'corrupt_side': Specifies which side to corrupt. 's', 'o', 's+o' (default).
Example: early_stopping_params={'x_valid': X['valid'], 'criteria': 'mrr'}

get_embeddings(entities, embedding_type='entity')

Get the embeddings of entities or relations.

Note: Use ampligraph.utils.create_tensorboard_visualizations() to visualize the embeddings with TensorBoard.

Parameters:
entities (array-like, dtype=int, shape=[n]) – The entities (or relations) of interest. Elements of the vector must be the original string literals, and not internal IDs.
embedding_type (string) – If 'entity', the entities argument will be considered as a list of knowledge graph entities (i.e. nodes).
If set to ‘relation’, they will be treated as relation types instead (i.e. predicates). embeddings – An array of k-dimensional embeddings. ndarray, shape [n, k] predict(X, from_idx=False) Predict the scores of triples using a trained embedding model. The function returns raw scores generated by the model. Note To obtain probability estimates, use a logistic sigmoid: >>> model.fit(X) >>> y_pred = model.predict(np.array([['f', 'y', 'e'], ['b', 'y', 'd']])) >>> print(y_pred) [-0.13863425, -0.09917116] >>> from scipy.special import expit >>> expit(y_pred) array([0.4653968 , 0.47522753], dtype=float32) Parameters: X (ndarray, shape [n, 3]) – The triples to score. from_idx (bool) – If True, will skip conversion to internal IDs. (default: False). scores_predict – The predicted scores for input triples X. ndarray, shape [n] _fn(e_s, e_p, e_o) The scoring function of the model. Assigns a score to a list of triples, with a model-specific strategy. Triples are passed as lists of subject, predicate, object embeddings. This function must be overridden by every model to return corresponding score. Parameters: e_s (Tensor, shape [n]) – The embeddings of a list of subjects. e_p (Tensor, shape [n]) – The embeddings of a list of predicates. e_o (Tensor, shape [n]) – The embeddings of a list of objects. score – The operation corresponding to the scoring function. TensorFlow operation _initialize_parameters() Initialize parameters of the model. This function creates and initializes entity and relation embeddings (with size k). If the graph is large, then it loads only the required entity embeddings (max:batch_size*2) and all relation embeddings. Overload this function if the parameters needs to be initialized differently. _get_model_loss(dataset_iterator) Get the current loss including loss due to regularization. This function must be overridden if the model uses combination of different losses(eg: VAE). Parameters: dataset_iterator (tf.data.Iterator) – Dataset iterator. loss – The loss value that must be minimized. tf.Tensor get_embedding_model_params(output_dict) Save the model parameters in the dictionary. Parameters: output_dict (dictionary) – Dictionary of saved params. It’s the duty of the model to save all the variables correctly, so that it can be used for restoring later. restore_model_params(in_dict) Load the model parameters from the input dictionary. Parameters: in_dict (dictionary) – Dictionary of saved params. It’s the duty of the model to load the variables correctly. _save_trained_params() After model fitting, save all the trained parameters in trained_model_params in some order. The order would be useful for loading the model. This method must be overridden if the model has any other parameters (apart from entity-relation embeddings). _load_model_from_trained_params() Load the model from trained params. While restoring make sure that the order of loaded parameters match the saved order. It’s the duty of the embedding model to load the variables correctly. This method must be overridden if the model has any other parameters (apart from entity-relation embeddings) This function also set’s the evaluation mode to do lazy loading of variables based on the number of distinct entities present in the graph. _initialize_early_stopping() Initializes and creates evaluation graph for early stopping. _perform_early_stopping_test(epoch) Performs regular validation checks and stop early if the criteria is achieved. Parameters: epoch (int) – current training epoch. 
stopped – Flag to indicate if the early stopping criteria is achieved. bool configure_evaluation_protocol(config=None) Set the configuration for evaluation Parameters: config (dictionary) – Dictionary of parameters for evaluation configuration. Can contain following keys: corruption_entities: List of entities to be used for corruptions. If all, it uses all entities (default: all) corrupt_side: Specifies which side to corrupt. s, o, s+o (default) default_protocol: Boolean flag to indicate whether to use default protocol for evaluation. This computes scores for corruptions of subjects and objects and ranks them separately. This could have been done by evaluating s and o separately and then ranking but it slows down the performance. Hence this mode is used where s+o corruptions are generated at once but ranked separately for speed up (default: False). set_filter_for_eval() Configures to use filter _initialize_eval_graph(mode='test') Initialize the evaluation graph. Parameters: mode (string) – Indicates which data generator to use. end_evaluation() End the evaluation and close the Tensorflow session.
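Since EmbeddingModel is abstract, the methods above are normally used through one of its concrete subclasses (e.g. ComplEx, TransE, DistMult). A minimal end-to-end sketch with ComplEx is shown below, using only arguments documented on this page; the toy triples and hyperparameter values are illustrative, not recommendations.

```python
import numpy as np
from ampligraph.latent_features import ComplEx  # a concrete subclass of EmbeddingModel

X_train = np.array([['a', 'likes', 'b'],
                    ['b', 'likes', 'c'],
                    ['c', 'friendOf', 'a'],
                    ['a', 'friendOf', 'c'],
                    ['b', 'friendOf', 'a']])
X_valid = np.array([['c', 'likes', 'a']])

model = ComplEx(k=50, eta=2, epochs=100, batches_count=1, seed=0,
                optimizer='adam', optimizer_params={'lr': 5e-4},
                loss='nll',
                regularizer='LP', regularizer_params={'lambda': 1e-5, 'p': 2},
                verbose=False)

# Train with the optional early-stopping heuristic described in fit().
model.fit(X_train, early_stopping=True,
          early_stopping_params={'x_valid': X_valid, 'criteria': 'mrr', 'burn_in': 50})

# Raw scores for unseen triples; apply scipy.special.expit() for probability-like values (see predict()).
print(model.predict(np.array([['a', 'likes', 'c'], ['b', 'likes', 'a']])))

# Embeddings for entities and relations (see get_embeddings()).
print(model.get_embeddings(['a', 'b'], embedding_type='entity').shape)
print(model.get_embeddings(['likes'], embedding_type='relation').shape)
```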
Experiment-HEP | arXiv:1012.4031

Search for Pair Production of First-Generation Scalar Leptoquarks in pp Collisions at sqrt(s) = 7 TeV

Abstract: A search for pair production of first-generation scalar leptoquarks is performed in the final state containing two electrons and two jets using proton-proton collision data at sqrt(s)=7 TeV. The data sample used corresponds to an integrated luminosity of 33 inverse picobarns collected with the CMS detector at the CERN LHC. The number of observed events is in good agreement with the predictions for the standard model background processes, and an upper limit is set on the leptoquark pair production cross section times beta^2 as a function of the leptoquark mass, where beta is the branching fraction of the leptoquark decay to an electron and a quark. A 95% confidence level lower limit is set on the mass of a first-generation scalar leptoquark at 384 GeV for beta=1, which is the most stringent direct limit to date.
## 1/24/2012

### (TIP) Maple 15 - the method of the matrix differential

(TIP) To differentiate a matrix elementwise in the Maple 15 math tool, you should use the "map" function, as in the example below. Thank you. ^^
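The example image from the original post is not available here. In Maple the idea is to map diff over the Matrix (e.g. map(diff, A, x)); a rough Python/SymPy analogue of the same elementwise-differentiation idea is sketched below (the sample matrix and names are my own, not from the post).

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[x**2, sp.sin(x)],
               [sp.exp(x), 1 / x]])

# Differentiate every entry of the matrix with respect to x (elementwise "map").
dA = A.applyfunc(lambda entry: sp.diff(entry, x))
print(dA)   # Matrix([[2*x, cos(x)], [exp(x), -1/x**2]])
```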
Outlook: Cohen & Steers Quality Income Realty Fund Inc Common Shares is assigned short-term Ba1 & long-term Ba1 estimated rating.
Dominant Strategy: Hold
Time series to forecast n: 24 Jan 2023 for (n+4 weeks)
Methodology: Modular Neural Network (Market Volatility Analysis)

## Abstract

Cohen & Steers Quality Income Realty Fund Inc Common Shares prediction model is evaluated with Modular Neural Network (Market Volatility Analysis) and Statistical Hypothesis Testing1,2,3,4 and it is concluded that the RQI stock is predictable in the short/long term. According to price forecasts for the (n+4 weeks) period, the dominant strategy among neural network is: Hold

## Key Points

1. What is neural prediction?
2. Can neural networks predict stock market?
3. What is Markov decision process in reinforcement learning?

## RQI Target Price Prediction Modeling Methodology

We consider Cohen & Steers Quality Income Realty Fund Inc Common Shares Decision Process with Modular Neural Network (Market Volatility Analysis) where A is the set of discrete actions of RQI stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4

F(Statistical Hypothesis Testing)5,6,7 =

$$\begin{pmatrix} p_{a1} & p_{a2} & \dots & p_{1n} \\ \vdots & & & \\ p_{j1} & p_{j2} & \dots & p_{jn} \\ \vdots & & & \\ p_{k1} & p_{k2} & \dots & p_{kn} \\ \vdots & & & \\ p_{n1} & p_{n2} & \dots & p_{nn} \end{pmatrix} \times R(\text{Modular Neural Network (Market Volatility Analysis)}) \times S(n) :\rightarrow (n+4\ \text{weeks}) \quad \sum_{i=1}^{n} a_i$$

where:
n: Time series to forecast
p: Price signals of RQI stock
j: Nash equilibria (Neural Network)
k: Dominated move
a: Best response for target price

For further technical information on how our model works, we invite you to visit the article below:

How do AC Investment Research machine learning (predictive) algorithms actually work?

## RQI Stock Forecast (Buy or Sell) for (n+4 weeks)

Sample Set: Neural Network
Stock/Index: RQI Cohen & Steers Quality Income Realty Fund Inc Common Shares
Time series to forecast n: 24 Jan 2023 for (n+4 weeks)

According to price forecasts for the (n+4 weeks) period, the dominant strategy among neural network is: Hold

(The accompanying interactive chart plots likelihood (%) on the x axis, potential impact (%) on the y axis, and technical analysis (%) on a grey-to-black z axis; higher percentages indicate a more likely event, a larger potential price deviation, and a stronger technical signal, respectively.)

## IFRS Reconciliation Adjustments for Cohen & Steers Quality Income Realty Fund Inc Common Shares

1. An entity's documentation of the hedging relationship includes how it will assess the hedge effectiveness requirements, including the method or methods used. The documentation of the hedging relationship shall be updated for any changes to the methods (see paragraph B6.4.17).
2. An entity shall apply this Standard for annual periods beginning on or after 1 January 2018. Earlier application is permitted. If an entity elects to apply this Standard early, it must disclose that fact and apply all of the requirements in this Standard at the same time (but see also paragraphs 7.1.2, 7.2.21 and 7.3.2). It shall also, at the same time, apply the amendments in Appendix C.
3. The credit risk on a financial instrument is considered low for the purposes of paragraph 5.5.10, if the financial instrument has a low risk of default, the borrower has a strong capacity to meet its contractual cash flow obligations in the near term and adverse changes in economic and business conditions in the longer term may, but will not necessarily, reduce the ability of the borrower to fulfil its contractual cash flow obligations. Financial instruments are not considered to have low credit risk when they are regarded as having a low risk of loss simply because of the value of collateral and the financial instrument without that collateral would not be considered low credit risk. Financial instruments are also not considered to have low credit risk simply because they have a lower risk of default than the entity's other financial instruments or relative to the credit risk of the jurisdiction within which an entity operates.
4. If changes are made in addition to those changes required by interest rate benchmark reform to the financial asset or financial liability designated in a hedging relationship (as described in paragraphs 5.4.6–5.4.8) or to the designation of the hedging relationship (as required by paragraph 6.9.1), an entity shall first apply the applicable requirements in this Standard to determine if those additional changes result in the discontinuation of hedge accounting. If the additional changes do not result in the discontinuation of hedge accounting, an entity shall amend the formal designation of the hedging relationship as specified in paragraph 6.9.1.

*International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS.

## Conclusions

Cohen & Steers Quality Income Realty Fund Inc Common Shares is assigned short-term Ba1 & long-term Ba1 estimated rating. Cohen & Steers Quality Income Realty Fund Inc Common Shares prediction model is evaluated with Modular Neural Network (Market Volatility Analysis) and Statistical Hypothesis Testing1,2,3,4 and it is concluded that the RQI stock is predictable in the short/long term. According to price forecasts for the (n+4 weeks) period, the dominant strategy among neural network is: Hold

### RQI Cohen & Steers Quality Income Realty Fund Inc Common Shares Financial Analysis*

| Rating | Short-Term | Long-Term Senior |
| --- | --- | --- |
| Outlook* | Ba1 | Ba1 |
| Income Statement | Ba1 | Baa2 |
| Balance Sheet | Baa2 | Baa2 |
| Leverage Ratios | B1 | B2 |
| Cash Flow | Baa2 | Ba1 |
| Rates of Return and Profitability | C | Baa2 |

*Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand financial state of the company?

### Prediction Confidence Score

Trust metric by Neural Network: 84 out of 100 with 610 signals.

## References

1. D. Bertsekas. Dynamic programming and optimal control. Athena Scientific, 1995.
2. Gentzkow M, Kelly BT, Taddy M. 2017. Text as data. NBER Work. Pap. 23276
3. Alpaydin E. 2009. Introduction to Machine Learning. Cambridge, MA: MIT Press
4. Li L, Chu W, Langford J, Moon T, Wang X. 2012.
An unbiased offline evaluation of contextual bandit algorithms with generalized linear models. In Proceedings of 4th ACM International Conference on Web Search and Data Mining, pp. 297–306. New York: ACM
5. Firth JR. 1957. A synopsis of linguistic theory 1930–1955. In Studies in Linguistic Analysis (Special Volume of the Philological Society), ed. JR Firth, pp. 1–32. Oxford, UK: Blackwell
6. L. Prashanth and M. Ghavamzadeh. Actor-critic algorithms for risk-sensitive MDPs. In Proceedings of Advances in Neural Information Processing Systems 26, pages 252–260, 2013.
7. V. Konda and J. Tsitsiklis. Actor-Critic algorithms. In Proceedings of Advances in Neural Information Processing Systems 12, pages 1008–1014, 2000.

## Frequently Asked Questions

Q: What is the prediction methodology for RQI stock?
A: RQI stock prediction methodology: We evaluate the prediction models Modular Neural Network (Market Volatility Analysis) and Statistical Hypothesis Testing.

Q: Is RQI stock a buy or sell?
A: The dominant strategy among neural network is to Hold RQI Stock.

Q: Is Cohen & Steers Quality Income Realty Fund Inc Common Shares stock a good investment?
A: The consensus rating for Cohen & Steers Quality Income Realty Fund Inc Common Shares is Hold and is assigned short-term Ba1 & long-term Ba1 estimated rating.

Q: What is the consensus rating of RQI stock?
A: The consensus rating for RQI is Hold.

Q: What is the prediction period for RQI stock?
A: The prediction period for RQI is (n+4 weeks).
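The page above does not disclose its actual model, so the sketch below is only a toy illustration of the general idea in the methodology section: forecast the (n+4 weeks) return with a small neural network and map the forecast and its volatility to a Buy/Hold/Sell label. The data, network size, and thresholds are all hypothetical choices of mine, not the site's method.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Hypothetical weekly closing prices; in practice these would be the RQI quote history.
prices = 12 * np.exp(np.cumsum(rng.normal(0.001, 0.02, 300)))
returns = np.diff(np.log(prices))

# Features: the last 8 weekly log-returns; target: the log-return over the next 4 weeks.
X = np.array([returns[i - 8:i] for i in range(8, len(returns) - 4)])
y = np.array([returns[i:i + 4].sum() for i in range(8, len(returns) - 4)])

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0).fit(X, y)

forecast = model.predict(returns[-8:].reshape(1, -1))[0]   # predicted 4-week log-return
vol = returns[-26:].std() * np.sqrt(4)                     # rough 4-week volatility estimate

# Toy decision rule: act only if the forecast clears a volatility-scaled band.
signal = "Buy" if forecast > 0.5 * vol else "Sell" if forecast < -0.5 * vol else "Hold"
print(f"forecast={forecast:+.3f}, vol={vol:.3f}, dominant strategy: {signal}")
```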
auto_math_text
web
Experiment-HEP arXiv:1101.0806 Search for $W{^\prime}\rightarrow tb$ resonances with left- and right-handed couplings to fermions Published in: Phys.Rev.Lett. Pages: 7 Abstract: We present a search for the production of a heavy gauge boson, W', that decays to third-generation quarks, by the D0 Collaboration in ppbar collisions at sqrt(s)= 1.96 TeV. We set 95% confidence level upper limits on the production cross section times branching fraction. For the first time, we set limits for arbitrary combinations of left- and right-handed couplings of the W' boson to fermions. For couplings with the same strength as the standard model W boson, we set the following limits for M(W') > m(nu_R): M(W')>863 GeV for purely left-handed couplings, M(W')>885 GeV for purely right-handed couplings, and M(W')>916 GeV if both left- and right-handed couplings are present. The limit for right-handed couplings improves for M(W') < m(nu_R) to M(W')>890 GeV.
auto_math_text
web
Outlook: RAS TECHNOLOGY HOLDINGS LIMITED is assigned short-term Ba1 & long-term Ba1 estimated rating. Dominant Strategy : Sell Time series to forecast n: 08 Feb 2023 for (n+8 weeks) Methodology : Deductive Inference (ML) ## Abstract RAS TECHNOLOGY HOLDINGS LIMITED prediction model is evaluated with Deductive Inference (ML) and Logistic Regression1,2,3,4 and it is concluded that the RTH stock is predictable in the short/long term. According to price forecasts for (n+8 weeks) period, the dominant strategy among neural network is: Sell ## Key Points 1. What statistical methods are used to analyze data? 2. Reaction Function ## RTH Target Price Prediction Modeling Methodology We consider RAS TECHNOLOGY HOLDINGS LIMITED Decision Process with Deductive Inference (ML) where A is the set of discrete actions of RTH stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4 F(Logistic Regression)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Deductive Inference (ML)) X S(n):→ (n+8 weeks) $\begin{array}{l}\int {r}^{s}\mathrm{rs}\end{array}$ n:Time series to forecast p:Price signals of RTH stock j:Nash equilibria (Neural Network) k:Dominated move a:Best response for target price For further technical information as per how our model work we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work? ## RTH Stock Forecast (Buy or Sell) for (n+8 weeks) Sample Set: Neural Network Stock/Index: RTH RAS TECHNOLOGY HOLDINGS LIMITED Time series to forecast n: 08 Feb 2023 for (n+8 weeks) According to price forecasts for (n+8 weeks) period, the dominant strategy among neural network is: Sell X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.) Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.) Z axis (Grey to Black): *Technical Analysis% ## IFRS Reconciliation Adjustments for RAS TECHNOLOGY HOLDINGS LIMITED 1. If the group of items does not have any offsetting risk positions (for example, a group of foreign currency expenses that affect different line items in the statement of profit or loss and other comprehensive income that are hedged for foreign currency risk) then the reclassified hedging instrument gains or losses shall be apportioned to the line items affected by the hedged items. This apportionment shall be done on a systematic and rational basis and shall not result in the grossing up of the net gains or losses arising from a single hedging instrument. 2. If the group of items does not have any offsetting risk positions (for example, a group of foreign currency expenses that affect different line items in the statement of profit or loss and other comprehensive income that are hedged for foreign currency risk) then the reclassified hedging instrument gains or losses shall be apportioned to the line items affected by the hedged items. This apportionment shall be done on a systematic and rational basis and shall not result in the grossing up of the net gains or losses arising from a single hedging instrument. 3. 
If subsequently an entity reasonably expects that the alternative benchmark rate will not be separately identifiable within 24 months from the date the entity designated it as a non-contractually specified risk component for the first time, the entity shall cease applying the requirement in paragraph 6.9.11 to that alternative benchmark rate and discontinue hedge accounting prospectively from the date of that reassessment for all hedging relationships in which the alternative benchmark rate was designated as a noncontractually specified risk component. 4. If, in applying paragraph 7.2.44, an entity reinstates a discontinued hedging relationship, the entity shall read references in paragraphs 6.9.11 and 6.9.12 to the date the alternative benchmark rate is designated as a noncontractually specified risk component for the first time as referring to the date of initial application of these amendments (ie the 24-month period for that alternative benchmark rate designated as a non-contractually specified risk component begins from the date of initial application of these amendments). *International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS. ## Conclusions RAS TECHNOLOGY HOLDINGS LIMITED is assigned short-term Ba1 & long-term Ba1 estimated rating. RAS TECHNOLOGY HOLDINGS LIMITED prediction model is evaluated with Deductive Inference (ML) and Logistic Regression1,2,3,4 and it is concluded that the RTH stock is predictable in the short/long term. According to price forecasts for (n+8 weeks) period, the dominant strategy among neural network is: Sell ### RTH RAS TECHNOLOGY HOLDINGS LIMITED Financial Analysis* Rating Short-Term Long-Term Senior Outlook*Ba1Ba1 Income StatementCC Balance SheetB1Baa2 Leverage RatiosB2Ba2 Cash FlowCBaa2 Rates of Return and ProfitabilityB2C *Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand financial state of the company? ### Prediction Confidence Score Trust metric by Neural Network: 78 out of 100 with 620 signals. ## References 1. E. Altman. Constrained Markov decision processes, volume 7. CRC Press, 1999 2. Breiman L. 2001a. Random forests. Mach. Learn. 45:5–32 3. Bertsimas D, King A, Mazumder R. 2016. Best subset selection via a modern optimization lens. Ann. Stat. 44:813–52 4. Chernozhukov V, Chetverikov D, Demirer M, Duflo E, Hansen C, et al. 2018a. Double/debiased machine learning for treatment and structural parameters. Econom. J. 21:C1–68 5. M. J. Hausknecht and P. Stone. Deep recurrent Q-learning for partially observable MDPs. CoRR, abs/1507.06527, 2015 6. Andrews, D. W. K. W. Ploberger (1994), "Optimal tests when a nuisance parameter is present only under the alternative," Econometrica, 62, 1383–1414. 7. Athey S, Imbens G. 2016. Recursive partitioning for heterogeneous causal effects. PNAS 113:7353–60 Frequently Asked QuestionsQ: What is the prediction methodology for RTH stock? 
A: RTH stock prediction methodology: We evaluate the prediction models Deductive Inference (ML) and Logistic Regression.
Q: Is RTH stock a buy or sell?
A: The dominant strategy among neural network is to Sell RTH Stock.
Q: Is RAS TECHNOLOGY HOLDINGS LIMITED stock a good investment?
A: The consensus rating for RAS TECHNOLOGY HOLDINGS LIMITED is Sell and is assigned short-term Ba1 & long-term Ba1 estimated rating.
Q: What is the consensus rating of RTH stock?
A: The consensus rating for RTH is Sell.
Q: What is the prediction period for RTH stock?
A: The prediction period for RTH is (n+8 weeks).
auto_math_text
web
# Quantum Bayesian Networks ## March 31, 2012 ### Dogs is people too, and bluffing a pride of lions Filed under: Uncategorized — rrtucci @ 11:57 am This is very technical and nerdy, but I find it really cool. Recently, Carlen and Lieb proved a very nice inequality for entanglement. Check out Bounds for Entanglement via an Extension of Strong Subadditivity of Entropy by Eric A. Carlen, Elliott H. Lieb Let me try to summarize their main result. Consider a bipartite density matrix $\rho_{12}$, and define $\rho_1 = {\rm tr_2}\rho_{12}$ and the same with 1 and 2 swapped. Suppose $S_{12}$ is the von Neumann entropy of 12, $S_{1}$ is that of 1 and $S_{2}$ is that of 2. Call the classical counterparts of these entropies $H_{12}$, $H_{1}$ and $H_{2}$. Call $S_{2|1} = S_{12} - S_1$ (respectively, $H_{2|1} = H_{12} - H_1$) the quantum (resp., classical) conditional spread (or conditional variance) of 2 given 1. One can show that $H_{2|1}\geq 0$, $H_{1|2}\geq 0$ so classically, conditional spreads must be positive (or zero). However, in quantum mechanics such spreads can be negative. The Araki-Lieb inequality $S_{12}\geq |S_1-S_2|$ says that $S_{2|1}\geq -S_2$, $S_{1|2}\geq -S_1$. Now, having a negative conditional spread is a bit of a dog, because spreads are not supposed to be negative. The new Carlen/Lieb inequality teaches us that dogs can count too. If $E$ is entanglement (either entanglement of formation or squashed entanglement), then, according to Carlen/Lieb, $E\geq max(-S_{1|2}, -S_{2|1}, 0)$ So a dog (i.e, a negative conditional spread $S_{1|2}<0$) forces the entanglement to be greater than zero by $-S_{1|2}$. An example of a highly influential dog. P.S. I recently wrote some email to Profs. Lieb and Carlen asking them something about their paper. I felt infinitely dumb in their email presence, like a mouse in the presence of lions (As you probably know, The Lion and the Mouse is a beautiful fable by Aesop.) But I believe I successfully muddled and bluffed my way through the conversation. Amazing but true, it is possible to bluff a pride of lions. If you don’t believe me, watch this amazing YouTube video, entitled “BBC – Men stealing meat from lions”. In the video, 3 extremely courageous and also extremely foolish African guys, with scant weapons other than their daunting audacity, intentionally walk right into the middle of a pride of lions that is having dinner. 1. Congrats on your lion encounter skills 🙂 Will have to ponder this for a while before I could attempt a similar feet. This is an amazing paper. Looks like we can add entanglement to all the other things that won’t work anymore once our universe has run its course (http://en.wikipedia.org/wiki/Heat_death). Just love how new experimental results and theoretical insights like this are demystifying entanglement. Comment by Henning Dekant — March 31, 2012 @ 2:37 pm 2. It seems to me that a lot of people are very confused about Hilbert space. They continue to make up, in never ending variations, particular special case results which have to do with highly specialized sub-spaces of general manifolds. First year math fact: all Hilbert spaces over the complex field are isomorphic, in a real sense there is only one Hilbert space of dimension N. Second fact: couple a system of N quantum degrees of freedom to M and you get an N by M tensor product system of which the maximal Hilbert space is of dimension N by M. Wow! 
Really surprising 🙂 Why then do Physicists continue to waste their time penning one paper after another pretending there is something deep and special to be found in “entanglement”. They fail to understand that the particular special cases they examine have to do with sub-manifolds of a more general structure which are tied to whatever symmetry group governs the interaction between the chosen systems. The problem is *not* entanglement. Entanglement is the general condition of any interacting system. The real problem to understand is “dis-entanglement”. Under what conditions do systems begin to behave in a “dis-entangled” manner. Bohr was wise enough to know this and said as much when he spoke of isolated quantum systems being describable in “quantum terms”, themselves being probed by weakly interacting systems describable in “classical terms”. Implicitly, the isolated quantum system is very special “dis-entangled state. When you manage to create those, you can then weakly entangle them to understand the elementary interactions. This is the right way to understand the physics. Now people turn things upside down and are deeply mystified about how to create an entangled system! The point of all this is quantum information theory is a bit of a canard. The information you get is from an actual experiment, described in Bayesian fashion via the conditional measurement correlation Pr (classical data | quantum state). This you can invert to get Pr ( quantum state | classical data ). It is remarkable that Physicists don’t get this! Every field known to Man (and I mean *all* of them, from speech recognition, through bioinformatics and machine learning, through classical communication theory and signal processing) understands this. Only Physicists are special in their supreme ignorance of such matters. Already, in (1994), I published a general analysis along these lines (including explicit discussion of the dynamical sub-group issue). The paper is at: K.R.W. Jones (1994), Fundamental limits upon the measurement of state-vectors, Phys. Rev. A50, 3682-3699. After some 18 years, I have formed the distinct impression that my physicist audience may not have understood the basis of the paper for the above reasons. The general limit has already been established and published. The quest for a “quantum information theory” and “quantum computation theory” etc etc is essentially a Grail Quest to make up for a lack of understanding among physicists of what it means to actually measure something. I claim … measurement = inference There is something you know which is correlated to something you don’t know. You use a knowledge of the statistics to infer the unknown. In statistics you call the “thing you can’t see” a LATENT VARIABLE and you use known observable data to infer the values of that. This is the basis of quantum inference (a word I coined 23 years ago). Like I said… EVERY, and I mean EVERY field outside of modern physics has fully grasped this point. It is only the Physicists who are idiots. The really big BREAKTHROUGH in physics happens when some arrogant up-start re-labels the word LATENT and wins a Nobel prize 🙂 CLUE: Latent variable = Wave function (which happens to be non-local). Non-local hidden variables theories are not excluded. ANOTHER BIG CLUE: Latent Variable Hidden Variable Can you hear those pennies drop? Any takers for a genuinely New Physics instead of this sham Quantum XXX caper??? Comment by Kingsley Jones — June 30, 2012 @ 2:32 am 3. 
Kingsley, don’t think there is anything wrong with the view that inference = measurement and that the pure, isolated disentangled state is the special exception to the rule. But with regards to this paper this is exactly were the rubber hits the road, isn’t it? The intriguing aspect of pure isolated quantum system is that they exhibit an irreversible loss of information when measured. Something that very much ties them to the 2nd law of thermodynamics. In that light I find the Carlen/Lieb inequality rather instructive. Comment by Henning Dekant — June 30, 2012 @ 3:45 am 4. Kingsley, I think Quantum Information Theory is a giant set that contains many OVERLAPPING subsets, e.g., inference, entanglement, channel capacity, compression, error correction, noise, programming of quantum computers and the computational complexity of such programs, quantum bayesian networks, etc. ,etc. The same is true with classical information theory. In fact, for each of the quantum topics presented above there is a classical limit (in the case of entanglement, I would say it’s correlations conditioned on a latent variable) Comment by rrtucci — June 30, 2012 @ 1:39 pm 5. If I am not mistaken this paper claims that this negative conditional spread could be used to construct some sort of quantum Maxwell demon. Are they right or does this involve some sleight of hand? Pop science write-up can be found here. Comment by Henning Dekant — August 2, 2012 @ 5:43 pm 6. […] This is very technical and nerdy, but I find it really cool. Recently, Carlen and Lieb proved a very nice inequality for entanglement. Check out Bounds for Entanglement via an Extension of Strong S…  […] Pingback by Dogs is people too, and bluffing a pride of lions | Quantum Computing | Scoop.it — August 2, 2012 @ 5:45 pm 7. Hi Henning, I haven’t read the Funo et al paper carefully but it looks very interesting. It will take me a while before I can digest it. The connection between energy/work and quantum SIT (Shannon information theory) is certainly a fascinating topic. Maxwell’s demon & Landauer’s principle are great gedanken experiments on which to hone one’s understanding of that connection. The paper doesn’t explicitly use the tools of squashed entanglement or entanglement of formation, which are what the Carlen & Lieb inequality applies to. However, perhaps it can be extended so that it does use these tools. Comment by rrtucci — August 2, 2012 @ 6:27 pm Create a free website or blog at WordPress.com.
auto_math_text
web
PREPRINT # Exploring the Ability of HST WFC3 G141 to Uncover Trends in Populations of Exoplanet Atmospheres Through a Homogeneous Transmission Survey of 70 Gaseous Planets Billy Edwards, Quentin Changeat, Angelos Tsiaras, Kai Hou Yip, Ahmed F. Al-Refaie, Lara Anisman, Michelle F. Bieger, Amelie Gressier, Sho Shibata, Nour Skaf, Jeroen Bouwman, James Y-K. Cho, Masahiro Ikoma, Olivia Venot, Ingo Waldmann, Pierre-Olivier Lagage, Giovanna Tinetti Submitted on 1 November 2022 ## Abstract We present the analysis of the atmospheres of 70 gaseous extrasolar planets via transit spectroscopy with Hubble's Wide Field Camera 3 (WFC3). For over half of these, we statistically detect spectral modulation which our retrievals attribute to molecular species. Among these, we use Bayesian Hierarchical Modelling to search for chemical trends with bulk parameters. We use the extracted water abundance to infer the atmospheric metallicity and compare it to the planet's mass. We also run chemical equilibrium retrievals, fitting for the atmospheric metallicity directly. However, although previous studies have found evidence of a mass-metallicity trend, we find no such relation within our data. For the hotter planets within our sample, we find evidence for thermal dissociation of dihydrogen and water via the H${}^{-}$ opacity. We suggest that the general lack of trends seen across this population study could be due to i) the insufficient spectral coverage offered by HST WFC3 G141, ii) the lack of a simple trend across the whole population, iii) the essentially random nature of the target selection for this study or iv) a combination of all the above. We set out how we can learn from this vast dataset going forward in an attempt to ensure comparative planetology can be undertaken in the future with facilities such as JWST, Twinkle and Ariel. We conclude that a wider simultaneous spectral coverage is required as well as a more structured approach to target selection. ## Preprint Comment: Accepted for publication in ApJS Subject: Astrophysics - Earth and Planetary Astrophysics
auto_math_text
web
# Transposition (music)

Transposition example from Koch[1]. The melody on the first line is in the key of D, while the melody on the second line is identical except that it is a major third lower, in the key of B♭.

In music, transposition refers to the process, or operation, of moving a collection of notes (pitches or pitch classes) up or down in pitch by a constant interval.

The shifting of a melody, a harmonic progression or an entire musical piece to another key, while maintaining the same tone structure, i.e. the same succession of whole tones and semitones and remaining melodic intervals.
Musikalisches Lexicon, 879 (1865), Heinrich Christoph Koch (trans. Schuijer)[1]

For example, one might transpose an entire piece of music into another key. Similarly, one might transpose a tone row or an unordered collection of pitches such as a chord so that it begins on another pitch.

The transposition of a set A by n semitones is designated by Tn(A), representing the addition (mod 12) of an integer n to each of the pitch class integers of the set A.[1] Thus the set (A) consisting of 0-1-2 transposed by 5 semitones is 5-6-7 (T5(A)) since 0+5=5, 1+5=6, and 2+5=7.

## Four kinds of transposition

### Chromatic and scalar (diatonic) transposition

There are two different kinds of transposition, depending on whether one is measuring intervals according to the chromatic scale or some other scale.

In chromatic transposition one shifts every pitch in a collection of notes by a fixed number of semitones. For instance, if one transposes the pitches C4-E4-G4 upwards by four semitones, one obtains the pitches E4-G♯4-B4.

In scalar transposition one shifts every pitch in a collection by a fixed number of scale steps relative to some scale. For example, if one transposes the pitches C4-E4-G4 up by two steps relative to the familiar C major scale, one obtains the pitches E4-G4-B4. If one transposes the same pitches up by two steps relative to the F major scale, one obtains instead E4-G4-B♭4.

Scalar transposition is sometimes called diatonic transposition, but this term can be misleading, as it suggests transposition with respect to a diatonic scale. However, scalar transposition can occur with respect to any type of scale, not just the diatonic.

### Pitch and pitch class

There are two further kinds of transposition, by pitch interval or by pitch interval class, applied to pitches or pitch classes, respectively. Transposition may be applied to pitches or to pitch classes.[1] For example, the pitch A4, or 9, transposed by a major third, or the pitch interval 4: $9 + 4 = 13$, while that pitch class, 9, transposed by a major third, or the pitch class interval 4: $9 + 4 = 13 \equiv 1 \pmod{12}$.

## Sight transposition

Excerpt of the trumpet part of Symphony No. 9 of Antonín Dvořák, where sight transposition is required.

Although transpositions are usually written out, musicians are occasionally asked to transpose music "at sight", that is, to read the music in one key while playing in another. Musicians who play transposing instruments sometimes have to do this (for example when encountering an unusual transposition, such as clarinet in C), as well as singers' accompanists, since singers sometimes request a different key than the one printed in the music to better fit their vocal range. There are three basic techniques for teaching sight transposition: interval, clef, and numbers.

### Interval

First one determines the interval between the written key and the target key.
Then one imagines the notes up (or down) by the corresponding interval. A performer using this method may calculate each note individually, or group notes together (e.g. "a descending chromatic passage starting on F" might become a "descending chromatic passage starting on A" in the target key). ### Clef Clef transposition is routinely taught (among other places) in Belgium and France. One imagines a different clef and a different key signature than the ones printed. The change of clef is used so that the lines and spaces correspond to different notes than the lines and spaces of the original score. Seven clefs are used for this: treble (2nd line G-clef), bass (4th line F-clef), baritone (3rd line F-clef or 5th line C-clef, although in France and Belgium sight-reading exercises for this clef, as a preparation for clef transposition practice, are always printed with the 3rd line F-clef), and C-clefs on the four lowest lines; these allow any given staff position to correspond to each of the seven note names A through G. The signature is then adjusted for the actual accidental (natural, sharp or flat) one wants on that note. The octave may also have to be adjusted (this sort of practice ignores the conventional octave implication of the clefs), but this is a trivial matter for most musicians. ### Numbers Transposing by numbers means, one determines the scale degree of the written note (e.g. first, fourth, fifth, etc.) in the given key. The performer then plays the corresponding scale degree of the target chord. ## Transpositional equivalence Two musical objects are transpositionally equivalent if one can be transformed into another by transposition. It is similar to enharmonic equivalence and octave equivalence. In many musical contexts, transpositionally equivalent chords are thought to be similar. Transpositional equivalence is a feature of musical set theory. The terms transposition and transposition equivalence allow the concept to be discussed as both an operation and relation, an activity and a state of being. Compare with modulation and related key. Using integer notation and modulo 12, to transpose a pitch x by n semitones: $T^p_n (x) = x+n$ or $T^p_n (x) \rightarrow x+n$ For pitch class transposition by a pitch class interval: $T_n (x) = x+n \pmod{12}$ [2] ## Twelve-tone transposition Milton Babbitt defined the "transformation" of transposition within the twelve-tone technique as follows: By applying the transposition operator (T) to a [twelve-tone] set we will mean that every p of the set P is mapped homomorphically (with regard to order) into a T(p) of the set T(P) according to the following operation: $T_o(p_{i,j})=p_{i,j}+I_o$ where To is any integer 0-11 inclusive, where, of course, the To remains fixed for a given transposition. The + sign indicates ordinary transposition. [3] Allen Forte defines transposition so as to apply to unordered sets of other than twelve pitches: the addition mod 12 of any integer k in S to every integer p of P. thus giving, "12 transposed forms of P".[4] ## Fuzzy transposition Straus created the concept of fuzzy transposition, and fuzzy inversion, to express transposition as a voice-leading event, "the 'sending' of each element of a given PC set to its Tn-correspondent...[enabling] him to relate PC sets of two adjacent chords in terms of a transposition, even when not all of the 'voices' participated fully in the transpositional move.".[5] A transformation within voice-leading space rather than pitch-class space as in pitch class transposition.
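The operations described in the article are easy to state programmatically. The short Python sketch below is my own illustration, not part of the article; the MIDI-style pitch numbering, the scale representation, and the helper names are assumptions. It contrasts pitch-class transposition $T_n$, chromatic transposition of pitches, and scalar transposition within a chosen scale.

```python
# Pitch-class transposition: T_n(A) = {(x + n) mod 12 : x in A}
def T(pcs, n):
    return sorted((x + n) % 12 for x in pcs)

print(T([0, 1, 2], 5))                      # [5, 6, 7], i.e. T5 of the set 0-1-2

# Chromatic transposition: shift every pitch by a fixed number of semitones
def chromatic(pitches, semitones):
    return [p + semitones for p in pitches]

# Scalar transposition: shift every pitch by a fixed number of scale steps
def scalar(pitches, steps, scale):
    out = []
    for p in pitches:
        octave, pc = divmod(p, 12)
        degree = scale.index(pc)            # assumes the pitch lies in the scale
        new_degree = degree + steps
        out.append((octave + new_degree // len(scale)) * 12
                   + scale[new_degree % len(scale)])
    return out

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]            # C D E F G A B
F_MAJOR = [0, 2, 4, 5, 7, 9, 10]            # C D E F G A Bb (sorted pitch classes)

c_triad = [60, 64, 67]                      # C4, E4, G4 as MIDI note numbers
print(chromatic(c_triad, 4))                # [64, 68, 71] = E4, G#4, B4
print(scalar(c_triad, 2, C_MAJOR))          # [64, 67, 71] = E4, G4, B4
print(scalar(c_triad, 2, F_MAJOR))          # [64, 67, 70] = E4, G4, Bb4
```

The last three lines reproduce the article's chromatic and scalar examples for the C major triad.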
auto_math_text
web
Article
# Study of Isoscaling with Statistical Multifragmentation Models
06/2001; Source: arXiv
ABSTRACT Different statistical multifragmentation models have been used to study isoscaling, i.e. the factorization of the isotope ratios from two reactions into fugacity terms of proton and neutron number, $R_{21}(N,Z) = Y_2(N,Z)/Y_1(N,Z) = C\exp(aN + bZ)$. Even though the primary isotope distributions are quite different from the final distributions due to evaporation from the excited fragments, the values of $a$ and $b$ are not much affected by sequential decays. $a$ is shown to be mainly sensitive to the proton and neutron composition of the emitting source and may be used to study isospin-dependent properties in nuclear collisions such as the symmetry energy in the equation of state of asymmetric nuclear matter.
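To make the isoscaling relation concrete, the following sketch shows how the parameters $a$ and $b$ would typically be extracted: take the ratio of yields from the two reactions and perform a log-linear least-squares fit in N and Z. The yields below are entirely made up, purely for illustration.

```python
import numpy as np

# Hypothetical fragment yields Y1(N, Z) and Y2(N, Z) for a few isotopes.
# Columns: N, Z, Y1, Y2 (illustrative numbers only, not measured data)
data = np.array([
    [3, 3, 1.00e4, 1.35e4],
    [4, 3, 6.00e3, 1.10e4],
    [3, 4, 8.00e3, 9.00e3],
    [4, 4, 5.00e3, 7.60e3],
    [5, 4, 2.50e3, 5.10e3],
])
N, Z, Y1, Y2 = data.T

# ln R21 = ln C + a*N + b*Z  ->  ordinary least squares in (N, Z, 1)
y = np.log(Y2 / Y1)
X = np.column_stack([N, Z, np.ones_like(N)])
(a, b, lnC), *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"a = {a:.3f}, b = {b:.3f}, C = {np.exp(lnC):.3f}")
```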
auto_math_text
web
# Measurements of Wγ and Zγ production in pp collisions at √s=7 TeV with the ATLAS detector at the LHC
ATLAS Collaboration
83 Citations (Scopus)
## Abstract
The integrated and differential fiducial cross sections for the production of a W or Z boson in association with a high-energy photon are measured using pp collisions at √s=7 TeV. The analyses use a data sample with an integrated luminosity of 4.6 fb-1 collected by the ATLAS detector during the 2011 LHC data-taking period. Events are selected using leptonic decays of the W and Z bosons [W(eν,μν) and Z(e+e-, μ+μ-,νν¯)] with the requirement of an associated isolated photon. The data are used to test the electroweak sector of the Standard Model and search for evidence for new phenomena. The measurements are used to probe the anomalous WWγ, ZZγ, and Zγγ triple-gauge-boson couplings and to search for the production of vector resonances decaying to Zγ and Wγ. No deviations from Standard Model predictions are observed and limits are placed on anomalous triple-gauge-boson couplings and on the production of new vector meson resonances.
Original language: English
112003 Physical Review D - Particles, Fields, Gravitation and Cosmology 87 11
https://doi.org/10.1103/PhysRevD.87.112003
Published - 2013 June 4
• Nuclear and High Energy Physics
• Physics and Astronomy (miscellaneous)
auto_math_text
web
# Source code for stumpy.stump # STUMPY import logging import numpy as np from numba import njit, prange import numba from . import core, config from .aamp import aamp logger = logging.getLogger(__name__) @njit(fastmath=True) def _compute_diagonal( T_A, T_B, m, M_T, μ_Q, Σ_T_inverse, σ_Q_inverse, cov_a, cov_b, cov_c, cov_d, T_A_subseq_isfinite, T_B_subseq_isfinite, T_A_subseq_isconstant, T_B_subseq_isconstant, diags, diags_start_idx, diags_stop_idx, ρ, I, ignore_trivial, ): """ Compute (Numba JIT-compiled) and update the Pearson correlation, ρ, and I sequentially along individual diagonals using a single thread and avoiding race conditions Parameters ---------- T_A : ndarray The time series or sequence for which to compute the matrix profile T_B : ndarray The time series or sequence that will be used to annotate T_A. For every subsequence in T_A, its nearest neighbor in T_B will be recorded. m : int Window size M_T : ndarray Sliding mean of time series, T μ_Q : ndarray Mean of the query sequence, Q, relative to the current sliding window Σ_T_inverse : ndarray Inverse sliding standard deviation of time series, T σ_Q_inverse : ndarray Inverse standard deviation of the query sequence, Q, relative to the current sliding window cov_a : ndarray The first covariance term relating T_A[i + k + m - 1] and M_T_m_1[i + k] cov_b : ndarray The second covariance term relating T_B[i + m - 1] and μ_Q_m_1[i] cov_c : ndarray The third covariance term relating T_A[i + k - 1] and M_T_m_1[i + k] cov_d : ndarray The fourth covariance term relating T_B[i - 1] and μ_Q_m_1[i] μ_Q_m_1 : ndarray Mean of the query sequence, Q, relative to the current sliding window and using a window size of m-1 T_A_subseq_isfinite : ndarray A boolean array that indicates whether a subsequence in T_A contains a np.nan/np.inf value (False) T_B_subseq_isfinite : ndarray A boolean array that indicates whether a subsequence in T_B contains a np.nan/np.inf value (False) T_A_subseq_isconstant : ndarray A boolean array that indicates whether a subsequence in T_A is constant (True) T_B_subseq_isconstant : ndarray A boolean array that indicates whether a subsequence in T_B is constant (True) diags : ndarray The diagonal indices diags_start_idx : int The starting (inclusive) diagonal index diags_stop_idx : int The stopping (exclusive) diagonal index ρ : ndarray The Pearson correlations I : ndarray The matrix profile indices ignore_trivial : bool Set to True if this is a self-join. Otherwise, for AB-join, set this to False. Default is True. Returns ------- None Notes ----- DOI: 10.1007/s10115-017-1138-x \ <https://www.cs.ucr.edu/~eamonn/ten_quadrillion.pdf>__ See Section 4.5 The above reference outlines a general approach for traversing the distance matrix in a diagonal fashion rather than in a row-wise fashion. DOI: 10.1145/3357223.3362721 \ <https://www.cs.ucr.edu/~eamonn/public/GPU_Matrix_profile_VLDB_30DraftOnly.pdf>__ See Section 3.1 and Section 3.3 The above reference outlines the use of the Pearson correlation via Welford's centered sum-of-products along each diagonal of the distance matrix in place of the sliding window dot product found in the original STOMP method. 
""" n_A = T_A.shape[0] n_B = T_B.shape[0] m_inverse = 1.0 / m constant = (m - 1) * m_inverse * m_inverse # (m - 1)/(m * m) for diag_idx in range(diags_start_idx, diags_stop_idx): k = diags[diag_idx] if k >= 0: iter_range = range(0, min(n_A - m + 1, n_B - m + 1 - k)) else: iter_range = range(-k, min(n_A - m + 1, n_B - m + 1 - k)) for i in iter_range: if i == 0 or (k < 0 and i == -k): cov = ( np.dot( (T_B[i + k : i + k + m] - M_T[i + k]), (T_A[i : i + m] - μ_Q[i]) ) * m_inverse ) else: # The next lines are equivalent and left for reference # cov = cov + constant * ( # (T_B[i + k + m - 1] - M_T_m_1[i + k]) # * (T_A[i + m - 1] - μ_Q_m_1[i]) # - (T_B[i + k - 1] - M_T_m_1[i + k]) * (T_A[i - 1] - μ_Q_m_1[i]) # ) cov = cov + constant * ( cov_a[i + k] * cov_b[i] - cov_c[i + k] * cov_d[i] ) if T_B_subseq_isfinite[i + k] and T_A_subseq_isfinite[i]: # Neither subsequence contains NaNs if T_B_subseq_isconstant[i + k] or T_A_subseq_isconstant[i]: pearson = 0.5 else: pearson = cov * Σ_T_inverse[i + k] * σ_Q_inverse[i] if T_B_subseq_isconstant[i + k] and T_A_subseq_isconstant[i]: pearson = 1.0 if pearson > ρ[thread_idx, i, 0]: I[thread_idx, i, 0] = i + k if ignore_trivial: # self-joins only if pearson > ρ[thread_idx, i + k, 0]: ρ[thread_idx, i + k, 0] = pearson I[thread_idx, i + k, 0] = i if i < i + k: # left pearson correlation and left matrix profile index if pearson > ρ[thread_idx, i + k, 1]: ρ[thread_idx, i + k, 1] = pearson I[thread_idx, i + k, 1] = i # right pearson correlation and right matrix profile index if pearson > ρ[thread_idx, i, 2]: I[thread_idx, i, 2] = i + k return @njit(parallel=True, fastmath=True) def _stump( T_A, T_B, m, M_T, μ_Q, Σ_T_inverse, σ_Q_inverse, M_T_m_1, μ_Q_m_1, T_A_subseq_isfinite, T_B_subseq_isfinite, T_A_subseq_isconstant, T_B_subseq_isconstant, diags, ignore_trivial, ): """ A Numba JIT-compiled version of STOMPopt with Pearson correlations for parallel computation of the matrix profile, matrix profile indices, left matrix profile indices, and right matrix profile indices. Parameters ---------- T_A : ndarray The time series or sequence for which to compute the matrix profile T_B : ndarray The time series or sequence that will be used to annotate T_A. For every subsequence in T_A, its nearest neighbor in T_B will be recorded. m : int Window size M_T : ndarray Sliding mean of time series, T μ_Q : ndarray Mean of the query sequence, Q, relative to the current sliding window Σ_T_inverse : ndarray Inverse sliding standard deviation of time series, T σ_Q_inverse : ndarray Inverse standard deviation of the query sequence, Q, relative to the current sliding window M_T_m_1 : ndarray Sliding mean of time series, T, using a window size of m-1 μ_Q_m_1 : ndarray Mean of the query sequence, Q, relative to the current sliding window and using a window size of m-1 T_A_subseq_isfinite : ndarray A boolean array that indicates whether a subsequence in T_A contains a np.nan/np.inf value (False) T_B_subseq_isfinite : ndarray A boolean array that indicates whether a subsequence in T_B contains a np.nan/np.inf value (False) T_A_subseq_isconstant : ndarray A boolean array that indicates whether a subsequence in T_A is constant (True) T_B_subseq_isconstant : ndarray A boolean array that indicates whether a subsequence in T_B is constant (True) diags : ndarray The diagonal indices ignore_trivial : bool Set to True if this is a self-join. Otherwise, for AB-join, set this to False. Default is True. 
Returns ------- profile : ndarray Matrix profile indices : ndarray The first column consists of the matrix profile indices, the second column consists of the left matrix profile indices, and the third column consists of the right matrix profile indices. Notes ----- DOI: 10.1007/s10115-017-1138-x \ <https://www.cs.ucr.edu/~eamonn/ten_quadrillion.pdf>__ See Section 4.5 The above reference outlines a general approach for traversing the distance matrix in a diagonal fashion rather than in a row-wise fashion. DOI: 10.1145/3357223.3362721 \ <https://www.cs.ucr.edu/~eamonn/public/GPU_Matrix_profile_VLDB_30DraftOnly.pdf>__ See Section 3.1 and Section 3.3 The above reference outlines the use of the Pearson correlation via Welford's centered sum-of-products along each diagonal of the distance matrix in place of the sliding window dot product found in the original STOMP method. DOI: 10.1109/ICDM.2016.0085 \ <https://www.cs.ucr.edu/~eamonn/STOMP_GPU_final_submission_camera_ready.pdf>__ See Table II Timeseries, T_A, will be annotated with the distance location (or index) of all its subsequences in another times series, T_B. Return: For every subsequence, Q, in T_A, you will get a distance and index for the closest subsequence in T_B. Thus, the array returned will have length T_A.shape[0]-m+1. Additionally, the left and right matrix profiles are also returned. Note: Unlike in the Table II where T_A.shape is expected to be equal to T_B.shape, this implementation is generalized so that the shapes of T_A and T_B can be different. In the case where T_A.shape == T_B.shape, then our algorithm reduces down to the same algorithm found in Table II. Additionally, unlike STAMP where the exclusion zone is m/2, the default exclusion zone for STOMP is m/4 (See Definition 3 and Figure 3). For self-joins, set ignore_trivial = True in order to avoid the trivial match. Note that left and right matrix profiles are only available for self-joins. 
""" n_A = T_A.shape[0] n_B = T_B.shape[0] l = n_A - m + 1 ρ = np.full((n_threads, l, 3), -np.inf) I = np.full((n_threads, l, 3), -1, np.int64) ndist_counts = core._count_diagonal_ndist(diags, m, n_A, n_B) cov_a = T_B[m - 1 :] - M_T_m_1[:-1] cov_b = T_A[m - 1 :] - μ_Q_m_1[:-1] # The next lines are equivalent and left for reference # cov_c = np.roll(T_A, 1) # cov_ = cov_c[:M_T_m_1.shape[0]] - M_T_m_1[:] cov_c = np.empty(M_T_m_1.shape[0]) cov_c[1:] = T_B[: M_T_m_1.shape[0] - 1] cov_c[0] = T_B[-1] cov_c[:] = cov_c - M_T_m_1 # The next lines are equivalent and left for reference # cov_d = np.roll(T_B, 1) # cov_d = cov_d[:μ_Q_m_1.shape[0]] - μ_Q_m_1[:] cov_d = np.empty(μ_Q_m_1.shape[0]) cov_d[1:] = T_A[: μ_Q_m_1.shape[0] - 1] cov_d[0] = T_A[-1] cov_d[:] = cov_d - μ_Q_m_1 # Compute and update cov, I within a single thread to avoiding race conditions _compute_diagonal( T_A, T_B, m, M_T, μ_Q, Σ_T_inverse, σ_Q_inverse, cov_a, cov_b, cov_c, cov_d, T_A_subseq_isfinite, T_B_subseq_isfinite, T_A_subseq_isconstant, T_B_subseq_isconstant, diags, ρ, I, ignore_trivial, ) # Reduction of results from all threads for i in prange(l): if ρ[0, i, 0] < ρ[thread_idx, i, 0]: ρ[0, i, 0] = ρ[thread_idx, i, 0] I[0, i, 0] = I[thread_idx, i, 0] # left pearson correlation and left matrix profile indices if ρ[0, i, 1] < ρ[thread_idx, i, 1]: ρ[0, i, 1] = ρ[thread_idx, i, 1] I[0, i, 1] = I[thread_idx, i, 1] # right pearson correlation and right matrix profile indices if ρ[0, i, 2] < ρ[thread_idx, i, 2]: ρ[0, i, 2] = ρ[thread_idx, i, 2] I[0, i, 2] = I[thread_idx, i, 2] # Convert pearson correlations to distances D = np.abs(2 * m * (1 - ρ[0, :, :])) for i in prange(D.shape[0]): if D[i, 0] < config.STUMPY_D_SQUARED_THRESHOLD: D[i, 0] = 0.0 if D[i, 1] < config.STUMPY_D_SQUARED_THRESHOLD: D[i, 1] = 0.0 if D[i, 2] < config.STUMPY_D_SQUARED_THRESHOLD: D[i, 2] = 0.0 P = np.sqrt(D) return P[:, :], I[0, :, :] [docs]@core.non_normalized(aamp) def stump(T_A, m, T_B=None, ignore_trivial=True, normalize=True): """ Compute the z-normalized matrix profile This is a convenience wrapper around the Numba JIT-compiled parallelized _stump function which computes the matrix profile according to STOMPopt with Pearson correlations. Parameters ---------- T_A : ndarray The time series or sequence for which to compute the matrix profile m : int Window size T_B : ndarray, default None The time series or sequence that will be used to annotate T_A. For every subsequence in T_A, its nearest neighbor in T_B will be recorded. Default is None which corresponds to a self-join. ignore_trivial : bool, default True Set to True if this is a self-join. Otherwise, for AB-join, set this to False. Default is True. normalize : bool, default True When set to True, this z-normalizes subsequences prior to computing distances. Otherwise, this function gets re-routed to its complementary non-normalized equivalent set in the @core.non_normalized function decorator. Returns ------- out : ndarray The first column consists of the matrix profile, the second column consists of the matrix profile indices, the third column consists of the left matrix profile indices, and the fourth column consists of the right matrix profile indices. Notes ----- DOI: 10.1007/s10115-017-1138-x \ <https://www.cs.ucr.edu/~eamonn/ten_quadrillion.pdf>__ See Section 4.5 The above reference outlines a general approach for traversing the distance matrix in a diagonal fashion rather than in a row-wise fashion. 
DOI: 10.1145/3357223.3362721 \ <https://www.cs.ucr.edu/~eamonn/public/GPU_Matrix_profile_VLDB_30DraftOnly.pdf>__ See Section 3.1 and Section 3.3 The above reference outlines the use of the Pearson correlation via Welford's centered sum-of-products along each diagonal of the distance matrix in place of the sliding window dot product found in the original STOMP method. DOI: 10.1109/ICDM.2016.0085 \ <https://www.cs.ucr.edu/~eamonn/STOMP_GPU_final_submission_camera_ready.pdf>__ See Table II Timeseries, T_A, will be annotated with the distance location (or index) of all its subsequences in another times series, T_B. Return: For every subsequence, Q, in T_A, you will get a distance and index for the closest subsequence in T_B. Thus, the array returned will have length T_A.shape[0]-m+1. Additionally, the left and right matrix profiles are also returned. Note: Unlike in the Table II where T_A.shape is expected to be equal to T_B.shape, this implementation is generalized so that the shapes of T_A and T_B can be different. In the case where T_A.shape == T_B.shape, then our algorithm reduces down to the same algorithm found in Table II. Additionally, unlike STAMP where the exclusion zone is m/2, the default exclusion zone for STOMP is m/4 (See Definition 3 and Figure 3). For self-joins, set ignore_trivial = True in order to avoid the trivial match. Note that left and right matrix profiles are only available for self-joins. """ if T_B is None: T_B = T_A ignore_trivial = True ( T_A, μ_Q, σ_Q_inverse, μ_Q_m_1, T_A_subseq_isfinite, T_A_subseq_isconstant, ) = core.preprocess_diagonal(T_A, m) ( T_B, M_T, Σ_T_inverse, M_T_m_1, T_B_subseq_isfinite, T_B_subseq_isconstant, ) = core.preprocess_diagonal(T_B, m) if T_A.ndim != 1: # pragma: no cover raise ValueError( f"T_A is {T_A.ndim}-dimensional and must be 1-dimensional. " "For multidimensional STUMP use stumpy.mstump or stumpy.mstumped" ) if T_B.ndim != 1: # pragma: no cover raise ValueError( f"T_B is {T_B.ndim}-dimensional and must be 1-dimensional. " "For multidimensional STUMP use stumpy.mstump or stumpy.mstumped" ) core.check_window_size(m, max_size=min(T_A.shape[0], T_B.shape[0])) if ignore_trivial is False and core.are_arrays_equal(T_A, T_B): # pragma: no cover logger.warning("Arrays T_A, T_B are equal, which implies a self-join.") logger.warning("Try setting ignore_trivial = True.") if ignore_trivial and core.are_arrays_equal(T_A, T_B) is False: # pragma: no cover logger.warning("Arrays T_A, T_B are not equal, which implies an AB-join.") logger.warning("Try setting ignore_trivial = False.") n_A = T_A.shape[0] n_B = T_B.shape[0] l = n_A - m + 1 excl_zone = int(np.ceil(m / config.STUMPY_EXCL_ZONE_DENOM)) out = np.empty((l, 4), dtype=object) if ignore_trivial: diags = np.arange(excl_zone + 1, n_A - m + 1) else: diags = np.arange(-(n_A - m + 1) + 1, n_B - m + 1) P, I = _stump( T_A, T_B, m, M_T, μ_Q, Σ_T_inverse, σ_Q_inverse, M_T_m_1, μ_Q_m_1, T_A_subseq_isfinite, T_B_subseq_isfinite, T_A_subseq_isconstant, T_B_subseq_isconstant, diags, ignore_trivial, ) out[:, 0] = P[:, 0] out[:, 1:] = I threshold = 10e-6 if core.are_distances_too_small(out[:, 0], threshold=threshold): # pragma: no cover logger.warning(f"A large number of values are smaller than {threshold}.") logger.warning("For a self-join, try setting ignore_trivial = True.") return out
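For context, the public entry point defined above is typically called as follows. This is a minimal usage sketch, not part of the library source; the toy series and the window length are arbitrary choices for illustration.

```python
import numpy as np
import stumpy

# Toy series with a planted, repeated motif
rng = np.random.default_rng(0)
T = rng.normal(size=1000)
motif = np.sin(np.linspace(0, 3 * np.pi, 50))
T[100:150] += motif
T[600:650] += motif

m = 50                      # subsequence window length
mp = stumpy.stump(T, m)     # self-join; ignore_trivial defaults to True

# Column 0 holds the matrix profile (z-normalized distances);
# columns 1-3 hold the nearest-neighbor, left, and right indices.
profile = mp[:, 0].astype(float)
i = int(np.argmin(profile))
print(i, mp[i, 1])          # indices of the motif pair, roughly 100 and 600
```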
auto_math_text
web
# Publications
You can also find my articles on my Google Scholar Profile.
## Robot Learning
CLOUD: Contrastive Learning of Unsupervised Dynamics
Jianren Wang*, Yujie Lu*, Hang Zhao (* indicates equal contribution)
2020 Conference on Robot Learning
[Project Page] [Code] [Abstract] [Bibtex]
Developing agents that can perform complex control tasks from high dimensional observations such as pixels is challenging due to difficulties in learning dynamics efficiently. In this work, we propose to learn forward and inverse dynamics in a fully unsupervised manner via contrastive estimation. Specifically, we train a forward dynamics model and an inverse dynamics model in the feature space of states and actions with data collected from random exploration. Unlike most existing deterministic models, our energy-based model takes into account the stochastic nature of agent-environment interactions. We demonstrate the efficacy of our approach across a variety of tasks including goal-directed planning and imitation from observations.
@inproceedings{jianren20cloud,
  Author = {Wang, Jianren and Lu, Yujie and Zhao, Hang},
  Title = {CLOUD: Contrastive Learning of Unsupervised Dynamics},
  Booktitle = {CORL},
  Year = {2020}
}
Integration of a Low-Cost Three-Axis Sensor for Robot Force Control
Shuyang Chen, Jianren Wang, Peter Kazanzides
2018 Second IEEE International Conference on Robotic Computing
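Relating to the CLOUD abstract above: the paper describes an energy-based, contrastive formulation of forward and inverse dynamics, but the snippet below is only a generic sketch of a contrastive forward-dynamics objective (InfoNCE over in-batch negatives), not the authors' implementation. The module names, dimensions, and the use of PyTorch are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ForwardDynamics(nn.Module):
    """Predicts a next-state embedding from a (state, action) embedding pair."""
    def __init__(self, state_dim=64, action_dim=8, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def contrastive_forward_loss(model, s, a, s_next, temperature=0.1):
    """InfoNCE: the true next state is the positive; other batch items are negatives."""
    pred = F.normalize(model(s, a), dim=-1)               # (B, D)
    target = F.normalize(s_next, dim=-1)                  # (B, D)
    logits = pred @ target.t() / temperature              # (B, B) similarity matrix
    labels = torch.arange(s.shape[0], device=s.device)    # positives on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage with random tensors standing in for encoded observations and actions
B, D, A = 32, 64, 8
model = ForwardDynamics(D, A)
loss = contrastive_forward_loss(model, torch.randn(B, D), torch.randn(B, A), torch.randn(B, D))
loss.backward()
```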
auto_math_text
web
Outlook: National Grid Transco PLC National Grid PLC (NEW) American Depositary Shares is assigned short-term Ba1 & long-term Ba1 estimated rating. Dominant Strategy : Buy Time series to forecast n: 30 Jan 2023 for (n+6 month) Methodology : Transfer Learning (ML) ## Abstract National Grid Transco PLC National Grid PLC (NEW) American Depositary Shares prediction model is evaluated with Transfer Learning (ML) and Logistic Regression1,2,3,4 and it is concluded that the NGG stock is predictable in the short/long term. According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Buy ## Key Points 1. Decision Making 2. What statistical methods are used to analyze data? 3. Short/Long Term Stocks ## NGG Target Price Prediction Modeling Methodology We consider National Grid Transco PLC National Grid PLC (NEW) American Depositary Shares Decision Process with Transfer Learning (ML) where A is the set of discrete actions of NGG stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4 F(Logistic Regression)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Transfer Learning (ML)) X S(n):→ (n+6 month) $\begin{array}{l}\int {e}^{x}\mathrm{rx}\end{array}$ n:Time series to forecast p:Price signals of NGG stock j:Nash equilibria (Neural Network) k:Dominated move a:Best response for target price For further technical information as per how our model work we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work? ## NGG Stock Forecast (Buy or Sell) for (n+6 month) Sample Set: Neural Network Stock/Index: NGG National Grid Transco PLC National Grid PLC (NEW) American Depositary Shares Time series to forecast n: 30 Jan 2023 for (n+6 month) According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Buy X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.) Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.) Z axis (Grey to Black): *Technical Analysis% ## IFRS Reconciliation Adjustments for National Grid Transco PLC National Grid PLC (NEW) American Depositary Shares 1. Paragraph 5.7.5 permits an entity to make an irrevocable election to present in other comprehensive income subsequent changes in the fair value of particular investments in equity instruments. Such an investment is not a monetary item. Accordingly, the gain or loss that is presented in other comprehensive income in accordance with paragraph 5.7.5 includes any related foreign exchange component. 2. However, the designation of the hedging relationship using the same hedge ratio as that resulting from the quantities of the hedged item and the hedging instrument that the entity actually uses shall not reflect an imbalance between the weightings of the hedged item and the hedging instrument that would in turn create hedge ineffectiveness (irrespective of whether recognised or not) that could result in an accounting outcome that would be inconsistent with the purpose of hedge accounting. 
Hence, for the purpose of designating a hedging relationship, an entity must adjust the hedge ratio that results from the quantities of the hedged item and the hedging instrument that the entity actually uses if that is needed to avoid such an imbalance 3. Interest Rate Benchmark Reform—Phase 2, which amended IFRS 9, IAS 39, IFRS 7, IFRS 4 and IFRS 16, issued in August 2020, added paragraphs 5.4.5–5.4.9, 6.8.13, Section 6.9 and paragraphs 7.2.43–7.2.46. An entity shall apply these amendments for annual periods beginning on or after 1 January 2021. Earlier application is permitted. If an entity applies these amendments for an earlier period, it shall disclose that fact. 4. An entity shall apply the impairment requirements in Section 5.5 retrospectively in accordance with IAS 8 subject to paragraphs 7.2.15 and 7.2.18–7.2.20. *International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS. ## Conclusions National Grid Transco PLC National Grid PLC (NEW) American Depositary Shares is assigned short-term Ba1 & long-term Ba1 estimated rating. National Grid Transco PLC National Grid PLC (NEW) American Depositary Shares prediction model is evaluated with Transfer Learning (ML) and Logistic Regression1,2,3,4 and it is concluded that the NGG stock is predictable in the short/long term. According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Buy ### NGG National Grid Transco PLC National Grid PLC (NEW) American Depositary Shares Financial Analysis* Rating Short-Term Long-Term Senior Outlook*Ba1Ba1 Income StatementCaa2B3 Balance SheetBaa2B3 Leverage RatiosCCaa2 Cash FlowBa1Caa2 Rates of Return and ProfitabilityB2Baa2 *Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand financial state of the company? ### Prediction Confidence Score Trust metric by Neural Network: 73 out of 100 with 794 signals. ## References 1. Doudchenko N, Imbens GW. 2016. Balancing, regression, difference-in-differences and synthetic control methods: a synthesis. NBER Work. Pap. 22791 2. Allen, P. G. (1994), "Economic forecasting in agriculture," International Journal of Forecasting, 10, 81–135. 3. Abadie A, Imbens GW. 2011. Bias-corrected matching estimators for average treatment effects. J. Bus. Econ. Stat. 29:1–11 4. C. Szepesvári. Algorithms for Reinforcement Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2010 5. P. Milgrom and I. Segal. Envelope theorems for arbitrary choice sets. Econometrica, 70(2):583–601, 2002 6. Rumelhart DE, Hinton GE, Williams RJ. 1986. Learning representations by back-propagating errors. Nature 323:533–36 7. Varian HR. 2014. Big data: new tricks for econometrics. J. Econ. Perspect. 28:3–28 Frequently Asked QuestionsQ: What is the prediction methodology for NGG stock? 
A: NGG stock prediction methodology: We evaluate the prediction models Transfer Learning (ML) and Logistic Regression.
Q: Is NGG stock a buy or sell?
A: The dominant strategy among neural network is to Buy NGG Stock.
Q: Is National Grid Transco PLC National Grid PLC (NEW) American Depositary Shares stock a good investment?
A: The consensus rating for National Grid Transco PLC National Grid PLC (NEW) American Depositary Shares is Buy and is assigned short-term Ba1 & long-term Ba1 estimated rating.
Q: What is the consensus rating of NGG stock?
A: The consensus rating for NGG is Buy.
Q: What is the prediction period for NGG stock?
A: The prediction period for NGG is (n+6 month).
auto_math_text
web
• Compared with intermediate-dose prophylaxis (3 × 1000 IU/wk), high-dose prophylaxis (3 × 2000 IU/wk) resulted in a 66% higher total cost.
• At age 24 years, high-dose prophylaxis resulted in a small reduction in bleeding and hemophilic arthropathy, but equal quality of life.

Prophylactic treatment in severe hemophilia is very effective but is limited by cost issues. The implementation of 2 different prophylactic regimens in The Netherlands and Sweden since the 1970s may be considered a natural experiment. We compared the costs and outcomes of Dutch intermediate- and Swedish high-dose prophylactic regimens for patients with severe hemophilia (factor VIII/IX < 1 IU/dL) born between 1970 and 1994, using prospective standardized outcome assessment and retrospective collection of cost data. Seventy-eight Dutch and 50 Swedish patients, median age 24 years (range, 14-37 years), were included. Intermediate-dose prophylaxis used less factor concentrate (median: Netherlands, 2100 IU/kg per year [interquartile range (IQR), 1400-2900 IU/kg per year] vs Sweden, 4000 IU/kg per year [IQR, 3000-4900 IU/kg per year]; P < .01). Clinical outcome was slightly inferior for the intermediate-dose regimen (P < .01) for 5-year bleeding (median, 1.3 [IQR, 0.8-2.7] vs 0 [IQR, 0.0-2.0] joint bleeds/y) and joint health (Haemophilia Joint Health Score >10 of 144 points in 46% vs 11% of participants), although social participation and quality of life were similar. Annual total costs were 66% higher for high-dose prophylaxis (mean, 180 [95% confidence interval, 163-196] × US$1000 for Dutch vs 298 [95% confidence interval, 271-325] × US$1000 for Swedish patients; P < .01). At group level, the incremental benefits of high-dose prophylaxis appear limited. At the patient level, prophylaxis should be tailored individually, and many patients may do well receiving lower doses of concentrate without compromising safety.

Patients with severe hemophilia have undetectable factor VIII (FVIII) or IX levels, resulting in spontaneous and trauma-related bleeding, especially in the joints. Repeat joint bleeding eventually leads to a crippling arthropathy. Severe hemophilia is rare, with a prevalence of about 40 cases per million inhabitants. Since its introduction in 1958 by Professor Nilsson in Sweden,1 many long-term observational studies2-5 and 2 pediatric randomized controlled trials6,7 have shown that prophylactic replacement therapy in severe hemophilia prevents bleeds and subsequent hemophilic arthropathy. This was confirmed by the latest version of the Cochrane review on prophylaxis.8 However, the increased use of factor concentrates in prophylaxis and the associated costs (from €72 000 [US$76 700] annually for small children9 to €146 000 [US$155 600] for an adult10 receiving high-dose prophylaxis in the 1990s) have been limiting factors of a more widespread introduction of prophylaxis. The Swedish regimen originally aimed at maintaining minimum trough levels of clotting factor activity by using doses of 25 to 40 IU/kg 3 times a week for hemophilia A.11 In The Netherlands, however, prophylaxis was introduced in 1968,12 using lower doses and tailoring the dose on the basis of clinical observation to prevent spontaneous joint bleeds.
Although treatment has intensified over the years in both countries,3,13  the difference in dosing has remained considerable: Today, a typical adult Dutch patient with hemophilia A uses 3 × 1000 IU FVIII/week, whereas a typical adult Swedish patient uses 3 × 2000 IU or 1500 IU every other day. Both groups reported favorable long-term results, but with increasing pressure on health care budgets and a formal cost review by the Swedish authorities,14  it is important to assess the incremental gains of high-dose prophylaxis. Assessment of long-term effects requires decades of follow-up, but the number of patients with hemophilia is limited.15  Comparing birth cohorts from centers in 2 countries provides the best alternative to a randomized controlled trial to assess long-term outcomes of the Dutch intermediate-dose and Swedish high-dose prophylactic regimens. Selection bias was avoided, as the choice of prophylactic regimen depended on country of birth only. In addition, external factors such as social circumstances and level of general health care provision in Sweden and the Netherlands are quite similar. The aim of this study was to compare long-term outcomes and costs between the Dutch intermediate-dose and the Swedish high-dose prophylactic regimens for persons with severe hemophilia with a follow-up of up to 3 decades. As optimal dosing for prophylaxis has never been established, this study provides a unique insight that could not have been reported previously. ### Design and setting The study was designed as an observational study comparing 2 cohorts, using retrospective assessment of treatment and prospective assessment of outcome. The study was performed at the hemophilia treatment centers of the University Medical Center Utrecht, the Netherlands (Van Creveldkliniek); the Karolinska University Hospital in Stockholm, Sweden; and the Skåne University Hospital in Malmö, Sweden. These clinics had routinely collected annual data on treatment and bleeding, hospital admissions, and surgical procedures for decades. Data for this study were collected between January 2006 and July 2009. Ethical approval for this study was obtained from the institutional review boards of Utrecht (nr 06-002) and Malmö (nr 413/2006 and 493/2007). This study was conducted in accordance with the Declaration of Helsinki. ### Patients All patients with severe hemophilia (FVIII/IX < 1% or < 1 IU/dL) born between January 1, 1970, and January 1, 1994, who were treated at the participating centers and who had lifelong access to care and treatment data available were eligible for this study. Patients with a history of inhibitors (any inhibitor activity > 0.6 Bethesda Units with decreased recovery) were excluded. Assessments were performed during regular outpatient visits. Patients aged 18 years and older were considered as adults. Informed consent was obtained from all patients before participation. ### Patient characteristics and treatment history Baseline patient characteristics registered included date of birth, date of diagnosis, type of hemophilia, hepatitis C status and HIV status, and date of first joint bleed. To assess treatment history, date of first treatment, start of home treatment, and onset of prophylaxis, as well as complete history of prophylactic regimens used, were collected. In addition, orthopedic surgical procedures (including arthroscopies and radioactive synovectomies) were extracted from patient files. 
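To make the cohort definition in the Patients section concrete, the short sketch below applies the stated eligibility rules (severe hemophilia with FVIII/IX < 1 IU/dL, birth between January 1, 1970, and January 1, 1994, lifelong access to care, and no inhibitor history above 0.6 Bethesda Units) to a toy patient table. The pandas-based approach, the column names, and the example rows are illustrative assumptions only; they do not reflect the registries' actual data model.

```python
# Illustrative cohort selection for the natural-experiment comparison.
# Column names and example rows are hypothetical, not the actual registry schema.
import pandas as pd

def select_eligible(patients: pd.DataFrame) -> pd.DataFrame:
    """Apply the study's stated inclusion/exclusion criteria (simplified)."""
    born_in_window = patients["birth_year"].between(1970, 1993)   # born 1970-01-01 to 1994-01-01
    severe = patients["baseline_factor_iu_dl"] < 1.0               # FVIII/IX < 1 IU/dL
    lifelong_care = patients["lifelong_access_to_care"]            # boolean flag
    no_inhibitor = patients["max_inhibitor_bu"] <= 0.6             # exclude inhibitor history > 0.6 BU
    return patients[born_in_window & severe & lifelong_care & no_inhibitor]

example = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "birth_year": [1985, 1965, 1990],
    "baseline_factor_iu_dl": [0.5, 0.5, 2.0],
    "lifelong_access_to_care": [True, True, True],
    "max_inhibitor_bu": [0.0, 0.0, 0.0],
})
print(select_eligible(example))   # only patient 1 meets all criteria
```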
### Current treatment For the last 5 years before evaluation, annual clotting factor consumption was extracted from patient logs and hospital pharmacy records. In addition, the number of visits to the center and details on hospital admissions was documented. ### Outcome The primary outcome parameter was clinical joint status, assessed by the center’s physiotherapist, using the Haemophilia Joint Health Score (HJHS; version 1.0).16,17  The HJHS is based on physical examination of elbows, knees, and ankles (maximum, 20-26 points per joint) and observation of gait for knees and ankles (0-4 points). The total score was calculated without adding overall global gait to the individual joint scores, resulting in a total score ranging from 0, signifying perfect joint health, to 144. The HJHS score was originally developed to assess subtle joint damage in children with hemophilia. The score was used for this study because the items scored are not age-specific and because differences in outcome were expected to be small. All HJHS scores were performed by a single physiotherapist at each participating center. Standardization and reliability were established during a training session (January 2006; 12 patients; intraclass correlation, 0.84) with all 3 designated physiotherapists.18 Secondary outcome parameters were the annual number of joint bleeds, self-reported activities, health-related quality of life, and social participation. The annual number of joint and soft tissue bleeds during the last 5 years was extracted from the patient logs, medical files, and hospital databases by research nurses at each center. Bleeds were defined as any complaint requiring treatment with clotting factor concentrate. Bleeds located in shoulders, elbows, wrists, hips, knees, or ankles were considered joint bleeds. All data were entered in an electronic case report form, using predefined definitions. To minimize bias, all definitions and how to complete the electronic case report form were documented and discussed before the study start. Questionnaires were administered to adult patients only. Self-reported limitations in activities were assessed using the Haemophilia Activities lists,19-21  whereas physical activity levels were assessed by the International Physical Activity Questionnaire.22  Health-related quality of life expressed as utility was assessed using the Euroqol (EQ-5D).23  EQ-5D utility values were calculated using the Dutch tariff24  for both cohorts. To compare social participation, data on achieved level of education and labor market participation were collected and compared with data for the age-matched general male population in the respective countries, using the Labor Force Survey at Statistics Netherlands25  and Statistics Sweden,26  respectively, and the Swedish Registration of Education.27 ### Cost and resource use Dutch prices for the year 2010 were collected for evaluation of health care resource use and lost production in both cohorts. Prices were based on national price lists28,29  and on academic hospital prices from 2011 for surgeries (Table 1). Days lost from work were valued according to the human capital approach.30  Costs were translated to US dollars using the European Central Bank 2010 bilateral average annual exchange rate: €1 = US$1.3257 (http://sdw.ecb.europa.eu). 
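As a rough illustration of the cost valuation described above, the sketch below combines hypothetical 5-year resource use for a single adult patient with the unit prices listed in Table 1 below (already expressed in 2010 US dollars at €1 = US$1.3257). The resource-use figures and function names are invented for illustration; they are not study data.

```python
# Minimal sketch of the per-patient cost valuation (health-system perspective).
# Unit prices follow Table 1 (2010 US dollars); resource-use inputs are hypothetical.

UNIT_PRICE_USD = {
    "orthopedic_surgery": 12_643,
    "other_surgery": 8_429,
    "hospital_day": 762,
    "clinic_visit": 171,
    "fviii_per_iu": 1.10,
    "fix_per_iu": 1.11,
}
LOST_PRODUCTION_PER_DAY = {"<25": 176, "25-29": 240, "30+": 294}

def five_year_cost(use: dict, age_group: str) -> dict:
    """Direct costs (factor concentrate + other health care) and indirect costs (lost production)."""
    factor = use["fviii_iu"] * UNIT_PRICE_USD["fviii_per_iu"]
    other_direct = (use["orthopedic_surgeries"] * UNIT_PRICE_USD["orthopedic_surgery"]
                    + use["hospital_days"] * UNIT_PRICE_USD["hospital_day"]
                    + use["clinic_visits"] * UNIT_PRICE_USD["clinic_visit"])
    indirect = use["work_days_lost"] * LOST_PRODUCTION_PER_DAY[age_group]
    return {"factor": factor, "other_direct": other_direct,
            "indirect": indirect, "total": factor + other_direct + indirect}

# Hypothetical 74-kg adult using ~2100 IU/kg per year of FVIII for 5 years:
example_use = {"fviii_iu": 74 * 2100 * 5, "orthopedic_surgeries": 0,
               "hospital_days": 1, "clinic_visits": 10, "work_days_lost": 0}
print(five_year_cost(example_use, "30+"))   # factor cost ≈ US$855,000
```

With these made-up inputs, the factor-concentrate component comes out near the Dutch median 5-year factor cost reported later in Table 5, which is simply a consequence of using the median consumption and an average body weight.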
Table 1. Prices

| Resource use | Average price (US$)* | Source |
| --- | --- | --- |
| Direct costs | | |
| Orthopedic surgery | 12 643 | Dutch Board of Insurance Companies, tariff university hospital 2011 (ref 28) |
| Other surgery | 8429 | |
| Hospital day | 762 | Dutch Board of Insurance Companies (ref 28) |
| Visit to hemophilia clinic | 171 | |
| Factor VIII (per IU) | 1.10 | Farmacotherapeutisch kompas 2010 (ref 29) |
| Factor IX (per IU) | 1.11 | |
| Indirect costs | | |
| Cost per day of lost production, by age group | | Dutch Board of Insurance Companies (ref 28) |
| <25 y | 176 | |
| 25-29 y | 240 | |
| 30+ y | 294 | |

* Prices are converted from Dutch prices in euros, using average exchange rate year 2010: €1 = US$1.3257. Use of factor concentrates was included separately. Orthopedic surgery was based on average of tariff for arthrodesis and arthroplasty. Price of hospital resources includes staff, other material, and overhead cost.

Direct medical costs (factor concentrate costs and other costs) and indirect costs (cost of days lost from work) for the 5-year evaluation period were compared between cohorts. In addition, lifelong use of factor concentrates according to age and treatment strategy was estimated from individual-level data on the history of prescribed prophylactic regimens and body weight for Swedish patients and from an earlier study for Dutch patients.31 Factor consumption according to age and treatment strategy was also compared graphically.

### Statistics

Student t tests, nonparametric Mann–Whitney U tests, and χ-squared tests were used to compare patient characteristics and outcomes according to treatment strategy. Panel data population-averaged generalized linear regression was used to predict the average annual cost of a mean-weight adult patient for each treatment regimen. Both logistic analysis (dependent HJHS ≥ 10) and generalized linear models (dependent HJHS, γ distribution, log link) were used to study the effects of age at start of prophylaxis independent of country, age at evaluation, and 5-year factor consumption. Statistical analyses were performed using SPSS version 20 (IBM Corp., Armonk, NY) and Stata version 12 (StataCorp LP, College Station, TX).

### Patients

Seventy-eight Dutch (intermediate-dose) and 50 Swedish (high-dose) patients were assessed during regular outpatient visits. The overall inclusion rate was 128/156 (78%), including 78/92 (85%) Dutch patients (8 refusals, 5 unable to include because of irregular visits, and 1 patient not invited as he was currently taking interferon) and 50/71 (70%) Swedish patients (21 refusals). To assess the effect of excluded patients on the overall study population, we compared age, previous orthopedic surgeries, and treatment with clotting factor concentrate during the last 5 years between excluded and included patients for both countries.
Dutch excluded patients (n = 14) were significantly older (mean age, 32.3 vs 24.9 years; P < .01) but had a similar history of previous orthopedic surgery (21% vs 15%; P = .69); excluded patients showed a trend toward using full prophylaxis less often (64% vs 78%; P = .31) and a 23% lower annual clotting factor consumption (mean, 1680 IU/kg per year; mean difference, 500 IU/kg per year; P = .06). Swedish excluded patients (n = 21) had a similar age (25.9 vs 23.8 years; P = .24) and history of orthopedic surgery (5% vs 8%; P = 1.0) but displayed a trend toward using full prophylaxis less often (76% vs 86%; P = .15) and had a 21% lower annual clotting factor consumption (mean, 3240 IU/kg per year; mean difference, 865 IU/kg per year; P = .03).

The majority of included patients were adults at the time of evaluation (Netherlands, 62 of 78 [79%]; Sweden, 41 of 50 [82%]). The number of patients with available data according to outcome parameter is shown in Figure 1.

Figure 1. Overview of available data.

### Patient characteristics and treatment

The mean age of included patients was 24.5 years (range, 14-37 years). The majority (115 of 128; 90%) of patients had hemophilia A. Overall, 34% of patients were positive for hepatitis C, and 5% were HIV-positive. Although the prevalence of HIV was similar, hepatitis C infection was more common in Dutch patients (42% vs 22%; P = .04). Patient characteristics and treatment according to prophylactic regimen are shown in Table 2. Patients were diagnosed with severe hemophilia early in life in both countries (median, 0.7 years; IQR, 0.2-1.0 years). Dutch patients entered the clinic about 1 year later than Swedish patients, at a median age of 1.8 years versus 0.6 years (P < .01). Treatment was started early in both countries. The first infusion was usually given around the age of 1 year, but at a slightly older age in Dutch patients (median, 1.1 years vs 0.9 years; P < .01). Patient characteristics and treatment in the 2 Swedish centers were comparable.
Table 2. Treatment characteristics according to prophylactic regimen

| | Netherlands, intermediate-dose, median (IQR)* | Sweden, high-dose, median (IQR)* | P |
| --- | --- | --- | --- |
| n | 78 | 50 | |
| Hemophilia A, n | 70 | 45 | 1.00 |
| Age at evaluation, y | 24.8 (19.3-30.2) | 23.2 (18.7-28.0) | .47 |
| Weight, kg | 75 (64-85) | 73 (62-80) | .28 |
| Treatment history | | | |
| Age at diagnosis, y | 0.7 (0.04-1.2) | 0.6 (0.3-0.9) | .55 |
| Age at 1st treatment, y | 1.1 (0.9-1.7) | 0.9 (0.5-1.2) | <.01 |
| Age at start prophylaxis, y | 4.5 (3.2-6.0) | 1.5 (1.1-2.5) | <.01 |
| Prophylaxis started before first joint bleed, n/N (%) | 6/69 (9%) | 19/45 (42%) | <.01 |
| Age at start of home treatment, y | 5.7 (3.9-9.3) | 3.3 (2.0-4.5) | <.01 |
| Treatment during the last 5 y | | | |
| Full-time prophylaxis, n (%) | 61 (78) | 48 (96) | <.01 |
| Weekly dose, IU/kg | 46 (34-55) | 88 (61-113) | <.01 |
| Number of infusions/week | 3.0 (2.5-3.0) | 3.3 (1.6-3.5) | .19 |
| Annual consumption, IU/kg per year | 2100 (1400-2900) | 4000 (3000-4900) | <.01 |

IQR, interquartile range; IU, international units.

* Values are the median (IQR) of unit of measurement unless otherwise stated. Annual clotting factor consumption was rounded to the nearest 100.

Both cohorts included a single patient with a mild bleeding phenotype who never started prophylactic treatment: a Dutch patient born in 1972 and a Swedish patient born in 1979. Overall, the prophylactic treatment regimens were very different: Patients treated with the Dutch intermediate-dose regimen started prophylaxis later, mostly after the onset of joint bleeding, and switched to home treatment at a later age. Since the start of prophylaxis, most patients continued this treatment, although there was a trend toward more frequent discontinuation (P = .19) and a significantly lower proportion of Dutch patients receiving full-time prophylaxis during the last 5 years (78% vs 96%; P < .01). At evaluation, the overall annual consumption was 2150 IU/kg per year (95% CI, 1600-2700) lower for the intermediate-dose regimen (median, 2100 vs 4000 IU/kg per year; P < .01). The frequency of infusions was similar, at around 3 infusions per week. Patient characteristics and treatment patterns were similar for hemophilia A and B (data on request).

### Clinical outcome

Clinical outcome according to regimen is shown in Table 3. In total, 643 patient-years were evaluated for bleeding and treatment. Overall, physical activity was high in both groups and bleeding frequencies were low. When comparing both regimens, however, the intermediate-dose regimen resulted in a limited but statistically significant increase in the number of bleeds, at just more than 1 additional joint bleed per year (median, 1.3 vs 0 bleeds; P < .01) and 7 to 8 additional bleeds (median, 10 vs 2.5 bleeds) over the course of 5 years.
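The between-group comparisons in Table 2 (medians with IQRs and P values) correspond to the nonparametric Mann–Whitney U tests listed in the Statistics section. Purely as an illustration of that kind of test, the sketch below compares simulated annual factor consumption between two cohorts of the same sizes; the numbers are random draws, not the study's measurements.

```python
# Sketch of a Mann-Whitney U comparison of annual factor consumption (IU/kg per year)
# between an intermediate-dose and a high-dose cohort, on simulated data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
intermediate = rng.normal(2100, 700, size=78).clip(min=500)   # Dutch-like cohort (simulated)
high_dose = rng.normal(4000, 900, size=50).clip(min=500)      # Swedish-like cohort (simulated)

u_stat, p_value = stats.mannwhitneyu(intermediate, high_dose, alternative="two-sided")
print(f"median intermediate = {np.median(intermediate):.0f} IU/kg/y, "
      f"median high-dose = {np.median(high_dose):.0f} IU/kg/y, "
      f"Mann-Whitney U = {u_stat:.0f}, P = {p_value:.2g}")
```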
During the 5-year observation period, a single Dutch HIV-positive patient experienced an intracranial bleed. No other life-threatening bleeds were observed in either group.

Table 3. Clinical outcome and social participation according to prophylactic regimen

| | Netherlands, intermediate-dose, median (IQR) | Sweden, high-dose, median (IQR) | P |
| --- | --- | --- | --- |
| Physical activity, METs | 4294 (1037-13 740) | 3200 (1152-9292) | .50 |
| Bleeding | | | |
| Joint bleeds per y, n | 1.3 (0.8-2.7) | 0 (0.0-2.0) | <.01 |
| Joint bleeds in 5 y, n | 10 (4-18) | 2.5 (0-9.3) | <.01 |
| Joint outcome | | | |
| Loss of function, HJHS, maximum 144 points | 9.0 (2.0-18.0) | 4.0 (2.0-6.8) | .01 |
| HJHS ≥ 10 points | 31/68 (46%) | 5/44 (11%) | <.01 |
| No. of joints affected | 2 (1-4) | 3 (2-3) | .47 |
| Limitations in activities, Haemophilia Activities List sum, maximum 100 | 93 (81-98) | 99 (93-100) | <.01 |
| Health-related quality of life | | | |
| EQ-5D utility | 0.84 (0.81-1.00) | 1.00 (0.81-1.00) | .93 |

EQ-5D, utility values according to the Dutch tariff; HJHS, Haemophilia Joint Health Score; IU, international units; METs, metabolic equivalent of task units.

In these young adults, only minor changes in joint status were observed, and few limitations in activities were reported. However, again, in the direct comparison, the patients treated with the intermediate-dose regimen had slightly, but significantly, higher HJHS scores (median, 9.0 vs 7.0 points of 144) and reported slightly, but significantly, more limitations in daily activities (median Haemophilia Activities List score, 93 vs 99 of 100). However, high-dose prophylaxis did not completely prevent joint damage in all patients: 5/44 (11%) of Swedish patients still had a HJHS of 10 or more points compared with 31/68 (46%) of Dutch patients (P < .01). A history of orthopedic surgery was rare in both populations, at 15% of Dutch patients compared with 8% of Swedish patients (P = .29). Regression analyses did not show an independent effect of age at start of prophylaxis on outcome (β = 0.098 [P = .21] for logistic regression and β = 0.032 [P = .18] for generalized linear models). Outcome parameters were similar for hemophilia A and B (data on request).

### Quality of life

The quality of life measured by EQ-5D utility was high (Table 3) and was similar across both cohorts (P = .93). At the group level, values were close to those of the general male population aged 20 to 29 years: mean utility was 0.88 for Dutch patients vs 0.93 in the Dutch general population,32 and 0.86 for Swedish patients vs 0.91 in the Swedish general population.33 The response rate was lower for Dutch patients (76% vs 83%; Figure 1), but this is not expected to have affected outcome, as nonresponders and responders had similar ages and education and employment levels, as well as joint status.

### Social participation

The achieved level of education and labor force participation rates for adult patients are shown in Table 4.
At evaluation, a higher percentage of Swedish participants reported having completed university education, but the differences in overall educational achievement were not significant. Compared with the general male population, fewer participants had achieved a university degree at evaluation, although this may result from some still being students and national statistics not including the youngest adults (The Netherlands, age 25+ years; Sweden, 20+ years).25,27

Table 4. Level of education and labor force participation in adult patients according to prophylactic regimen and compared with the general population

| | Netherlands: intermediate-dose, % of n = 62* | Netherlands: general population, %*† | Sweden: high-dose, % of n = 41* | Sweden: general population, %*‡ |
| --- | --- | --- | --- | --- |
| Achieved level of education | | | | |
| Compulsory/secondary | 29 | 20 | 24 | 11 |
| Upper secondary/professional | 68 | 43 | 59 | 53 |
| University | | 36 | 15 | 33 |
| Missing | 2* | | 3* | |
| Labor force participation | | | | |
| Active | 76 | 85 | 78 | 87 |
| Employed | 69 | 82 | 73 | 81 |
| Unemployed | | | | |
| Not active | 24 | 15 | 20 | 13 |
| Student | 21 | | 17 | |
| Disability allowance | | | | |
| Housekeeping | | | | |
| Missing | | | 3* | |

* Total 99% or 101% due to rounding error.
† Statistics Netherlands, year 2008, age group 25-35 y for education and 20-35 y for labor force participation, Labor Force Survey.25
‡ Statistics Sweden, year 2008, age group 20-35 y, Register of Education,27 and age group 20-34 y, Labor Force Survey.26

Among employed participants, full-time employment dominated: 38/43 (88%) of Dutch and 26/30 (87%) of Swedish patients were working full-time. Few patients were unemployed (The Netherlands, 2; Sweden, 4), and the unemployment rates among participants were similar (The Netherlands) or lower (Sweden) than those of their peers. Overall, 86% of patients receiving either regimen did not report any days lost from work or school because of hemophilia during the 5-year study period. Among patients who reported missing days from work or education on a short- or long-term basis, the median number of days lost during the 5-year period for Dutch patients (n = 11) was 202 (IQR, 39-536) days compared with 28 (IQR, 3-39) days for Swedish patients (n = 7). This large difference was driven by 5 Dutch patients: 3 patients undergoing interferon-based treatment of HCV infection who could not work for an average of 7.7 months and 2 who were disabled on a long-term basis, 1 after an intracranial bleed and another for severe arthropathy, HIV, and HCV infection.

### Costs

The mean and median 5-year treatment costs in US$1000 per patient according to prophylactic regimen and Dutch prices are shown in Table 5. For the 5-year period, median total costs per patient were 73% higher for high-dose prophylaxis: US$0.85 million (IQR, US$0.66-US$1.09 million) for Dutch vs US$1.48 million (IQR, US$1.15-US$1.79 million; P < .01) for Swedish patients. During this period, the clotting factor consumption dominated costs, making up 97.1% of costs for the intermediate-dose regimen and 99.6% for the high-dose regimen.
On average, resource use and costs were 40% to 50% lower for the intermediate-dose regimen, except for an approximately US$700 higher cost for other health care (including surgery, hospitalizations, and health care visits), which accounted for less than 1% of total costs. Using 5-year data, the cost per bleed avoided would be US$91 000.

Table 5. Five-year costs per patient according to prophylactic regimen (values in US$1000)

| Cost category | Netherlands, intermediate-dose, mean (SD) | Sweden, high-dose, mean (SD) | P | Netherlands, intermediate-dose, median (IQR) | Sweden, high-dose, median (IQR) | P |
| --- | --- | --- | --- | --- | --- | --- |
| Direct costs | | | | | | |
| Factor use | 867 (380) | 1452 (483) | <.01 | 851 (647-1046) | 1474 (1154-1776) | <.01 |
| Other health care | 5 (9) | 4 (6) | .28 | 2 (2-3) | 1 (1-3) | <.01 |
| Indirect costs | | | | | | |
| Lost production | 14 (50) | 1 (3) | .08 | 0 (0-0)* | 0 (0-0)* | .82 |
| Total costs | 886 (382) | 1457 (484) | <.01 | 852 (659-1094) | 1475 (1155-1787) | <.01 |

IQR, interquartile range; SD, standard deviation.

* Ninetieth percentile values were US$30 vs US$2 for Netherlands and Sweden, respectively, in US$1000.

Mean costs are influenced by outliers but reflect the total budget at group level, and median costs reflect the cost of the middle person in a skewed distribution. The predicted annual total costs for an average-weight patient (74 kg) were 66% higher for high-dose prophylaxis: mean US$179 600 per year (95% CI, US$163 000-US$196 200) for Dutch patients versus US$297 900 per year (95% CI, US$270 800-US$324 900) for Swedish patients (P < .01).

### Clinical implications

This study shows a statistically significant but small incremental benefit after nearly doubling the annual prophylactic dose. The benefit was observed in all outcome parameters except quality of life. This may reflect the limited clinical effects of an additional joint bleed per year or the inability of the generic EQ-5D questionnaire to pick up small differences. From a lifelong perspective, it is expected that differences in outcome between these 2 cohorts will have increased in another 20 years. However, we do not know the extent or the clinical implications of such an increase. Is the difference attributable to dose difference only? One of the drivers of the slightly better outcome in the high-dose group may be the earlier start of prophylaxis. This is well-established,42,43 and both countries have started prophylaxis earlier during the last decades.2,3 Regression analyses failed to identify a statistically significant and independent effect of age at start of prophylaxis on outcome. This unexpected finding may be a result of 2 limitations in the present data: lack of variation and limited power. Lack of variation was present in the Swedish data, as all patients started prophylaxis very early. Power was limited by small differences in outcome and limited patient numbers.
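As a purely illustrative version of the regression analyses referred to above, the sketch below fits a logistic model for HJHS ≥ 10 and a gamma GLM with log link for the HJHS score, with age at start of prophylaxis, age at evaluation, regimen, and 5-year factor consumption as covariates. The data are simulated and the variable names are assumptions; the study itself used SPSS and Stata, not this code.

```python
# Illustrative statsmodels version of the two models named in the Statistics section:
# (1) logistic regression for HJHS >= 10, (2) gamma GLM (log link) for the HJHS score.
# All data below are simulated; this is not the study dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 128
df = pd.DataFrame({
    "age_start_prophylaxis": rng.uniform(1, 7, n),
    "age_at_evaluation": rng.uniform(14, 37, n),
    "high_dose": rng.integers(0, 2, n),                  # 1 = high-dose-style regimen
    "consumption_iu_kg_5y": rng.normal(15_000, 4_000, n),
})
# Simulated outcome: higher scores with a later start of prophylaxis (an assumption).
df["hjhs"] = rng.gamma(shape=2.0, scale=(1.5 + 0.8 * df["age_start_prophylaxis"]).to_numpy())

X = sm.add_constant(df[["age_start_prophylaxis", "age_at_evaluation",
                        "high_dose", "consumption_iu_kg_5y"]])

# (1) Logistic model for the dichotomized outcome HJHS >= 10.
logit_fit = sm.Logit((df["hjhs"] >= 10).astype(int), X).fit(disp=False)

# (2) Gamma GLM with log link; a small offset is added because the gamma family
#     needs strictly positive outcomes while HJHS can be 0 (a modeling choice here).
gamma_fit = sm.GLM(df["hjhs"] + 0.5, X,
                   family=sm.families.Gamma(link=sm.families.links.Log())).fit()

print(logit_fit.params["age_start_prophylaxis"], gamma_fit.params["age_start_prophylaxis"])
```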
At this time, prophylactic dosing is mostly based on the Swedish regimen of 25 to 40 IU/kg per infusion, and dosages used in pediatric trials have been consistently high, at 25 IU/kg thrice weekly or every other day (ie, 3900 and 4550 IU/kg per year, respectively).6,7  For older patients, guidelines on dosing are unavailable,44,45  and the recommendation is to just keep this dose,45  despite the fact that adults have more regulated activity patterns, a longer FVIII half-life, and a weaker association between trough levels and bleeding.46,47 For clinical practice, it will always be important to prevent bleeding, especially in joints. Overall, these favorable results support the need for an early start of prophylaxis and continuing this treatment in adults with severe hemophilia. At patient level, the data on joint outcome suggest that a proportion of patients are equally well-off with intermediate-dose prophylaxis, whereas others need a high-dose regimen to control their bleeding. In the absence of valid laboratory parameters to assess a patients’ phenotype, clinical parameters of bleeding frequency and physical activities, combined with pharmacokinetic information,48  are the only tools available to individualize prophylactic dosing. Eventually, some adult patients even discontinued prophylaxis without experiencing frequent bleeding, as was observed in these and other cohorts.49 In conclusion, this first direct comparison of 2 prophylactic regimens suggests that at a group level, a more intensive and higher-dosed regimen may provide slightly improved outcome at a significant cost increase. At the patient level, the challenge is to identify patients who will be as well-off receiving lower doses without compromising patient safety. Even in small patient groups such as these, improving the cost-effectiveness of treatment should be considered. This work was supported by a Bayer Haemophilia Award (special project 2005). Contribution: K.F., K.S.C., P.P., M.H., R.L., H.M.v.d.B. and E.B. conceived the study; K.F., P.P., K.S.C., H.M.v.d.B., and E.B. designed the study; K.F. and E.B. secured funding; K.F., H.M.v.d.B., and E.B. managed all study procedures (ethics and governance, recruitment, patient assessment, data management); K.F. and K.S.C. planned and undertook the statistical analysis and drafted the manuscript with input from all authors; all authors had access to the data and analysis and approved the final manuscript; and K.F. is the guarantor. Conflict-of-interest disclosure: All authors are independent of the funding source. K.F. has acted as a consultant and participated in expert groups for Bayer, Baxter, Biogen, and Novo Nordisk, has received research grants from Baxter, Novo Nordisk, Pfizer, and CSL Behring, has given lectures for Bayer, Baxter, Novo Nordisk, and Pfizer, and has received travel support from Baxter. K.S.C. has acted as a consultant for Baxter, has received research grants from LIF, and has given lectures for Bayer. P.P. has participated in an advisory board for Pfizer, has given lectures for Bayer, Baxter, and Pfizer, and has received travel support from Bayer, Baxter, and Pfizer. M.H. has received research grants from Octapharma and Baxter, and has given lectures for Baxter, Bayer, CSL Behring, and Leo-Pharma. R.L. has acted as a consultant and participated in expert groups for Bayer and Novo Nordisk, has received research grants from Baxter, and has given lectures for Bayer, Baxter, and Novo Nordisk. H.M.v.d.B. 
has received unrestricted funding from Baxter, Bayer, and Novo Nordisk, has acted as a consultant for Bayer, and has given lectures for Bayer and Baxter. E.B. has acted as a consultant and participated in expert groups for Bayer, Baxter, Novo Nordisk, Sobi, Octapharma, and CSL-Behring, and has given lectures for Bayer, Baxter, Novo Nordisk, Pfizer, Octapharma, Sobi, and CSL Behring. Correspondence: Kathelijn Fischer, University Medical Center Utrecht, Van Creveldkliniek, Room C01.425, PO Box 85500, 3508 GA Utrecht, The Netherlands; e-mail: k.fischer@umcutrecht.nl. 1 Nilsson IM Hedner U Ahlberg A Haemophilia prophylaxis in Sweden. Acta Paediatr Scand 1976 , vol. 65 2 (pg. 129 - 135 ) 2 Nilsson IM Berntorp E Löfqvist T H Twenty-five years’ experience of prophylactic treatment in severe haemophilia A and B. J Intern Med 1992 , vol. 232 1 (pg. 25 - 32 ) 3 Fischer K van der Bom JG Mauser-Bunschoten EP Roosendaal G Prejs R Grobbee DE van den Berg HM Changes in treatment strategies for severe haemophilia over the last 3 decades: effects on clotting factor consumption and arthropathy. Haemophilia 2001 , vol. 7 5 (pg. 446 - 452 ) 4 Steen Carlsson K Höjgård S Glomstein A , et al. On-demand vs. prophylactic treatment for severe haemophilia in Norway and Sweden: differences in treatment characteristics and outcome. Haemophilia 2003 , vol. 9 5 (pg. 555 - 566 ) 5 Khawaji M Astermark J Berntorp E Lifelong prophylaxis in a large cohort of adult patients with severe haemophilia: a beneficial effect on orthopaedic outcome and quality of life. Eur J Haematol 2012 , vol. 88 4 (pg. 329 - 335 ) 6 Manco-Johnson MJ Abshire TC Shapiro , et al. Prophylaxis versus episodic treatment to prevent joint disease in boys with severe hemophilia. N Engl J Med 2007 , vol. 357 6 (pg. 535 - 544 ) 7 Gringeri A Lundin B von Mackensen S Mantovani L Mannucci PM ESPRIT Study Group A randomized clinical trial of prophylaxis in children with hemophilia A (the ESPRIT Study). J Thromb Haemost 2011 , vol. 9 4 (pg. 700 - 710 ) 8 Iorio A Marchesini E Marcucci M Stobart K Chan AK Clotting factor concentrates given to prevent bleeding and bleeding-related complications in people with hemophilia A or B. Cochrane Database Syst Rev. 2011;(9):CD003429 9 Risebrough N Oh P Blanchette V Curtin J Hitzler J Feldman BM Cost-utility analysis of Canadian tailored prophylaxis, primary prophylaxis and on-demand therapy in young children with severe haemophilia A. Haemophilia 2008 , vol. 14 4 (pg. 743 - 752 ) 10 Carlsson KS Höjgård S Lindgren A , et al. Costs of on-demand and prophylactic treatment for severe haemophilia in Norway and Sweden. Haemophilia 2004 , vol. 10 5 (pg. 515 - 526 ) 11 Berntorp E Boulyjenkov V Brettler D , et al. Modern treatment of haemophilia. Bull World Health Organ 1995 , vol. 73 5 Suppl1 (pg. 691 - 701 ) 12 Van Creveld S Prophylaxis of joint hemorrhages in hemophilia. Acta Haematol 1971 , vol. 45 2 (pg. 120 - 127 ) 13 Löfqvist T Nilsson IM Berntorp E H Haemophilia prophylaxis in young patients—a long-term follow-up. J Intern Med 1997 , vol. 241 5 (pg. 395 - 400 ) 14 Berntorp E Astermark J Baghaei F , et al. Treatment of haemophilia A and B and von Willebrand’s disease: summary and conclusions of a systematic review as part of a Swedish health-technology assessment. Haemophilia 2012 , vol. 18 2 (pg. 158 - 165 ) 15 Fischer K Grobbee DE van den Berg HM RCTs and observational studies to determine the effect of prophylaxis in severe haemophilia. Haemophilia 2007 , vol. 13 4 (pg. 345 - 350 ) 16 Hilliard P Funk S Zourikian N , et al. 
Hemophilia joint health score reliability study. Haemophilia 2006 , vol. 12 5 (pg. 518 - 525 ) 17 Feldman BM Funk SM Bergstrom BM , et al. Validation of a new pediatric joint scoring system from the International Hemophilia Prophylaxis Study Group: validity of the hemophilia joint health score. Arthritis Care Res (Hoboken) 2011 , vol. 63 2 (pg. 223 - 230 ) 18 Fischer K de Kleijn P Using the Haemophilia Joint Health Score for assessment of teenagers and young adults: exploring reliability and validity. Haemophilia. Prepublished on June 4 2013 as DOI: 10.1111/hae.12197 19 van Genderen FR van Meeteren NL van der Bom JG Heijnen L de Kleijn P van den Berg HM Helders PJ Functional consequences of haemophilia in adults: the development of the Haemophilia Activities List. Haemophilia 2004 , vol. 10 5 (pg. 565 - 571 ) 20 van Genderen FR Westers P Heijnen L de Kleijn P van den Berg HM Helders PJ van Meeteren NL Measuring patients’ perceptions on their functional abilities: validation of the Haemophilia Activities List. Haemophilia 2006 , vol. 12 1 (pg. 36 - 46 ) 21 Brodin E Baghaei F Elfvinger P Lindvall K Sunnerhagen KS The Swedish version of the Haemophilia Activity List. Haemophilia 2011 , vol. 17 4 (pg. 662 - 668 ) 22 Craig CL Marshall AL Sjöström M , et al. International physical activity questionnaire: 12-country reliability and validity. Med Sci Sports Exerc 2003 , vol. 35 8 (pg. 1381 - 1395 ) 23 EuroQol Group EuroQol—a new facility for the measurement of health-related quality of life. The EuroQol Group. Health Policy 1990 , vol. 16 3 (pg. 199 - 208 ) 24 Lamers LM Stalmeier PF McDonnell J Krabbe PF van Busschbach JJ Measuring the quality of life in economic evaluations: the Dutch EQ-5D tariff. Ned Tijdschr Geneeskd 2005 , vol. 149 28 (pg. 1574 - 1578 ) 25 Centraal Bureau voor de Statistiek-Statistics Netherlands-Labour Force Survey. http://www.cbs.nl/en-GB/menu/methoden/dataverzameling/dutch-labour-force-survey-characteristics.html. Accessed February 14, 2012 26 Sweden S 27 Sweden S The Swedish Register of Education. http://www.scb.se/statistik/UF/UF0506/Produktbeskrivning_short_English_UF0506_20040101r.doc. Accessed February 14, 2012 28 College voor Zorgverzekeringen 29 Kompas F http://www.fk.cvz.nl. Accessed February 1 2012 30 Drummond MJ Sculpher MJ Torrance GW O'Brien BJ Stoddard GL Methods for the Economic Evaluation of Health Care Programmes 2005 Oxford Oxford University Press 31 van Dijk K Fischer K van der Bom JG Grobbee DE van den Berg HM Variability in clinical phenotype of severe haemophilia: the role of the first joint bleed. Haemophilia 2005 , vol. 11 5 (pg. 438 - 443 ) 32 Stolk E Krabbe P Busschbach J Stolk EA Krabbe P Busschbach J (2009) Using the internet to collect EQ-5D norm scores: a valid alternative? In: Busschbach J, Rabin R, De Charro F, eds. Proceedings of the 24th Scientific Plenary Meeting of the EuroQol Group. September 13-15, 2007; Kijkduin-The Hague, The Netherlands 33 Burström K Johannesson M Diderichsen F Swedish population health-related quality of life results using the EQ-5D. Qual Life Res 2001 , vol. 10 7 (pg. 621 - 635 ) 34 de Moerloose P Fischer K Lambert T , et al. Recommendations for assessment, monitoring and follow-up of patients with haemophilia. Haemophilia 2012 , vol. 18 3 (pg. 319 - 325 ) 35 Fischer K Astermark J van der Bom JG Ljung R Berntorp E Grobbee DE van den Berg HM Prophylactic treatment for severe haemophilia: comparison of an intermediate-dose to a high-dose regimen. Haemophilia 2002 , vol. 8 6 (pg. 
753 - 760 ) 36 Miners AH Economic evaluations of prophylaxis with clotting factor for people with severe haemophilia: why do the results vary so much? Haemophilia 2013 , vol. 19 2 (pg. 174 - 180 ) 37 Molho P Rolland N Lebrun T , et al. Epidemiological survey of the orthopedic status of severe haemophilia A and B patients in France. Haemophilia 2000 , vol. 6 1 (pg. 23 - 32 ) 38 Zhou ZY Wu J Baker J , et al. Haemophilia utilization group study - Part Va (HUGS Va): design, methods and baseline data. Haemophilia 2011 , vol. 17 5 (pg. 729 - 736 ) 39 Aznar JA Marco A Jiménez-Yuste V , et al. Spanish Haemophilia Epidemiological Study Working Group Is on-demand treatment effective in patients with severe haemophilia? Haemophilia 2012 , vol. 18 5 (pg. 738 - 742 ) 40 Miners A Revisiting the cost-effectiveness of primary prophylaxis with clotting factor for the treatment of severe haemophilia A. Haemophilia 2009 , vol. 15 4 (pg. 881 - 887 ) 41 Carlsson KS Höjgård S Lethagen S Lindgren A Berntorp E Lindgren B Willingness to pay for on-demand and prophylactic treatment for severe haemophilia in Sweden. Haemophilia 2004 , vol. 10 5 (pg. 527 - 541 ) 42 Astermark J Petrini P Tengborn L Schulman S Ljung R Berntorp E Primary prophylaxis in severe haemophilia should be started at an early age but can be individualized. Br J Haematol 1999 , vol. 105 4 (pg. 1109 - 1113 ) 43 Fischer K van der Bom JG Mauser-Bunschoten EP , et al. The effects of postponing prophylactic treatment on long-term outcome in patients with severe hemophilia. Blood 2002 , vol. 99 7 (pg. 2337 - 2341 ) 44 Richards M Williams M Chalmers E Liesner R Collins P Vidler V Hanley J Paediatric Working Party of the United Kingdom Haemophilia Doctors’ Organisation A United Kingdom Haemophilia Centre Doctors’ Organization guideline approved by the British Committee for Standards in Haematology: guideline on the use of prophylactic factor VIII concentrate in children and adults with severe haemophilia A. Br J Haematol 2010 , vol. 149 4 (pg. 498 - 507 ) 45 National Hemophilia Foundation MASAC recommendations concerning prophylaxis (regular administration of clotting factor concentrate to prevent bleeding). http://www.hemophilia.org/NHFWeb/MainPgs/MainNHF.aspx?menuid=57&contentid=1007. Accessed 4-11-2007 46 Fischer K Collins P Björkman S , et al. Trends in bleeding patterns during prophylaxis for severe haemophilia: observations from a series of prospective clinical trials. Haemophilia 2011 , vol. 17 3 (pg. 433 - 438 ) 47 Collins PW Blanchette VS Fischer K , et al. rAHF-PFM Study Group Break-through bleeding in relation to predicted factor VIII levels in patients receiving prophylactic treatment for severe hemophilia A. J Thromb Haemost 2009 , vol. 7 3 (pg. 413 - 420 ) 48 Collins PW Fischer K Morfini M Blanchette VS Björkman S International Prophylaxis Study Group Pharmacokinetics Expert Working Group Implications of coagulation factor VIII and IX pharmacokinetics in the prophylactic treatment of haemophilia. Haemophilia 2011 , vol. 17 1 (pg. 2 - 10 ) 49 van Dijk K Fischer K van der Bom JG Scheibel E Ingerslev J van den Berg HM Can long-term prophylaxis for severe haemophilia be stopped in adulthood? Results from Denmark and the Netherlands. Br J Haematol 2005 , vol. 130 1 (pg. 107 - 112 )
auto_math_text
web
Outlook: AURA RENEWABLE ACQUISITIONS PLC is assigned short-term Ba1 & long-term Ba1 estimated rating. Dominant Strategy : Sell Time series to forecast n: 13 Jan 2023 for (n+6 month) Methodology : Transfer Learning (ML) ## Abstract AURA RENEWABLE ACQUISITIONS PLC prediction model is evaluated with Transfer Learning (ML) and Paired T-Test1,2,3,4 and it is concluded that the LON:ARA stock is predictable in the short/long term. According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Sell ## Key Points 1. What is the use of Markov decision process? 2. What is a prediction confidence? 3. Stock Rating ## LON:ARA Target Price Prediction Modeling Methodology We consider AURA RENEWABLE ACQUISITIONS PLC Decision Process with Transfer Learning (ML) where A is the set of discrete actions of LON:ARA stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4 F(Paired T-Test)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Transfer Learning (ML)) X S(n):→ (n+6 month) $\stackrel{\to }{S}=\left({s}_{1},{s}_{2},{s}_{3}\right)$ n:Time series to forecast p:Price signals of LON:ARA stock j:Nash equilibria (Neural Network) k:Dominated move a:Best response for target price For further technical information as per how our model work we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work? ## LON:ARA Stock Forecast (Buy or Sell) for (n+6 month) Sample Set: Neural Network Stock/Index: LON:ARA AURA RENEWABLE ACQUISITIONS PLC Time series to forecast n: 13 Jan 2023 for (n+6 month) According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Sell X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.) Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.) Z axis (Grey to Black): *Technical Analysis% ## IFRS Reconciliation Adjustments for AURA RENEWABLE ACQUISITIONS PLC 1. An entity shall amend a hedging relationship as required in paragraph 6.9.1 by the end of the reporting period during which a change required by interest rate benchmark reform is made to the hedged risk, hedged item or hedging instrument. For the avoidance of doubt, such an amendment to the formal designation of a hedging relationship constitutes neither the discontinuation of the hedging relationship nor the designation of a new hedging relationship. 2. When a group of items that constitute a net position is designated as a hedged item, an entity shall designate the overall group of items that includes the items that can make up the net position. An entity is not permitted to designate a non-specific abstract amount of a net position. For example, an entity has a group of firm sale commitments in nine months' time for FC100 and a group of firm purchase commitments in 18 months' time for FC120. The entity cannot designate an abstract amount of a net position up to FC20. Instead, it must designate a gross amount of purchases and a gross amount of sales that together give rise to the hedged net position. 
An entity shall designate gross positions that give rise to the net position so that the entity is able to comply with the requirements for the accounting for qualifying hedging relationships. 3. When an entity designates a financial liability as at fair value through profit or loss, it must determine whether presenting in other comprehensive income the effects of changes in the liability's credit risk would create or enlarge an accounting mismatch in profit or loss. An accounting mismatch would be created or enlarged if presenting the effects of changes in the liability's credit risk in other comprehensive income would result in a greater mismatch in profit or loss than if those amounts were presented in profit or loss 4. Expected credit losses reflect an entity's own expectations of credit losses. However, when considering all reasonable and supportable information that is available without undue cost or effort in estimating expected credit losses, an entity should also consider observable market information about the credit risk of the particular financial instrument or similar financial instruments. *International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS. ## Conclusions AURA RENEWABLE ACQUISITIONS PLC is assigned short-term Ba1 & long-term Ba1 estimated rating. AURA RENEWABLE ACQUISITIONS PLC prediction model is evaluated with Transfer Learning (ML) and Paired T-Test1,2,3,4 and it is concluded that the LON:ARA stock is predictable in the short/long term. According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Sell ### LON:ARA AURA RENEWABLE ACQUISITIONS PLC Financial Analysis* Rating Short-Term Long-Term Senior Outlook*Ba1Ba1 Income StatementB3Baa2 Balance SheetBa1C Leverage RatiosCaa2B2 Cash FlowBaa2Caa2 Rates of Return and ProfitabilityB2Ba1 *Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand financial state of the company? ### Prediction Confidence Score Trust metric by Neural Network: 90 out of 100 with 727 signals. ## References 1. Abadie A, Diamond A, Hainmueller J. 2010. Synthetic control methods for comparative case studies: estimat- ing the effect of California's tobacco control program. J. Am. Stat. Assoc. 105:493–505 2. Wan M, Wang D, Goldman M, Taddy M, Rao J, et al. 2017. Modeling consumer preferences and price sensitiv- ities from large-scale grocery shopping transaction logs. In Proceedings of the 26th International Conference on the World Wide Web, pp. 1103–12. New York: ACM 3. Varian HR. 2014. Big data: new tricks for econometrics. J. Econ. Perspect. 28:3–28 4. Bottou L. 1998. Online learning and stochastic approximations. In On-Line Learning in Neural Networks, ed. D Saad, pp. 9–42. New York: ACM 5. E. van der Pol and F. A. Oliehoek. Coordinated deep reinforcement learners for traffic light control. NIPS Workshop on Learning, Inference and Control of Multi-Agent Systems, 2016. 6. Chow, G. C. 
(1960), "Tests of equality between sets of coefficients in two linear regressions," Econometrica, 28, 591–605. 7. M. Benaim, J. Hofbauer, and S. Sorin. Stochastic approximations and differential inclusions, Part II: Appli- cations. Mathematics of Operations Research, 31(4):673–695, 2006 Frequently Asked QuestionsQ: What is the prediction methodology for LON:ARA stock? A: LON:ARA stock prediction methodology: We evaluate the prediction models Transfer Learning (ML) and Paired T-Test Q: Is LON:ARA stock a buy or sell? A: The dominant strategy among neural network is to Sell LON:ARA Stock. Q: Is AURA RENEWABLE ACQUISITIONS PLC stock a good investment? A: The consensus rating for AURA RENEWABLE ACQUISITIONS PLC is Sell and is assigned short-term Ba1 & long-term Ba1 estimated rating. Q: What is the consensus rating of LON:ARA stock? A: The consensus rating for LON:ARA is Sell. Q: What is the prediction period for LON:ARA stock? A: The prediction period for LON:ARA is (n+6 month)
auto_math_text
web
Open Access Publications from the University of California

## Searches for Exotic Baryon Number-Violating Processes at Super-Kamiokande

Abstract

Various theoretical considerations suggest that baryon number should be violated. Nucleon decay, which typically appears within the context of unified theories, would provide a definitive signature of baryon number violation. In this dissertation, we report on the search results for $p \rightarrow e^+X$, $p \rightarrow \mu^+X$ (where $X$ is an invisible, massless particle), $n \rightarrow \nu\gamma$, $p \rightarrow e^+\nu\nu$, $p \rightarrow \mu^+\nu\nu$, $np \rightarrow e^+\nu$, $np \rightarrow \mu^+\nu$ and $np \rightarrow \tau^+\nu$ nucleon and dinucleon decays at the Super-Kamiokande experiment. Some of these searches are novel. Using data from a combined exposure of 273.4 kton$\cdot$years and a $\chi^2$ spectral fitting technique, a search for these decays yields a result consistent with no signal. Accordingly, lower limits on the partial lifetimes of $\tau_{p \rightarrow e^+ X} > 7.9 \times 10^{32}$ years, $\tau_{n \rightarrow \nu\gamma} > 5.5 \times 10^{32}$ years, $\tau_{p \rightarrow \mu^+ X} > 4.1 \times 10^{32}$ years, $\tau_{p \rightarrow e^+\nu\nu} > 1.7 \times 10^{32}$ years, $\tau_{p \rightarrow \mu^+\nu\nu} > 2.2 \times 10^{32}$ years, $\tau_{np \rightarrow e^+\nu} > 2.6 \times 10^{32}$ years, $\tau_{np \rightarrow \mu^+\nu} > 2.0 \times 10^{32}$ years and $\tau_{np \rightarrow \tau^+\nu} > 3.0 \times 10^{31}$ years at a 90\% confidence level are obtained. These results provide a stringent test of new physics and also limit the parameter space of models that allow for such processes.
auto_math_text
web
OBJECTIVE—To assess the cost and cost effectiveness of hydroxymethylglutaryl (HMG)-CoA reductase inhibitor (statin) therapy for the primary prevention of major coronary events in the U.S. population with diabetes and LDL cholesterol levels ≥100 mg/dl, especially in the population with LDL cholesterol levels 100–129 mg/dl.

RESEARCH DESIGN AND METHODS—Analyses were performed using population estimates from National Health and Nutrition Examination Survey (NHANES)-III, cost estimates from a health system perspective, statin LDL-lowering effectiveness from pivotal clinical trials, and treatment effectiveness from the diabetic subgroup analysis of the Heart Protection Study.

RESULTS—There are ∼8.2 million Americans with diabetes, LDL cholesterol levels ≥100 mg/dl, and no clinical evidence of cardiovascular disease. Each year, statin therapy could prevent ∼71,000 major coronary events in this population. In the subgroup with LDL cholesterol levels 100–129 mg/dl, the annual cost of statin treatment ranges from $600 to $1,000 per subject. In the population with LDL cholesterol levels ≥130 mg/dl, the annual cost ranges from $700 to $2,100. Annual incremental cost per subject, defined as the cost of statin treatment plus the cost of major coronary events with statin treatment minus the cost of major coronary events without statin treatment, ranges from $480 to $950 in the subgroup with LDL cholesterol levels 100–129 mg/dl and from $590 to $1,920 in the population with LDL cholesterol levels ≥130 mg/dl.

CONCLUSIONS—Statin therapy for the primary prevention of major coronary events in subjects with type 2 diabetes and LDL cholesterol levels 100–129 mg/dl is affordable and cost effective relative to statin therapy in subjects with higher LDL cholesterol levels.

Cardiovascular disease (CVD) is the major cause of morbidity and mortality in subjects with type 2 diabetes (1,2). Hydroxymethylglutaryl (HMG)-CoA reductase inhibitors (statins) reduce major coronary events and total mortality in diabetic subjects with coronary heart disease (CHD) (3–5). More recently, a primary prevention study (6) suggested and the Heart Protection Study (HPS) demonstrated that in a large subgroup of participants with diabetes and no history of CHD, statin treatment significantly reduces major coronary events (7). The risk of myocardial infarction in diabetic subjects without CHD is as great as in nondiabetic subjects with CHD (8,9). These observations led the American Diabetes Association (ADA) to recommend that in diabetic subjects, hypercholesterolemia be treated as aggressively as in nondiabetic subjects with known CHD (10). The ADA recommends an LDL cholesterol goal <100 mg/dl (2.6 mmol/l) for all patients with diabetes but does not explicitly recommend pharmacological therapy for patients with LDL cholesterol levels between 100 and 129 mg/dl who do not have CVD (10). The Third National Health and Nutrition Examination Survey (NHANES III) has demonstrated that 29% of Americans with type 2 diabetes and no CVD have LDL cholesterol levels at 100–129 mg/dl and that 56% have LDL cholesterol levels ≥130 mg/dl. The goal of this study was to assess the economic implications of statin therapy for the primary prevention of major coronary events (fatal and nonfatal myocardial infarction [MI] and coronary revascularization) in the U.S. population with diabetes and LDL cholesterol levels at 100–129 mg/dl. We analyzed the cost and cost effectiveness of statin treatment for the primary prevention of major coronary events in the U.S.
population with diabetes using population estimates from NHANES III, cost estimates from the perspective of a large health system, statin LDL-lowering effectiveness from pivotal clinical trials, and the health effects of lowering LDL cholesterol from the HPS (7). ### Population estimates NHANES III was conducted by the National Center for Health Statistics between 1988 and 1994. NHANES III included a nationally representative probability sample of the U.S. civilian noninstitutionalized population, identified through a complex multistage cluster sampling design. We applied weights to account for the unequal probabilities of selection, planned oversampling, and differential nonresponse. Description of the standardized protocols has been published (11). In NHANES III, there were 1,509 subjects with self-reported diabetes and 962 subjects with newly diagnosed diabetes. Of the 2,471 subjects with diabetes, 980 had a proper sampling session and had an assigned weight, and 472 of them had LDL cholesterol measured and no history of MI, angina, or chest pain. The 472 diabetic subjects without a history of MI, angina, or chest pain who were eligible for primary prevention were included in this analysis. The subjects were stratified by 10-mg/dl increments of LDL cholesterol. The distribution of LDL cholesterol levels was then extrapolated to the U.S. population with diagnosed and undiagnosed diabetes and no history of CHD (n = 10,580,000), and the number of diabetic individuals in each stratum was calculated. ### Costs The costs of treatment with statins and of major coronary events were assessed from the perspective of a large health system. The costs of lipid-lowering medication were taken as the 2002 Red Book Average Wholesale Price (AWP). Costs of drug monitoring and adverse experiences were adapted from a cost analysis of the Scandinavian Simvastatin Survival Study (4S) and adjusted to year 2002 U.S. dollars (12). Drug monitoring costs included the cost of lipid profiles and liver function tests and were $51.30 per subject per year. Adverse events with statin treatment are rare (7) and cost$0.40 per subject per year. The costs of major coronary events were adapted from Grover et al. (13). These included the direct medical costs of fatal and nonfatal MI, coronary artery bypass graft surgery (CABG), and percutaneous transluminal coronary angioplasty (PTCA). In the HPS, 59% of all subjects with a fatal or nonfatal MI underwent coronary revascularization. We assumed that one-half of revascularizations were CABG and one-half were PTCA. The average cost per major coronary event was adjusted to year 2002 U.S. dollars and was $24,445. ### Statin effectiveness The LDL cholesterol–lowering effectiveness of the available statin medications were derived from pivotal clinical trials as summarized in a clinical practice guideline (14). The treatment goal was a LDL cholesterol level ≤100 mg/dl. For each LDL cholesterol stratum, we applied the dosage of available statins that reduced the LDL cholesterol level to target. For example, subjects with LDL cholesterol levels of 100–129 mg/dl could be treated with 10 mg of atorvastatin, 10 mg of simvastatin, 20 mg of lovastatin, 40 mg of fluvastatin, or 20 mg of pravastatin per day to achieve LDL cholesterol levels ≤100 mg/dl (Table 1). Whereas all statins could achieve LDL cholesterol levels <100 mg/dl in subjects with LDL cholesterol levels of 100–129 mg/dl, only one could do so for subjects with LDL cholesterol levels ≥190 mg/dl (Table 1). 
### Treatment effectiveness The HPS was a large randomized, placebo-controlled trial of simvastatin in individuals with CHD, other occlusive arterial disease, or diabetes and total cholesterol levels of at least 135 mg/dl (3.5 mmol/l) (7). Of the 20,536 individuals enrolled, 7,150 had no history of CHD and 3,982 of them had diabetes. In the 7,150 without CHD, the mean LDL cholesterol level was 89 mg/dl (2.3 mmol/l) in the treatment group and 128 mg/dl (3.3 mmol/l) in the untreated group. The incidence of major coronary events (nonfatal MI or death from CHD) was 11.0/1,000 person-years in the diabetic subgroup treated with simvastatin and 16.7/1,000 person-years in the diabetic subgroup treated with placebo (15). A 39-mg/dl (1-mmol/l) decrease in LDL cholesterol thus reduced the incidence of major coronary events by 5.7 events/1,000 person-years (34%) in the diabetic subgroup. In the U.K. Prospective Diabetes Study (UKPDS), LDL cholesterol at baseline was a major risk factor for CHD (16). Fitting the risk factors for CHD as continuous variables indicated that a 39-mg/dl (1-mmol/l) decrease in LDL cholesterol was associated with a 36% reduction in the risk of CHD (16). The incidence of major coronary events per year for statin-treated subjects was calculated using data from the HPS. The incidence of major coronary events for untreated subjects was calculated using data from the HPS (16.7 events per 1,000 person-years for LDL cholesterol 128 mg/dl) and the UKPDS (36% change in risk per each 39-mg/dl change in LDL cholesterol). The difference in incidence of major coronary events between the two groups represents the number of major coronary events per year prevented with statin therapy (Table 1). ### Cost effectiveness analysis Costs were calculated under two hypothetical scenarios, first assuming that all subjects were treated with statins to an LDL cholesterol level <100 mg/dl and then assuming that subjects were treated as they were in the NHANES III and that no additional statin therapy was prescribed. Under the first scenario, costs were calculated as those of statin therapy, drug monitoring, adverse experiences, and major coronary events. Under the second scenario, costs were calculated as the costs of major coronary events only. The difference in costs between the treatment and nontreatment scenarios (incremental cost) was described both on a per patient basis and for the U.S. population with diabetes and no history of CHD. ### Sensitivity analyses Sensitivity analyses were performed to assess the impact of plausible changes in underlying assumption on the results. The base-case analysis was performed with the least-expensive statin for each LDL cholesterol stratum. Sensitivity analyses were performed by increasing or decreasing the cost of statins or major coronary events by 25% or the incidence of major coronary events with or without statin treatment by 25%. Increasing or decreasing the cost of major coronary events by 25% changed the cost from approximately$24,400 to $30,600 or$18,300. If all subjects requiring revascularization underwent CABG and none PTCA, the cost of a major coronary event would be $30,400, similar to the upper bound of the sensitivity analysis. Likewise, if all subjects underwent PTCA and none CABG the cost of a major coronary event would be$18,500, similar to the lower bound. All analyses were performed using Excel spreadsheets and DATA 3.0 decision analysis software (TreeAge Software, Williamstown, MA). 
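The published analysis was implemented in Excel and TreeAge, but the core calculation described in the last three subsections can be sketched in a few lines: take the treated event rate from the HPS diabetic subgroup (11.0 events/1,000 person-years), project the untreated rate from the HPS reference point (16.7/1,000 person-years at an LDL cholesterol of 128 mg/dl) with the UKPDS gradient of 36% per 39 mg/dl, and combine event costs with statin, monitoring, and adverse-event costs. The multiplicative extrapolation across LDL strata and the example statin price below are simplifying assumptions for illustration, not the authors' exact model.

```python
# Hedged sketch of the incremental-cost calculation (not the authors' exact model).
EVENT_COST = 24_445            # US$ per major coronary event (2002 dollars)
MONITORING = 51.30             # US$ per subject per year (lipid profiles, liver tests)
ADVERSE = 0.40                 # US$ per subject per year
TREATED_RATE = 11.0 / 1000     # HPS diabetic subgroup, events per person-year on statin

def untreated_rate(ldl_mg_dl: float) -> float:
    """Untreated event rate per person-year, extrapolated from the HPS reference point
    (16.7/1,000 person-years at LDL 128 mg/dl) with the UKPDS gradient of 36% per
    39 mg/dl; the multiplicative form is an assumption made for this sketch."""
    return (16.7 / 1000) * (1 - 0.36) ** ((128 - ldl_mg_dl) / 39)

def incremental_cost_per_subject(ldl_mg_dl: float, annual_statin_cost: float) -> float:
    """Cost of statin therapy plus event costs on treatment, minus event costs off treatment."""
    cost_treated = annual_statin_cost + MONITORING + ADVERSE + TREATED_RATE * EVENT_COST
    cost_untreated = untreated_rate(ldl_mg_dl) * EVENT_COST
    return cost_treated - cost_untreated

# Example: a subject at LDL 115 mg/dl, assuming roughly US$650/year for an inexpensive statin.
print(round(incremental_cost_per_subject(115, annual_statin_cost=650)))
```

With these inputs the per-subject figure lands in the same general range as the published estimates for the 100–129 mg/dl stratum, but the exact value depends on the statin chosen and on how the untreated event rate is extrapolated across LDL levels.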
Based on data from the NHANES III, we estimated that ∼1.6 million Americans with diabetes and without CHD have LDL cholesterol levels <100 mg/dl, 3.0 million have levels of 100–129 mg/dl, 2.2 million have levels of 130–149 mg/dl, 2.3 million have levels of 150–169 mg/dl, 700,000 have levels of 170–189 mg/dl, and 700,000 have LDL cholesterol levels ≥190 mg/dl. Based on treatment effectiveness data, we assigned various statins and dosages to achieve LDL cholesterol levels <100 mg/dl (Table 1). The annual per capita costs of statin therapy (including medication, drug monitoring, and adverse experiences) and total costs of statin therapy for each LDL cholesterol stratum are shown in Table 2. Treatment of LDL cholesterol levels between 100 and 129 mg/dl to achieve LDL cholesterol levels <100 mg/dl cost $600 to $1,000 per patient per year depending on the statin prescribed. Annual per capita costs of statin therapy ranged from $700 to $2,100 in the groups with LDL cholesterol levels ≥130 mg/dl. If subjects with LDL cholesterol levels between 100 and 129 mg/dl are treated with the least expensive statin, total annual costs are $1.8 billion. Treatment of all subjects with LDL cholesterol levels ≥130 mg/dl costs $6.5 to $10.6 billion.

With treatment effectiveness data from the HPS and UKPDS, we estimated that ∼101,000 major coronary events per year would occur if the population was treated with statins and ∼172,000 events would occur if the population was not treated with statins. Statin treatment would thus prevent ∼71,000 major coronary events per year in the U.S. population with diabetes and no CHD, 18% (13,000) of these in the population with LDL cholesterol levels of 100–129 mg/dl. The incremental cost of statin treatment may be defined as the cost of statin therapy (medication, monitoring, and adverse events) plus the cost of major coronary events if the population is treated with statins minus the cost of major coronary events if the population is not treated with statins. Each major coronary event costs ∼$24,400. In the subgroup with LDL cholesterol levels between 100 and 129 mg/dl, major coronary events would cost approximately $0.77 billion per year if the subgroup was treated with statins and $1.09 billion per year if the subgroup was not treated with statins. In the population with LDL cholesterol levels ≥130 mg/dl, major coronary events would cost approximately $1.70 billion per year if subjects were treated with statins and $3.13 billion per year if subjects were not treated with statins. The incremental costs per subject and for the population are shown in Table 3. The incremental cost per subject ranged from $480 to $950 per year in the subgroup with LDL cholesterol levels between 100 and 129 mg/dl and from $590 to $1,920 per year in the population with LDL cholesterol levels ≥130 mg/dl. The incremental cost of statin treatment per subject generally increased with higher baseline LDL cholesterol levels. If the least expensive statin was prescribed for each LDL cholesterol stratum, the incremental cost of statin treatment per subject would range from $480 to $1,050 per year depending on the baseline LDL cholesterol level.

### Sensitivity analysis

Sensitivity analyses are shown in Table 4. If the cost of major coronary events, the incidence of major coronary events without statin treatment, or the incidence of major coronary events with statin treatment increased or decreased by 25%, the incremental costs would change only modestly.
The incremental costs of treatment are most sensitive to changes in the cost of statin therapy. If the cost of statin therapy was 25% higher than in the base-case analysis, the incremental cost would increase by one-third and range from $620 to $1,390 per subject with diabetes. Similarly, if the cost of statin treatment was 25% lower, the incremental cost would decrease by about one-third and range from $330 to $720 per subject with diabetes.

For diabetic subjects without CHD, the ADA recommends starting pharmacological therapy for LDL cholesterol levels ≥130 mg/dl with the treatment goal at <100 mg/dl (10). In subjects with LDL cholesterol levels between 100 and 129 mg/dl, a variety of treatment strategies have been recommended, including aggressive medical nutrition therapy and statin therapy (10). About 3.0 million subjects or 29% of the U.S. population with diabetes and without CHD have LDL cholesterol levels between 100 and 129 mg/dl. Prescribing statin therapy for this group would cost between $1.8 and $3.2 billion for the U.S. health system. The cost of treating such subjects with the least expensive statin ($1.8 billion) is less than half the difference in the costs of treating subjects with LDL cholesterol levels ≥130 mg/dl with the most expensive statin versus the least expensive statin ($4.1 billion). The incremental cost of statin treatment is generally lower in the diabetic subgroup with LDL cholesterol levels at 100–129 mg/dl than in those with LDL cholesterol levels ≥130 mg/dl due to lower medication costs. Sensitivity analyses indicate that the incremental costs of statin treatment are most sensitive to changes in the cost of statin therapy. Thus, the use of the least expensive effective statin within each LDL cholesterol stratum, including the use of generic statins, would decrease the incremental cost substantially. The recommendation for aggressive LDL cholesterol–lowering in the diabetic population with LDL cholesterol levels of 100–129 mg/dl is supported by findings in observational studies and large randomized controlled clinical trials (7,16). The observational findings of the UKPDS indicate that a 39-mg/dl (1-mmol/l) decrease in LDL cholesterol is associated with a 36% reduction in the risk of CHD (16). The HPS demonstrated that in diabetic subjects with no preexisting CHD, statin therapy that lowered LDL cholesterol by 39 mg/dl (1 mmol/l) reduced major vascular events (major coronary events, strokes of any type, and coronary and noncoronary revascularizations) by 25% and major coronary events by 34% (7,15). More importantly, the study demonstrated that lowering LDL cholesterol from <3 mmol/l (116 mg/dl) to <2 mmol/l (77 mg/dl) reduced the risk of major vascular events by one-quarter. Economic analyses of statin therapy have been performed for diabetic subjects with and without CHD. A post hoc subgroup analysis from the 4S that examined lipid-lowering treatment in 202 diabetic subjects with CHD revealed that simvastatin reduced CVD-related hospitalizations and total hospital days and generated net savings of $1,801 (1998 U.S. dollars) in direct medical cost per subject (12). Grover et al. (13) used a Markov model to compare the long-term costs and benefits of treating dyslipidemia in diabetic patients without CVD. Treatment with simvastatin among diabetic subjects without CVD cost between $5,063 and $23,792 (1998 U.S. dollars) per year of life saved (13). The study by Grover et al. differs from our study in several ways.
First, we estimated treatment effectiveness from a primary prevention study, whereas they extrapolated treatment effectiveness from a secondary intervention study. Second, the target LDL cholesterol level in our study was 100 mg/dl or less compared with 122 mg/dl in their study. Some limitations of our study deserve mention. First, the sample of diabetic subjects in NHANES III with measured LDL cholesterol levels and no CHD was relatively small. Nevertheless, the estimates from NHANES III were weighted to represent the U.S. population and are the best data available. Second, our analyses were limited by the limitations of reports published in the literature. LDL cholesterol levels at baseline and with treatment were not reported in the HPS for the diabetic subpopulation without CHD. Because LDL cholesterol levels in the diabetic population do not differ greatly from those in the general population, we assumed that they were the same as for the total population without CHD. Third, our study may have overestimated the benefits of statin therapy because we applied results from randomized controlled clinical trials to the general population with diabetes. Because compliance with therapy is higher in clinical trials, the benefit of statin therapy may be less in the general population with diabetes. Fourth, our study may have underestimated the benefit of statin therapy in subjects with diabetes because we did not assess the beneficial effects of statin treatment on the incidence of stroke and peripheral vascular disease. The HPS demonstrated that statin therapy prevented not only coronary events and revascularization, but also ischemic strokes and peripheral revascularizations (7). Finally, we did not account for the treatment of other cardiovascular risk factors when assessing the incidence of major coronary events. To the extent that control of other cardiovascular risk factors is better or worse in the general diabetic population than it was in the HPS and UKPDS populations, we may have overestimated or underestimated the benefit of statin therapy. In conclusion, from a health system perspective, statin therapy for the primary prevention of major coronary events in subjects with type 2 diabetes and LDL cholesterol levels of 100–129 mg/dl is affordable and cost effective relative to statin therapy for diabetic subjects with higher LDL cholesterol levels. However, statin therapy for primary prevention of major coronary events in subjects with type 2 diabetes is not cost saving regardless of the baseline LDL cholesterol level. 1. Kannel WB, McGee DL: Diabetes and cardiovascular disease: the Framingham Study. JAMA 241 : 2035 –2038, 1979 2. Pyörälä K, Laakso M, Uusitupa M: Diabetes and atherosclerosis: an epidemiologic view. Diabete Metab Rev 3 : 463 –524, 1987 3. Pyorala K, Pedersen TR, Kjekshus J, Faegeman O, Olsson A, Thorgeirsson G: Cholesterol lowering with simvastatin improves prognosis of diabetic patients with coronary heart disease: a subgroup analysis of the Scandanavian Simvastatin Survival Study (4S). Diabetes Care 20 : 614 –620, 1997 4. Goldberg RB, Mellies MJ, Sacks FM, Moye LA, Howard BV, Howard WJ, Davis BR, Cole TG, Pfeffer MA, Braunwald E, for the CARE Investigators: Cardiovascular events and their reduction with pravastatin in diabetic and glucose-intolerant myocardial infarction survivors with average cholesterol levels: subgroup analyses in the Cholesterol and Recurrent Events (CARE) Trial: the Care Investigators. Circulation 98 : 2513 –2519, 1998 5. 
The Long-Term Intervention with Pravastatin in Ischaemic Disease (LIPID) Study Group: Prevention of cardiovascular events and death with pravastatin in patients with coronary heart disease and a broad range of initial cholesterol levels. N Engl J Med 339 : 1339 –1357, 1998 6. Downs JR, Clearfield M, Weis S, Whitney E, Shapiro DR, Beere PA, Langendorfer A, Stein EA, Kruyer W, Gotto AM Jr: Primary prevention of acute coronary events with lovastatin in men and women with average cholesterol levels: results of AFCAPS/TexCAPS: Air Force/Texas Coronary Atherosclerosis Prevention Study. JAMA 279 : 1615 –1622, 1998 7. Heart Protection Study Collaborative Group: MRC/BHF Heart Protection Study of cholesterol lowering with simvastatin in 20,536 high-risk individuals: a randomised placebo-controlled trial. Lancet 360 : 7 –22, 2002 8. Haffner SM, Lehto S, Rönnemaa T, Pyörälä K, Laakso M: Mortality from coronary heart disease in subjects with type 2 diabetes and in nondiabetic subjects with and without prior myocardial infarction. N Engl J Med 339 : 229 –234, 1998 9. Malmberg K, Yusuf S, Gerstein HC, Brown J, Zhao F, Hunt D, Piegas L, Calvin J, Keltai M, Budaj A: Impact of diabetes on long-term prognosis in patients with unstable angina and non-Q-wave myocardial infarction: results of the OASIS (Organization to Assess Strategies for Ischemic Syndromes) Registry. Circulation 102 : 1014 –1019, 2000 10. American Diabetes Association: Management of dyslipidemia in adults with diabetes (Position Statement). Diabetes Care 26 (Suppl. 1) : S83 –S86, 2003 11. Plan and operation of the Third National Health and Nutrition Examination Survey, 1988–94. Series 1: programs and collection procedures. Vital Health Stat 1 : 1 –407, 1994 12. Herman WH, Alexander CM, Cook JR, Boccuzzi SJ, Musliner TA, Pedersen TR, Kjekshus J, Pyorala K: Effect of simvastatin treatment on cardiovascular resource utilization in impaired fasting glucose and diabetes: findings from the Scandinavian Simvastatin Survival Study. Diabetes Care 22 : 1771 –1778, 1999 13. Grover SA, Coupal L, Zowall H, Alexander CM, Weiss TW, Gomes DR: How cost effective is the treatment of dyslipidemia in patients with diabetes but without cardiovascular disease? Diabetes Care 24 : 45 –50, 2001 14. University of Michigan Health System Clinical Care Guidelines. Screening and Management of Lipids [Article online]. Available from http://www.med.umich.edu/i/oca/practiceguides. Accessed 17 July 2002 15. Heart Protection Study Collaborative Group: Effects of simvastatin allocation on first major coronary event in different prior disease categories [Article online]. Available from http://image.thelancet.com/extras/02art5389webfigure1.pdf. Accessed 30 July 2002 16. Turner RC, Millns H, Neil HA, Stratton IM, Manley SE, Matthews DR, Holman RR: Risk factors for coronary artery disease in non-insulin dependent diabetes mellitus: United Kingdom Prospective Diabetes Study (UKPDS: 23). BMJ 316 : 823 –828, 1998 Address correspondence and reprint requests to William H. Herman, MD, MPH, Division of Endocrinology and Metabolism, Departments of Internal Medicine and Epidemiology and the Michigan Diabetes Research and Training Center, University of Michigan Health System, 1500 E. Medical Center Dr., 3920 Taubman Center, Ann Arbor, MI 48109. E-mail: wherman@umich.edu. Received for publication 26 November 2002 and accepted in revised form 12 March 2003. A table elsewhere in this issue shows conventional and Système International (SI) units and conversion factors for many substances.
## Reading the Comics, June 23, 2018: Big Duck Energy Edition

I didn’t have even some good nonsense for this edition’s title and it’s a day late already. And that for only having a couple of comics, most of them reruns. And then this came across my timeline: Please let it not be a big milkshake duck. I can’t take it if it is.

Larry Wright’s Motley for the 21st uses mathematics as emblem of impossibly complicated stuff to know. I’m interested to see that biochemistry was also called in to represent something that needs incredible brainpower to know things that can be expressed in one panel. Another free little question: what might “2,368 to the sixth power times pi” be an answer to? The obvious answer to me is “what’s the area of a circle of radius 2,368 to the third power”. That seems like a bad quiz-show question to me, though. It tests a legitimate bit of trivia, but the radius is such an ugly number. There are some other obvious questions that might fit, like “what is the circumference of a circle of radius [ or diameter ] of (ugly number here)?” Or “what is the volume of a sphere of radius (similarly ugly number here)?” But the radius (or diameter) of those surfaces would have to be really nasty numbers, ones with radicals of 2,368 — itself no charming number — in it. And “2,368 to the sixth power times pi” is the answer to infinitely many questions. The challenge is finding one that’s plausible as a quiz-show question. That is, it should test something that’s reasonable for a lay person to know, and to calculate while on stage, without pen or paper or much time to reflect. Tough set of constraints, especially to get that 2,368 in there. The sixth power isn’t so easy either.

Well, the biochemistry people don’t have an easy time thinking of a problem to match Debbie’s answer either. “Hydro- ” and “mono- ” are plausible enough prefixes, but as far as I know there’s no “nucleatic acid” to have some modified variant. Wright might have been thinking of nucleic acid, but as far as I know there’s no mononucleic acid, much less hydromononucleic acid. But, yes, that’s hardly a strike against the premise of the comic. It’s just nitpicking.

Charlie Pondrebarac’s CowTown for the 22nd is on at least its third appearance since I started reading the comics for the mathematics stuff regularly. I covered it in June 2016 and also in August 2015. This suggests a weird rerun cycle for the comic. Popping out of Jim Smith’s mouth is the null symbol, which represents a set that hasn’t got any elements. That set is known as the null set. Every set, including the null set, contains a null set. This fact makes set theory a good bit easier than it otherwise would be. That’s peculiar, considering that it is literally nothing. But everything one might want to say about “nothing” is peculiar. That doesn’t make it dispensable.

Julie Larson’s Dinette Set for the 22nd sees the Penny family’s adults bemoaning the calculator their kid needs for middle school. I admit feeling terror at being expected to buy a hundred-dollar calculator for school. But I also had one (less expensive) when I was in high school. It saves a lot of boring routine work. And it allows for playful discoveries about arithmetic. Some of them are cute trivialities, such as finding the Golden Ratio and similar quirks. And a calculator does do essentially the work that a slide rule might, albeit more quickly and with more digits of precision.
It can’t help telling you what to calculate or why, but it can take the burden out of getting the calculation done. Still, a hundred bucks. Wow.

Tony Carrillo’s F Minus for the 23rd puts out the breaking of a rule of arithmetic as a whimsical, inexplicable event. A moment of two plus two equalling five, whatever it might do for the structure of the universe, would be awfully interesting for the philosophy of mathematics. Given what we ordinarily think we mean by ‘two’ and ‘plus’ and ‘equals’ and ‘five’ that just can’t happen. And what would it mean for two plus two to equal five for a few moments?

Mathematicians often think about the weird fact that mathematical structures — crafted from definitions and logic — describe the real world stunningly well. Would this two plus two equalling five be something that was observed in the real world, and checked against definitions that suddenly allowed this? Would this be finding a chain of reasoning that supported saying two plus two equalled five, only to find a few minutes later that a proof everyone was satisfied with was now clearly wrong? That’s a particularly chilling prospect, if you’re in the right mood.

We like to think mathematical proofs are absolute and irrefutable, things which are known to be true regardless of who knows them, or what state they’re in, or anything. And perhaps they are. They seem to come as near as mortals can to seeing Platonic forms. (My understanding is that mathematical constructs are not Platonic forms, at least in Plato’s view of things. But they are closer to being forms than, say, apples put on a table for the counting would be.) But what we actually know is whether we, fallible beings comprised of meat that thinks, are satisfied that we’ve seen a proof. We can be fooled. We can think something is satisfactory because we haven’t noticed an implication that’s obviously wrong or contradictory. Or because we’re tired and are feeling compliant. Or because we ate something that’s distracting us before we fully understand an argument. We may have a good idea of what a satisfactory logical proof would be. But stare at the idea hard enough and we realize we might never actually know one.

If you’d like to see more Reading the Comics posts, you can find them at this link. If you’re interested in the individual comics, here you go. My essays tagged with CowTown are here. Essays tagged Dinette Set are at this link. The essays that mention F Minus since I started adding strip tags are here. And this link holds the Motley comics.
Mike Thompson’s Grand Avenue for the 22nd threatened to get me all cranky again, as Grandmom decided the kids needed to do arithmetic worksheets over the summer. The strip earned bad attention from me a few years ago when a week, maybe more, of the strip was focused on making sure the kids drudged their way through times tables. I grant it’s a true attitude that some people figure what kids need is to do a lot of arithmetic problems so they get better at arithmetic problems. But it’s hard enough to convince someone that arithmetic problems are worth doing, and to make them chores isn’t helping.

John Zakour and Scott Roberts’s Maria’s Day for the 22nd name-drops fractions as a worse challenge than dragon-slaying. I’m including it here for the cool partial picture of the fire-breathing dragon. Also I take a skeptical view of the value of slaying the dragons anyway. Have they given enough time for sanctions to work? Maria’s Day pops back in the 24th. Needs more dragon-slaying.

Eric the Circle for the 24th, this one by Dennill, gets in here by throwing some casual talk about arcs around. That and π. The given formula looks like nonsense to me. $\frac{\pi}{180}\cdot 94 - \sin 94^\circ$ has parts that make sense. The first part will tell you what radian measure corresponds to 94 degrees, and that’s fine. Mathematicians will tend to look for radian measures rather than degrees for serious work. The sine of 94 degrees they might want to know. Subtracting the two? I don’t see the point. I dare to say this might be a bunch of silliness.

Cathy Law’s Claw for the 25th writes off another Powerball lottery loss as being bad at math and how it’s like algebra. Seeing algebra in lottery tickets is a kind of badness at mathematics, yes. It’s probability, after all. Merely playing can be defended mathematically, though, at least for the extremely large jackpots such as the Powerball had last week. If the payout is around 750 million dollars (as it was) and the chance of winning is about one in 250 million (close enough to true), then the expectation value of playing a ticket is about three dollars. If the ticket costs less than three dollars (and it does; I forget if it’s one or two dollars, but it’s certainly not three), then, on average you could expect to come out slightly ahead. Therefore it makes sense to play. Except that, of course, it doesn’t make sense to play. On average you’ll lose the cost of the ticket. The on-average long-run you need to expect to come out ahead is millions of tickets deep. The chance of any ticket winning is about one in 250 million. You need to play a couple hundred million times to get a good enough chance of the jackpot for it to really be worth it. Therefore it makes no sense to play. Mathematical logic therefore fails us: we can justify both playing and not playing. We must study lottery tickets as a different thing. They are (for the purposes of this) entertainment, something for a bit of disposable income. Are they worth the dollar or two per ticket? Did you have other plans for the money that would be more enjoyable? That’s not my ruling to make.

Samson’s Dark Side Of The Horse for the 25th just hurts my feelings. Why the harsh word, Samson? Anyway, it’s playing on the typographic similarity between 0 and O, and how we bunch digits together. Grouping together three decimal digits as a block is as old, in the Western tradition, as decimal digits are. Leonardo of Pisa, in Liber Abbaci, groups the thousands and millions and thousands of millions and such together.
By 1228 he had the idea to note this grouping with an arc above the set of digits, like a tie between notes on a sheet of music. This got cut down, part of the struggle in notation to write as little as possible. Johannes de Sacrobosco in 1256 proposed just putting a dot every third digit. In 1636 Thomas Blundeville put a | mark after every third digit. (I take all this, as ever, from Florian Cajori’s A History Of Mathematical Notations, because it’s got like everything in it.) We eventually settled on separating these stanzas of digits with a , or . mark. But that it should be three digits goes as far back as it could. ## Reading the Comics, August 17, 2017: Professor Edition To close out last week’s mathematically-themed comic strips … eh. There’s only a couple of them. One has a professor-y type and another has Albert Einstein. That’s enough for my subject line. Joe Martin’s Mr Boffo for the 15th I’m not sure should be here. I think it’s a mathematics joke. That the professor’s shown with a pie chart suggests some kind of statistics, at least, and maybe the symbols are mathematical in focus. I don’t know. What the heck. I also don’t know how to link to these comics that gives attention to the comic strip artist. I like to link to the site from which I got the comic, but the Mr Boffo site is … let’s call it home-brewed. I can’t figure how to make it link to a particular archive page. But I feel bad enough losing Jumble. I don’t want to lose Joe Martin’s comics on top of that. Charlie Podrebarac’s meat-and-Elvis-enthusiast comic Cow Town for the 15th is captioned “Elvis Disproves Relativity”. Of course it hasn’t anything to do with experimental results or even a good philosophical counterexample. It’s all about the famous equation. Have to expect that. Elvis Presley having an insight that challenges our understanding of why relativity should work is the stuff for sketch comedy, not single-panel daily comics. Paul Trap’s Thatababy for the 15th has Thatadad win his fight with Alexa by using the old Star Trek Pi Gambit. To give a computer an unending task any number would work. Even the decimal digits of, say, five would do. They’d just be boring if written out in full, which is why we don’t. But irrational numbers at least give us a nice variety of digits. We don’t know that Pi is normal, but it probably is. So there should be a never-ending variety of what Alexa reels out here. By the end of the strip Alexa has only got to the 55th digit of Pi after the decimal point. For this I use The Pi-Search Page, rather than working it out by myself. That’s what follows the digits in the second panel. So the comic isn’t skipping any time. Gene Mora’s Graffiti for the 16th, if you count this as a comic strip, includes a pun, if you count this as a pun. Make of it what you like. Mark Anderson’s Andertoons for the 17th is a student-misunderstanding-things problem. That’s a clumsy way to describe the joke. I should look for a punchier description, since there are a lot of mathematics comics that amount to the student getting a silly wrong idea of things. Well, I learned greater-than and less-than with alligators that eat the smaller number first. Though they turned into fish eating the smaller number first because who wants to ask a second-grade teacher to draw alligators all the time? Cartoon goldfish are so much easier.
# Constraints on the Higgs boson width from off-shell production and decay to Z-boson pairs

CMS Collaboration; Khachatryan, V; Sirunyan, A M; Tumasyan, A; Amsler, C; Canelli, F; Chiochia, V; De Cosa, A; Hinzmann, A; Hreus, T; Kilminster, B; Lange, C; Mejias, B; Ngadiuba, J; Robmann, P; Ronga, F J; Taroni, S; Verzetti, M; Yang, Y; et al (2014). Constraints on the Higgs boson width from off-shell production and decay to Z-boson pairs. Physics Letters B, 736:64-85.

## Abstract

Constraints are presented on the total width of the recently discovered Higgs boson, Gamma[H], using its relative on-shell and off-shell production and decay rates to a pair of Z bosons, where one Z boson decays to an electron or muon pair, and the other to an electron, muon, or neutrino pair. The analysis is based on the data collected by the CMS experiment at the LHC in 2011 and 2012, corresponding to integrated luminosities of 5.1 inverse femtobarns at a centre-of-mass energy sqrt(s) = 7 TeV and 19.7 inverse femtobarns at sqrt(s) = 8 TeV. A simultaneous maximum likelihood fit to the measured kinematic distributions near the resonance peak and above the Z-boson pair production threshold leads to an upper limit on the Higgs boson width of Gamma[H] < 22 MeV at a 95% confidence level, which is 5.4 times the expected value in the standard model at the measured mass.

Published in Physics Letters B, 736:64-85 (Elsevier, 2014), ISSN 0370-2693.
DOI: https://doi.org/10.1016/j.physletb.2014.06.077
arXiv: http://arxiv.org/abs/1405.3455
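The constraint works because, for gg -> H(*) -> ZZ, the on-shell event rate scales as 1/Gamma[H] while the far off-shell rate is essentially independent of the width, so their ratio measures Gamma[H] relative to the SM expectation. A quick numerical check of the quoted numbers, assuming the commonly used SM width of about 4.07 MeV at a Higgs mass near 125 GeV (a value taken from the SM prediction, not from this abstract):

```python
# Off-shell/on-shell width constraint: consistency check of the quoted limit.
# The SM Higgs width at m_H ~ 125 GeV is assumed to be ~4.07 MeV.
GAMMA_SM_MEV = 4.07
gamma_limit_mev = 22.0  # 95% CL upper limit quoted in the abstract

# The on-shell signal strength scales as 1/Gamma_H while the off-shell one does not,
# so the measured ratio constrains Gamma_H / Gamma_SM directly.
print(round(gamma_limit_mev / GAMMA_SM_MEV, 1))  # -> 5.4, the factor quoted above
```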
# Probing Dark Matter Long-lived Mediators with Solar $\gamma$ rays

We show that solar $\gamma$-ray observations can provide a complementary probe of Dark Matter in scenarios where the interactions with the Standard Model proceed via long-lived mediators. For illustration we consider a simplified model which provides solar $\gamma$-ray fluxes observable with the next-generation $\gamma$-ray telescopes, while complying with the existing experimental constraints. Our results suggest that solar $\gamma$-ray fluxes can be orders of magnitude larger than the ones from the Galactic center, while being subject to low backgrounds.

M. Lucente, C. Arina, M. Backovic, et al.

Comments: 4 pages, 2 figures. To appear in the proceedings of The European Physical Society Conference on High Energy Physics, 5-12 July 2017 in Venice, Italy
# Nuclear Science and Techniques 《核技术》(英文版) ISSN 1001-8042 CN 31-1559/TL

Nuclear Science and Techniques, 2019, Vol. 30, Issue 10: 147 • NUCLEAR ENERGY SCIENCE AND ENGINEERING •

### Feasibility analysis of 60Co production in the pressurized water reactors

Wei Zhang, Feng-lei Niu, Ying Wu, Zhang-Peng Guo

1. Beijing Key Laboratory of Passive Safety Technology for Nuclear Energy, North China Electric Power University, Beijing 102206, China

• Received: 2019-02-11 Revised: 2019-04-18 Accepted: 2019-04-26
• Contact: Feng-Lei Niu E-mail: niufenglei@ncepu.edu.cn
• Supported by: This work was supported by the National Natural Science Foundation of China (Nos. 11635005 and 11705058) and the Fundamental Research Funds for the Central Universities (No. 2018ZD10).

Wei Zhang, Feng-lei Niu, Ying Wu, Zhang-Peng Guo. Feasibility analysis of 60Co production in the pressurized water reactors. Nuclear Science and Techniques, 2019, 30(10): 147

Abstract: The radioactive isotope 60Co is used in many applications, and is typically produced in heavy water reactors. As most of the commercial reactors in operation are pressurized light water reactors (PWRs), the world supply of high level radioactive cobalt would be greatly increased if 60Co could be produced in them. Currently, 60Co production in PWRs has not been extensively studied; for the 59Co (n, γ) 60Co reaction, the positioning of 59Co rods in the reactor determines the rate of production. This article primarily uses the models of 60Co production in Canadian CANDU power reactors and American boiling water reactors; based on relevant data from the pressurized water Daya Bay Nuclear Power Plant, a PWR core model is constructed with the Monte Carlo N-Particle (MCNP) Transport Code; this model suggests changes to existing fuel assemblies to enhance 60Co production. In addition, the plug rods are replaced with 59Co rods in the improved fuel assemblies in the simulation model to calculate critical parameters including the effective multiplication factor, neutron flux density, and distribution of energy deposition. By considering different numbers of 59Co rods, the simulation indicates that different layout schemes have different impact levels, but the impact is not large. As a whole, the components with four 59Co rods have a small impact, and the parameters of the reactor remain almost unchanged when four 59Co rods replace the secondary neutron source. Therefore, in theory, the use of a PWR to produce 60Co is feasible.
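The plug-rod scheme relies on thermal-neutron capture on 59Co. As a rough point of reference for the scale of production, the sketch below evaluates the standard activation relation A = N σ φ (1 − e^(−λt)); the flux, cobalt mass, and irradiation time are illustrative placeholders, not values from the paper's MCNP model, and target depletion and 60Co burn-up are neglected.

```python
import math

# Order-of-magnitude 60Co activation estimate from thermal neutron capture on 59Co.
# Inputs below are illustrative, not taken from the paper's MCNP results.
N_A = 6.022e23
SIGMA_CAPTURE = 37.0e-24       # 59Co thermal capture cross section, cm^2 (~37 barns)
HALF_LIFE_S = 5.27 * 3.156e7   # 60Co half-life, ~5.27 years, in seconds
LAMBDA = math.log(2) / HALF_LIFE_S

def co60_activity_bq(mass_g: float, flux_n_cm2_s: float, irradiation_s: float) -> float:
    """A = N * sigma * phi * (1 - exp(-lambda * t)); neglects 59Co depletion and 60Co burn-up."""
    n_atoms = mass_g / 58.93 * N_A              # 59Co is mononuclidic, M ~ 58.93 g/mol
    capture_rate = n_atoms * SIGMA_CAPTURE * flux_n_cm2_s
    return capture_rate * (1.0 - math.exp(-LAMBDA * irradiation_s))

# Example: 1 g of cobalt in a 1e13 n/cm^2/s thermal flux over an 18-month cycle
print(f"{co60_activity_bq(1.0, 1e13, 1.5 * 3.156e7):.2e} Bq")  # a few times 1e11 Bq (~tens of Ci)
```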
[ Article ] Journal of Korea Technical Association of the Pulp and Paper Industry - Vol. 52, No. 5, pp.45-54 ISSN: 0253-3200 (Print) Print publication date 30 Oct 2020 Received 14 Sep 2020 Revised 11 Oct 2020 Accepted 13 Oct 2020 # Accuracy of the Different Calculation Methods of Specific Edge Load LIU Huan1, ; DONG Jixian2, ; LUO Chong3 ; DUAN Chuanwu1 ; GUO Xiya2 ; QI Kai1 ; QIAO Lijie4 ; ZHAO Zhiming2 1College of Mechanical and Electrical Engineering, Shaanxi University of Science & Technology, Xi’an, Shaanxi Province, 710021, Student, People’s Republic of China 2College of Mechanical and Electrical Engineering, Shaanxi University of Science & Technology, Xi’an, Shaanxi Province, 710021, Professor, People’s Republic of China 3Henan Cigarette Industry Sheet Co., Ltd., Henan Province, 461100, Engineer, People’s Republic of China 4College of Mechanical and Electrical Engineering, Shaanxi University of Science & Technology, Xi’an, Shaanxi Province, 710021, Lecturer, People’s Republic of China Correspondence to: †E-mail: liuhsust@126.com (Address: College of Mechanical and Electrical Engineering, Shaanxi University of Science & Technology, Xi’an, Shaanxi Province, 710021, People’s Republic of China) Contributed by footnote: ‡ djx@sust.edu.cn (Address: College of Mechanical and Electrical Engineering, Shaanxi University of Science & Technology, Xi’an, Shaanxi Province, 710021, People’s Republic of China) ## Abstract The specific edge load (SEL) is the commonly used intensity for measuring the low consistency refining process, while the cutting edge length (CEL) is the core parameter of it and the accuracy of its calculation is important for the process characterization. There are two main types of calculation methods of CEL for isometric straight bar plates, direct measurement methods and mathematical calculations based on the bar parameters. The CEL of isometric straight bar plates with different bar angles, field angles and bar width, calculated by different methods, were explored in order to verify the calculation accuracy of different methods. It was found that CEL4 and CEL5 could not be used for the CEL calculation of isometric straight bar plates due to the large errors, and CEL1 was the most accurate direct measurement method. While the recommended mathematical calculation method was CEL3 which could effectively and simply calculate the CEL of the straight bar plates with smaller errors. ## Keywords: Low consistency refining, isometric straight bar plate, specific edge load, cutting edge length, accuracy ## 1. Introduction Low consistency pulp refining is an important operating unit to modify the properties of pulp and fibers, and it is usually measured by the refining intensity. Specific edge load (SEL)1) is a widely used indicator to measure the strength of low consistency (LC) refining process conducted by straight bar plates, and many other intensities were proposed based on it, such as specific surface load (SSL),2) modified edge load (MEL)3) and modified specific surface load (MSSL)4) etc. Meanwhile, the SEL is the basis of the structure design of straight bar plates and controlling of the LC refining processes.5-10) Therefore, the accurate calculation of the SEL is important for the optimal design of the straight bar plates and the control of the LC refining process. Specific edge load, proposed by Brecht et al.,1) is one of the earliest established refining intensities. 
Compared with other refining intensities that consider more bar parameters, the SEL has the advantage of simple and easy calculation. It can be used directly in the design of straight bar plates because its value can be converted into the arrangement of bars or the calculation of the cutting edge length. The SEL can be expressed by Eq. 1:

$SEL=\frac{P_{net}}{n\cdot CEL}$ [1]

in which Pnet is the net power of the refining process (kW), n is the rotation speed of the refining plate (r/min), and CEL is the cutting edge length of the refining plate (m/r). A reasonable value of SEL can be determined through comprehensive consideration of the pulp type and refining process,11,12) and then the range of CEL can be obtained to guide the design of straight bar refining plates. Note that the CEL is the characterization parameter and core parameter of the SEL,13) and it can also be called the characterization parameter of the refining plate. The analysis of previous studies shows that many kinds of CEL calculation methods exist. When Wultsch et al.14) proposed the prototype of the SEL, they defined a new parameter, the cumulative edge length L, that is, the bar edge length or cutting edge length during refining mentioned above; its expression is

$L=n_{R}\cdot n_{S}\cdot l_{a}$ [2]

in which nR and nS are the bar numbers of the rotor and stator, and la is the average bar length (mm). If the CEL of an isometric straight bar plate is calculated according to the definition of Eq. 2, one obtains15)

$CEL=\sum_{k=1}^{P}n_{Rk}\cdot n_{Sk}\cdot \Delta R_{k}$ [3]

where k is the number of the ring partition, nRk and nSk are the bar numbers of the rotor and stator in the ring partition k, and ΔRk is the radial length of the ring k. The bar angle of the isometric straight bar plate is not considered in the CEL calculated by Eq. 3. The TAPPI Preparation Committee16) considers the bar angle of the refining plate, and the calculation method expressed by Eq. 3 was modified accordingly, which can be expressed by Eq. 4:

$CEL=\sqrt{\frac{\sum_{k=1}^{P}n_{Rk}^{2}\Delta R_{k}}{\cos\alpha_{AR}}}\cdot\sqrt{\frac{\sum_{k=1}^{P}n_{Sk}^{2}\Delta R_{k}}{\cos\alpha_{AS}}}$ [4]

in which αAR and αAS are the average bar angles of the rotor and stator (°), and they can be calculated by the following equation. [5] Where β is the field angle (°). Roux et al.15) considered the bar angle of the refiner plate and recalculated the bar numbers of the rotor and stator. The CEL of the refining plate can then be obtained by integration from the internal radius, Ri, to the outer radius, Ro, and it can be expressed by Eq. 6:

$CEL=\int_{R_{i}}^{R_{o}}\frac{n_{R}\left(R\right)\cdot n_{S}\left(R\right)}{\cos\alpha_{S}\cdot\cos\alpha_{R}}\,dR=\frac{4\pi^{2}\left(R_{o}^{3}-R_{i}^{3}\right)}{3\left(b_{R}+g_{R}\right)\left(b_{S}+g_{S}\right)}$ [6]

in which αS and αR are the bar angles of the stator and rotor (°), bR and bS are the bar widths of the stator and rotor (mm), and gR and gS are the groove widths of the stator and rotor (mm).
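As a numerical illustration of the closed form of Eq. 6, the sketch below evaluates it for an identical rotor/stator pair. The dimensions match the plates designed later in this paper (Ri = 41.25 mm, Ro = 101.5 mm, 2 mm bars, 3 mm grooves), but they are used here only as example inputs, not as a result reported by the authors.

```python
import math

# Closed-form CEL of Eq. 6 (Roux et al.): all lengths in metres, result in m/rev.
def cel_eq6(r_i: float, r_o: float, b_r: float, g_r: float, b_s: float, g_s: float) -> float:
    """CEL = 4*pi^2*(Ro^3 - Ri^3) / [3*(bR + gR)*(bS + gS)]."""
    return 4.0 * math.pi**2 * (r_o**3 - r_i**3) / (3.0 * (b_r + g_r) * (b_s + g_s))

# Identical rotor and stator with 2 mm bars and 3 mm grooves
print(round(cel_eq6(0.04125, 0.1015, 0.002, 0.003, 0.002, 0.003), 1))  # ~513.5 m/rev
```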
If the calculation method of the bar number used in Eq. 6 is introduced, Eq. 4 can be written as

$CEL=\frac{4\pi^{2}\left(R_{o}^{3}-R_{i}^{3}\right)}{3\left(b_{R}+g_{R}\right)\left(b_{S}+g_{S}\right)}\times\sqrt{\cos\alpha_{AR}\cdot\cos\alpha_{AS}}$ [7]

Besides the bar width and groove width, there are three important angular parameters of a refining plate, the field angle and the bar angles of the rotor and stator, which must be considered when characterizing the refining process. However, the field angle of the refining plate is not considered in Eqs. 3, 4, 6 and 7. Roux et al.17) comprehensively considered the field angle β and the bar angles of the rotor and stator, αS and αR, and defined an angular parameter factor. Therefore, the CEL becomes

$CEL=\frac{4\pi^{2}\left(R_{o}^{3}-R_{i}^{3}\right)}{3\left(b_{R}+g_{R}\right)\left(b_{S}+g_{S}\right)}\cdot\frac{\left[\sin\left(\alpha_{S}+\beta\right)-\sin\alpha_{S}\right]\left[\sin\left(\alpha_{R}+\beta\right)-\sin\alpha_{R}\right]}{\beta^{2}}$ [8]

Through the analysis of the above calculation methods of the CEL, it can be concluded that two types of calculation methods exist: direct measurement methods, represented by Eqs. 3 and 4, and mathematical calculation methods based on different bar parameters, mainly represented by Eqs. 6, 7 and 8. Theoretically, the direct measurement methods are relatively accurate, while the mathematical calculation methods are simpler and more convenient. However, their accuracy should be further studied. The objective of this study was to explore the accuracy of different methods for calculating the SEL based on the analysis of the CEL of isometric straight bar plates with different bar widths, field angles and bar angles, which is beneficial for the selection of the calculation method and the optimal control of the LC refining process.

## 2. Methodology

### 2.1 Isometric straight bar plates with different bar parameters

The bar parameters of the isometric straight bar plates mainly include the inner and outer radius of the refining plates, Ri and Ro, the bar angle α, the field angle β, the bar width b, the groove width g, etc., as shown in Fig. 1. To explore the accuracy of the above methods for calculating the CEL of refining plates, three types of isometric straight bar plates with different bar angles, field angles, and bar widths were designed in this paper.6,18) The inner radius, outer radius and bar height are the same for all of them: 41.25 mm, 101.5 mm, and 4 mm, respectively. Other parameters are described in the following sections.

Fig. 1. Main parameters of the straight bar plates.

2.1.1 Bar angle

Seven isometric straight bar plates with different bar angles were designed to clarify the accuracy of different calculation methods for the CEL of straight bar plates with different bar angles, as shown in Table 1. The bar width, groove width and field angle of them are 2 mm, 3 mm and 40°.

Table 1. The structure of the isometric straight bar plates with different bar angles

2.1.2 Field angle

The field angle is one of the important bar parameters of the straight bar refining plates. Considering the calculation method of β in a previous study6) and the convenience of refiner plate design, nine isometric straight bar plates with different field angles were designed, as shown in Table 2.
The bar width, groove width and bar angle of them are 2 mm, 3 mm, and 10°. The structure of the isometric straight bar plates with different field angles 2.1.3 Bar and groove width The bar and groove width are the two key parameters of straight bar plates. Under the constant refining conditions, the intensity of the refining process can be changed by adjusting both. Therefore, it is very important to investigate the CEL calculation of straight bar plates with different bar and groove widths. The code of straight bar plates is usually expressed by "bar width-groove width-bar height", and it can be referred to "bar width-groove width" when the bar height was kept constant. In this paper, nine straight bar plates with different bar width were designed with the constant ratio of bar width and groove width, 2/3, as shown in Table 3, in which the field angle and bar angle are 40° and 10°. The structure of the isometric straight bar plates with different bar and groove width ### 2.2 CEL calculation The calculation methods mentioned above, such as Eqs.3, 4, 6, 7 and 8, were proposed based on the fact the bar parameters of rotor and stator are different. While the rotor and stator with the same bar parameters were concerned to simplify the calculation and clarify the accuracy of different methods. The simplified calculation formulas were shown in Table 4, in which the αA is the average bar angle of the rotor and stator. CEL calculation methods for the rotor and stator plates with same bar parameters Elahimehr19) thought that the integral form of the CEL defined by the TAPPI standard TIP could be expressed by $CEL=\underset{{R}_{i}}{\overset{{R}_{o}}{\int }}\frac{{n}_{R}\left(R\right){n}_{S}\left(R\right)}{{\mathrm{cos}\alpha }_{A}}dR$ [9] For the rotor and stator plates with the same bar parameters, Eq.9 could be simplified and it was shown in Table 4. Among all the calculation methods in Table 4, the corresponding formulas of Eqs.3 and 4 are the two closest methods to the definition, Eq.2. However, the radial length of the single ring zone was considered in Eq.3 which is not the true bar length, and the bar length of the single ring zone was concerned by Eq.4. Therefore, the CEL calculated by Eq.4 would be more accurate. It was found the bar length of a right-handed plate in the single ring zone will gradually decrease from left to right, which means the bar length in the middle of the ring can better measure the average bar length. The controlled CEL was calculated which can be described by Eq.10. $CE{L}_{\text{c}}=\sum _{k=1}^{p}{n}_{k}^{2}\cdot ∆{l}_{m\text{k}}$ [10] In which, the Δlmk is the bar length of the bar in the middle of the zone k. ## 3. Results and discussion ### 3.1 Theoretical analysis The calculation methods of Eqs.3, 4, and 10 were the direct measurement method and their principle was similar to the definition expressed by Eq.2. However, their understanding of the bar length was different, and the radial length of the single ring zone was considered in the calculation of CEL0, which could not characterize the CEL accurately. Eqs. 6 to 9 are the mathematical calculation formulas that relating nk and bar length to other bar parameters of the refining plates. While the bar angle and field angle were not included in the calculation of CEL2, which means that CEL2 cannot be affected by them and CEL2 cannot better measure the refining intensity of the straight bar plates with different bar angles and field angles. 
Although the Eqs.7 ,8 and 9 are the modified version compared to the Eq.6, the accuracy of them should be further explored compared to the actual value calculated by Eq.10. ### 3.2 Bar angle Bar angle is one of the important parameters that greatly affects the CEL of the isometric straight bar plates. And the accuracy of different calculation methods for the CEL of straight bar plates with different bar angles was explored in this paper, as shown in Fig.2. Except for the CEL2 and CEL5, all the value of CEL calculated by other methods gradually decreases with the increasing of plate bar angle which is consistent with the results obtained by Liu et al.18,20) However, the degree of reduction of them is different and it depends on the accuracy of the calculation method. The CEL1 and CELC are basically the same due to the similar calculation of the bar length in the single ring zone. While the calculation of the CEL1 and CELC are troublesome for that all the bar lengths in different ring zones should be measured separately. In addition, there is no obvious relationship between CEL2 and plate bar angle, which is consistent with the conclusion obtained from the section of theoretical analysis, and the value of it is bigger than that of CELC. The CEL5 of plates gradually increases with the increasing of plate bar angle which is not in line with the facts. Although the value of CEL4 gradually increase with the bar angle, both the value of CEL4 and CEL5 are much larger than that of actual CEL value. Therefore, the calculation methods of CEL2, CEL4 and CEL5 are not suitable for accurate calculation of CEL for isometric straight bar plates with different bar angles. The change of CEL0, CEL1 and CEL3 over the bar angle is consistent with that of CELC, while the value of CEL1 is closer to the actual value, CELC, as shown in Fig.3. Meanwhile, the CEL0 is smaller than the CELC due to the radius increment in a single ring zone was considered as the bar length. Therefore, it is recommended to use CEL1 and CEL3 when the CEL of isometric straight bar plates with different bar angles were calculated, while the value of CEL1 is the closest one to the actual value and CEL3 is the easiest method. The relationship between the CEL calculated by different methods and the bar angle of isometric straight bar plate. The deviation of CEL0, CEL1, CEL3 and CELC over bar angle of the straight bar plates. ### 3.3 Field angle The field angle is another angular parameter of isometric straight bar plates, and there is a direct relationship between the field angle and CEL, meanwhile, the accuracy of CEL calculated by different methods is different, as shown in Figs. 4 and 5. Similar to the results obtained from theoretical analysis, CEL2 remains constant as the increasing of the field angle of straight bar plate due to that it does not take into account the important angular parameters. Actually the CEL of straight bar plate has a tendency to decrease as the increase of the field angle, which means that it is difficult for CEL2 and CEL5 to accurately calculate the CEL of straight bar plates with different field angles. In addition, CEL4 cannot be used to accurately calculate it due to its large volatility. Therefore, the effective method to calculate the CEL of straight bar plates with different field angles are the direct measurement methods, CEL0 and CEL1, and the mathematical calculation method, CEL3, as shown in Fig. 5. 
And the accuracy of the above three is different, and the recommended order would be CEL1, CEL3 and CEL0 according to the magnitude of the deviation value. Therefore, the most effective CEL calculation method of straight bar plates with different field angles is CEL3, except the direct measurement method, CEL1. The relationship between the CEL calculated by different methods and the field angle of isometric straight bar plate. The deviation of CEL0, CEL1, CEL3 and CELC over field angle of the straight bar plates. ### 3.4 Bar and groove width The change of the bar and groove width of straight bar plates is one of the main ways to adjust the refining intensity of the LC refining process, and both the value of them will directly affect the CEL of the plates. The effect of the bar width on the CEL calculated by different methods were explored under the constant ratio of the bar width and groove width, as shown in Fig. 6. It was found that the value of CEL gradually decreases with the increasing of bar width no matter which method was used, while the value of CEL4 is much larger than the actual value, CELC, and the CEL calculated by other methods, therefore, the CEL4 cannot be used to the CEL calculation of the straight bar plates with different bar width. The difference between the value obtained by other calculation methods and the CELC gradually decreases as the bar width of the straight bar plates increases, as shown in Fig. 7. Among them, the value of CEL1, CEL2, CEL3 and CEL5 is greater than the CELC, and CEL0 is less than it, and the reason for it was explained in the section of bar angle. It can also be seen that CEL1 is almost the same as the actual value, CELC, which means that the average bar length can be represented by the bar length of the intermediate bar in the ring. And the deviation of CEL3 from the CELC is the smallest among all mathematical calculation methods. Therefore, the recommended CEL calculation methods of isometric straight bar plates with different bar width are the CEL1 and CEL3, while the latter is the simple one. The relationship between the CEL calculated by different methods and the bar width of isometric straight bar plate. The deviation of CEL0, CEL1, CEL3 , CEL5 and CELC over bar width of the straight bar plates. ## 4.Conclusions As the characteristic parameter of SEL, CEL is the core calculation part of it. Different CEL calculation methods of isometric straight bar plates were summarized, and the accuracy of them were explored in this paper. The direct measurement methods and mathematical calculation methods based on bar parameters are the two main types methods of CEL calculation for straight bar plates. The core of the direct measurement method is the calculation of the bar length in the single ring zone, while the mathematical calculation methods are relatively simple compared to direct one. However, the most accurate methods are the direct measurement methods based on the bar average length, and there are large errors of the mathematical methods for the CEL calculation. The bar angle and field angle of isometric straight bar plates will greatly affect the CEL of the plates, and the change trend of the CEL obtained by the different calculation methods over the bar angle and field angle are different. The actual value of CEL, CELC, gradually decrease with the increase of bar angle and field angle of the plates, while its change rate over the bar angle is more obvious compared to that of the field angle. 
In addition to the direct measurement method, CEL1, which can accurately calculate the CEL of straight bar plates with different angular parameters, CEL3 can replace the direct measurement method to a certain extent and simplify the CEL calculation. The value of CEL of the isometric straight bar plates, calculated by all methods, gradually decrease with the increasing of the bar width under the constant ratio of bar and groove width. And the CEL1 and CEL3 are the two effective methods for the calculating of the CEL of isometric straight bar plate with different bar width. ## Acknowledgments The authors gratefully acknowledge the funding by the National Natural Science Foundation of China Grant No. 50745048, Shaanxi Provincial Key Research and Development Project, Grant No. 2020 GY-105 and 2020 GY-174. ## Literature Cited • Brecht, W. and Siewert, W., Zur Theoretisch-technischen Beurteilung des Mahlprozesses Moderner Mahlmaschinen. Papier 20(1):4-14(1966). • Lumiainen, J., Specific surface load theory. 3rd Pira International Refining Conference, Atlanta, 46-47(1995). • Meltzer, F.P., and Sepke, P.W., New ways to forecast the technological results of refining. Proceeding of 3rd Pira International Refining Conference, Atlanta (1995). • Musselman, R., D. Letarte and R. Simard, Third stage low consistency refining of TMP for energy savings and quality enhancement, Proceeding of 4th Pira International Refining Conference, Fiuggu (1997). • Shen, L.X., Design and selection of the bar parameters of refining plates, Paper and Paper Making (6):30 (1998). • Liu, H., Dong, J.X., Guo, X.Y., Jiang, X.J., Luo, C., Tian, X.H., Wang, B., Zhao, Z.M., and Yang R.F., Study on Bar Profile Design of Isometric Straight Bar Refiner Plate Based on SEL, China Pulp & Paper 38(10):38-42(2019). • Wang, J.H., Wang, P.,Wang, H.B., Sh,i J.Y., Design and selection of straight teeth plate of refiners, China Pulp & Paper Industry 36(4):10-15(2015). • Su, S.Y., Wang, P., Design theory and method of disc refiner plates, Paper and Paper Making 30(8):10-16(2001). • Xu, D.W., and Zhou, D.G., A research on design of refining disc for separating wood and straw fibers together (3):26-29 (2012). • Yuchi, X.Y., Research on design and selection method of refiner plate based on refining intensity theory, Tianjin University of Science and Technology (2017). • Shen, L.X., Design and selection of bar profile of disc refiner plate, Shanghai Pulp and Paper (6):30-32(1998). • Hannu, P., Papermaking Part 1, Stock Preparation and Wet End, Hannu, P. (ed.), Finnish Paper Engineers’ Association/Paperi ja Puu Oy, Finland, pp. 113-122(2008). • Liu, H., Dong, J.X., Guo, X.Y., Qiao, L.J. and Jing, H., Quantitative Analysis of Pulp Refining and Its Research Progress, China Pulp and Paper 37(8):66-71(2018). • Wultsch, T.W., Flucher Der Escher-Wyss Kleinrefiner als Standard-Prüfgerät für moderne Stoffaufbereitungsanlagen, Papier 12(13):334-342(1958). • Roux, J. C., Bloch, J. F., Bordin, R. and Nortier, P., The net normal force per crossing point: A unified concept for the low consistency refining of pulp suspensions, In Advances in Pulp and Paper Research, Oxford, pp 51-83 (2009). • TAPPI standard TIP 0508-05: Refiner Plate Intensity Organization, The U.S.: Technical Association of The Pulp and Paper Industry (1994). • Roux. J.C., and Joris, G., Angular parameters beyond specific edge load, TAPPSA Journal, Technical Association of the Pulp & Paper Industry of South Africa, 1–10 (2005). 
### Fig. 1. Main parameters of the straight bar plates.

### Fig. 2. The relationship between the CEL calculated by different methods and the bar angle of isometric straight bar plate.

### Fig. 3. The deviation of CEL0, CEL1, CEL3 and CELC over bar angle of the straight bar plates.

### Fig. 4. The relationship between the CEL calculated by different methods and the field angle of isometric straight bar plate.

### Fig. 5. The deviation of CEL0, CEL1, CEL3 and CELC over field angle of the straight bar plates.

### Fig. 6. The relationship between the CEL calculated by different methods and the bar width of isometric straight bar plate.

### Fig. 7. The deviation of CEL0, CEL1, CEL3, CEL5 and CELC over bar width of the straight bar plates.

### Table 1. The structure of the isometric straight bar plates with different bar angles

| α | 10° | 15° | 22° | 39° | 50° |
|---|---|---|---|---|---|
| Plate structure | (image) | (image) | (image) | (image) | (image) |

### Table 2. The structure of the isometric straight bar plates with different field angles

| β | 12° | 18° | 22.5° | 24° | 30° | 36° | 40° | 45° | 60° |
|---|---|---|---|---|---|---|---|---|---|
| Plate structure | (image) | (image) | (image) | (image) | (image) | (image) | (image) | (image) | (image) |

### Table 3. The structure of the isometric straight bar plates with different bar and groove width

| Code | 1.5-2.25 | 2-3 | 2.5-3.75 | 3-4.5 | 3.5-5.25 | 4-6 | 4.5-6.75 | 5-7.5 | 6-9 |
|---|---|---|---|---|---|---|---|---|---|
| Plate structure | (image) | (image) | (image) | (image) | (image) | (image) | (image) | (image) | (image) |

### Table 4. CEL calculation methods for the rotor and stator plates with same bar parameters

| No. | Original formula | Simplified calculation formula |
|---|---|---|
| 1 | Eq. 3 | $CEL_0 = \sum_{k=1}^{p} n_k^2 \cdot \Delta R_k$ |
| 2 | Eq. 4 | $CEL_1 = \sum_{k=1}^{p} n_k^2 \cdot \Delta R_k / \cos\alpha_A$ |
| 3 | Eq. 6 | $CEL_2 = 4\pi^2 (R_0^3 - R_i^3) / [3(b+g)^2]$ |
| 4 | Eq. 7 | $CEL_3 = 4\pi^2 (R_0^3 - R_i^3) \cos\alpha_A / [3(b+g)^2]$ |
| 5 | Eq. 8 | $CEL_4 = 4\pi^2 (R_0^3 - R_i^3) [\sin(\alpha+\beta) - \sin\alpha]^2 / [3(b+g)^2 \beta^2]$ |
| 6 | Eq. 9 | $CEL_5 = 4\pi^2 (R_0^3 - R_i^3) / [3\cos\alpha\, (b+g)^2]$ |
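The simplified formulas in Table 4 are easy to evaluate numerically. The following is a minimal Python sketch for the two recommended estimates, CEL1 and CEL3; the symbol interpretation (n_k = number of bars in ring zone k, ΔR_k = radial width of that zone, R_0 and R_i = outer and inner radius of the refining zone, b and g = bar and groove width, α_A = average bar angle) and the example numbers are assumptions for illustration, not values from the paper.

```python
from math import pi, cos, radians

def cel_1(n_k, dR_k, alpha_A_deg):
    """CEL_1 from Table 4: sum over ring zones of n_k^2 * dR_k,
    divided by cos(alpha_A)."""
    return sum(n * n * dr for n, dr in zip(n_k, dR_k)) / cos(radians(alpha_A_deg))

def cel_3(R_o, R_i, b, g, alpha_A_deg):
    """CEL_3 from Table 4: 4*pi^2*(R_o^3 - R_i^3)*cos(alpha_A) / [3*(b + g)^2]."""
    return 4 * pi**2 * (R_o**3 - R_i**3) * cos(radians(alpha_A_deg)) / (3 * (b + g)**2)

# Hypothetical plate: three ring zones, lengths in metres, angle in degrees.
print(cel_1(n_k=[36, 48, 60], dR_k=[0.02, 0.02, 0.02], alpha_A_deg=15))
print(cel_3(R_o=0.20, R_i=0.10, b=0.003, g=0.0045, alpha_A_deg=15))
```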
Research Papers

# Techno-Economic Optimal Design of Solid Oxide Fuel Cell Systems for Micro-Combined Heat and Power Applications in the U.S.

Robert J. Braun, Division of Engineering, Colorado School of Mines, Golden, CO 80401

Ejector efficiency is defined as $\eta_{ejector} = (\dot{V}_2/\dot{V}_1) \cdot P_2 \ln(P_3/P_2)/(P_1 - P_3)$, where $\dot{V}$ is the volumetric flow rate and $P$ is the static pressure at the denoted location in the ejector. Subscripts 1, 2, and 3 refer to the primary driving flow (fresh air), the secondary flow (recycle), and the mixed flow at the ejector outlet, respectively. The value of heat is estimated as the difference $COE_{eo} - COE_{CHP}$.

J. Fuel Cell Sci. Technol. 7(3), 031018 (Mar 16, 2010) (15 pages), doi:10.1115/1.3211099. History: Received February 15, 2009; Revised June 01, 2009; Published March 16, 2010; Online March 16, 2010.

## Abstract

A techno-economic optimization study investigating optimal design and operating strategies of solid oxide fuel cell (SOFC) micro-combined heat and power (CHP) systems for application in U.S. residential dwellings is carried out through modeling and simulation of various anode-supported planar SOFC-based system configurations. Five different SOFC system designs operating from either methane or hydrogen fuels are evaluated in terms of their energetic and economic performances and their overall suitability for meeting residential thermal-to-electric ratios. Life-cycle cost models are developed and employed to generate optimization objective functions, which are utilized to explore the sensitivity of the life-cycle costs to various system designs and economic parameters and to select optimal system configurations and operating parameters for eventual application in single-family, detached residential homes in the U.S. The study compares the results against a baseline SOFC-CHP system that employs primarily external steam reforming of methane. The results of the study indicate that system configurations and operating parameter selections that enable minimum life-cycle cost while achieving maximum CHP-system efficiency are possible. Life-cycle cost reductions of over 30% and CHP efficiency improvements of nearly 20% from the baseline system are detailed.
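As a quick numerical check of the ejector-efficiency definition quoted in the article information above, here is a minimal Python sketch; the operating point in the example is hypothetical, and consistent units for the flows and pressures are assumed.

```python
from math import log

def ejector_efficiency(V1, V2, P1, P2, P3):
    """Ejector efficiency as defined above:
    eta = (V2 / V1) * P2 * ln(P3 / P2) / (P1 - P3),
    with 1 = primary driving flow (fresh air), 2 = secondary flow (recycle),
    3 = mixed flow at the ejector outlet; V = volumetric flow rate,
    P = static pressure."""
    return (V2 / V1) * P2 * log(P3 / P2) / (P1 - P3)

# Hypothetical operating point (flows in m^3/s, pressures in kPa):
print(ejector_efficiency(V1=0.010, V2=0.025, P1=130.0, P2=101.0, P3=104.0))
```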
## Figures

Figure 1. Process flowsheet of methane-fueled SOFC-CHP system with external reforming (Case 2a)

Figure 2. Process flowsheet of hydrogen-fueled SOFC-CHP system (Case 1a) and with CGR (Case 1b)

Figure 3. Process flowsheet of CH4-fueled SOFC-CHP system with IR and tail gas recycle (Cases 4b and 5)

Figure 4. Normalized capital cost for each system configuration

Figure 5. Normalized cost-of-electricity and life-cycle cost for each system configuration: (a) normalized COE and (b) normalized LCC

Figure 6. Breakdown of life-cycle cost contributions: (a) Case 2a cost breakdown and (b) Case 5 cost breakdown

Figure 7. Normalized life-cycle savings versus system configuration

Figure 8. The effect of cell voltage on normalized LCC, system efficiency, and stack size: (a) system efficiency and number of cells in SOFC stack, and (b) normalized LCC

Figure 9. The effect of cell temperature and air temperature rise on normalized SOFC-CHP LCC: (a) LCC versus cell temperature and (b) LCC versus cathode air temperature rise

Figure 10. Comparison of SOFC-CHP and conventional system life-cycle costs

Figure 11. Sensitivity of normalized life-cycle and capital costs to production volume and system size: (a) LCC sensitivity to production volume and (b) LCC sensitivity to power rating
## 2-17. Victor Crémieu

Victor Crémieu (1872–1935) was born in Avignon, and obtained the Ph.D. in physics at the Sorbonne under Gabriel Lippmann’s direction in 1901. As a result of his experimental investigations of rotating electrified disks, Crémieu was led at first to deny the existence of a magnetic effect of convected electricity, predicted by Faraday and Maxwell, and first detected in 1876 by Henry Rowland. Crémieu later recognized the reality of Rowland’s effect, attributing his null results to an unsuspected masking phenomenon. He went on to perform delicate, but ultimately inconclusive experiments on gravitational attraction, and in later life found employment in private industry.11endnote: 1 Annuaire de la Société française de physique, 1912, 11; Pierre Crémieu, p.c., 25.05.1997.

Much of the Crémieu-Poincaré correspondence concerns Crémieu’s attempts between 1901 and 1903 to detect a magnetic action of electrical convection. Rowland’s effect, cast in the framework of Maxwell’s theory originally by J.J. Thomson in 1881, and with greater rigor by Oliver Heaviside in 1889, assumed significance for electrodynamics only with the rise of atomistic theories, accelerated by the discovery of the electron in 1897. The Maxwellian view of electric current as decay of electric polarization gave way, with the rise of atomistic electrodynamics in the 1890s, to a conception of charged particles moving in empty space, of which Lorentz’s theory was the leading example.22endnote: 2 On the history of electrodynamics in this period, see Whittaker (1951, 305), Buchwald (1985), Jungnickel & McCormmach (1986, 231), Darrigol (2000, chap. 8).

It was at this time that Crémieu began his doctoral research at the Sorbonne. On the advice of his supervisor Gabriel Lippmann, Crémieu took up the study of electrical convection, but was unable to reproduce Rowland’s celebrated result, and concluded that a convected current had no magnetic effect, contrary to the predictions of Lorentz’s electron theory. In fact, Crémieu claimed that he had realized an open conduction current, of the sort ruled out by Maxwell’s theory of the electromagnetic field, but allowed in the alternative theories of Wilhelm Weber, Franz Neumann and Hermann von Helmholtz. Poincaré, who was by this time the leading French authority on electrodynamics, took an interest in Crémieu’s experiments.33endnote: 3 $F^{17}13248$, Archives nationales; Poincaré 1901a; Petiau 1954, 391–420. Only the first six sections of the 1901 article were reedited in Science et hypothèse (Poincaré 1902a), omitting the discussion of Crémieu’s experiments.

Maxwell’s theory had found strong confirmation in Heinrich Hertz’s famous experiments with electromagnetic waves, and Poincaré had played an important part in this process by contributing a theory of the resonator. Several electrodynamicists, including Hertz, had attempted to apply Maxwell’s theory to matter in motion. Hertz’s theory of the electrodynamics of moving bodies, however, assumed in effect that the carrier of electromagnetic effects – the ether – was completely convected by matter in motion. Hertz (1890, 398) admitted the implausibility of his constraints, a limitation which was partially rectified by a different, and more successful, theory elaborated by Lorentz. Here it was assumed that a dielectric in motion carried with it not the ether, but electrons. Lorentz’s theory, like Hertz’s, could account for the experimental results of both Röntgen and Rowland.
Despite its status as a crucial test of the leading candidate for a final theory of physics, in the laboratory, Rowland’s effect had been established qualitatively at best.44endnote: 4 For the view of Rowland’s effect as a crucial test of Lorentz’s theory, see the latter’s 1907 lecture “The experimental foundations of the theory of electricity” (Zeeman & Fokker 1935, 125–151), and Eichenwald (1908, 83). Before Crémieu took up the problem, the magnetic effect of a charged particle in motion was borne out by a series of observations: spark discharge showed such an effect (assuming material particles from the electrode carry charge to the anode), as did ionic currents in electrolysis (assuming ions to be charge carriers), and cathode rays (assumed to be composed of charged particles). Rowland’s experiment required no special assumptions concerning the charge carrier, and assumed a particular epistemological status. For Maxwellians, the Rowland effect was not a major concern, although there were differences on how to calculate the self-energy of a charged sphere in motion (Buchwald 1985, 273). What Rowland’s experiment indicated to the Cambridge theorist Joseph Larmor, however, was the discrete nature of electrical current, made up of electric charges in motion (Whittaker 1951, 365). Larmor convinced himself --- and many others --- that Rowland’s result could not be accounted for by Maxwell’s theory, and required a theory of the electron, such as he proposed.55endnote: 5 Larmor’s view ignored Hertz’s demonstration that the existence of displacement currents on a rotating, uniformly electrified disk was implied by Maxwell’s theory; on this and other errors in Larmor’s criticism of Maxwell’s theory, see Darrigol (2000, 342). The discovery of the electron and the Zeeman effect provided further arguments for an atomistic view of electricity. In the face of evidence amassed in favor of a magnetic effect of convected charge, most physicists naturally assumed there to be errors in Crémieu’s laboratory setup, his physical reasoning, or both. This high degree of confidence in both the reality of the effect and its necessity from a Maxwellian standpoint meant that Crémieu’s work was unlikely to be given a hearing – let alone an objective evaluation – without the assistance of a leading scientist. Poincaré assumed this role with relish, observing wryly that many of Crémieu’s skeptics were English. Young Crémieu’s work was to be judged on its merits alone, in other words (Poincaré 1901a, 410). Prominent among the skeptics was Joseph Larmor, Britain’s leading expert on electron theory, who prompted H.A. Wilson to write an article explaining Crémieu’s result as the consequence of a basic flaw in his apparatus design.66endnote: 6 Indorato & Masotto 1989, 130. On Larmor’s view of Rowland’s experiment, see Darrigol (2000, 342). Others, like Augusto Righi and Tullio Levi-Civita, similarly expected a null effect on Maxwellian grounds, based on the design of Crémieu’s apparatus.77endnote: 7 See the correspondence with Levi-Civita (§ 2-37-2). The superior design of Crémieu’s experiment, however, figures strongly in Poincaré’s early support of his results (§ 2-62-6), which rendered the existence of Rowland’s effect “very doubtful”. Poincaré’s stormy relation with Maxwell’s theory began with his preface to the textbook Électricité et optique, where he took a celebrated swipe (Poincaré 1890, v) at the precision and logic of Maxwell’s Treatise on Electricity and Magnetism. 
In correspondence with Hertz, however, he aligned himself squarely on the side of Maxwell’s ideas (§ 2-30-3). Leading physicists, including G.F. FitzGerald and Paul Drude, contested details of Poincaré’s interpretation of Maxwell, and it seems clear that Poincaré was led astray on occasion by his flawed understanding of the basic Maxwellian notions of charge and current.88endnote: 8 FitzGerald 1892; Drude 1897, XXI. Even so, his Sorbonne lectures of 1888 and 1890 were instrumental to the diffusion of Maxwell’s ideas, and provided a convincing demonstration of the power of British abstract dynamics.99endnote: 9 Darrigol 1993, 223. On the French reception of Maxwell’s theory, including Poincaré’s role, see Coelho Abrantès (1985) and Atten (1992). When the second edition of these lectures appeared in 1901, with new material on the electron theories of Lorentz and Larmor, Poincaré warned readers that these theoretical innovations were threatened by Crémieu’s results. Nevertheless, when the existence of Maxwell’s displacement currents – denied by Crémieu – was attacked in the pages of a technical journal he co-edited, Poincaré (1901b) took it upon himself to respond in Maxwell’s stead. For Crémieu, the international attention to his work was surely flattering, but the pressure to respond to sharp, and often dismissive criticism from Wilson and others took its toll, as both he and Poincaré began to question the certainty of some of his results (§ 2-17-4). Crémieu’s results were subject to close scrutiny in France. At the Sorbonne, Lippmann and Bouty found them to be sound, and at the Collège de France, Marcel Brillouin (1904, 265) announced that they represented a substantial challenge to Maxwell’s theory. Yet Poincaré’s Sorbonne colleague Henri Pellat remained dubious of Crémieu’s results, while Alfred Potier, Poincaré’s former instructor at the École polytechnique, could see no fundamental contradiction between Crémieu’s results and Maxwell’s theory. Poincaré answered Potier in an exchange published in part by the technical journal L’Éclairage électrique (Poincaré 1902), and fully transcribed in (§ 2-48-12). He also sought – without success – to convince the reputed experimenter René Blondlot to reproduce Crémieu’s experiments (§ 2-9-11). The principal challenge to Poincaré and Crémieu came not from theorists, but from a young American experimenter, Harold Pender. At Johns Hopkins University, on the instigation of the bedridden Rowland, Pender carried out a series of experiments that confirmed Rowland’s earlier results, and set a new standard of precision for measurements of charge convection on a rotating disk. In addition to providing quantitative evidence of Rowland’s effect, Pender pointed out certain difficulties in Crémieu’s setup that had been overlooked by Poincaré. To settle the matter, Poincaré, after carefully seeking – and obtaining – William Thomson’s endorsement, made an extraordinary move: he called for a side-by-side repetition of Crémieu’s and Pender’s experiments.1010endnote: 10 Two years earlier, Poincaré (1902b, 218) had arranged for an exchange of astronomers between Paris and Greenwich, in order to resolve discordant measurements of the difference in longitude between the two observatories. 
As for Thomson, following Crémieu’s presentation at the Glasgow meeting of the British Association in September, 1901, he remarked that only a “repetition of the experiments under the simplest possible conditions” could confirm a result “against which there is so much indirect evidence, and which, if accepted, would necessitate the entire reconstruction of electromagnetic theory” (cited by Lees 1901). The Institut de France underwrote the expenses incurred by Bouty’s laboratory in hosting the face-off, probably on Poincaré’s suggestion, while the Carnegie Institution of Washington provided funds for Pender’s participation. When the American money was all spent, Poincaré asked his Hopkins colleague Joseph Ames (§ 2-1-1) to seek an extension.1111endnote: 11 The Carnegie Institution of Washington underwrote all of Pender’s expenses, in the amount of $662.57. According to Pender’s unpublished report to his sponsor, the Institut de France contributed 7000 francs to defray costs incurred by Bouty’s laboratory in hosting the experiments (Pender to D. C. Gilman, 09.05.1903, Harold Pender file, Archives of the Carnegie Institution of Washington). At the issue of a three-month collaboration in Bouty’s laboratory in 1903, Crémieu and Pender qualitatively confirmed Rowland’s effect on both sides,1212endnote: 12 Pender attributed his failure to obtain a quantitative reproduction of his earlier results to magnetic disturbances in Bouty’s Sorbonne laboratory (Pender to D.C. Gilman, 09.05.1903, op. cit.). and agreed that the effect had been masked by a dielectric coating applied to the disks and armatures of Crémieu’s apparatus. Using Pender’s equipment, the two physicists demonstrated Rowland’s effect for the French Physics Society, on April 17, 1903, marking the end of the controversy, as far as Pender was concerned (Crémieu & Pender 1903, 956). On the following day, however, another Sorbonne experimentalist challenged the explanation offered for Crémieu’s null results, denying that a dielectric coating could reduce the magnetic effect of convected charge (Vasilesco-Karpen, 1903, 168–169). Taking up the new challenge, this time with a colleague, Crémieu sought to establish what one physicist calls the “Crémieu-Pender effect,” publishing two notes on this investigation in the Comptes rendus, both of which were presented by Poincaré.1313endnote: 13 Sutherland 1904; Crémieu & Malclès 1904a, 1904b, presented 11.11.1904 and 12.12.1904, respectively. However, the latter effect resisted all efforts by others at reproduction, and was summarily dismissed as spurious by A. Eichenwald, who assigned to experimental error Crémieu’s repeated null findings for the Rowland effect.1414endnote: 14 Eichenwald (1908, 86) eliminated the hypothesis of a masking effect generated by the dielectric, and guessed that Crémieu’s null results were due to inadequate isolation and point discharge. Along with Eichenwald, F. Himstedt (1904) managed to reproduce Pender’s measurements quantitatively, and established the Rowland effect on firm ground. Since few leading physicists had entertained doubts over the reality of the Rowland effect, the upshot of the Crémieu-Pender meeting was a striking confirmation of the excellence of Rowland’s school of experimental physics, at the expense of Poincaré and, more generally, of French expertise in precision electrical measurement.1515endnote: 15 On the damage to French scientific prestige, see M.-J. Nye (1986, 71). The Pender-Crémieu outcome heralded the greater blow dealt by R.W. 
Wood (another Hopkins physicist), in his much-publicized debunking of René Blondlot’s N rays. On Rowland’s school, see Sweetnam (2000). Poincaré acknowledged the outcome by silently rewriting the passage of La science et l’hypothèse concerning Crémieu’s experiments. According to his mature view of the episode, Crémieu’s experiments shook the foundations of modern electrodynamics: L’édifice de l’électrodynamique semblait, au moins dans ses grandes lignes, définitivement construit ; cette quiétude a été récemment troublée par les expériences de M. Crémieu qui, un instant, ont semblé contredire le résultat autrefois obtenu par Rowland. Les recherches nouvelles ne les ont pas confirmées, et la théorie de Lorentz a subi victorieusement l’épreuve.1616endnote: 16 Poincaré 1906, 281. The cited passage was struck from later editions of Science et hypothèse. Poincaré’s first published view of Crémieu’s experiments (1901a) was published in modified form in chapter 13 of La science et l’hypothèse, in several editions (1902, 1906, 1907); excerpts of his correspondence with Alfred Potier on this topic appear in Poincaré (1902). Poincaré’s mature view of the Crémieu episode (Poincaré 1908, 387) emphasized that Rowland’s effect is required by the first law of thermodynamics. Later readings of Rowland’s experiment and the Crémieu-Pender experiments include J. Sivadjian (1953) and H. Arzeliès (1959, 92–98); both references include extensive bibliographies. On the Crémieu-Pender controversy see J.D. Miller (1972), and for discussions of Poincaré’s role see A.I. Miller (1973), O. Darrigol (1995), and the detailed study by L. Indorato & G. Masotto (1989). The latter reference provides a transcription of several of the letters exchanged between Poincaré, Crémieu, and W. Thomson. The Poincaré-Crémieu correspondence is not limited to the investigation of Rowland’s effect. Having invested heavily in his bid to overturn Maxwell’s theory – a doctoral thesis and some 21 papers were now rubbish – Crémieu was humiliated by the outcome. He was not totally defeated, however, as he had another line of research in progress, one with the potential to repair the damage done to his reputation by the face-off with Pender. This second project began in 1902 with financial support from the Institut de France, and was designed to overturn Newton’s law of gravitational attraction. Several physicists, including H.A. Lorentz, J.H. Poynting, and Poincaré had wondered if gravitation, like electromagnetism, is not a field-theoretical phenomenon, the action of which propagates in the manner of a wave; if this were so, then the gravitational force might depend in some way upon the characteristics of the matter through which it propagates.1717endnote: 17 On gravitational absorption and field theories of gravitation ca. 1900, see Zenneck (1903) and Roseveare (1982). On Vito Volterra’s attempt to characterize the energy of gravitation, see (§ 2-58-8). There was also the possibility of a small variation of the inverse-square law, which seemed likely to Poincaré (1900, 1165) in the form of additional terms, whose existence would be revealed at short range.1818endnote: 18 A short-range effect of this sort made headlines in the late 1980s, as Franklin (1993) observes. Crémieu’s experiments were meant to test the dependence of gravitational attraction on the substance separating two test bodies. 
The experiments were of two types: in the first set, he measured the displacement of oil droplets in liquid, while in the second set, he relied on a custom-built torsion balance to measure the attraction of bodies in air and in water. Both sets of experiments produced results at odds with the Newtonian prediction, yet once again, Crémieu convinced himself that they were artifacts of his apparatus.1919endnote: 19 Crémieu 1905a, 1905b, 1905d, 1906, 1910. The curious result of the first set of experiments – the mutual attraction of oil droplets in liquid – was explained away by Poincaré as the consequence of the liquid’s inhomogeneity (§ 2-17-15). Poincaré commented on various aspects of the oil-drop experiments, but warned Crémieu not to publish anything, or at least to forgo the interpretation of his experimental results, until such time as the charge convection controversy was resolved, lest he destroy his credibility. Crémieu followed this advice, while pursuing his investigations. In 1904 he worked on a Cavendish experiment based on a novel torsion balance, for which Poincaré provided the theory. Poincaré also communicated to the Paris Academy of Science several of the notes Crémieu submitted in connection with the torsion balance. Although Poincaré did not comment publicly on these experiments, he covered the subject of gravitational absorption in the course of his 1906–1907 lectures on the limits of Newton’s law (Poincaré 1953, 186–190). As for Crémieu, although he acknowledged Poincaré’s guidance in the Cavendish experiments, he seems to have relied more on the aid and counsel of Edmond Bouty.2020endnote: 20 For Crémieu’s acknowledgments see Crémieu (1905c, 499). There is no record of contacts between Poincaré and Crémieu from 1905 until Poincaré’s death in 1912, at which time Crémieu urged Paul Langevin to become Poincaré’s intellectual successor.2121endnote: 21 Crémieu to Langevin, ca. 07.1912, Langevin Archives 83. Time-stamp: " 3.05.2019 01:30" ### Notes • 1 Annuaire de la Société française de physique, 1912, 11; Pierre Crémieu, p.c., 25.05.1997. • 2 On the history of electrodynamics in this period, see Whittaker (1951, 305), Buchwald (1985), Jungnickel & McCormmach (1986, 231), Darrigol (2000, chap. 8). • 3 $F^{17}13248$, Archives nationales; Poincaré 1901a; Petiau 1954, 391–420. Only the first six sections of the 1901 article were reedited in Science et hypothèse (Poincaré 1902a), omitting the discussion of Crémieu’s experiments. • 4 For the view of Rowland’s effect as a crucial test of Lorentz’s theory, see the latter’s 1907 lecture “The experimental foundations of the theory of electricity” (Zeeman & Fokker 1935, 125–151), and Eichenwald (1908, 83). • 5 Larmor’s view ignored Hertz’s demonstration that the existence of displacement currents on a rotating, uniformly electrified disk was implied by Maxwell’s theory; on this and other errors in Larmor’s criticism of Maxwell’s theory, see Darrigol (2000, 342). • 6 Indorato & Masotto 1989, 130. On Larmor’s view of Rowland’s experiment, see Darrigol (2000, 342). • 7 See the correspondence with Levi-Civita (§ 2-37-2). • 8 FitzGerald 1892; Drude 1897, XXI. • 9 Darrigol 1993, 223. On the French reception of Maxwell’s theory, including Poincaré’s role, see Coelho Abrantès (1985) and Atten (1992). • 10 Two years earlier, Poincaré (1902b, 218) had arranged for an exchange of astronomers between Paris and Greenwich, in order to resolve discordant measurements of the difference in longitude between the two observatories. 
As for Thomson, following Crémieu’s presentation at the Glasgow meeting of the British Association in September, 1901, he remarked that only a “repetition of the experiments under the simplest possible conditions” could confirm a result “against which there is so much indirect evidence, and which, if accepted, would necessitate the entire reconstruction of electromagnetic theory” (cited by Lees 1901). • 11 The Carnegie Institution of Washington underwrote all of Pender’s expenses, in the amount of$662.57. According to Pender’s unpublished report to his sponsor, the Institut de France contributed 7000 francs to defray costs incurred by Bouty’s laboratory in hosting the experiments (Pender to D. C. Gilman, 09.05.1903, Harold Pender file, Archives of the Carnegie Institution of Washington). • 12 Pender attributed his failure to obtain a quantitative reproduction of his earlier results to magnetic disturbances in Bouty’s Sorbonne laboratory (Pender to D.C. Gilman, 09.05.1903, op. cit.). • 13 Sutherland 1904; Crémieu & Malclès 1904a, 1904b, presented 11.11.1904 and 12.12.1904, respectively. • 14 Eichenwald (1908, 86) eliminated the hypothesis of a masking effect generated by the dielectric, and guessed that Crémieu’s null results were due to inadequate isolation and point discharge. • 15 On the damage to French scientific prestige, see M.-J. Nye (1986, 71). The Pender-Crémieu outcome heralded the greater blow dealt by R.W. Wood (another Hopkins physicist), in his much-publicized debunking of René Blondlot’s N rays. On Rowland’s school, see Sweetnam (2000). • 16 Poincaré 1906, 281. The cited passage was struck from later editions of Science et hypothèse. Poincaré’s first published view of Crémieu’s experiments (1901a) was published in modified form in chapter 13 of La science et l’hypothèse, in several editions (1902, 1906, 1907); excerpts of his correspondence with Alfred Potier on this topic appear in Poincaré (1902). Poincaré’s mature view of the Crémieu episode (Poincaré 1908, 387) emphasized that Rowland’s effect is required by the first law of thermodynamics. • 17 On gravitational absorption and field theories of gravitation ca. 1900, see Zenneck (1903) and Roseveare (1982). On Vito Volterra’s attempt to characterize the energy of gravitation, see (§ 2-58-8). • 18 A short-range effect of this sort made headlines in the late 1980s, as Franklin (1993) observes. • 19 Crémieu 1905a, 1905b, 1905d, 1906, 1910. The curious result of the first set of experiments – the mutual attraction of oil droplets in liquid – was explained away by Poincaré as the consequence of the liquid’s inhomogeneity (§ 2-17-15). • 20 For Crémieu’s acknowledgments see Crémieu (1905c, 499). • 21 Crémieu to Langevin, ca. 07.1912, Langevin Archives 83. ## References • H. Arzeliès and J. Henry (1959) Milieux conducteurs ou polarisables en mouvement. Gauthier-Villars, Paris. Cited by: 2-17. Victor Crémieu. • M. Atten (1992) Les théories électriques en France (1870–1900) : la contribution des mathématiciens, des physiciens et des ingénieurs à la construction de la théorie de Maxwell. Ph.D. Thesis, École des hautes études en sciences sociales, Paris. Cited by: endnote 9. • M. L. Brillouin (1904) Propagation de l’électricité : histoire et théorie. Hermann, Paris. Cited by: 2-17. Victor Crémieu. • J. Z. Buchwald (1985) From Maxwell to Microphysics. University of Chicago Press, Chicago. Cited by: 2-17. Victor Crémieu, endnote 2. • P. C. 
Coelho Abrantès (1985) La réception en France des théories de Maxwell concernant l’électricité et le magnétisme. Ph.D. Thesis, Université Paris 1, Paris. Cited by: endnote 9. • V. Crémieu and L. Malclès (1904a) Recherches sur les diélectriques solides. Comptes rendus hebdomadaires des séances de l’Académie des sciences de Paris 139, pp. 790–792. Cited by: endnote 13. • V. Crémieu and L. Malclès (1904b) Recherches sur les diélectriques solides. Comptes rendus hebdomadaires des séances de l’Académie des sciences de Paris 139, pp. 969–972. Cited by: endnote 13. • V. Crémieu and H. Pender (1903) On the magnetic effect of electric convection. Philosophical Magazine 6, pp. 442–464. Cited by: 2-17. Victor Crémieu. • V. Crémieu (1905a) Attraction observée entre gouttes liquides suspendues dans un liquide de même densité. Comptes rendus hebdomadaires des séances de l’Académie des sciences de Paris 140, pp. 80–83. Cited by: endnote 19. • V. Crémieu (1905b) Dispositif auto-amortisseur applicable aux mouvements pendulaire et oscillatoire. Comptes rendus hebdomadaires des séances de l’Académie des sciences de Paris 140, pp. 1029–1031. Cited by: endnote 19. • V. Crémieu (1905c) Recherches expérimentales sur la gravitation. Bulletin des séances de la Société française de physique, pp. 485–499. Cited by: endnote 20. • V. Crémieu (1905d) Recherches sur la gravitation. Comptes rendus hebdomadaires des séances de l’Académie des sciences de Paris 141, pp. 653–655. Cited by: endnote 19. • V. Crémieu (1906) Recherches sur la gravitation. Comptes rendus hebdomadaires des séances de l’Académie des sciences de Paris 143, pp. 887–889. Cited by: endnote 19. • V. Crémieu (1910) Sur une erreur systématique qui limite la précision de l’expérience de Cavendish. Comptes rendus hebdomadaires des séances de l’Académie des sciences de Paris 150, pp. 863–866. Cited by: endnote 19. • O. Darrigol (1993) The electrodynamic revolution in Germany as documented by early German expositions of ‘Maxwell’s theory’. Archive for History of Exact Sciences 45, pp. 189–280. Cited by: endnote 9. • O. Darrigol (1995) Henri Poincaré’s criticism of fin de siècle electrodynamics. Studies in History and Philosophy of Modern Physics 26, pp. 1–44. Cited by: 2-17. Victor Crémieu. • O. Darrigol (2000) Electrodynamics from Ampère to Einstein. Oxford University Press, Oxford. Cited by: endnote 2, endnote 5, endnote 6. • P. Drude (1897) Ueber Fernewirkungen. Annalen der Physik und Chemie 62, pp. ix–xlix. Cited by: endnote 8. • A. Eichenwald (1908) Über die magnetischen Wirkungen elektrischer Konvektion. Jahrbuch der Radioaktivität und Elektronik 5, pp. 82–98. Cited by: endnote 14, endnote 4. • G. F. FitzGerald (1892) M. Poincaré and Maxwell. Nature 45, pp. 532–533. Cited by: endnote 8. • A. Franklin (1993) The Rise and Fall of the Fifth Force. AIP, New York. Cited by: endnote 18. • H. Hertz (1890) Über die Grundgleichungen der Elektrodynamik für bewegte Körper. Annalen der Physik und Chemie 41, pp. 369–399. Cited by: 2-17. Victor Crémieu. • F. Himstedt (1904) Quantitative Versuche über den Rowlandeffekt. Annalen der Physik 318, pp. 100–123. Cited by: 2-17. Victor Crémieu. • L. Indorato and G. Masotto (1989) Poincaré’s role in the Crémieu-Pender controversy over electric convection. Annals of Science 46 (2), pp. 117–163. Cited by: 2-17. Victor Crémieu, endnote 6. • C. Jungnickel and R. McCormmach (1986) Intellectual Mastery of Nature: Theoretical Physics from Ohm to Einstein, Volume 2: The Now Mighty Theoretical Physics, 1870–1925. 
University of Chicago Press, Chicago. Cited by: endnote 2. • C. H. Lees (1901) Mathematics and physics at the British Association. Nature 64 (1667), pp. 586–587. Cited by: endnote 10. • A. I. Miller (1973) A study of Henri Poincaré’s ‘Sur la dynamique de l’électron’. Archive for History of Exact Sciences 10 (3), pp. 207–328. Cited by: 2-17. Victor Crémieu. • J. D. Miller (1972) Rowland and the nature of electric currents. Isis 63, pp. 5–27. Cited by: 2-17. Victor Crémieu. • M. J. Nye (1986) Science in the Provinces. University of California Press, Berkeley. Cited by: endnote 15. • G. Petiau (Ed.) (1954) Œuvres d’Henri Poincaré, Volume 10. Gauthier-Villars, Paris. Cited by: endnote 3. • H. Poincaré and A. Potier (1902) Sur les expériences de M. Crémieu et une objection de M. Wilson. Éclairage électrique 31, pp. 83–93. Cited by: 2-17. Victor Crémieu, endnote 16. • H. Poincaré (1890) Électricité et optique, Volume 1. Georges Carré, Paris. Cited by: 2-17. Victor Crémieu. • H. Poincaré (1900) Les relations entre la physique expérimentale et la physique mathématique. Revue générale des sciences pures et appliquées 11, pp. 1163–1175. Cited by: 2-17. Victor Crémieu. • H. Poincaré (1901a) A propos des expériences de M. Crémieu. Revue générale des sciences pures et appliquées 12, pp. 994–1007. Cited by: 2-17. Victor Crémieu, endnote 16, endnote 3. • H. Poincaré (1901b) Sur les excitateurs et résonateurs hertziens (à propos d’un article de M. Johnson). Éclairage électrique 29, pp. 305–307. Cited by: 2-17. Victor Crémieu. • H. Poincaré (1902a) La science et l’hypothèse. Flammarion, Paris. Cited by: endnote 3. • H. Poincaré (1902b) Les progrès de l’astronomie en 1901. Bulletin de la Société astronomique de France 16, pp. 214–223. Cited by: endnote 10. • H. Poincaré (1906) La Science et l’hypothèse. Flammarion, Paris. Cited by: endnote 16. • H. Poincaré (1908) La dynamique de l’électron. Revue générale des sciences pures et appliquées 19, pp. 386–402. Cited by: endnote 16. • H. Poincaré (1953) Les limites de la loi de Newton. Bulletin astronomique 17 (3), pp. 121–269. Cited by: 2-17. Victor Crémieu. • N. T. Roseveare (1982) Mercury’s Perihelion: From Le Verrier to Einstein. Oxford University Press, Oxford. Cited by: endnote 17. • J. Sivadjian (1953) Le champ et le mouvement. Archives des sciences physiques et naturelles 6, pp. 191–228. Cited by: 2-17. Victor Crémieu. • A. Sommerfeld (Ed.) (1903) Encyklopädie der mathematischen Wissenschaften mit Einschluss ihrer Anwendungen V, Physik, Volume 1. Teubner, Leipzig. Cited by: J. Zenneck (1903). • W. Sutherland (1904) The Crémieu-Pender discovery. Philosophical Magazine 7, pp. 405–407. Cited by: endnote 13. • G. K. Sweetnam (2000) The Command of Light: Rowland’s School of Physics and the Spectrum. American Philosophical Society, Philadelphia. Cited by: endnote 15. • N. Vasilesco-Karpen (1903) Sur la Convection électrique. Bulletin des séances de la Société française de physique, pp. 162–172. Cited by: 2-17. Victor Crémieu. • E. T. Whittaker (1951) A History of the Theories of Aether and Electricity, Volume 1: The Classical Theories. T. Nelson, London. Cited by: 2-17. Victor Crémieu, endnote 2. • P. Zeeman and A. D. Fokker (Eds.) (1935) Collected Papers of H. A. Lorentz, Volume 8. Martinus Nijhoff, The Hague. Cited by: endnote 4. • J. Zenneck (1903) Gravitation. pp. 25–67. Cited by: endnote 17.
# Published Articles 2018-04-20 06:05 Measurements of b-hadron lifetimes and b-meson oscillations / Abbaneo, Duccio (CERN) World averages are presented. Implications for the CKM matrix elementsare briefly discussed. comments: The current status of b-hadron lifetimes and b-meson oscillations measurements is reviewed.. 1999 - Published in : (1999) , pp. 3.01 Fulltext: PDF; External link: Fulltext In : Meeting of the Division of Particles and Fields (DPF) of the American Physical Society (APS), Los Angeles, CA, USA, 6 - 9 Jan 1999, pp.3.01 2018-04-20 06:05 Measurement of the e+ e- --> ZZ Production Cross Section / Loomis, C (CERN) /ALEPH The e+ e- --> ZZ cross sections at sqrt(s)=182.7 and 189 GeV have beenmeasured using the ALEPH detector and the 1997 and 1998 LEP2 datasamples representing integrated luminosities of 56.8 and 173.5 pb-1,respectively. The selections cover all of the visible ZZ final statesand yield cross section measurements of sigma_NC2(182.7 GeV) = 0.22 +- (0.18,0.22) (stat.) +- 0.04 (syst.) pb and sigma_NC2(188.6 GeV) = 0.63 +- 0.12 (stat.) +- 0.05 (syst.) pb, consistent with the Standard Model values of 0.25 and 0.63 pb, respectively.. 2005 - Published in : (2005) , pp. 1.09 Fulltext: PDF; External link: Fulltext In : Meeting of the Division of Particles and Fields (DPF) of the American Physical Society (APS), Los Angeles, CA, USA, 6 - 9 Jan 1999, pp.1.09 2018-04-20 06:05 Searches for GMSB at √s = 189 GeV at LEP / Taylor, Gary (UC, Santa Cruz) Searches for Gauge Mediated Supersymmetry Breaking topologies are performed on 170 pb−1 of data collected at a centre of mass energy of 188.6 GeV by each of the four LEP experiments. No evidence for such processes is found, allowing 95% C.L. lower limits to be set on the masses of various particles in the context of GMSB. [...] - Published in : , pp. 7.12 Fulltext: PDF; External link: Fulltext In : Meeting of the Division of Particles and Fields (DPF) of the American Physical Society (APS), Los Angeles, CA, USA, 6 - 9 Jan 1999, pp.7.12 2018-04-20 06:05 The NA62 Calorimeter Level 0 Trigger Operation and Performances / Salamon, A (INFN, Rome2 ; Rome U., Tor Vergata) ; Aliberti, Riccardo (Mainz U.) ; Ammendola, Roberto (INFN, Rome2 ; U. Rome 2, Tor Vergata (main)) ; Battista, Daniele (INFN, Rome2 ; U. Rome 2, Tor Vergata (main)) ; Barbanera, Mattia (INFN, Pisa ; U. Pisa (main)) ; Bizzarri, Marco (INFN, Perugia ; U. Perugia (main)) ; Bonaiuto, Vincenzo (INFN, Rome2 ; U. Rome 2, Tor Vergata (main)) ; Ceccucci, Augusto (CERN) ; Checcucci, Bruno (INFN, Perugia ; U. Perugia (main)) ; De Simone, Nicola (CERN) et al. The NA62 experiment at the CERN SPS aims to measure the branching ratio of the very rare kaon decay $K^+ \rightarrow \pi^+ \nu \bar{\nu}$, collecting $\sim 100$ events with a 10\% background to make a stringent test of the Standard Model. The calorimeter level 0 trigger is used to suppress one of the main backgrounds, the $K^+ \rightarrow \pi^+ \pi^0$ decay, and to select events with a $\pi^+$ in the final state. [...] SISSA, 2018 - 6 p. - Published in : PoS EPS-HEP2017 (2017) 517 Fulltext: PDF; External link: PoS server In : 2017 European Physical Society Conference on High Energy Physics, Venice, Italy, 05 - 12 Jul 2017, pp.517 2018-04-20 06:05 Performance and recent developments of the real-time track reconstruction and alignment of the LHCb detector. / Dziurda, Agnieszka (CERN) /LHCb The LHCb detector is a single-arm forward spectrometer designed for the efficient reconstruction decays of $c$- and $b$-hadrons. 
For Run II (2015-2018) LHCb has introduced a novel real-time detector alignment and calibration strategy. [...] SISSA, 2018 - 5 p. - Published in : PoS EPS-HEP2017 (2017) 492 Fulltext: PDF; External link: PoS server In : 2017 European Physical Society Conference on High Energy Physics, Venice, Italy, 05 - 12 Jul 2017, pp.492 2018-04-20 06:05 The CERN Neutrino Platform / Bordoni, Stefania (CERN) The long-baseline neutrino programme has been classified as one of the four highest-priority scientific objectives in 2013 by the European Strategy for Particle Physics. The Neutrino Platform is the CERN venture to foster and support the next generation of accelerator-based neutrino oscillation experiments. [...] SISSA, 2018 - 5 p. - Published in : PoS EPS-HEP2017 (2017) 483 Fulltext: PDF; External link: PoS server In : 2017 European Physical Society Conference on High Energy Physics, Venice, Italy, 05 - 12 Jul 2017, pp.483 2018-04-20 06:05 Radiation studies on resistive bulk-micromegas chambers at the CERN Gamma Irradiation Facility / Alvarez Gonzalez, Barbara (CERN) ; Bortfeldt, Jonathan Frederik (CERN) ; Camerlingo, Maria Teresa (U. Naples (main) ; INFN, Naples) ; Farina, Edoardo (CERN ; U. Pavia (main) ; INFN, Pavia) ; Iengo, Paolo (CERN) ; Longo, Luigi (U. Salento, Lecce (main) ; INFN, Lecce) ; Samarati, Jerome (CERN) ; Sidiropoulou, Ourania (CERN ; U. Wurzburg (main)) ; Wotschack, Joerg (CERN) With the growing diffusion of resistive Micromegas detectors in HEP experiments the study of long-term aging behaviour is becoming more and more relevant. Two resistive bulk-Micromegas detectors were installed in May 2015 at the CERN Gamma Irradiation Facility and exposed to an intense gamma irradiation with the aim to study the detector behavior under high irradiation and the long-term aging. [...] SISSA, 2018 - 5 p. - Published in : PoS EPS-HEP2017 (2017) 477 Fulltext: PDF; External link: PoS server In : 2017 European Physical Society Conference on High Energy Physics, Venice, Italy, 05 - 12 Jul 2017, pp.477 2018-04-20 06:05 Recent results from LHCb on semileptonic decays of b-hadrons / Bozzi, Concezio (INFN, Ferrara ; CERN) /LHCb Recent results on semileptonic decays of $b$ hadrons at LHCb are presented, with particular emphasis on decays involving $\tau$ leptons in the final state. A new measurement of $\mathscr{R}(D^{*-}) \equiv \mathscr{B}(B^0 \to D^{*-} \tau^+ \nu_{\tau}) / \mathscr{B}(B^0 \to D^{*-} \mu^+ \nu_{\mu})$ is reported, by using for the first time the $\tau$ lepton decays with three charged pions in the final state. [...] SISSA, 2018 - 7 p. - Published in : PoS EPS-HEP2017 (2017) 206 Fulltext: PDF; External link: PoS server In : 2017 European Physical Society Conference on High Energy Physics, Venice, Italy, 05 - 12 Jul 2017, pp.206 2018-04-20 06:05 The ENUBET project: high precision neutrino flux measurements in conventional neutrino beams / Terranova, F (Milan Bicocca U. ; INFN, Milan Bicocca) ; Ballerini, G (Insubria U., Como ; INFN, Milan Bicocca) ; Berra, A (Insubria U., Como ; INFN, Milan Bicocca) ; Boanta, R (INFN, Milan Bicocca ; Milan Bicocca U.) ; Bonesini, M (INFN, Milan Bicocca) ; Brizzolari, C (Insubria U., Como ; INFN, Milan Bicocca) ; Calviani, M (CERN) ; Catanesi, M G (INFN, Bari) ; Cecchini, S (INFN, Bologna) ; Cindolo, F (INFN, Bologna) et al. The ENUBET Collaboration is developing a technology to reduce by one order of magnitude the uncertainty on fluxes in conventional neutrino beams.
The ENUBET beamline exploits the large angle production of positrons from $K^+ \rightarrow e^+ \pi^0 \nu_e$ in the decay tunnel to monitor the associated production of $\nu_e$. [...] SISSA, 2018 - 5 p. - Published in : PoS EPS-HEP2017 (2017) 138 Fulltext: PDF; External link: PoS server In : 2017 European Physical Society Conference on High Energy Physics, Venice, Italy, 05 - 12 Jul 2017, pp.138 2018-04-20 06:05 New limits on heavy neutrino from NA62 / Koval, Michal (CERN) /NA62 The NA62 experiment at CERN collected large samples of charged kaon decays in flight with a minimum bias trigger configuration in 2007 and in 2015 using a completely new detector setup. Upper limits on the rate of the charged kaon decay into a muon and a heavy neutral lepton (HNL) obtained from 2007 data and limits for the charged kaon decay into an electron and a HNL obtained from 2015 data, are reported in this proceedings.. SISSA, 2018 - 7 p. - Published in : PoS EPS-HEP2017 (2017) 116 Fulltext: PDF; External link: PoS server In : 2017 European Physical Society Conference on High Energy Physics, Venice, Italy, 05 - 12 Jul 2017, pp.116
## Found 5,524 Documents (Results 1–100) 100 MathJax Full Text: Full Text: Full Text: ### Existence and regularity of solution of the liquid $${}^4 \mathrm{He}$$ model coupling with an applied magnetic field. (English)Zbl 07507385 MSC:  35Qxx 35K60 35Q40 Full Text: Full Text: ### Periodic solutions of parabolic equations with hysteresis in dimension 1. (English. Russian original)Zbl 1484.35021 J. Math. Sci., New York 260, No. 1, 21-32 (2022); translation from Zap. Nauchn. Semin. POMI 489, 36-54 (2020). MSC:  35B10 35K60 Full Text: Full Text: Full Text: ### Inverse problem related to boundary shape identification for a hyperbolic differential equation. (English)Zbl 07459468 MSC:  35R30 35B65 35K60 Full Text: Full Text: Full Text: ### Two-dimensional boundary value problem of heat conduction in a cone with special boundary conditions. (English)Zbl 1480.35096 MSC:  35C15 35K60 Full Text: Full Text: Full Text: ### Energy estimates and convergence of weak solutions of the porous medium equation. (English)Zbl 1481.60202 MSC:  60K35 76S05 35K60 Full Text: ### Sharp estimate of the life span of solutions to the heat equation with a nonlinear boundary condition. (English)Zbl 1473.35067 Ferone, Vincenzo (ed.) et al., Geometric properties for parabolic and elliptic PDE’s. Contributions of the 6th Italian-Japanese workshop, Cortona, Italy, May 20–24, 2019. Cham: Springer. Springer INdAM Ser. 47, 127-149 (2021). MSC:  35B44 35K60 Full Text: Full Text: Full Text: ### Blow-up analysis in a quasilinear parabolic system coupled via nonlinear boundary flux. (English)Zbl 1474.35389 MSC:  35K55 35K60 Full Text: ### Quasi-spherical metrics and prescribed scalar curvature. (English)Zbl 1464.53043 Bartnik, Robert A., Selected works. Edited by Piotr T. Chruściel, James A. Isenberg and Shing-Tung Yau. Somerville, MA: International Press. 327-367 (2021). ### Hydrodynamic entrance region in a flat porous channel with a pressure head isothermal laminar flow of a Newtonian medium. (Russian. English summary)Zbl 1472.35310 MSC:  35Q35 35K60 76S05 Full Text: Full Text: Full Text: ### A duality between scattering poles and transmission eigenvalues in scattering theory. (English)Zbl 1472.35229 MSC:  35K60 81U20 Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: ### Causal canonical decomposition of hysteresis systems. (English)Zbl 07265355 MSC:  47-XX 93C25 46N20 74S30 93A30 35B30 35K60 Full Text: Full Text: ### A potential well argument for a semilinear parabolic equation with exponential nonlinearity. (English)Zbl 1441.35146 Wood, David R. (ed.) et al., 2018 MATRIX annals. Cham: Springer. MATRIX Book Ser. 3, 265-273 (2020). Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: Full Text: ### Correct solvability of model boundary-value problems with and without initial conditions for a system parabolic in the Eidelman sense in Hölder spaces of increasing functions. (Ukrainian, English)Zbl 1463.35295 MSC:  35K20 35K60 ### A variational approach to nonlinear stochastic differential equations with linear multiplicative noise. (English)Zbl 1437.60035 MSC:  60H15 47H05 35K60 Full Text: ### Localized blow-up regimes for quasilinear doubly degenerate parabolic equations. (English. Russian original)Zbl 1429.35141 Math. Notes 106, No. 4, 639-650 (2019); translation from Mat. Zametki 106, No. 4, 622-635 (2019). 
MSC:  35K65 35K55 35K60 Full Text: Full Text: Full Text: ### Existence of weak solutions to an elliptic-parabolic equation with variable order of nonlinearity. (English. Russian original)Zbl 1427.35140 J. Math. Sci., New York 241, No. 3, 290-305 (2019); translation from Itogi Nauki Tekh., Ser. Sovrem. Mat. Prilozh., Temat. Obz. 139, 44-58 (2017). Full Text: ### Blow-up phenomena for some nonlinear parabolic problems under nonlinear boundary conditions. (English)Zbl 1427.35098 MSC:  35K20 35K55 35K60 Full Text: Full Text: Full Text: ### Solving a nonlinear variation of the heat equation: self-similar solutions of the second kind and other results. (English)Zbl 1428.65056 J. Evol. Equ. 19, No. 3, 915-929 (2019); correction ibid. 19, No. 3, 931 (2019). Full Text: ### Existence of weak solutions for the nonlocal energy-weighted fractional reaction-diffusion equations. (English)Zbl 1423.35192 MSC:  35K57 35K55 35K60 Full Text: ### Blow-up rate estimates for a system of reaction-diffusion equations with gradient terms. (English)Zbl 07122646 MSC:  35K57 35K60 35B44 Full Text: Full Text: Full Text: ### Blow-up, exponential grouth of solution for a nonlinear parabolic equation with $$p(x)$$-Laplacian. (English)Zbl 1438.35226 MSC:  35K55 35K61 35K60 Full Text: Full Text: Full Text: ### Solvability of the heat equation with a nonlinear boundary condition. (English)Zbl 07030674 MSC:  35A01 35K60 Full Text: Full Text: Full Text: ### A two-species weak competition system of reaction-diffusion-advection with double free boundaries. (English)Zbl 1404.35488 MSC:  35R35 35K60 Full Text: Full Text: Full Text: ### Existence and uniqueness results for an inverse problem for a semilinear equation with final overdetermination. (English)Zbl 07461930 MSC:  35K60 65M06 Full Text: Full Text: ### Research of compatibility of the redefined system for the multidimensional nonlinear heat equation (general case). (Russian)Zbl 1438.35241 MSC:  35K60 35K05 Full Text: MSC:  35K60 Full Text: Full Text: ### On analytic solutions of the problem of heat wave front movement for the nonlinear heat equation with source. (Russian. English summary)Zbl 1409.35123 MSC:  35K60 35K05 35K59 80A20 Full Text: ### Upper and lower bounds for the blow-up time in quasilinear reaction diffusion problems. (English)Zbl 1404.35237 MSC:  35K55 35K60 Full Text: Full Text: Full Text: Full Text: ### Solving macroscopic and microscopic pin-fin heat sink problems by adapted spectral method. (English)Zbl 1393.80005 MSC:  80M22 65N35 35K60 35Q79 80A20 Full Text: ### Blow-up of solutions to a parabolic system with nonlocal source. (English)Zbl 1391.35216 MSC:  35K57 35K60 35B40 Full Text: ### Entropy solutions for nonlinear parabolic problems with noncoercivity term in divergence form in generalized Musielak-Orlicz spaces. (English)Zbl 1391.35182 MSC:  35K15 35K20 35K60 Full Text: Full Text: Full Text: ### Existence and multiplicity of solutions to superlinear periodic parabolic problems. (English)Zbl 1390.35118 MSC:  35K20 35K60 35B10 Full Text: ### A free boundary problem for Aedes aegypti mosquito invasion. (English)Zbl 1443.92035 MSC:  92-10 35B35 35K60 Full Text: ### Binormal motion of curves with constant torsion in 3-spaces. (English)Zbl 1403.53054 MSC:  53C44 53A04 53A05 35K60 Full Text: ### On the structural properties of nonlinear flows. (English)Zbl 1387.35375 Colli, Pierluigi (ed.) et al., Solvability, regularity, and optimal control of boundary value problems for PDEs. In honour of Prof. Gianni Gilardi. 
Cham: Springer (ISBN 978-3-319-64488-2/hbk; 978-3-319-64489-9/ebook). Springer INdAM Series 22, 543-571 (2017). MSC:  35K60 47H05 49J40 Full Text: Full Text: Full Text: Full Text: ### On a model for the evolution of morphogens in a growing tissue II: $$\theta = \log (2)$$ case. (English)Zbl 1386.35228 MSC:  35K60 35Q92 34B15 Full Text: Full Text: Full Text: Full Text: ### Non-classical heat conduction problem with nonlocal source. (English)Zbl 1360.35041 MSC:  35C15 35K05 35K20 35K60 45D05 45E10 80A20 Full Text: Full Text: Full Text:
# Sample win/loss data generator based on certain criteria Lets say we have this existing data: Total - 10 Won - 7 Lost - 3 Longest Winning Streak - 5 Longest Losing Streak - 2 Now you need to write a function which generates an array of random boolean values (true representing a win and false representing a loss) which fulfills the above criteria. So, in this case the output can be any of the following: 0011011111 1111101100 1010011111 .......... ### Rules: • Solution must be function which takes in Won, Lost, Longest Winning Streak and Longest Losing Streak and returns a boolean array or string formed of a boolean array like (110011001010). It can also display output instead of returning the generated data. • You can safely assume there are no ties/draws/no-results. • This is code-golf so shortest code wins. • Input format can be in any form (Standard Input/Command-Line,etc.). And the real input format should be - <Won> <Lost> <LWS> <LLS> where <..> are place-holders or they can be separate parameters for the function.. whatever you deem best for your code. The parameter Total is unnecessary. So you needn't use it. • As far as random is concerned, you can see that in the example, there are three (and more) possible outputs. Just output any one at random. • If you have any other question, ask in the comments below. ## Important - Not any answer as of now is correct. I provide another example (this one's real data) which is possible but not working with the codes in the current answers: Total - 171 Won - 111 Lost - 60 Longest Winning Streak - 10 Longest Losing Streak - 4 Real Data: 1110110100111111010000101111111100110001100110111101101011111011011011011111111110110001111011010001101011100101111101001011011110101101001100001111101010110001111110111111 Another possible output (Basically the same, just the third and fourth digits are interchanged): 1101110100111111010000101111111100110001100110111101101011111011011011011111111110110001111011010001101011100101111101001011011110101101001100001111101010110001111110111111 • What is the input format? Jan 31 '16 at 17:03 • @Doᴡɴɢᴏᴀᴛ How could I forget that... sorry but now I've posted it. Jan 31 '16 at 17:06 • Can we take input as an array or separate arguments? Jan 31 '16 at 17:06 • How do you measure "random"? I could just go and construct the Array in a deterministic way with the given parameters. If you wanna keep the random requirement, you should add some rules for it. Jan 31 '16 at 17:07 • Why so strict on the input? Jan 31 '16 at 17:09 # JavaScript ES6, 83 70 63 bytes (b,c,d,e)=>'1'[r='repeat'](d)+'0'[r](e)+'1'[r](b-d)+'0'[r](c-e) Try it online • Now this doesn't work for (50, 20, 10, 7). Jan 31 '16 at 20:26 # Python3 - 239 238 bytes W,L,O,S=map(int,input().split()) from itertools import* P=lambda A,B:len(max(A.split(str(B)),key=len)) for C in product("01",repeat=W+L): C="".join(C) if P(C,0)==O and P(C,1)==S and W==C.count("1")and L==C.count("0"): print(C) break Well, this is way too long, but it works. Very slow. Takes the inputs from STDIN as whitespace separated in the same order as OP does. ## Testcases Input: 7 3 5 2 Output: 0011011111 Input: 15 10 15 5 Output: 0000011111111111111100000 # Pyth - 33 bytes J_E.W|n/H1hQnJmeShMd.gekrH8mO2sQY Rejection testing, very slow. 
# Java 7, 319 317 bytes import java.util.*String c(int a,int b,int c,int d){String r="",w="",l="";List z=new ArrayList();int i,q=a+b-c-d;for(i=;i++<c;w+=1);for(i=0;i++<d;l+=0);for(;a---c>0;z.add(1));for(;b---d>0;z.add(0));Collections.shuffle(z);for(i=0;i<q;r+=z.get(i++));return new StringBuilder(r).insert(new Random().nextInt(q),w+l)+"";} NOTE: The winning streak and losing streak will always be right after one-another, everything else, including their positions in the resulting string, is random. Ungolfed & test cases: Try it here. import java.util.*; class M{ static String c(int a, int b, int c, int d){ String r = "", w = "", l = ""; List z = new ArrayList(); int i, q = a+b-c-d; for(i = 0; i++ < c; w += 1); for(i = 0; i++ < d; l += 0); for( ; a-- - c > 0; z.add(1)); for( ; b-- - d > 0; z.add(0)); Collections.shuffle(z); for(i = 0; i < q; r += z.get(i++)); return new StringBuilder(r).insert(new Random().nextInt(q), w+l)+""; } public static void main(String[] a){ System.out.println(c(7, 3, 5, 2)); System.out.println(c(171, 111, 10, 4)); } } Possible output: 1111100011 100100010110111111110011011100101111011010110011011101111010010010101111101111000010010000110100010111001100100110010011101100111111111111110110110101011100111111111100001011111000111011111010010001010001100111110011111011011011101010101011101011011101101001001100100101100111111101 ## R - (I don't know how to calculate bytes) a <- read.csv("a.csv") b <- seq(1,1, seq.length = a$3) a$1 <- a$1-a$3 c <- seq(0,0, seq.length = a$4) a$2 <- a$2 - a$4 append(b,c) d <- a$1 + a$2 while(d >= 0){append(b,c(1,0)); d <- d-2} d where a.csv is a file with the four inputs separated by commas. This program will output a long string of 1's (for the win streak), and then a long string of 0's for the loss streak, and finally a long string of 0,1 for the rest of the games.
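For reference alongside the golfed answers, here is a minimal non-golfed sketch in Python 3 (not a competing answer): it treats the given longest-streak values as exact, as in the question's examples, and uses plain rejection sampling. It assumes a valid arrangement exists for the given inputs and can be slow for adversarial ones.

```python
import random
from itertools import groupby

def longest_run(seq, value):
    """Length of the longest run of `value` in seq (0 if value is absent)."""
    return max((len(list(g)) for k, g in groupby(seq) if k == value), default=0)

def generate(won, lost, lws, lls):
    """Random win/loss string ('1' = win, '0' = loss) with exactly `won` wins,
    `lost` losses, longest winning streak `lws` and longest losing streak `lls`,
    found by shuffling until both streak constraints are met."""
    pool = [1] * won + [0] * lost
    while True:
        random.shuffle(pool)
        if longest_run(pool, 1) == lws and longest_run(pool, 0) == lls:
            return ''.join(map(str, pool))

print(generate(7, 3, 5, 2))  # e.g. '1111101100'
```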
# Inequality in genetic cancer risk suggests bad genes rather than bad luck ## Abstract Heritability is often estimated by decomposing the variance of a trait into genetic and other factors. Interpreting such variance decompositions, however, is not straightforward. In particular, there is an ongoing debate on the importance of genetic factors in cancer development, even though heritability estimates exist. Here we show that heritability estimates contain information on the distribution of absolute risk due to genetic differences. The approach relies on the assumptions underlying the conventional heritability of liability model. We also suggest a model unrelated to heritability estimates. By applying these strategies, we describe the distribution of absolute genetic risk for 15 common cancers. We highlight the considerable inequality in genetic risk of cancer using different metrics, e.g., the Gini Index and quantile ratios which are frequently used in economics. For all these cancers, the estimated inequality in genetic risk is larger than the inequality in income in the USA. ## Introduction There are several approaches to quantify the contribution of heritable factors to disease1,2. A straightforward strategy is using familial recurrence risks, e.g., the recurrence risk in monozygotic co-twins, given a co-twin is affected (λ M), or the recurrence risk in a pair of siblings (λ S)3. If the relative risk in relatives of affected individuals is different from 1, family related factors influence the risk. Indeed, it has been argued that the majority of such factors are most likely genetic3,4,5. The familial risk estimates may have immediate interest for relatives of affected individuals. These estimates are simple predictors of the individual disease risk, and they may be particularly useful when few other risk factors are known. However, these familial risk estimates per se do not yield accurate information about the magnitude and inequality of genetic and environmental risk6. Nor do they indicate the relative importance of heritable, common environmental and other factors. The familial risk estimates are purely observational, and do not have a causal interpretation. Heritability, on the other hand, allows for comparison between heritable and other factors: The heritability denotes the fraction of the variation of the trait that is due to genetic differences2. These estimates are characteristics of the population under study, and cannot be immediately generalised to other populations. To interpret the heritability, we must make assumptions about the underlying causal structure, i.e. we must define a causal model1,7. Heritability is often used to evaluate the importance of genetic effects, but the interpretation is not always easy. Intuitively, a large heritability may correspond to a large variability in absolute genetic risk. Nevertheless, it is not straightforward to see how the absolute genetic risk distribution depends on heritability. Indeed, for cancer development the contribution of genetic, environmental factors and chance is debated8,9,10,11,12,13,14,15,16,17,18,19, 34, 36, despite the access to heritability data20.
To better understand the importance of heritable factors, we obtain the distribution of absolute risks due to genetic differences. After estimating the absolute genetic risk distribution, we study the fundamental inequality in cancer risk across individuals, using e.g. Nordic twin data for 15 common cancers20. Our analysis suggests that genetic differences lead to substantial inequality in the risk of several cancers. ## Results ### Deriving the distribution of absolute genetic risk Human diseases are often considered to be dichotomous traits; you are either affected or unaffected. For such traits, the heritability of liability is frequently used to study inheritance2. The concept implies that every individual has a liability to disease, which is the sum of e.g. several genetic and environmental components. Usually the liability is assumed to be normally distributed in the population, and a threshold on the liability scale determines whether an individual acquires the disease. Hence, the standard liability model is usually interpreted as a threshold model7,21. This model allows for the decomposition of the variance into genetic and environmental components. It is appealing, because the variance on the liability scale does not depend on the disease prevalence. Furthermore, the normally distributed liability may have some justification in the central limit theorem; if we believe that the liability of a trait is due to several additive genetic and environmental factors, the liability may approximately follow a normal distribution. In the 1970s a mathematically equivalent interpretation of the threshold model was described, which is based on the genetic liability ι G, i.e., the liability solely due to genotype22. In the Methods section, we have derived the risk of disease given ι G, which we denote Y. Indeed, we express the distribution of Y to study how the genetic risk varies on an individual level. Wray et al.23 use some similar concepts to see that the probit model fits with real, observed family data24. Here, we will use summary estimates of the heritability h 2 from twin studies to derive the distribution of Y for 15 common cancers. When the absolute risk distribution is derived, we can obtain various measures of the genetic inequality in risk. ### Exploring inequality in risk for 15 cancers Mucci et al.20 recently reported heritability estimates for 15 common cancers based on the heritability of liability model, using data from Nordic twin registries. We will apply the sampling algorithm described in the Methods section to derive the distribution of absolute risk for these 15 cancers. To illustrate this, Fig. 1 shows the estimated genetic risk distribution for the 4 most common cancers. We interpret the genetic risk as the individual life-time risk of disease, given that the individual’s genetic make-up was known, but the environmental exposure unknown. The interpretation relies on the assumptions underlying the heritability of liability model, e.g. that genetic factors and the environmental factors are independent on the liability scale. By obtaining the risk distributions, we are able to explore the genetic contribution to disease risk. To do this, we will suggest some useful summary measures. ### Gini index First, we use the Lorenz curve, and its summary measure the Gini index. Although rarely used in medicine and epidemiology, this metric adequately describes the variation in disease risk25,26. 
Importantly, it allows for comparison across measurement scales; the Gini index does not depend on the cumulative risk of a disease in a population (or the total size of an economy), neither on the size of the population itself. It only relies on the relative mean absolute difference between individuals26. Crudely, the Gini index is a number between 0 and 1, describing the inequality in disease risk across individuals. More precisely, the Lorenz curve is represented by a function L(S), in which S is a cumulative proportion of the population, and L(S) is the fraction of the total risk that is carried by S. E.g. if the risk is equal among subjects in the population, the fraction of risk carried by any 50% of the population would be L(0.5) = 0.5, which means that the Lorenz curve is a straight line. The Gini index is a ratio describing the deviation from this straight line, which can be interpreted as a coefficient of deviation in risk, either on the absolute or the relative scale26 (A formal mathematical derivation is found in the Methods section). In our context, a Gini index of 0 means that everybody has the same genetic risk to a particular cancer, whereas a Gini index of 1 implies maximum inequality in risk across individuals. The Gini index is widely used in economics and demography, e.g., to study inequality in income and wealth. In Fig. 2, we show the Gini index for 15 common cancers. The Gini index is derived by using the heritability h 2 and life-time risk estimates form a recent Nordic twin study20. The red dashed line denotes the Gini index of income in the USA, using data from the World Bank27. Interestingly, the plot reveals a major inequality in cancer risk for the common cancers. For all specific cancers, the inequality in genetic risk seems to be larger than the inequality in income in the USA. We also studied the genetic risk of cancer overall, using the heritability of acquiring any type of cancer. This heritability estimate is lower than the individual cancers20, which is expected because a factor increasing the risk of a particular cancer does not necessarily increase the risk of other cancers. Still, the Gini index of acquiring any type of cancer was almost as large as the Gini index for income in the USA. We have displayed the relation between the Gini index and the heritability (Fig. 3a), and the relation between the Gini index and the observed relative risk in monozygotic co-twins of affected individuals (λ M) (Fig. 3b). The areas of the circles are proportional to the life-time risk of the cancers. The three different measures of genetic contribution are related, but not co-linear, indicating that they capture non-overlapping information about the risk of disease. In particular, for cancer sites with similar heritability, the Gini index is relatively larger for the rarer sites. ### Quantile ratios Alternatively, we may study the inequality in risk by using a quantile ratio. The population is partitioned into subset according to quantiles of genetic risk, and we may estimate the ratio of affected individuals in the highest risk partition compared to the lowest risk partition. This metric is also frequently used to compare incomes in economics, e.g., the 20:20 ratio (RR20:20) which assess the 20% richest compared to the 20% poorest of a population. 
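As a minimal sketch of how these two summary measures can be computed in practice, the following Python snippet draws genetic risks under the liability-threshold sampling scheme described in the Methods section and then evaluates the Gini index and the 20:20 ratio on the simulated sample. The heritability and life-time risk plugged in at the bottom are illustrative placeholders, not the exact estimates used in the paper.
import numpy as np
from scipy.stats import norm

def simulate_genetic_risk(h2, lifetime_risk, n=1_000_000, seed=1):
    """Sample absolute genetic risks Y under the liability-threshold model:
    L_G ~ N(0, h2), Y = Phi((L_G - Phi^{-1}(q)) / sqrt(1 - h2)), with q = 1 - lifetime_risk."""
    rng = np.random.default_rng(seed)
    q = 1.0 - lifetime_risk
    l_g = rng.normal(0.0, np.sqrt(h2), size=n)
    return norm.cdf((l_g - norm.ppf(q)) / np.sqrt(1.0 - h2))

def gini(y):
    """Gini index of a sample of risks (one minus twice the area under the Lorenz curve)."""
    y = np.sort(y)
    n = y.size
    cum = np.cumsum(y)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

def ratio_20_20(y):
    """Total risk carried by the top 20% divided by that carried by the bottom 20%."""
    y = np.sort(y)
    k = y.size // 5
    return y[-k:].sum() / y[:k].sum()

# Illustrative inputs only (not the paper's estimates):
y = simulate_genetic_risk(h2=0.57, lifetime_risk=0.11)
print(round(gini(y), 2), round(ratio_20_20(y), 1))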
Table 1 shows the RR20:20 of genetic risk, which highlight a substantial difference in risk across subgroups; those in the highest 20 percentile carry substantially more of the disease burden than those in the lowest 20 percentile. In comparison, RR20:20 for income is ~5 in the UK and ~9 in the USA28. ### A hypothetical intervention Related to quantile ratios, we may estimate the effect of hypothetical interventions on particular risk groups. Suppose, for example, that we were able to reduce the genetic risk of each individual in the upper 20 percentile to the average risk in the lowest 20 percentile. This question could be relevant for public health professionals, because it suggests the potential benefit of identifying and subsequently intervening on high-risk populations. We could calculate the relative risk of such interventions, assuming that the environment is left unaltered. Indeed, this relative risk is immediately obtained from the cumulative risk distribution. Let y 20 denote the 20 percentile of genetic risk and let y 80 denote the 80 percentile. Then $${\rm{RR}}_{{\rm{interv}}{\rm{.}}} = \frac{{{\int}_0^{y_{80}} yf_Y(y){\rm{d}}y + {\int}_0^{y_{20}} yf_Y(y){\rm{d}}y}}{{E(Y)}}.$$ Relative risk estimates after such hypothetical interventions are found in Table 1. Indeed, these risk estimates also suggest a major contribution of genes to disease development; if we, e.g., were able to reduce the risk of prostate cancer in the upper 20 percentile to the average risk in the lower 20 percentile, we would reduce the number of cancers by a proportion of 1 − 0.26 = 0.74. ### Using different sources of heritability data Heritability data may not only be derived from twin studies. Genome-wide association studies (GWAS) allows for the calculation of heritability estimates without relying on family structures29,30. These estimates account for the variability due to genetic variants tagged by single-nucleotide polymorphisms (SNPs), usually with a population frequency above 1–5%. Such array heritability estimates are therefore considered to be lower bounds of the overall heritability, but may yield important information about the inequality in risk due to genetic variants associated with common SNPs. Lu et al.29 estimated array heritability for a range of cancers, highlighting that array estimates captures approximately half the heritability from older twin studies. We may immediately apply our approaches to explore the inequality in cancer risk due to genetic variants tagged by SNPs. This could yield insight into, e.g., the benefit of targeting genetic variants tagged by SNPs in future interventions. In Fig. 4, we display the Gini indices derived from the array heritability estimates in Lu et al.29, again highlighting the substantial inequailty in genetic risk. ### Alternative to the threshold model Although frequently used, the assumptions of the heritability of liability model are not necessarily satisfied1. Considering the liability to be normally distributed is convenient and may agree with the central limit theorem, but testing this assumption is usually infeasible in practice7,24, and it may not be robust if the genetic risk is determined by few, rare genes1. When using twin data, we usually assume no gene-environment interaction on the liability scale1,31, and we consider monozygotic- and dizygotic twins to share the same amount of environmental factors. 
Another issue is the confidence intervals of heritability and common environmental components, which are often wide even when hundreds of thousands are included in the study20. Until now we have based our results on the heritability of liability assumptions. We may, however, suggest a different approach that does not rely on the concept of heritability. We achieve this by assuming that the risk due to both heritable factors and common environment follows a parametric distribution. First, we let this distribution be the beta distribution, which allows for a wide range of shapes of the risk distribution and is bounded by 0 and 1. Importantly, in this model the risk distribution is uniquely defined by the observed recurrence risk (e.g., λ m ) and the disease prevalence6. First, we use the beta model to investigate the risk distribution due to the total effect of genes and shared environment. That is, this measure will capture the maximum inequality in risk due to genes and shared environment. Hence, we would generally assume that inequality measures from this approach, e.g., the Gini index, are larger in magnitude than the heritability based estimates. Intuitively, the differences should be relatively large if the shared environmental component is substantial, and relatively small if the common environmental component is minor. In Table 1, the Gini index from the beta models (GCbeta) are shown together with the Gini index from the heritability model $$\left( {GC_{h^2}} \right)$$. The Gini indices from the beta model are generally larger than the estimates from the heritability model. As expected, the discrepancy is larger for the cancers with larger shared environmental components, which may be obtained by twin data as the fraction of the variance on the liability scale due to shared environment20 (env2 in Table 1). A plot similar to Fig. 2 including the beta Gini estimates is found in Fig. 5. For the cancers that were studied in both Mucci et al.20 and Lu et al.29, we have also compared twin heritability, array heritability and the estimates derived in this section (Fig. 6). We may also use similar derivations for other distributions than the beta distribution. In particular, a distribution equal to f Y (y) in Eq. (4) of the Methods section could be derived directly by using estimates of λ M and the life-time disease risk. Then, we replace h 2 by $$h_{{\rm{env}}}^{\rm{2}}$$ in Eq. (4), and we let $$h_{{\rm{env}}}^{\rm{2}}$$ be a parameter that determines the shape of f Y (y). Indeed, we may interpret $$h_{{\rm{env}}}^2$$ as the fraction of variance on the liability scale due to genes and common environment. ## Discussion The contribution of heritable factors to major diseases is debated14,16. The antagonising views may arise due to ambiguous use of terminology and misinterpretation of model assumptions3,18,19,32. To gain deeper insight into the importance of genetic factors in cancer development, we have studied the absolute genetic risk distribution, under explicitly defined models. Thereby we can use measures of inequality that may be easier to understand than heritability itself, e.g. the Gini index and the 20:20 ratio. These measures may be particularly desirable, because comparisons across scales can be made. Indeed, these measures are widely used in economics and demography, and they have also been successfully applied in biology previously33. Our results suggest that 15 common cancers show a major inequality in the genetic susceptibility to disease. 
As a curious comparison, we show that the inequality in cancer risk is larger than the income inequality in the USA. We must emphasise, however, that our main results are based on the basic assumptions of the heritability estimates. In particular, we cannot immediately extrapolate the results outside the study populations. Nevertheless, the major inequalities in risk suggest that many cancer cases are preventable in principle18. Even though preventative strategies are lacking today, our analysis therefore suggests that undiscovered targets for interventions may exist, at least in theory. The information on risk inequality may be useful for public health professionals and other decision makers, when prioritising future prevention strategies and research projects. In particular, being able to identify high-risk individuals, and target these individuals for genetic or environmental interventions could be cost-effective strategies. Fundamentally, our results put the debated role of chance in cancer development into perspective8,34,35: Irrespective of the definition of chance and the role of randomness in cancer development, we show that the genetic risk varies considerably across individuals. This points to major genetic variability in the individual risk of acquiring cancer. These findings do not contradict the results by either Tomasetti et al.34,36 or Wu et al.14 Rather, Tomasetti et al.34 suggest that the cancer incidence at a site is strongly correlated with the number of baseline stem cell divisions at this site. Thereby they study heterogeneity between sites. We rather study heterogeneity within a cancer site, and suggest that environmental and genetic factors lead to major differences between individuals. Despite the seemingly random nature of stem cell mutations, there may be currently unknown processes, which vary across individuals, that influence the risk of particular cancers. Some individuals may be loaded with considerably higher risk than others, due to genetic or common environmental factors. We may denote these individuals as “unlucky”. However, it is not necessarily sensible to assume that they are unlucky due to fundamentally random events19. ## Methods ### Deriving the distribution of absolute risk We will show how the absolute genetic risk distribution is derived from the liability threshold model. To do this, we use the conventional assumptions of the liability model. Let the liability L ~ N(μ = 0, σ 2 = 1) be the sum of several components, and let Φ(z) denote the cumulative standard normal distribution. An individual is affected by disease X with life-time risk Pr(X = 1) = 1 − q if $$L \ge \Phi ^{ - 1}(q).$$ To obtain estimates of h 2, it is usually assumed that L has a genetic component $$L_G\sim N\left( {\mu = 0,\sigma ^2 = h^2} \right),$$ which is independent of the other components. We aim to find $${\rm{Pr}}\left( {X = 1\left| {\iota _{\rm{G}}} \right.} \right) = y.$$ We define L E = L − L G , which is the component of L not determined by genotype. Usually, L G and L E are assumed to be independent, and therefore L E ~ N(0, 1 − h 2). Let L G =ι G. Then, $$L\left| {\iota _{{\rm{G}}}} \right.:N\left( {\iota _{\rm{G}},1 - h^2} \right).$$ We are now able to express the probability of disease, given the genetic liability $${g(\iota _{\rm{G}})} = \, {P\left( {X = 1\left| {\iota _{\rm{G}}} \right.} \right)} \\ = \,{P\left( {L >\Phi ^{ - 1}(q)\left| {\iota _{\rm{G}}} \right.} \right)} \\ = \, {P\left( {\left. 
L \right|\iota _{\rm{G}} >\Phi ^{ - 1}(q)} \right)} \\ = \, {\Phi \left( {\frac{{\iota _{\rm{G}} - \Phi ^{ - 1}(q)}}{{\sqrt {1 - h^2} }}} \right)}.$$ (1) This relation has been graphically illustrated by Smith21 and a mathematical expression was suggested by Mendell and Elston22. Due to the probit relation between t G and the absolute risk in Eq. (1), the liability threshold model has also been denoted a probit model23. We are interested in how y varies among individuals in the population. Hence, we view Y = g(L G) as a random variable and let g −1(Y) = L G. Then $$\begin{array}{l}g\left( {L_{\rm{G}}} \right) = \Phi \left( {\frac{{L_{\rm{G}} - \Phi ^{ - 1}(q)}}{{\sqrt {1 - h^2} }}} \right)\\ g^{ - 1}(Y) = \Phi ^{ - 1}(Y)\sqrt {1 - h^2} + \Phi ^{ - 1}(q)\end{array}$$ (2) ### Simulating the distribution of Y Equation (1) allows us to simulate the distribution of Y for a particular disease. To do this, we simply draw a standard Gaussian variable for each subject, which represents the genetic liability, and then transform this variable into an absolute risk. The procedure can be described more formally by the following algorithm: 1. Obtain h 2 and the population life-time prevalence 1 − q of the disease, e.g. from published data. 2. For each i in (1, …, n), draw the individual liability t G,i from a normal distribution $$L_{{\rm{G}},{\rm{i}}}\sim N(\mu = 0,\sigma ^2 = h^2)$$ 3. For each i, calculate the genetic risk y i from Eq. (1) $$y_i = \Phi \left( {\frac{{\iota _{{\rm{G}},i} - \Phi ^{ - 1}(q)}}{{\sqrt {1 - h^2} }}} \right)$$ ### Derivation of the distribution of Y We may also express the distribution of Y algebraically. The probability density of Y is expressed as $${f_Y(y)} = \,\, {f_{L_{\rm{G}}}\left( {g^{ - 1}(y)} \right) \times \frac{{{\rm{d}}g^{ - 1}(y)}}{{{\rm{d}}y}}} \\ = \,\, {f_{L_{\rm{G}}}\left( {g^{ - 1}(y)} \right)\frac{1}{{g{\prime}\left( {g^{ - 1}(y)} \right)}}},$$ (3) where $$f_{L_G}$$ denotes the distribution function of $$L_G \sim N\left( {0,h^2} \right)$$. Furthermore $${g{\prime}\left( {g^{ - 1}(y)} \right)} = \,\, {\frac{1}{{\sqrt {2\pi } \sqrt {1 - h^2} }} \times e^{ - \frac{{\left( {\frac{{g^{ - 1}(y) - \Phi ^{ - 1}(q)}}{{\sqrt {1 - h^2} }}} \right)^{\!\!2}}}{2}}} \\ = \,\, {\frac{1}{{\sqrt {2\pi } \sqrt {1 - h^2} }} \times e^{ - \frac{{\left( {\Phi ^{ - 1}(y)} \right)^2}}{2}}}.$$ Finally we plug into Eq. (3) to find $${f_Y(y)} = \,\, {\frac{{\sqrt {1 - h^2} }}{{\sqrt {h^2} }}e^{ - \frac{{g^{ - 1}(y)^2}}{{2h^2}}}e^{\frac{{\left( {\Phi ^{ - 1}(y)} \right)^2}}{2}}} \\ = \,\, {\frac{{\sqrt {1 - h^2} }}{{\sqrt {h^2} }}e^{ - \frac{{\left( {\Phi ^{ - 1}(y)\sqrt {1 - h^2} + \Phi ^{ - 1}(q)} \right)^2}}{{2h^2}} + \frac{{\left( {\Phi ^{ - 1}(y)} \right)^2}}{2}}}.$$ (4) By the definition of Y, we have that $$E(Y) = E_{L_{\rm{G}}}\left( {P\left( {X = 1\left| {\iota _{\rm{G}}} \right.} \right)} \right) = P\left( {X = 1} \right) = 1 - q.$$ The variance of Y can be found numerically by solving $${\rm{VAR}}(Y) = E\left( {Y^2} \right) - E(Y)^2 = {\int}_0^1 y^2f_Y(y){\rm{d}}y - (1 - q)^2.$$ (5) These derivations allow us to study how the absolute risk due to genetic differences is distributed in the population. ### Theoretic derivation of the Gini index We will present a formal definition of the Gini index as a function of the Lorenz curve. Let f Y and F Y be the probability density function (pdf) and cumulative density function (cdf) of Y, respectively. 
The Lorenz curve of the distribution of Y is defined as $$L(x) = \frac{1}{{E(Y)}}{\int}_{\!\!0}^x tf_{\rm{Y}}(t){\rm{d}}t,\quad 0 \le x \le 1.$$ The Gini index of the distribution of Y is then defined as $${G_{\rm{Y}}} = \,\, {2{\int}_{\!\!0}^1 \left( {F_{\rm{Y}} - L\left( {F_{\rm{Y}}} \right)} \right){\rm{d}}F_{\rm{Y}}} \\ = \,\, {2{\int}_{\!\!0}^1 \left( {F_{\rm{Y}}(x) - L(x)} \right)f_{\rm{Y}}(x){\rm{d}}x.}$$ The last equality (the integral limits) follows since f Y has support [0,1]. In general, the Gini index of the distribution of Y may easily be found using numerical integration. For a Beta(α, β) distributed variable, the Gini index is explicitly given as $$G_{{\rm{Beta}}} = \frac{{2B(2\alpha ,2\beta )}}{{\alpha B(\alpha ,\beta )^2}},$$ where B is the beta function37. ### Risk due to heritable factors and shared family environment We assume that the risk of a particular cancer varies continuously across individuals in the population. More precisely, let X i be a binary variable taking value 1 if a subject is affected and 0 if a subject is unaffected. The probability of developing cancer in individual i, p i  = P(X i  = 1), is drawn from a distribution f(p i ) with support [0,1] and mean μ = E(p i ). Let f(p i ) follow a parametric beta distribution, which allows for a range of shapes. To completely specify f(p i ), we must define E(p i ) and VAR(p i ). We find E(p i ) using published data on the life-time incidence of the disease I life. To derive an estimate of VAR(p i ), we make use of studies on monozygotic (MZ) twins. Following the terminology of Risch[3], let λ r denote the risk ratio of a relative of an affected individual. We assume that p i is equal in a pair of MZ twins. We interpret p i as the risk of disease due to heritable factors and shared family environment. Then we find λ M, the risk ratio for disease given a co- MZ twin is affected $${\lambda _M} = \,\, {\frac{{P\left( {X_2 = 1\left| {X_1 = 1} \right.} \right)}}{{P\left( {X_i = 1} \right)}}} \\ = \,\, {\frac{{P\left( {X_2 = 1,X_1 = 1} \right)}}{{P\left( {X_i = 1} \right)P\left( {X_1 = 1} \right)}}} \\ = \,\, {\frac{{P\left( {X_2 = 1,X_1 = 1} \right)}}{{P\left( {X_i = 1} \right)^2}},\,{\rm{since}}\,P\left( {X_1 = 1} \right) = P\left( {X_i = 1} \right)} \\ = \,\,{\frac{{E\left( {p_{\rm{i}}^{\rm{2}}} \right)}}{{E\left( {p_{\rm{i}}} \right)^2}},\,{\rm{since}}\,P\left( {X_1 = 1} \right) = P\left( {X_2 = 1} \right)} \\ = \,\, {1 + \frac{{VAR\left( {p_{\rm{i}}} \right)}}{{E\left( {p_{\rm{i}}} \right)^2}}}.$$ (6) Using estimates of λ M from MZ twin studies, we can find $${{\rm{VAR}}\left( {p_i} \right)} = \,\, {\left( {\lambda _{\rm{M}} - 1} \right)E\left( {p_i} \right)^2} \\ \approx \,\, {\left( {\hat \lambda _{\rm{M}} - 1} \right)I_{{\rm{life}}}^2}.$$ (7) Hence, under these assumptions we can completely specify the distribution of risk in the population, f(p i), if estimates of the cumulative incidence (I life) and the twin recurrence risk (λ M) are available. We may interpret this as follows: Each subject obtains a risk (probability of developing disease) due to genetic factors and common environment. Then, this probability, combined with unmeasured individual factors and chance, determines whether the subject gets the disease. Indeed, we can use exactly the same approach to specify the probit liability distribution in the main text. Then, we use Expression (4) as a parameterisation of the probit liability distribution, with parameters E(y) = 1 − q and $$h_{{\rm{env}}}^{\rm{2}}$$. 
Here, we have replaced h 2 by $$h_{{\rm{env}}}^{\rm{2}}$$ in Eq. (5), because it no longer denotes heritability. Rather, $$h_{{\rm{env}}}^2$$ is the fraction of the trait variance on the liability scale due to both heritable factors and common environment. Mathematically, $$h_{{\rm{env}}}^{\rm{2}}$$ is a shape parameter of the the probit liability distribution. Then, we combine Expressions (3) and (5) to $$\left( {\lambda _{\rm{M}} - 1} \right)E\left( {p_{\rm{i}}} \right)^2 - E\left( {Y^2} \right) + E\left( Y \right)^2 = 0 \\ \left( {\lambda _{\rm{M}} - 1} \right)\left( {1 - q} \right)^2 - {\int}_{\!\!0}^1 y^2f_{\rm{Y}}(y){\rm{d}}y + \left( {1 - q} \right)^2 = 0 \\ \left( {\lambda _{\rm{M}} - 1} \right)\left( {1 - q} \right)^2 - {\int}_{\!\!0}^1 y^2\frac{{\sqrt {1 - h_{{\rm{env}}}^2} }}{{\sqrt {h_{{\rm{env}}}^2} }}e^{ - \frac{{\left( {\Phi ^{ - 1}(y)\sqrt {1 - h_{{\rm{env}}}^2} + \Phi ^{ - 1}(q)} \right)^2}}{{2h_{{\rm{env}}}^2}} + \frac{{\left( {\Phi ^{ - 1}(y)} \right)^2}}{2}}{\rm{d}}y \!+\! (1 - q)^2 \!= 0.$$ (8) Indeed, Expression (8) can be solved numerically to find $$h_{\rm env}^2$$. ### Numeric results To derive our numeric estimates, we have used the results from Tables 2 and 3 in Mucci et al.20 and Table 2 in Lu et al.29 Confidence intervals were obtain by inserting the confidence bounds reported in Mucci et al.20 and Lu et al.29 into our expressions for genetic risk. For the beta distribution, we used the confidence intervals in Table 2 in Mucci et al.20 for recurrence risks in monozygotic twins. All our numeric results were obtained by two independent approaches, numeric integration of analytic expressions and simulations. Both approaches yielded the same results. ### Code availability The computer code for all the calculations was written in R version 3.3.2 using RStudio version 1.0.136. This computer code is available in Supplementary Data 1. ### Data availability We have solely used data that are readily available in previously published articles20,29. ## References 1. Risch, N. The genetic epidemiology of cancer. Cancer Epidemiol. Biomarkers. Prev. 10, 733–741 (2001). 2. Visscher, P. M., Hill, W. G. & Wray, N. R. Heritability in the genomics era—concepts and misconceptions. Nat. Rev. Genet. 9, 255–266 (2008). 3. Risch, N. Linkage strategies for genetically complex traits. I. Multilocus models. Am. J. Hum. Genet. 46, 222–228 (1990). 4. Khoury, M. J., Beaty, T. H. & Kung-Yee, L. Can familial aggregation of disease be explained by familial aggregation of environmental risk factors? Am. J. Epidemiol. 127, 674–683 (1988). 5. Aalen, O. O. Modelling the influence of risk factors on familial aggregation of disease. Biometrics. 47, 933–945 (1991). 6. Valberg, M., Stensrud, M. J. & Aalen, O. O. The surprising implications of familial association in disease risk. arXiv preprint arXiv:1707.00014 (2017). 7. Tenesa, A. & Haley, C. S. The heritability of human disease: estimation, uses and abuses. Nat. Rev. Genet. 14, 139–149 (2013). 8. Tomasetti, C. & Vogelstein, B. Cancer risk: role of environment—response. Science 347, 729–731 (2015). 9. Tomasetti, C. & Vogelstein, B. Musings on the theory that variation in cancer risk among tissues can be explained by the number of divisions of normal stem cells. arXiv preprint arXiv:1501.05035 (2015). 10. Thomas, F., Roche, B. & Ujvari, B. Intrinsic versus extrinsic cancer risks: the debate continues. Trends Cancer 2, 68–69 (2016). 11. Couzin-Frankel, J. The bad luck of cancer. Science 347, 12–12 (2015). 12. Weinberg, C. & Zaykin, D. 
Is bad luck the main cause of cancer? J. Natl Cancer I. 107, djv125 (2015). 13. Luzzatto, L. & Pandolfi, P. P. Causality and chance in the development of cancer. N. Engl. J. Med. 373, 84–88 (2015). 14. Wu, S., Powers, S., Zhu, W. & Hannun, Y. A. Substantial contribution of extrinsic risk factors to cancer development. Nature 529, 43–47 (2016). 15. Noble, R., Kaltz, O. & Hochberg, M. E. Peto’s paradox and human cancers. Phil. Trans. R. Soc. B 370, 20150104 (2015). 16. Noble, R. J., Kaltz, O., Nunney, L. & Hochberg, M. E. Overestimating the role of environment in cancers. Cancer Prev. Res. 9, 773–776 (2016). 17. Tarone, R. E. RE: is bad luck the main cause of cancer? J. Natl Cancer I 107, djv227 (2015). 18. Smith, G. D., Relton, C. L. & Brennan, P. Chance, choice and cause in cancer aetiology: individual and population perspectives. Int. J. Epidemiol. 45, 605–613 (2016). 19. Stensrud, M. J., Strohmaier, S., Valberg, M. & Aalen, O. O. Can chance cause cancer? A causal consideration. Eur. J. Cancer 75, 83–85 (2017). 20. Mucci, L. A. et al. Familial risk and heritability of cancer among twins in nordic countries. JAMA 315, 68–76 (2016). 21. Smith, C. Recurrence risks for multifactorial inheritance. Am. J. Hum. Genet. 23, 578 (1971). 22. Mendell, N. R. & Elston, R. Multifactorial qualitative traits: genetic analysis and prediction of recurrence risks. Biometrics 30, 41–57 (1974). 23. Wray, N. R. & Goddard, M. E. Multi-locus models of genetic risk of disease. Genome Med 2, 10 (2010). 24. Visscher, P. M. & Wray, N. R. Concepts and misconceptions about the polygenic additive model applied to disease. Hum. Hered. 80, 165–170 (2016). 25. Mauguen, A. & Begg, C. B. Using the Lorenz curve to characterize risk predictiveness and etiologic heterogeneity. Epidemiology 27, 531–537 (2016). 26. Lee, W.-C. Characterizing exposure–disease association in human populations using the Lorenz curve and Gini index. Stat. Med. 16, 729–739 (1997). 27. World Bank. World Bank Gini Inidices (2017). URL: http://data.worldbank.org/indicator/SI.POV.GINI. 28. World Bank. Qunitle of income from http://data.worldbank.org (2017). 29. Lu, Y. et al. Most common’sporadic’ cancers have a significant germline genetic component. Hum. Mol. Genet. 23, 6112–6118 (2014). 30. Sampson, J. N. et al. Analysis of heritability and shared heritability based on genome-wide association studies for 13 cancer types. J. Natl Cancer I. 107, djv279 (2015). 31. Benchek, P. H. & Morris, N. J. How meaningful are heritability estimates of liability? Hum. Genet. 132, 1351–1360 (2013). 32. Wray, N. R., Yang, J., Goddard, M. E. & Visscher, P. M. The genetic interpretation of area under the ROC curve in genomic profiling. PLoS Genet. 6, e1000864 (2010). 33. Wittebolle, L. et al. Initial community evenness favours functionality under selective stress. Nature 458, 623–626 (2009). 34. Tomasetti, C., Li, L. & Vogelstein, B. Stem cell divisions, somatic mutations, cancer etiology, and cancer prevention. Science 355, 1330–1334 (2017). 35. Nowak, M. A. & Waclaw, B. Genes, environment, and “bad luck”. Science 355, 1266–1267 (2017). 36. Tomasetti, C. & Vogelstein, B. Variation in cancer risk among tissues can be explained by the number of stem cell divisions. Science 347, 78–81 (2015). 37. Pham-Gia, T. & Turkkan, N. Determination of the Beta distribution form its Lorenz curve. Math. Comput. Model. 16, 73–84 (1992). 
## Acknowledgements This work was partially supported by the Norwegian Cancer Society, grant number 4493570, and the Nordic Cancer Union, grant number 186031. We thank Odd O. Aalen for his valuable comments to the manuscript. ## Author information ### Contributions M.J.S. conceived the study. M.J.S. and M.V. performed and interpreted the data analysis. M.J.S. and M.V. drafted and critically revised the article. M.J.S. and M.V. approved the final version of the manuscript. ### Corresponding author Correspondence to Mats Julius Stensrud. ## Ethics declarations ### Competing interests The authors declare no competing financial interests. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Stensrud, M.J., Valberg, M. Inequality in genetic cancer risk suggests bad genes rather than bad luck. Nat Commun 8, 1165 (2017). https://doi.org/10.1038/s41467-017-01284-y
2008-02-26, 12:26 #1 colo prime95 for amd64 on GNU/Linux OR alternative? Hello there, I don't know if this is the right subforum to post this thread to, so if you feel it needs moving, please do so. I'm running an ~amd64-box without any legacy 32bit-support at all. Due to recently developed stability issues, I'd like to run test suites for different hardware components on my machine (Core 2 Quad, 8GB RAM), one of those being prime95 (available at www.mersenne.org). The site does provide binaries for IA32 and also a source archive; however, I can't get the latter to compile (or rather link) correctly on my machine. There are some precompiled object files included in the archive which are 32bit only, in turn shooting ld in the foot. So what I'd need is either a) a prime95 build that indeed is compiled for x86_64 in binary form OR b) a way to have the archive build on pure 64bit OR c) an alternative program that does a job about as good as prime95 to mathematically verify the correctness of my processors' computational results. I'd be delighted if someone knowledgeable could step up and guide me to a fitting solution to my problem :) Update: Meanwhile, I discovered Mlucas, a program designed to find Mersenne prime numbers on "uncommon" arches - it's available at http://www.hogranch.com/mayer/README.html I uploaded a slightly modified source archive (providing a makefile, wa wa wee wa!) to http://johannes.truschnigg.info/uplo...as_src.tar.bz2 Executing the resulting binary, however, gives rather strange errors on my machine, which I do not know how to interpret, along the lines of
Code:
100 iterations of M2550001 with FFT length 131072
Res64: 0000000000000000. AvgMaxErr = 0.000000000. Program: E2.8x
*** Res64 Error ***
current = 0000000000000000
should be = CB6030D5790E2460
Clocks = 00:00:04.980
Two other fellows who tried compiling the source tarball did not get any further than a forced exit with
Code:
INFO: using 64-bit-double form of rounding constant
INFO: Using subroutine form of MUL_LOHI
ERROR 12 in qfloat.c
I copied the CFLAGS from the last recorded build comments on GNU/Linux-amd64 (and also noticed that my binaries failed to run with lesser optimization than -O3; is this expected?). Could any kind person please try to reproduce any of the problems listed above? If anyone familiar with Mlucas is able to explain what is blowing up and why, I'd really be thankful (I'm going to bug the developer if everything stays silent). Thanks in advance, cheers! - colo
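The Res64 mismatch above comes from a Lucas–Lehmer style residue check. As a rough illustration of that check — and as a very crude, integer-only CPU sanity test that exercises none of the FPU/SSE paths prime95 actually stresses — a minimal Lucas–Lehmer test can be sketched in Python. The script and the exponent list are an illustration only, not part of Mlucas or prime95.
def lucas_lehmer(p):
    """Lucas-Lehmer test: M_p = 2**p - 1 (p prime) is prime iff s_{p-2} == 0 mod M_p,
    where s_0 = 4 and s_{k+1} = s_k**2 - 2."""
    if p == 2:
        return True
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Known Mersenne prime exponents; any mismatch would hint at a CPU/RAM problem.
for p in (3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127):
    assert lucas_lehmer(p), f"M_{p} should test as prime"
print("Lucas-Lehmer self-check passed")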
## A hybrid clustering-classification for accurate and efficient network classification From: Network Classification for Traffic Management: Anomaly detection, feature selection, clustering and classification. The traffic classification is the foundation for many network activities, such as quality of service (QoS), security monitoring, lawful interception, and intrusion detection system (IDS). A recent statistics-based method to address the unsatisfactory results of traditional port-based and payload-based methods has attracted attention. However, the presence of non-informative attributes and noise instances degrade the performance of this method. Thus, to address this problem, in this chapter, a hybrid clustering-classification method (called CluClas) is described to improve the accuracy and efficiency of network traffic classification by selecting informative attributes and representative instances. An extensive empirical study on four traffic data sets shows the effectiveness of the CluClas method. Chapter Contents: • 10.1 Introduction • 10.2 Existing solutions • 10.3 CluClas—a hybrid clustering and classification method • 10.3.1 Discarding irrelevant and redundant attributes • 10.3.2 Identifying representative instances in CluClas • 10.3.3 The CluClas learning process • 10.3.4 Classification/Prediction process in CluClas method • 10.4 Experimental evaluation • 10.4.1 Experimental setting • 10.4.2 Traffic data sets • 10.4.3 Evaluation metrics • 10.4.4 Results and discussion • 10.5 Conclusion
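The chapter itself is not included in this excerpt, so the CluClas algorithm is not shown here. Purely as a sketch of the general recipe the abstract describes — discard uninformative attributes, cluster to obtain representative instances, then train a classifier on those representatives — one possible wiring with scikit-learn is shown below. Every concrete choice (mutual-information feature scoring, k-means, a decision tree, the parameter values) is an assumption for illustration, not the authors' method.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def cluster_then_classify(X, y, k_features=10, n_representatives=200):
    """Toy hybrid pipeline: keep the most informative attributes, replace the
    training set by k-means centroids (labelled by majority vote of their members),
    then fit a classifier on those representative instances.
    Assumes y holds integer class labels 0..K-1 and n_representatives <= len(X)."""
    X, y = np.asarray(X), np.asarray(y)
    selector = SelectKBest(mutual_info_classif, k=min(k_features, X.shape[1])).fit(X, y)
    Xs = selector.transform(X)
    km = KMeans(n_clusters=n_representatives, n_init=10, random_state=0).fit(Xs)
    labels = np.array([np.bincount(y[km.labels_ == c]).argmax()
                       for c in range(n_representatives)])
    clf = DecisionTreeClassifier(random_state=0).fit(km.cluster_centers_, labels)
    return selector, clf

# usage: selector, clf = cluster_then_classify(X_train, y_train)
#        y_pred = clf.predict(selector.transform(X_test))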
Home > 20 (1), 9 # From Micro Behaviors to Macro Dynamics: An Agent-Based Economic Model with Consumer Credit and aRuhr University Bochum, Germany; bUniversity of Pescara, Italy Journal of Artificial Societies and Social Simulation 20 (1) 9 <http://jasss.soc.surrey.ac.uk/20/1/9.html> DOI: 10.18564/jasss.3260 Received: 18-Apr-2016    Accepted: 30-Sep-2016    Published: 31-Jan-2017 ### Abstract The paper develops an agent-based model populated by heterogeneous consumers, a productive sector and a banking sector. Taking a bottom up approach, the paper aims at providing a first tool to analyze households' borrowing dynamics in the different phases of the business cycle by relaxing some assumptions of mainstream consumption models and considering more realistic household borrowing behaviors. Although very simple, the model allows us to grasp the main implications of the interaction between consumers' wants (desired consumption), consumers' beliefs (their expectations about their future income), the behavior of the banking sector (rationing) and the behavior of the production sector (forecasting future demand). After presenting and discussing sensitivity analysis over a parameters' set, the paper reports results and the ex-post validation by comparing artificial and empirical distributions computed using the European Household Finance and Consumption Survey data set. Keywords: Agent-Based Model, Credit Supply, Consumer Debt, Precautionary Saving, Wealth Distribution, Labor Market Matching ### Household debt, precautionary motives and macroeconomic dynamics As extensively discussed in the recent theoretical and empirical literature, household debt played a key role in the Great Recession in all main advanced economies (Barba & Pivetti 2009). Therefore, the monitoring of the buildup in household debt has been drawing a renewed attention, both among policy makers and in the economic profession. Empirical evidence made clear that both high levels and high growth rates of debt imply an increased vulnerability for households’ balance sheets and question its long run sustainability (Perugini et al. 2016). Because of its role in shaping the business cycle, many studies have been pointing out that high levels of private debt may lead to banking crises (Buyukkarabacak & Valev 2010) and influence the stability of the macroeconomic system (Jordá et al. 2013), especially the intensity of recessions and the likelihood of a financial crisis. The geography of private debt differs across advanced economies; we deem thus worth discussing how it is actually distributed among OECD countries. We start by considering extra-European countries. Empirical analyses on US data as those by Cynamon & Fazzari (2013) and Zinman (2014) show that, between 2000 and 2007, total household debt doubled and the household debt to GDP ratio has four-folded over the post World-War II period. If we consider also other advanced economies, we see that US patterns are shared among OECD countries. Here, we report an analysis based on OECD time series on the percentage of household debt on net disposable income over a time span that goes from 2000 to 2014. Figure 1 makes evident that, among extra European countries, the higher ratios of debt-to-disposable income have been experienced by Australia and Switzerland. However, the (rising) trend is very similar across the country’s set, except for Japan where the private sector’s gave rise to a deleveraging process which - following a Fisherian spiral - has been depressing the aggregate demand. 
The deleveraging phase started in 2004 and it was also more pronounced starting from 2009. Japanese households’ balance sheets were strongly damaged, also as a consequence of the 1990s lost decade (Hayashi & Prescott 2002; Ito & Mishkin 2006) and, as emphasized by Koo (2013), as a result, firms have been minimizing their debts rather than maximizing profits, as well as consumers have been worrying mainly about the level of their loans. In order to distinguish this type of recession from ordinary recessions, it is usually referred to as a balance sheet recession. Investigations on household debt in the Euro-Area abound in the recent empirical literature and the overall evidence reports consistent cross-country heterogeneity. Figure 1 shows how high ratios of debt characterize both extra-EU countries as well as the main European economies. Denmark, Great Britain and Ireland display the highest percentage of household debt relative to net disposable income[1]. Considering the period from 2000 to 2014, in Figure 1 we observe that previously to 2007, households’ leverage has grown remarkably in Germany, Italy, Greece and Spain. If we take 2007 as the reference point of the Great Recession’s start, we see as prior to it, debt has been growing steadily in all the considered economies, while in the successive years European households implemented a consistent deleveraging which is especially evident for Greece and Spain. German households began the deleveraging process earlier (in 2004), while Italian households experienced a steadily increasing growth of debt as a percentage of their net disposable income in the time span considered but the ratios are always smaller than those observed in the other countries. #### Household debt and macro dynamics: taking a bottom-up approach Because of the policy concerns raised by the Great Recession, household debt has been drawing a lot of attention by researchers so that the literature on this topic has been blossoming in the last few years. However, several issues are at stake in the investigation of consumption behaviors (Carroll 2012) - and in particular of indebtedness behaviors - in existing mainstream models of consumption. First, many of these models assume representative debtors and creditors, i.e., all households have the same consumption function and the same marginal propensity to consume, which is in contrast with the empirical evidence that marginal propensities to consume differ for people with different financial conditions (wealth, net worth)[2]. In other words, agents’ heterogeneity matters. Second, the budget constraint used in these consumption models implies a borrowing limit, i.e., the consumption problem is solved in face of a budget constraint which prevents households “to die in debt”. This seems an inadequate representation of the consumer problem which flies in the face of reality; indeed, violations of the constraint and illiquid positions or bankruptcy are often observed in the real world. Third, they usually assume a flow of funds from lenders to borrowers within the same sector, namely the private sector (as e.g. Eggertsson & Krugman 2012), but this flow is not intermediated by a banking sector. Moreover, it is usually assumed that consumers are optimizing agents who know exactly how to solve their dynamic programming consumption problem, even in presence of an idiosyncratic shock to their income stream. 
The optimal solution (consumption level and debt level) is thus always reached because expectations about the future values are always fulfilled. Recently, many Agent-Based Models (ABM) have investigated the role played by consumption and in particular by household debt. Among the firsts in considering household indebtedness, Erlingsson et al. (2013) focused on the housing market and integrated it into a larger agent-based artificial economy. The model was characterized by four types of agents: workers, firms, banks and a central bank, which interacted through different types of markets: a consumption goods market, a labor market, a housing market and a credit market. They modelled a wealth effect of housing wealth into workers consumption budget as the main link between the housing market and the real economy. Banks extended mortgages to workers only if the expenditure on housing, as a proportion of total income, was lower than a given threshold. Konig & Grossl (2014) explicitly focused on consumption credit in a framework in which desired consumption was driven by workers disposable income as well as a social norm of consumption, namely the so-called “catch up with the Joneses”, a behavior that reflects a willingness to take on loans. Results showed that varying the strength of the social orientation and prevailing credit constraints, the evolution of macroeconomic time series was largely affected by the “Joneses effect”, while credit constraints determined their volatility. Seppecher & Salle (2015) built a stock-flow consistent ABM populated by heterogeneous agents. The focus here was on the role played by animal spirits, which propagate the market sentiment (optimism or pessimism) through a contagion model (feedback effect). Agents adapted their financial behavior to their market sentiment, thus influencing the aggregate dynamics and leading to alternating periods of stability and downturns. Russo et al. (2016) investigated the causal link between increasing inequality and consumer credit in a complex macroeconomic system with financially fragile heterogeneous households, firms and banks. They focused on consumer credit and studied its aggregate effects with particular attention to unsustainable debt levels and the emergence of financial crises. Results showed a mixed support for the increase of household debt as beneficial for the systemic level. On the one hand, the greater availability of credit on the household side boosted aggregate demand; on the other hand, it could progressively lead to a crisis. Finally, Cardaci & Saraceno (2015) developed a macro stock-flow consistent ABM to study how economic crises emerge in the presence of different credit conditions and income inequality. In particular, they showed how different institutional settings and levels of financialization affect the dynamics of an economy where income inequality plays a role. They discussed the implications of rising household debt and of a policy aimed at tackling inequality by means of a more progressive tax system. Their results showed that fiscal policies can compensate for the rise in income disparities and therefore stabilize the economy. Taking a bottom up approach, our paper develops an agent-based model populated by heterogeneous consumers, a production sector and a banking sector. The model investigates the complex relationship that emerges from the interaction of the three sectors in a stylized labor market, a credit market and a goods market. 
The main features of the model can be summarized as follows: 1. it stresses the role played by beliefs formed by backward-looking households on future macroeconomic conditions; 2. it emphasizes the role of expectations in the production decision: the production sector is considered as bounded rational and endowed with a mechanism to forecast the demand it will receive from the household sector; 3. it accounts for different labor market matching mechanisms which contribute to the aggregate performance of the model. In particular, considering belief and expectations in our model is especially relevant because they allow us to account for more realistic features of human behaviors (Simon 1955). This modeling choice rests on the critique developed by Muth (1961) about the unsoundness of the rational expectation hypothesis (REH), which assumed away any forecasting error possibly made by economic agents. Following Kirman (2014), the main objections to rational expectations fall in four classes: logical or philosophical, econometric or statistical, empirical and experimental evidence. In the last 20 years, theoretical models of heterogeneous bounded rational agents (HAM) (Hommes 2006) and agent-based financial models have literally blossomed. Starting from the seminal contribution of Brock & Hommes (1997), many researchers have engaged in the building of “heuristics switching models”. In those models agents have a set of simple forecasting heuristics (adaptive, trend extrapolating and so on) and choose those that had a better past performance. Hommes (2007, 2011) explore the behavioral space of the heterogeneous expectations hypothesis. By combining the experimental method and evolutionary techniques, Hommes (2007) provided evidence for the importance of heterogeneity in a theory of expectations: by means of a simple heuristics switching model it is possible to fit different behaviors collected from learning to forecast experiments. These results are indeed crucial for economic theory in that they clearly demonstrate that the rational expectation hypothesis occurs only in stable markets (Hommes 2011). Our model shares with the models described above the acknowledgment that economic agents have different expectations so that working with a rational representative agents is reductive and can lead to misleading results (Kirman 1992). By considering heterogeneous consumers we are able to overcome the so-called Aristotelian fallacy of division[3] as well as the fallacy of composition[4]. Regarding the way in which the expectations are formed, we consider a bounded rational mechanism according to which consumers are backward-looking. In particular, we do not assume mathematical expectations; we rather define them beliefs. The bounded rational behavior of consumers implies that they look at their past employment or unemployment states and form beliefs over future states of the economy accordingly. The focus of our investigation is primarily on the link among unemployment and individual willingness to borrow and to the extent to which their interaction contributes to the aggregate dynamics of the artificial economy. Nevertheless, we take a different perspective with respect to other ABM that deal with household debt because the micro level of our model tends to emphasize the role of precautionary motives. 
Our choice is motivated by the emphasis, both theoretical and empirical, on the role played by precautionary motives over the business cycle (Skinner 1988; Carroll 1992; Gourinchas & Parker 2001; Challe & Ragot 2016) and because they can explain a large fraction of individual and aggregate wealth accumulation (see Carroll et al. 2014b; LeBlanc et al. 2015, among others)[5]. Moreover, we think that observing precautionary motives becomes especially relevant when consumers face borrowing constraints (as during economic downturns). Indeed, because of the existence of a link between precautionary motives and imperfections in financial markets, we decided to study the aggregate dynamics of consumers’ behaviors in an agent-based economy that has allowed us to “externalize” the so-called natural borrowing constraint[6]. In this way, we are able to account for agents’ heterogeneity and to consider consumers’ willingness to borrow that strongly depends on macroeconomic conditions. The rest of the paper is organized as follows. Section 2 presents an overview of the model and the different sectors it is composed of. We provide detailed behavioral equations at the micro level for the household sector, while, for the sake of tractability, the other sectors are treated as aggregates. Section 3 describes how we implement the model in an agent-based setting, the sequence of events and the baseline parametrization. The section is organized in three subsections. The first subsection presents the baseline parameters and their setting. The second subsection reports and discusses the main results gathered from simulations run over the baseline parametrization. The third subsection describes the comparison between artificial and empirical wealth distributions and discusses some sensitivity analyses we perform over the labor market matching mechanisms. Section 4 provides concluding remarks. ### The model The model is composed of an household sector, a production sector and a banking sector. The household sector is made up of heterogeneous agents which are workers and consumers at the same time. They interact with the banking sector and the production sector, which are both treated as a whole. Before describing the details of the model, is worth emphasizing that it has been mainly motivated by the aim to investigate the role of households financial position over changing macroeconomic conditions. The purpose of the model is indeed to build a minimal framework in order to study this specific phenomenon, leaving aside at this stage of the research other major issues and modeling details related to the other sectors of the macro setting. It can thus be thought of as a partial model in which the aspects that are not explicitly focused on end up in an hidden black box that represent the complement of the analyzed part of the economy. Therefore, several issues such as the destination of the production sector’s profits (outflow), the financing of the unemployment dole (inflow) and many other flow variables are thus not treated in a partial model. The results presented and discussed in the paper depend thus mainly on the combination of parameters as described in Tables 1 and 2. Our strategy is to explore the parameters’ space in order to exclude those regions that lead to unreasonable results for the agents included in the observed part of the economy. 
Sensitivity analyses, which consist of changing one parameter at a time, allow an accurate evaluation of the model's aggregate behavior in the selected part of the parameter space. Extending the model to include more sophisticated mechanisms in all the main sectors of the economy is an ambitious goal for future research. The following paragraph presents the sequence of events performed at each time step by the ABM that implements the model.

#### Sequence of events

The artificial economy is a discrete iterative system in which agents repeat the same set of actions at each time step:
1. the Production sector sets production and demands production factors;
2. the Labor market opens;
3. Consumers receive either wages (if employed) or the dole (if unemployed);
4. the Production sector carries out production according to the factors obtained on the market;
5. the Bank computes interest and asks indebted consumers for loan repayments;
6. Consumers repay if they have enough financial resources; otherwise they are labeled as being in “financial difficulty”;
7. the Bank updates its balance sheet;
8. Consumers form beliefs by looking at the rate of unemployment and set their desired consumption; if it is higher than their financial resources, the consumer asks for credit;
9. the Credit market opens;
10. the Bank decides how much credit to extend;
11. the Bank computes the sum of credit demanded and the new credit it can offer; credit requests are either fulfilled or rationed;
12. Consumers who asked for credit set their effective consumption according to the credit obtained;
13. the Goods market opens: the production sector sells the produced items to households (supply is constrained by production capacity);
14. Consumers update their financial position;
15. the Production sector computes its economic result.

In the following subsections, we present in detail the microeconomics of the three actors that compose our model.

#### Households

Each household:
• receives either a wage (if employed) from the firm or a dole (if unemployed);
• has a minimum consumption level, $$\bar{c}$$ (which is assumed to take the same value as the unemployment dole).

Household $$h$$'s wealth at time $$t$$ is denoted by $$W_{h,t}$$ and is equal to the household's bank account, since other stores of value are absent in this model. The wealth level significantly affects the possibility to consume: if a household obtained credit in the past ($$W_{h,t}<0$$), the bank asks her/him to pay back the sum of interest ($$i_L$$ is the interest rate on borrowing) and the installment ($$\theta$$ is the share of principal to be refunded), namely

$$payback_{h,t} = i_L W_{h,t} + \theta W_{h,t}.$$ (1)

In this model, a household's consumption level cannot be lower than the subsistence level; the payback is thus delayed for households in financial difficulty. Households in good economic conditions (employed, or unemployed with a positive bank account) evaluate the possibility of consuming more than the subsistence level. They first compute their desired consumption as follows:

$$c^d_{h,t}=\max\left(\frac{1+\rho_{h,t}}{1+i_L} y^A_{h,t}+ \beta \max(W_{h,t},0),\bar{c}\right)$$ (2)

where $$\beta$$ is the propensity to consume out of wealth and $$y^A_{h,t}$$ is the available income, determined as $$y^A_{h,t} = wage_{h,t} + payback_{h,t}$$, where $$payback_{h,t}$$ is computed as explained above for indebted households, while for those with a positive bank account $$payback_{h,t}=i_D W_{h,t}$$, where $$i_D$$ is the interest rate on deposits.
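As a concrete illustration of the household budget step, the following minimal Python sketch implements Equations (1) and (2) with the baseline parameter values of Table 1. It is not the authors' code (the model itself is written in Java using Repast, see Note 9); the function names are ours, and the belief parameter $$\rho_{h,t}$$, defined in Equation (3) below, is taken here as given.

```python
import numpy as np

def payback(W, i_L=0.003, i_D=0.001, theta=0.01):
    """Signed cash flow on the bank account: Eq. (1) for indebted households
    (W < 0), deposit interest i_D * W otherwise."""
    return (i_L + theta) * W if W < 0 else i_D * W

def desired_consumption(wage, W, rho, beta=0.2, c_bar=45.0, i_L=0.003):
    """Desired consumption of Eq. (2); rho is the belief parameter of Eq. (3)."""
    y_A = wage + payback(W)                       # available income
    c_d = (1.0 + rho) / (1.0 + i_L) * y_A + beta * max(W, 0.0)
    return max(c_d, c_bar)                        # never below subsistence
```

A household with $$\rho_{h,t}>i_L$$ plans to spend more than its available income, which is what generates the credit demand described below.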
$$\rho_{h,t}$$ is a “behavioral” parameter describing the individual household's beliefs. It is computed using the following logistic function (more details are given in the next subsections):

$$\rho_{h,t}(x_{h,t}) = \frac{2}{1+\exp(-\tau_h(x_{h,t}-\hat{x}_h))}-1+i_L \quad \text{with} \quad x_{h,t} = \sum_{j=1}^{m_h} E_{h,t-j}$$ (3)

$$\tau_h$$ and $$\hat{x}_h$$ are household-specific parameters; they determine the slope and the position of the logistic function respectively. $$m_h$$ is the household's memory length and $$E_{h,t}\in\{0,1\}$$ is household $$h$$'s employment state at time $$t$$. $$\sum_{j=1}^{m_h} E_{h,t-j}$$ is thus the sum of all the employment states experienced by the household over the time periods stored in her/his memory.

#### Beliefs and consumption behaviors

Equation (3) states that a household's beliefs depend on her/his employment record. Households' sensitivity to the employment record is controlled by the parameter $$\tau_h$$: the higher $$\tau_h$$, the more consumers respond to changes in $$x_{h,t}$$. Note that $$x_{h,t}\in\{0,1,\cdots,m_h\}$$. Consider an indebted household. Because $$\rho_{h,t}(0)< i_L<\rho_{h,t}(m_{h})$$, s/he can switch from asking for additional credit to saving, or vice versa, according to her/his past employment experience and behavioral features. In particular, equations (3) and (2) imply that s/he will ask for additional credit if $$x_{h,t}>\hat{x}_h$$ and will save (reducing her/his debt) in the opposite case. It follows that $$\hat{x}_h$$ is a crucial parameter: the higher $$\hat{x}_h$$, the lower credit demand will be. Consider for example a household with $$\hat{x}_h=m_h$$. In this case, s/he is very prudent and will never borrow; at most s/he will consume all the available income if s/he was employed in all of the latest $$m_h$$ periods. Below we will focus in particular on the $$\hat{x}_h$$ parameter. It is thus convenient to have fine-grained control over its setting. To this aim, we use a beta distribution $$\mathcal{B}(s_1,s_2)$$:

$$\hat{x}_h= \hat{x}_{\min}+\mathcal{B}(s_1,s_2)(\hat{x}_{\max}-\hat{x}_{\min})$$ (4)

In particular, we will analyze the effects of changing the $$s_1$$ shape parameter while the second shape parameter is kept constant at 1. As is well known, $$\mathcal{B}(1,1)$$ is a uniform distribution; one can concentrate density on the higher values of the distribution by increasing $$s_1$$. Summing up, $$\rho_{h,t}$$ affects the slope of the consumption function. Consider indebted consumers:
• those with $$\frac{1+\rho_{h,t}}{1+i_L}<1$$ are able to consume the desired level, so effective consumption equals desired consumption, $$c_{h,t}=c^d_{h,t}$$, and they have a positive cash flow (saving);
• $$\frac{1+\rho_{h,t}}{1+i_L}>1$$ implies $$c^d_{h,t}>y^A_{h,t}$$; these consumers ask the bank for new loans to meet their desired consumption levels.

By “externalizing” the borrowing constraint (as opposed to the so-called natural borrowing constraint discussed in Carroll et al. 2012), i.e., by explicitly considering a banking sector that decides whether to provide credit to consumers, households are not able to correctly anticipate the credit rationing possibly implemented by the bank. This has important implications for their consumption decisions (micro level) and for the aggregate consumption function.
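The belief rule of Equations (3) and (4) can be sketched in the same illustrative style (helper names and the random seed are ours; parameter defaults follow Tables 1 and 2):

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_x_hat(H, s1, s2=1.0, x_min=1.0, x_max=100.0):
    """Household-specific thresholds drawn from a rescaled Beta law, Eq. (4)."""
    return x_min + rng.beta(s1, s2, size=H) * (x_max - x_min)

def rho(employment_history, x_hat, tau=0.5, i_L=0.003):
    """Belief parameter of Eq. (3): a logistic function of the number of
    employment spells (0/1 states) remembered over the last m_h periods."""
    x = sum(employment_history)                   # x_{h,t}
    return 2.0 / (1.0 + np.exp(-tau * (x - x_hat))) - 1.0 + i_L
```

A household that was employed in every remembered period ($$x_{h,t}=m_h$$) and has a threshold $$\hat{x}_h$$ below $$m_h$$ obtains $$\rho_{h,t}>i_L$$ and, through Equation (2), a desired consumption above its available income; raising $$s_1$$ pushes the drawn thresholds towards $$\hat{x}_{\max}$$ and therefore weakens the willingness to borrow.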
#### Credit demand

According to (2), new credit is demanded in two cases:
• households with positive wealth whose desired consumption is higher than the sum of income and wealth; in this case, the new credit demanded is
$$\Delta L^d_{h,t}=c^d_{h,t}-wage_{h,t}-W_{h,t}(1+i_D)$$ (5)
• households with negative wealth whose income covers the subsistence consumption level and the bank repayment; in this case, the new credit demanded is
$$\Delta L^d_{h,t}=c^d_{h,t}-[wage_{h,t}+(i_L+\theta)W_{h,t}]$$ (6)

If $$y^A_{h,t} < \bar{c}$$, we say the household is in financial difficulty and the payback is delayed.

#### The production sector

In this version of the model, we do not consider a multiplicity of firms; rather, we model the production sector as a whole. It produces non-durable, perishable consumption goods that are not ordered in advance by households. Production is therefore carried out ahead of household demand, and inventories cannot be carried over to the next period. In this setting, forecasting the future level of demand as precisely as possible is a crucial task for the entrepreneur. Along the same lines, for example, Mandel et al. (2010) endow the production sector with an extrapolation method (Holt-Winters forecasting), so that firms forecast future sales and households future income. In their model, expectations are updated every period: by means of exponential smoothing, firms update their sales expectations, adjust their target production and decide on future investments in fixed capital. A similar mechanism has been implemented by Assenza et al. (2015). In their model, at the beginning of each time step each firm sets its selling price and its current production; at the end of each period, firms also learn the average price. Once production has been carried out and search and matching has taken place, each firm can observe the amount of consumption goods actually sold. Since sales occur only after the firm has carried out production, actual demand can differ from current production, which implies a positive or negative forecasting error. The production process in our model has similarities with those adopted in the cited literature. The production sector performs the following activities:
• decides the level of production;
• demands the production factors needed to carry out production;
• carries out production according to the factors obtained on the markets;
• sells the produced items to the households who demand them;
• computes the economic result.
We explain these activities in more detail in the following sections.

#### Deciding the level of production

We depart from the existing literature by developing a method to forecast next period's demand. The production sector decides its production target $$\hat{Y}_t$$ by extrapolating a value from the trend of past demand levels. In particular, a linear and a nonlinear fit of the latest $$F$$ levels of demand are performed. The better performing of the two models is chosen by comparing their sums of squared errors ($$SSE$$) and is then used to extrapolate the demand trend. As the linear model, we use an ordinary least squares regression on the $$F$$ observations. The nonlinear fit is obtained by smoothing a wider window of demand values with the LOESS nonparametric technique (Cleveland et al. 1992). As is well known, a nonparametric fit crucially depends on the bandwidth used; we therefore develop a procedure to choose this parameter. Our procedure requires that the final part of the fitted line (i.e. the latest $$F$$ points) is concave or convex. In other words, we choose the lowest bandwidth for which all the second differences of the latest $$F$$ fitted values have the same sign.
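The bandwidth-selection rule just described can be sketched as follows. This is only an illustration under our own assumptions: the candidate-bandwidth grid and the fallback are not specified in the paper, and we use the LOWESS smoother from statsmodels in place of the Apache Commons Mathematics routines mentioned in Note 7. The subsequent $$SSE$$ comparison with the linear fit and the Lagrange extrapolation are described next.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def choose_bandwidth(demand, F, candidates=np.linspace(0.2, 1.0, 17)):
    """Return the smallest candidate bandwidth whose last F fitted values are
    concave or convex, i.e. whose second differences all share the same sign."""
    t = np.arange(len(demand), dtype=float)
    fitted = None
    for frac in np.sort(candidates):
        fitted = lowess(demand, t, frac=frac, return_sorted=False)
        d2 = np.diff(fitted[-F:], n=2)            # second differences, last F points
        if np.all(d2 >= 0.0) or np.all(d2 <= 0.0):
            return frac, fitted
    return candidates[-1], fitted                 # assumed fallback: widest bandwidth
```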
Once the bandwidth is selected, we compute the $$SSE$$ over the latest $$F$$ observations for comparison with the $$SSE$$ obtained with the linear model. If the linear model has the lower $$SSE$$, the regression line is used to obtain the demand forecast $$\hat{Y}_t$$; otherwise, we obtain a parametric version of the final part of the nonparametric fitted line by computing the Lagrange interpolation polynomial on the latest $$F$$ fitted values, and we then use this polynomial to obtain $$\hat{Y}_t$$.

#### Making production

For the sake of simplicity, production in our economy requires only labor as an input. We use the following production function:

$$Y_t=\sum_{h=1}^{H} \psi_h E_{h,t}$$ (7)

where $$\psi_h$$ and $$E_{h,t}$$ are worker $$h$$'s productivity and employment state respectively. Each worker has her/his own productivity (different from that of her/his peers), which is set at the beginning of the simulation and remains constant until the end[8]. As already explained above, the employment state is an indicator function ($$E_h=1$$ if employed and $$E_h=0$$ if not) whose value changes over time. Therefore, to realize the production $$\hat{Y}_t$$, workers are hired until the sum of their productivities yields a level of production sufficient to satisfy the expected demand: $$Y_t \ge \hat{Y}_t.$$ In this setting, the production sector's problem is thus to set households' employment states at each time step, i.e. to identify the dynamics of $$\mathbf{E}:=\{E_1,E_2,\dots,E_H\}$$. A simple and intuitive way to proceed in our heterogeneous-productivity system is to sort individual productivities in decreasing order and let the production sector hire at each time step, starting from the first-ranked worker, until $$Y \ge \hat{Y}$$. However, this mechanism implies that the most productive workers are always employed and the less productive ones are always unemployed, so turnover would involve only a limited number of workers. Furthermore, this mechanism would pose problems if homogeneous productivities were considered. We thus propose a hiring mechanism that enlivens workers' turnover and that also works well for a degenerate productivity distribution. The main idea at the core of this implementation is that each household sends a signal about its productivity to the production sector. However, because of market imperfections, the signals perceived by the production sector can differ from those sent:

$$\text{worker sends } \psi_h \qquad \rightarrow \qquad \text{production sector receives } \phi_h$$

Technically, we model this communication process as follows:

$$\phi_{h,t}=SP \times \psi_h+SR \times u_{h,t}$$ (8)

where $$SP$$ and $$SR$$ are parameters and $$u_{h,t}$$ is the realization of a random variable $$U$$. This formulation allows us to tune the relevance of asymmetric information and other imperfections in the labor market: the higher $$SR$$ is relative to $$SP$$, the more imperfect the market is.
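A compact sketch of the production and hiring step follows (again illustrative Python, not the authors' implementation): productivities are drawn from the Pareto law of Table 2, perceived signals follow Equation (8), and workers are hired in decreasing order of the perceived signal until $$Y_t \ge \hat{Y}_t$$, anticipating the procedure detailed in the next paragraph. The distribution of the noise term $$u_{h,t}$$ is not specified in the paper, so a uniform draw on [0, 1] is assumed here.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_productivities(H, p1=100.0, p2=74.25):
    """Pareto-distributed productivities (shape p1, scale p2, Table 2);
    their mean is p1*p2/(p1-1), as recalled in Note 8."""
    return (rng.pareto(p1, size=H) + 1.0) * p2

def hire(psi, Y_hat, SP=1.0, SR=0.5):
    """Eq. (8) signals, then hire by decreasing perceived signal until the
    cumulative productivity of the employed reaches the target Y_hat."""
    u = rng.uniform(0.0, 1.0, size=psi.size)      # assumed noise distribution
    phi = SP * psi + SR * u                       # perceived signal, Eq. (8)
    E = np.zeros(psi.size, dtype=int)
    Y = 0.0
    for h in np.argsort(-phi):                    # most attractive signal first
        if Y >= Y_hat:
            break
        E[h], Y = 1, Y + psi[h]                   # hire worker h; Eq. (7) running sum
    return E, Y
```

Setting $$SR=0$$ yields the static, perfect-information labor market described below, in which the ranking by $$\phi_{h,t}$$ coincides with the true productivity ranking, while larger values of $$SR$$ enliven the turnover.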
Different labor market dynamics are obtained as follows. At each time step, all households' employment states are set to 0 before the hiring process starts. Each household sends her/his signal ($$\psi_{h}$$) to the production sector, and new values of the employment states are assigned by sorting the perceived signals $$\phi_{h,t}$$ in decreasing order: the production sector starts from the top and continues hiring until $$Y_t \ge \hat{Y}_t.$$ According to this modeling choice, a static labor market in which the most productive workers always get a job can be obtained by setting $$SP>0$$ and $$SR=0$$. On the other hand, high values of $$SR$$ produce lively dynamics of households' employment states. As will be explained below, these dynamics are our model's main determinant of households' consumption.

#### Economic result

The production sector's costs are given by wages. Provided that a worker was hired, her/his wage is

$$w_h=w_{\min}+\xi \psi_h$$ (9)

where $$w_{\min}$$ is the minimum wage and $$\xi$$ is a parameter (the wage slope in Table 1). The total wage bill to be paid is thus $$WB_t=\sum_{h=1}^{H}w_h E_{h,t}$$. In our model, revenues from sales are collected at the end of the production cycle, while workers must be paid during production. This creates an important role for credit: the production sector asks for loans to pay wages, $$L^d_{f,t}=WB_t.$$ Revenues come from sales and are equal to the demand obtained, $$DH_t$$. Part of them is used to repay the bank; the economic result of the entrepreneurial activity is therefore $$\pi_t=DH_t-WB_t$$.

#### The banking sector

For the sake of simplicity, a representative commercial bank is considered. As explained in the previous section, some households deposit money at the bank (income left after consumption), while others demand credit according to their desired consumption levels and the resulting cash-flow gaps. The banking sector also lends to the production sector, which asks for credit to pay wages. The bank balance sheet in this model is $$L^H_{t}+L^F_t=D_t+A_t$$, where $$L^H_{t}$$ is total credit to households, $$L^F_t$$ is credit to the production sector, $$D_t$$ is deposits from households and $$A_t$$ is the bank's equity. Because the focus of the model presented in this paper is on households, we let the banking sector extend to the production sector all the credit it asks for ($$L^F_t=L^d_{f,t}$$), while we explicitly model the balance sheet items that are affected by households' behavior: $$L^H_{t}$$ and $$D_t$$. At each time step, households can be divided into two groups: the set of those with positive wealth, $$H^+_{W,t}$$, and the set of those with negative wealth, $$H^-_{W,t}$$, so that we have

$$D_t=\sum_{h \in H^+_{W,t}}W_{h,t} \quad \text{and} \quad L^H_{t}=\sum_{h\in H^-_{W,t}}W_{h,t}.$$

It is assumed that the bank uses the following rule for limiting $$L^H$$:

$$L^H_{t}<\lambda D_t.$$ (10)

So, given $$L^H_{t-1}$$ and $$D_t$$, the bank computes credit supply and demand as follows:

$$\Delta {L^H_{t}}^s = \max(\lambda D_t-L^H_{t-1},0) \qquad \text{and} \qquad \Delta {L^H_{t}}^d =\sum_h \Delta L^d_{h,t}.$$

At each time step, provided that $$\Delta {L^H_{t}}^d>0$$, the rationing coefficient is computed:

$$r_t=\frac{\Delta {L^H_{t}}^d-\Delta {L^H_{t}}^s}{\Delta {L^H_{t}}^d} \qquad\text{note that} \ 0\le r_t \le 1.$$

This coefficient is used to adjust both the bank's and the households' balance sheets. Indeed, the new aggregate credit granted and the new credit granted to each household that asked for additional credit are:

$$\Delta L^H_{t}=\Delta {L^H_{t}}^d(1-r_t) \qquad \text{and} \qquad \Delta L_{h,t}=\Delta L^d_{h,t}(1-r_t).$$
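The bank's rationing step can be summarized by the following sketch (illustrative only; loans and deposits are treated as positive magnitudes, which is one possible reading of the balance-sheet definitions above, and $$\lambda=0.5$$ is the loans-to-deposits target of Table 1):

```python
import numpy as np

def allocate_credit(credit_demands, L_H_prev, deposits, lam=0.5):
    """Aggregate supply from the lambda*D rule (Eq. 10), rationing coefficient
    r_t, and individual allocations Delta L_{h,t} = Delta L^d_{h,t} * (1 - r_t).
    credit_demands holds the Delta L^d_{h,t} of Eqs. (5)-(6), zero for
    households that did not ask for credit."""
    demand = float(np.sum(credit_demands))
    supply = max(lam * deposits - L_H_prev, 0.0)
    if demand <= 0.0:
        return np.zeros_like(credit_demands, dtype=float), 0.0
    r = min(max((demand - supply) / demand, 0.0), 1.0)   # 0 <= r_t <= 1
    return np.asarray(credit_demands, dtype=float) * (1.0 - r), r
```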
### Simulations

#### Parametrization

Tables 1 and 2 report the baseline parameter values used in the simulations[9]. In particular, the parameters in Table 1 are single instances, i.e. they are valid for the whole system and their values are used in computations performed by all agents. The parameters reported in Table 2 regulate households' heterogeneity and are set in terms of statistical distributions; unlike the single-instance parameters, their values differ across agents.

Table 1: Parameters setting: baseline scenario

| Parameter | Description | Value | In equation(s) |
|---|---|---|---|
| $$H$$ | number of agents | 1000 | |
| $$SP$$ | signal productivity | 1 | (8) |
| $$SR$$ | signal randomness | 0.5 | (8) |
| $$\beta$$ | propensity to consume out of wealth | 0.2 | (2) |
| $$\bar{c}$$ | minimum consumption | 45 | (2) |
| $$d$$ | unemployment dole | 46 | |
| $$w_{\min}$$ | minimum wage | 46 | (9) |
| $$\xi$$ | wage slope | 0.25 | (9) |
| $$i_L$$ | monthly interest rate on loans | 0.003 | (1),(6) |
| $$i_D$$ | monthly interest rate on deposits | 0.001 | (5) |
| $$\theta$$ | monthly installment share | 0.01 | (1),(6) |
| $$\lambda$$ | bank $$L^H/D$$ target | 0.5 | (10) |

Table 2: Parameters setting: household heterogeneity

| Parameter | Description | Distribution | In equation(s) |
|---|---|---|---|
| $$m_h$$ | agent's memory length | Uniform, mean=5, variance=0 | (3) |
| $$\tau_h$$ | $$\rho$$ slope | Uniform, mean=0.5, variance=0 | (3) |
| $$\hat{x}$$ | $$\rho$$ location | Beta $$\mathcal{B}(s_1,s_2)$$, $$s_1^{\min}$$=1, $$s_1^{\max}$$=100, $$s_2$$=1, $$\hat{x}_{\min}$$=1, $$\hat{x}_{\max}$$=100 | (3),(4) |
| $$\psi_{h}$$ | workers' productivity | Pareto $$\mathcal{P}(p_1,p_2)$$, slope $$p_1$$=100, position $$p_2$$=74.25 | (7),(8) |

#### Baseline scenario: simulation results

We investigated the micro and macro properties of the model described in the previous section through extensive computer simulations. We report the simulation analysis in two steps: in the first, we give an overall description of the dynamics generated by the model, while in the second we focus on the effects of different features of the labor market. To gain a global view of the model output, we present, starting from the next paragraph, the effects of changing one at a time the parameters governing a particular aspect of the model. Particular attention is given to the effects of changing consumers' willingness to borrow (the $$s_1$$ parameter). Because this parameter plays a crucial role in our model, we report detailed results and sensitivity analyses for three values of it; we choose a low, a medium and a high value because they capture the variety of results generated by the model. We then focus on the effects of the different $$s_1$$ parametrizations on the wealth distribution. In this respect, we are interested in assessing whether the data gathered from our simulations are similar to the empirical wealth distributions computed from the HFCS (European Household Finance and Consumption Survey) data set, in order to perform an ex-post validation[10] of our agent-based model (Klügl 2008). We report this analysis starting from paragraph 3.21. The main aim of this section is to monitor the evolution of the real (employment and consumption) and financial (deposits, loans and wealth) variables in the baseline parametrization. Figure 2 offers a global overview of the model outcomes, with special attention to the health of the banking sector, the employment level and its fluctuations. It shows average values of these variables for different shapes of the willingness-to-borrow distribution (different levels of $$s_1$$) and different levels of the unemployment dole ($$d$$). Figure 2A reports three lines for each level of the unemployment dole: the minimum, the average and the maximum number of employees observed in each run. The chart allows us to assess the average performance of the economy as well as the volatility observed over different parametrizations.
Looking at the average values, we can see how an increase in the dole implies a higher employment level. The shape of the willingness-to-borrow distribution ($$s_1$$) does not significantly affect the average employment level, but it does affect its fluctuations: the employment range decreases for low values of $$s_1$$, but then it increases as $$s_1$$ becomes higher. An exception occurs when the dole is equal to the subsistence level of consumption ($$\bar{c}=45$$): in this case, the amplitude of fluctuations is constant as $$s_1$$ increases (see the black lines in Figure 2A). Progressively increasing the dole speeds up the appearance of fluctuations, whose amplitude increases faster with $$s_1$$ for higher levels of the dole. Looking at Figure 2A, we can observe that the blue lines ($$d=48$$) show dynamics that precede those of the green lines ($$d=47$$), which in turn precede the red lines ($$d=46$$). Figures 2B and 2C highlight particular aspects of the labor market that could be hard to deduce from Figure 2A because of the many lines included in the plot. In particular, Figure 2B reports the highest unemployment rate observed in the simulations for each combination ($$d$$; $$s_1$$), while Figure 2C displays the difference between the highest and the lowest unemployment rate observed in the same simulations. When the dole is equal to the subsistence level of consumption ($$d=\bar{c}=45$$), the maximum unemployment rate decreases as $$s_1$$ goes from 1 to about 15 and remains roughly constant for higher levels of $$s_1$$ (black line in Figure 2B). The gap between the maximum and the minimum unemployment rate has a similar pattern and fluctuates around a value slightly below 5% for $$s_1>15$$ (black line in Figure 2C). Both the maximum unemployment rate and the gap between the maximum and minimum unemployment rate approach a level close to 35% for higher levels of the dole, although this happens at different speeds, as highlighted above (see the coloured lines in Figures 2B and 2C). Overall, Figure 2 might be useful to a policy maker who faces the choice of the level of the unemployment dole. It suggests that a low level of the dole reduces employment fluctuations and implies a more stable banking sector. We use the loans-to-deposits ratio (LDR) to report on the banking sector's health. This ratio is taken as a liquidity indicator (Bonfin & Moshe 2014) and in some countries it is used as a prudential liquidity regulation measure (Sanya et al. 2012). Figure 2D shows how the LDR increases when the dole increases or $$s_1$$ decreases, which implies that both changes in the parameters worsen the bank's liquidity position. In the following, we provide a detailed description of the effect of changing households' willingness to borrow. To this aim, we report detailed results for three specific levels of $$s_1$$: $$s_1$$ = 6, $$s_1$$ = 10 and $$s_1$$ = 50. The first value minimizes employment fluctuations, but it represents a borderline case for bank liquidity. The second case ($$s_1$$ = 10) can be thought of as an intermediate benchmark both for employment fluctuations and for bank liquidity. The third case ($$s_1$$ = 50) corresponds to a safe bank liquidity position, but it is extreme for employment fluctuations. We recall that an increase in $$s_1$$ weakens the borrowing attitude and promotes saving among households.
Table 3 presents the main results of the simulation runs that adopt the three different values of $$s_1$$; the other parameters are from the baseline parametrization reported in Tables 1 and 2. The reported values are averages over 2,500 time periods: simulations last 3,000 time steps, but we discard the first 500 periods in order to get rid of transients and of the initialization dynamics of the simulation.

Table 3: Baseline scenarios: main results. Per-capita averages over 2,500 time steps for three different levels of the willingness to borrow, $$s_1$$ = 6, $$s_1$$ = 10 and $$s_1$$ = 50

| Statistic | Unempl. rate $$\langle{U_t}\rangle$$ | Consumption $$\langle{C_{h,t}}\rangle$$ | Wealth $$\langle{W_{h,t}}\rangle$$ | Loans $$\langle{L_{h,t}}\rangle$$ | Deposits $$\langle{D_{h,t}}\rangle$$ |
|---|---|---|---|---|---|
| $$s_1$$=6: mean | 18.23% | 60.88 | 55.66 | 2.26 | 57.61 |
| min | 13.2% | 57.62 | 47.08 | 0 | 0 |
| max | 22.8% | 63.17 | 62.5 | 3.26 | 82.59 |
| sd | 0.99% | 0.62 | 2.19 | 0.23 | 4.5 |
| IQR | | 0.79 | 2.9 | | |
| $$s_1$$=10: mean | 18.35% | 60.88 | 72.28 | 1.04 | 72.9 |
| min | 14.2% | 57.59 | 59.67 | 0 | 0 |
| max | 22.6% | 63.77 | 81.63 | 2.14 | 112.7 |
| sd | 1.08% | 0.75 | 3.15 | 0.17 | 7.3 |
| IQR | | 1.05 | 54.5 | | |
| $$s_1$$=50: mean | 18.3% | 60.9 | 99.3 | 2.26 | 57.6 |
| min | 0% | 48.49 | 0 | 0 | 0 |
| max | 35% | 74.53 | 160.2 | 3.2 | 82.6 |
| sd | 10.7% | 7.93 | 8.5 | 0.23 | 4.5 |
| IQR | | 15.7 | 75.0 | | |

A first difference between the three cases is that the framework characterized by a lower willingness to borrow ($$s_1$$ = 50) is more volatile than the other two, as confirmed by inspection of the standard deviations of all the core variables under scrutiny. Moreover, the third framework ($$s_1$$ = 50) features greater inequality in the distribution of wealth, which is in turn mirrored by the distribution of consumption. Indeed, if we look at the inter-quartile range (IQR), we see that it is higher for both consumption and wealth in the presence of a lower willingness to borrow. In order to investigate more deeply the source and role of inequality in the three frameworks, we report in Figure 3A the dynamics of the Gini index $$G$$ of the wealth distribution in one run for each of the three $$s_1$$ values considered in this comparison. Simulations with $$s_1$$ = 6 exhibit more wealth inequality (on average) than the $$s_1$$ = 10 and $$s_1$$ = 50 scenarios; the average (rounded) values of the Gini index are respectively $$G_{s_1=6}=0.61$$, $$G_{s_1=10}=0.48$$ and $$G_{s_1=50}=0.29$$. These values, in particular those observed in the cases of higher willingness to borrow, are in line with the Gini indices observed empirically in several European countries. Summary statistics and the Gini index for a set of European countries are reported in Table 5. Considering the role played by credit in our framework, we take a closer look at the bank balance sheet in order to better understand the dynamics of the baseline scenarios. Figure 3 also reports the results of this investigation. We focus on the core financial variables and discuss the implications of their dynamics for the health of the bank's balance sheet. We observe that in the presence of a high willingness to borrow the household sector (at the aggregate level) has a higher debt-to-income ratio (DTI-R) than in the case in which households have a lower willingness to borrow. The DTI-R is in turn mirrored by a higher loans-to-deposits ratio (LDR) (see Figures 3B and 3C), which often takes values higher than 1; this means that the banking sector often runs into liquidity problems. These dynamics affect the bank's balance sheet, as reported in Figures 3D, E and F.
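The paper does not detail how the Gini index of the simulated wealth distribution is computed; a standard discrete formula such as the one sketched below could be used. Note that the model's wealth cross-section contains negative values (indebted households), and how these are treated in the reported indices is not specified, so this fragment is purely illustrative.

```python
import numpy as np

def gini(values):
    """Gini coefficient via the mean-absolute-difference formula.
    With negative entries the index is not guaranteed to stay within [0, 1]."""
    v = np.asarray(values, dtype=float)
    if v.mean() == 0.0:
        return np.nan
    mad = np.abs(v[:, None] - v[None, :]).mean()  # mean absolute difference
    return mad / (2.0 * v.mean())
```

Applied to the cross-section of household wealth at each time step, such a function would produce series like those shown in Figure 3A.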
In the case of a higher willingness to borrow, we observe that the bank often runs into liquidity problems due to the higher debt-to-income ratios and the LDR's dynamics, while in the case of a lower willingness to borrow ($$s_1$$ = 10) the LDR ranges between 0.6 and 0.9, implying a healthier bank balance sheet. We investigate the debt dynamics and their effects on the bank's balance sheet more deeply by focusing on a subset of 200 time periods for the cases $$s_1$$ = 6 and $$s_1$$ = 50; we report them in Figure 4, where the gray stripes highlight the time spans of liquidity shortages (LDR > 1). During these periods, deposits (light blue line) in the bank's balance sheet are lower than aggregate loans (dark blue line), causing liquidity shortages. The credit cycles appear quite regular over the considered time span in both cases, and the grey areas allow us to emphasize the duration of the fluctuations. Focusing on these dynamics over a subset of the overall simulation period also allows us to emphasize another important feature of the model, namely that unemployment dynamics are strongly related to the credit cycle. Indeed, we observe that even in the presence of a higher willingness to borrow ($$s_1$$ = 6), because of the precautionary saving motive at work in the household sector, consumers decide to decrease their demand for loans as the unemployment rate increases. In particular, Figure 4 clearly illustrates that unemployment dynamics drive the demand for loans: when the unemployment rate decreases, the demand for loans increases; in the presence of a higher willingness to borrow across the population, this eventually leads to a higher LDR and to peaks in the ratio of loans to deposits that result in liquidity shortages. Focusing on the more volatile case $$s_1$$ = 50, we provide in Figure 5 the phase diagrams that associate households' financial variables (credit and wealth) with the employment level. To gain a better understanding of the ongoing dynamics, we add a time marker $$t$$ to the figure. $$t_1$$ denotes the trough of the business cycle. The arrows help to understand how the economy moves: during the recovery, it moves from $$t_1$$ to $$t_2$$ and then arrives at the top of the business cycle in $$t_3$$. Following the time markers, we can see how, during the recovery, households continue the process of improving their financial position that started in the final part of the recession phase (from $$t_0$$). Indeed, starting from $$t_0$$, which corresponds to the central periods of the recession, households' debt starts decreasing and wealth increases, also (and especially) thanks to deposits. The process continues until around $$t_3$$, where the trend reverts, signalling households' willingness to move towards a more fragile financial position[11]. These movements of the agents' financial position over the business cycle basically match those identified by Hyman Minsky as responsible for macroeconomic fluctuations (and, more specifically, for financial crises) in capitalist economies (Minsky 1986).

#### Wealth distribution: comparing empirical and simulated distributions

At this stage of our investigation, and given the simple structure of our model, we report empirical and simulated wealth distributions for a visual comparison and leave a more quantitative investigation for future work. Empirical data are drawn from the HFCS (European Household Finance and Consumption Survey) dataset[12].
It is a relatively new harmonized data set that collects household-level data on balance sheets, wealth and income distribution for 15 Euro Area countries, from which we selected the following: Germany, Spain, Italy, France, Belgium, Portugal, Finland and Greece. For the comparison between artificial and empirical wealth distributions, we considered the derived net wealth variable $$DN3001$$, which is computed as the sum of real and financial wealth net of total debt: $$DN3001=DA3001-DL1000.$$

| Data set | Description | Variable | Description |
|---|---|---|---|
| European data - HFCS | D1 Derived variables | DN3001 | Net wealth |
| | | DA3001 | Total assets |
| | | DL1000 | Total liabilities |

In Table 5, we report the summary statistics of the net wealth distributions and the reference year of the survey for each of the considered EU countries. In order to also allow a visual inspection of the whole net wealth distribution and a quick cross-country comparison, Figure 6 reports the density plots, with mean and median, for each of the countries mentioned above.

Table 5: Net wealth distributions for the considered EU countries: summary statistics and reference year of the survey

| | Germany | Italy | Spain | Greece | Belgium | Portugal | France | Finland |
|---|---|---|---|---|---|---|---|---|
| Reference year | 2010 | 2010 | 2008 | 2009 | 2009 | 2009 | 2009 | 2009 |
| Total numb. obs. | 3565 | 7951 | 6197 | 2971 | 2326 | 4404 | 15006 | 10989 |
| Minimum | -358500 | -44600 | -1143000 | -90950 | -420500 | -87500 | -404400 | -633600 |
| Maximum | 76300000 | 26130000 | 401100000 | 11700000 | 8408000 | 2708000 | 84410000 | 14720000 |
| Median | 148200 | 183500 | 287400 | 95700 | 272000 | 78450 | 207200 | 144300 |
| Mean | 377700 | 281300 | 1140000 | 147400 | 441200 | 173200 | 516300 | 230800 |
| 1stQ | 20000 | 41500 | 128700 | 18580 | 88780 | 16030 | 35400 | 24780 |
| 3rdQ | 388600 | 335000 | 721700 | 187600 | 510600 | 171800 | 469000 | 303800 |
| IQR | 368600 | 293500 | 593022 | 169066 | 421789 | 155729 | 433569 | 278993 |
| Gini coefficient | 0.72 | 0.59 | 0.78 | 0.6 | 0.6 | 0.70 | 0.71 | 0.62 |

Wealth distributions obtained from simulations are reported in Figure 7. The reported distributions are taken at the latest time tick ($$t$$ = 3000) of a simulation run for each of the three considered levels of $$s_1$$. Because the wealth distribution evolves over time, much more information (in addition to the static pictures supplied by Figure 7) is needed to understand the dynamics of the wealth distribution. To this aim, in the Appendix we provide the wealth_distribution_dynamics.mp4 video showing the dynamics under different parametrizations. The movements of the distribution are evident for high values of $$s_1$$. The video shows that, for high $$s_1$$, during economic upswings the mass on the medium and high values of the wealth distribution gradually moves to the left, towards low and negative values. This is because deposits shrink and households ask for new credit. At the top of the business cycle, the wealth distribution is right skewed: it presents a high peak at low and negative values and is flat on its right side. A comparison between the empirical distributions displayed in Figure 6 and those obtained from simulations (Figure 7) allows us to check whether the model presented in this paper can produce wealth distributions that are comparable to those observed in real economies. Visual inspection of Figure 7 shows similarities between the artificial distribution observed in the case of $$s_1$$ = 6 and the wealth distributions of Germany, France and Finland. The distribution observed in the case of $$s_1$$ = 10 shows instead similarities with the distributions observed for Greece, Italy and Belgium. The wealth distribution observed in the case of $$s_1$$ = 50, instead, does not show any similarity with real-world data: it is left skewed, whereas the empirical distributions are not.
This finding is in line with the observation that the majority of European countries are characterized by a borrowing behavior in the household sector (mirrored, in turn, by higher debt-to-income ratios) that is better approximated by the scenarios with a higher willingness to borrow. Considering that the main focus of the paper is on the left tail of the distribution, namely on negative wealth levels (debt) and their changes due to households' willingness to borrow, we believe that being able to replicate it represents a promising result of the model at this stage of the research. However, more effort is needed to provide a more accurate replication of the whole wealth distribution observed in real data, with particular regard to the right tail, for which the bulk of the economics and econophysics literature reports a power-law behavior (see e.g. Chakraborti 2007; Yakovenko & Rosser 2009 for more details on this line of research)[13]. We leave this investigation for future research.

#### Aggregate effects of different labor market matching mechanisms

In this section, we analyze the effects of different labor market matching mechanisms. As noted by Freeman (1998), adopting an agent-based approach to shape the labor market can help in studying the performance of the market. Indeed, it allows us to compare the outcomes of a model in which the structure is imposed by the policymaker (centralized market) with those of a model in which the matching emerges from the bottom-up interactions between workers and firms (decentralized market)[14]. Our model can be used to perform such an analysis by changing the $$SP$$ and $$SR$$ parameters. We consider three scenarios, parametrized as reported in Table 6, whose implications can be easily understood by recalling Equation (8). In the perfect information scenario, the production sector is able to identify the productivity of each worker; given the hiring mechanism (the most productive workers are hired first), working positions are stable and evolve slowly. In the realistic imperfections scenario, individual working positions can change in every period, and low-productivity workers have the same chance of getting a job as high-productivity ones. We define this scenario as “realistic” because of the ex-post observation that it allows for realistic macro and micro dynamics. In the weak imperfections scenario, less productive workers are able to get a job, but with lower probability than highly productive ones.

Table 6: Parameterization of three labor market matching mechanisms: benchmark scenarios

| Matching mechanism | Signal productivity, SP | Signal random, SR |
|---|---|---|
| Perfect Information | 1 | 0 |
| Realistic Imperfections | 1 | 0.5 |
| Weak Imperfections | 1 | 0.05 |

Figure 8 offers a visual representation of the labor market dynamics under the different matching mechanisms. We report time on the horizontal axis and workers' productivity rank on the vertical axis: the lower the productivity, the higher the rank. As expected, in the scenario characterized by a matching mechanism based on perfect information the system is static and the employment (green) and unemployment (red) areas are compact and sharply delimited. As the graph emphasizes, unemployment is concentrated among less productive workers. In the realistic imperfections scenario, red and green pixels are uniformly mixed, signaling a very dynamic labor market.
Finally, in the weak imperfections scenario, the red gradually fades while the green intensifies moving from the top towards the bottom. In Figure 9, we report the distributions of wealth corresponding to the matching mechanisms described in Figure 8. We observe that in the case of a matching mechanism based on perfect information the distribution is not positively skewed as in the empirical distributions; rather, it presents a peak on the lower side and another, smaller peak on the right side. This particular shape of the distribution results from the hiring mechanism at work in this case, according to which lower-productivity workers are less likely to be hired than higher-productivity ones. Since productivity enters the determination of the wage level (see Equation 9) and thus affects each household's available income, these dynamics also affect the distribution of wealth. Looking at the Gini index in the cases of $$s_1$$ = 10 and $$s_1$$ = 50 in the presence of matching based on perfect information, we observe that it is on average equal to $$G_{s_1=10}=0.74$$ and $$G_{s_1=50}=0.77$$, which are considerably higher than the values observed in the baseline scenario, i.e., $$G_{s_1=10}=0.48$$ and $$G_{s_1=50}=0.29$$ (see paragraph 3.17). This can be taken as indirect evidence that real-world labor markets are characterized by imperfections. Visual inspection shows that the wealth distribution obtained from simulations gradually approaches the shape of the empirical one when imperfections grow in magnitude. The realistic imperfections case with a higher willingness to borrow is the most suitable setting for replicating the empirically observed wealth distribution.

### Concluding remarks

The model presented in the paper has focused on behavioral features and financial choices that characterize the household sector and that can affect the shape of the business cycle. The model is composed of a production sector, a banking sector and a household sector populated by heterogeneous consumers who differ in many respects: employment state, beliefs, wealth, productivity and credit constraints. It is important to note that, by considering this heterogeneity, we are able to analyze consumers' behavior and their responses to changes in their wealth under changing macroeconomic conditions. Investigating these issues is relevant especially during the recession phase of the business cycle, when policy makers face the challenge of designing stabilization policies. The paper emphasized certain important issues related to the distribution of wealth and the implications of inequality, concerns about which have been brought back by the Great Recession. Many authors have indeed observed that the emergence and unfolding of the financial crisis can also be explained by rising socio-economic inequality (see Iacoviello 2008; Fitoussi & Saraceno 2010; Galbraith 2012; VanTreeck 2013; Cynamon & Fazzari 2013, among others), with particular attention to the implications of rising income inequality. In this paper, we focused instead on the implications of wealth inequality and reported extensive sensitivity analyses over the parameter that regulates the willingness to borrow, $$s_1$$. By comparing three frameworks that differ in the $$s_1$$ parameter, we found that, with a lively labor market, the one characterized by a lower willingness to borrow is more volatile than the other two, which are characterized by a higher willingness to borrow.
Moreover, it features greater inequality in the distribution of wealth, which is in turn mirrored by the distribution of consumption. In order to gain a deeper understanding of the distributional issues at work in the model, we also inspected the dynamics of the Gini index over the whole simulation period. We found that the distribution of wealth is more concentrated in the framework characterized by a high willingness to borrow, for which the observed average Gini index is $$G_{s_1=6}=0.61$$, and less concentrated but more volatile in the case of a lower willingness to borrow: $$G_{s_1=50}=0.29$$. This finding has some important implications for the stability of the banking sector and of the macro economy as a whole; indeed, it stresses that, by pushing the LDR too high and causing liquidity problems for the banking sector, the concentration of wealth can directly affect the stability of the system. Indeed, Figure 4 showed that the amplitude of the fluctuations is quite similar in the two scenarios, but the liquidity crises are longer in the presence of a higher willingness to take on debt. We also considered some possible (although very stylized) fiscal policy scenarios. In Section 3.4, Figure 2 suggested that a low level of the unemployment dole increases unemployment and, at the same time, reduces employment fluctuations, also implying a more stable banking sector. This result can offer some insights to a policy maker who has to decide on the level of the unemployment dole over different phases of the business cycle. After presenting and discussing sensitivity analyses over a set of parameters, the paper reported the results of an ex-post validation exercise comparing simulated and empirical wealth distributions. Indeed, the paper stresses the importance of matching stylized facts at the household level for thinking about the reaction of economies to recessions. In this respect, the availability of microeconomic data has been crucial for the macroeconomic insights it can provide. In this version of the model, European microeconomic data on the distribution of wealth retrieved from the HFCS dataset have been considered. A visual inspection and comparison between the empirical distributions and those obtained from simulations reveals similarities in shape in the case of matching based on realistic imperfections. However, these results deserve a more quantitative investigation, which we leave for future work. The effect of different levels of labor market imperfections on the wealth distribution is also analyzed. In our model, the shape of the wealth distribution differs from the empirical one in the perfect information case, i.e. when the production sector can easily identify and hire the most productive workers. The higher uncertainty about employment states implied by labor market matching imperfections produces instead more hump-shaped distributions. Using empirical data from the dataset discussed in Section 3.20, we observed that in these cases the model can generate the shape of the French, Finnish, German and Italian wealth distributions. In conclusion, the model aims to provide a useful benchmark for grasping the main implications of the interaction between consumers' wants (desired consumption), consumers' beliefs (their expectations about their future income and employment state), the behavior of the banking sector (rationing) and the decisions of the production sector (forecasting future demand).
The structure of the model has been kept as simple as possible to clarify the mechanisms at work in the build-up of consumer credit in the presence of precautionary saving motives. As a consequence, some important issues, such as bubbles linked to asset prices and the implications of monetary policy, have been assumed away. Moreover, the model omits any role for a policy aimed at bringing the economy back to a healthy unemployment rate and does not consider, at this stage of the investigation, the possible macroeconomic implications of different banking regulations. We leave these investigations for future versions of the model, aimed at incorporating a more sophisticated production sector and long-run factors.

### Acknowledgements

A previous version of this paper was presented at the 11th edition of the Artificial Economics Conference, held in Porto on 3-4 September 2015. We would like to thank Tim Verwaart and Pedro Campos for editing this special section for JASSS.

### Notes

1. For a similar analysis of household debt using data from the Eurosystem Household Finance and Consumption Survey (HFCS) see Christelis et al. (2015). They consider two types of debt, namely collateralised debt (which includes mortgages, home equity loans, and debts for other real estate) and non-collateralised debt (i.e. credit card debt, instalment loans, overdrafts and other loans). The main finding of the paper is an extensive cross-country heterogeneity in holdings of collateralised debt: whereas less than 20% of Austrian and Italian households hold collateralised debt, this number stands around 40% in Cyprus, the Netherlands and Luxembourg. Furthermore, in a comparison with US households, they find US households to be consistently more indebted than European households. The prevalence of non-collateralised debt in the US is substantially larger than in all the other countries: more than 60% of US households hold it, in contrast to around 20%-50% of European households.
2. The marginal propensity to consume is higher for households with lower levels of wealth (Carroll et al. 2014a).
3. What is true for the whole must be true for all or some of its parts.
4. The attributes of some parts of a thing are attributed to the thing as a whole.
5. As discussed in LeBlanc et al. (2015), a substantial percentage of European households report precautionary saving as an important reason for saving. Their investigation of a panel of European households collected in the HFCS dataset, which elicits information on the role of several saving motives, shows that this percentage ranges between 89% in the Netherlands and 42% in Germany.
6. As discussed in Carroll et al. (2014a), in traditional precautionary saving models, because the employed consumer is always at risk of a transition into the unemployed state where income will be zero, the natural borrowing constraint that characterizes these models prevents the consumer from ever choosing to go into debt. An indebted unemployed consumer with zero income might indeed be forced to consume zero or a negative amount (incurring negative infinity utility) in order to satisfy the budget constraint.
7. We take advantage of the facilities provided by The Apache Commons Mathematics Library 3.6 to code the Extrapolator class of our model. For a more detailed description of the forecasting process, see the extraplator_compute.pdf file reporting the UML sequence graph of the Extrapolator class computation method.
8. In this paper we assume $$\psi_h$$ is Pareto distributed ($$\mathcal{P}(p_1,p_2)$$). We recall that the average of this distribution is given by $$(p_1p_2)/(p_1-1)$$. Parameters will be set in such a way that this average is constant.
9. The model has been developed in Java taking advantage of the Repast functionalities. Full instructions for installing and running it are available at https://www.openabm.org/model/4990/.
10. See Fagiolo et al. (2007) for a comprehensive discussion of empirical validation in DSGE and ABM models and Windrum et al. (2007) for a methodological appraisal of problems arising in validation.
11. See the Appendix, in which we provide the employment_financial_phase_diagram.mp4 video clip to visualise these dynamics.
12. Data and detailed information are available at https://www.ecb.europa.eu/pub/economic-research/research-networks/html/researcher_hfcn.en.html.
13. However, as emphasized by Clauset et al. (2009), “the empirical detection and characterization of power laws is made difficult by the large fluctuations that occur in the tail of the distribution”.
14. Usually, models of labor markets assume an exogenous (aggregate) matching function (Petrongolo & Pissarides 2001), often in the form of a Cobb-Douglas, given that it can be easily log-linearized and thus estimated. Recently, the agent-based approach has been widely used in labor economics (Ballot & Taymaz 1997; Ballot 2002; Neugart & Richiardi 2012), especially in performing policy analyses (Dawid & Neugart 2011), since it allows one to study the matching mechanism and the endogenous matching function (Neugart 2004; Phelps et al. 2002).

### Appendix

Movie 1. The employment_financial_phase_diagram video
Movie 2. The wealth_distribution_dynamics video

### References

ASSENZA, T., Delli Gatti, D. & Grazzini, J. (2015). Emergent dynamics of a macroeconomic agent based model with capital and credit. Journal of Economic Dynamics and Control, 50(January), 5–28. [doi:10.1016/j.jedc.2014.07.001]
BALLOT, G. (2002). Modeling the labor market as an evolving institution: model ARTEMIS. Journal of Economic Behavior & Organization, 49(1), 51–77.
BALLOT, G. & Taymaz, E. (1997). The dynamics of firms in a micro-to-macro model: The role of training, learning and innovation. Journal of Evolutionary Economics, 7(4), 435–457. [doi:10.1007/s001910050052]
BARBA, A. & Pivetti, M. (2009). Rising household debt: Its causes and macroeconomic implications—a long-period analysis. Cambridge Journal of Economics, 33(1), 113–137.
BONFIN, D. & Moshe, K. (2014). Liquidity risk in banking: Is there herding? Discussion Paper 2012-024, European Banking Center.
BROCK, W.A. & Hommes, C.H. (1997). A rational route to randomness. Econometrica, 65(5), 1059–1096.
BUYUKKARABACAK, B. & Valev, N. (2010). The role of household and business credit in banking crises. Journal of Banking & Finance, 34(6), 1247–1256. [doi:10.1016/j.jbankfin.2009.11.022]
CARDACI, A. & Saraceno, F. (2015). Inequality, financialisation and economic crisis: an agent-based model. Documents de Travail de l'OFCE 2015-27, Observatoire Francais des Conjonctures Economiques (OFCE): http://EconPapers.repec.org/RePEc:fce:doctra:1527.
CARROLL, C.D. (1992). The buffer-stock theory of saving: some macroeconomic evidence. Tech. Rep. 2, 61-156, Brookings Papers on Economic Activity. [doi:10.2307/2534582]
CARROLL, C.D. (2012). Representing consumption and saving without a representative consumer. In Measuring Economic Sustainability and Progress, NBER Chapters. National Bureau of Economic Research, Inc.
CARROLL, C., Sommer, M. & Slacalek, J. (2012). Dissecting saving dynamics: Measuring wealth, precautionary, and credit effects. IMF Working Papers 12/219, International Monetary Fund. [doi:10.5089/9781475505696.001]
CARROLL, C.D., Slacalek, J. & Tokuoka, K. (2014a). The distribution of wealth and the marginal propensity to consume. Tech. rep., ECB Working Paper No. 1655, Available at SSRN: http://ssrn.com/abstract=2404862.
CARROLL, C.D., Slacalek, J. & Tokuoka, K. (2014b). The distribution of wealth and the MPC: implications of new European data. Tech. rep., ECB Working Paper.
CHAKRABORTI, A. (2007). Econophysics: A brief introduction to modeling wealth distribution. Science and Culture, 73(3/4), 55.
CHALLE, E. & Ragot, X. (2016). Precautionary saving over the business cycle. The Economic Journal, 126(590), 135–164. [doi:10.1111/ecoj.12189]
CHRISTELIS, D., Ehrmann, M. & Georgarakos, D. (2015). Exploring differences in household debt across Euro Area countries and the US. Tech. Rep. 2015-16, Bank of Canada Working Papers.
CLAUSET, A., Shalizi, C.R. & Newman, M.E.J. (2009). Power-law distributions in empirical data. SIAM Review, 51(4), 661–703. [doi:10.1137/070710111]
CLEVELAND, W., Grosse, E. & Shyu, W. (1992). ‘Local regression models.’ Chapter 8. In J.M. Chambers & T. Hastie (Eds.), Statistical Models. Wadsworth & Brooks/Cole.
CYNAMON, B.Z. & Fazzari, S.M. (2013). Inequality and household finance during the consumer age. Economics Working Paper Archive WP 752, The Levy Economics Institute. [doi:10.2139/ssrn.2205524]
DAWID, H. & Neugart, M. (2011). Agent-based models for economic policy design. Eastern Economic Journal, 37(1), 44–50.
EGGERTSSON, G.B. & Krugman, P. (2012). Debt, deleveraging, and the liquidity trap: A Fisher-Minsky-Koo approach. The Quarterly Journal of Economics, 127(3), 1469–1513. [doi:10.1093/qje/qjs023]
ERLINGSSON, E., Raberto, M., Stefansson, H. & Sturluson, J. (2013). ‘Integrating the housing market into an agent-based economic model.’ In A. Teglio, S. Alfarano, E. Camacho-Cuena & M. Ginés-Vilar (Eds.), Managing Market Complexity, vol. 662 of Lecture Notes in Economics and Mathematical Systems, (pp. 65–76). Springer Berlin Heidelberg.
FAGIOLO, G., Birchenhall, C. & Windrum, P. (2007). Empirical validation in agent-based models: Introduction to the special issue. Computational Economics, 30(3), 189–194. [doi:10.1007/s10614-007-9109-z]
FITOUSSI, J. & Saraceno, F. (2010). Inequality and macroeconomic performance. Documents de Travail de l'OFCE 2010-13, OFCE.
FREEMAN, R.B. (1998). War of the models: Which labour market institutions for the 21st century? Labour Economics, 5(1), 1–24: http://ideas.repec.org/a/eee/labeco/v5y1998i1p1-24.html. [doi:10.1016/S0927-5371(98)00002-5]
GALBRAITH, J.K. (2012). Inequality and Instability: A Study of the World Economy Just Before the Great Crisis. Oxford University Press.
GOURINCHAS, P. & Parker, J. (2001). The empirical importance of precautionary savings. Working paper series, NBER. [doi:10.1257/aer.91.2.406]
HAYASHI, F. & Prescott, E.C. (2002). The 1990s in Japan: A lost decade. Review of Economic Dynamics, 5(1), 206–235.
HOMMES, C. (2006). ‘Heterogeneous agent models in economics and finance.’ In L. Tesfatsion & K.L. Judd (Eds.), Handbook of Computational Economics, vol. 2 of Handbook of Computational Economics, chap. 23, (pp. 1109–1186). Elsevier.
HOMMES, C. (2007). Bounded rationality and learning in complex markets. CeNDEF Working Papers 07-01, Universiteit van Amsterdam, Center for Nonlinear Dynamics in Economics and Finance.
HOMMES, C. (2011). The heterogeneous expectations hypothesis: Some evidence from the lab. Journal of Economic Dynamics and Control, 35(1), 1–24. [doi:10.1016/j.jedc.2010.10.003]
IACOVIELLO, M. (2008). Household debt and income inequality, 1963-2003. Journal of Money, Credit and Banking, 40(5), 929–965.
ITO, T. & Mishkin, F.S. (2006). ‘Two decades of Japanese monetary policy and the deflation problem.’ In Monetary Policy with Very Low Inflation in the Pacific Rim, NBER-EASE, Volume 15, (pp. 131–202). University of Chicago Press. [doi:10.7208/chicago/9780226379012.003.0005]
JORDÀ, O., Schularick, M. & Taylor, A.M. (2013). When credit bites back. Journal of Money, Credit and Banking, 45(2), 3–28.
KIRMAN, A. (2014). Is it rational to have rational expectations? Mind & Society, 13(1), 29–48. [doi:10.1007/s11299-014-0136-x]
KIRMAN, A.P. (1992). Whom or what does the representative individual represent? Journal of Economic Perspectives, 6, 117–136.
KLÜGL, F. (2008). A validation methodology for agent-based simulations. In Proceedings of the 2008 ACM Symposium on Applied Computing, SAC '08, (pp. 39–43). New York, NY, USA: ACM. [doi:10.1145/1363686.1363696]
KONIG, N. & Grossl, I. (2014). Catching up with the Joneses and borrowing constraints: An agent-based analysis of household debt. Working paper, University of Hamburg, Department of Socioeconomics.
KOO, R.C. (2013). Balance sheet recession as the ‘other half’ of macroeconomics. European Journal of Economics and Economic Policies, 10(2), 136–157: http://www.elgaronline.com/journals/ejeep/10-2/ejeep.2013.02.01.xml. [doi:10.4337/ejeep.2013.02.01]
LEBLANC, J., Porpiglia, A., Teppa, F., Zhu, J. & Ziegelmeyer, M. (2015). Household saving behaviour and credit constraints in the Euro area. Tech. rep., ECB Working Paper 1790.
MANDEL, A., Jaeger, C., Fuerst, S., Lass, W., Lincke, D., Meissner, F., Pablo-Marti, F. & Wolf, S. (2010). Agent-based dynamics in disaggregated growth models. Tech. Rep. 2010.77, CES working papers.
MINSKY, H.P. (1986). Stabilizing an Unstable Economy. New Haven and London: Yale University Press.
MUTH, J.F. (1961). Rational expectations and the theory of price movements. Econometrica, 29(3), 315–335. [doi:10.2307/1909635]
NEUGART, M. (2004). Endogenous matching functions: an agent-based computational approach. Advances in Complex Systems, 07(02), 187–201.
NEUGART, M. & Richiardi, M. (2012). Agent based models of the labor market. Working Paper Series 125, Laboratorio R. Revelli - Centre for Employment Studies.
PERUGINI, C., Hölscher, J. & Collie, S. (2016). Inequality, credit and financial crises. Cambridge Journal of Economics, 40(1), 227–257.
PETRONGOLO, B. & Pissarides, C.A. (2001). Looking into the black box: A survey of the matching function. Journal of Economic Literature, 39(2), 390–431. [doi:10.1257/jel.39.2.390]
PHELPS, S., Parsons, S., Mcburney, P. & Sklar, E. (2002). ‘Co-evolution of auction mechanisms and trading strategies: Towards a novel approach to microeconomic design.’ In GECCO-02 Workshop on Evolutionary Computation in Multi-Agent Systems, (pp. 65–72).
RUSSO, A., Riccetti, L. & Gallegati, M. (2016). Increasing inequality, consumer credit and financial fragility in an agent based macroeconomic model. Journal of Evolutionary Economics, 26(1), 25–47. [doi:10.1007/s00191-015-0410-z]
Prudential liquidity regulation in developing countries: A case study of Rwanda. Working Paper WP/12/20, IMF. SEPPECHER, P. & Salle, I. (2015). Deleveraging crises and deep recessions: A behavioural approach. Applied Economics, 47(34-35), 3771–3790. [doi:10.1080/00036846.2015.1021456] SIMON, H.A. (1955). A Behavioral Model of Rational Choice. The Quarterly Journal of Economics, 69(1), 99–118. SKINNER, J. (1988). Risky income, life cycle consumption, and precautionary savings. Journal of Monetary Economics, 22(2), 237–255. [doi:10.1016/0304-3932(88)90021-9] VANTREECK, H. (2013). Did inequality caused the US financial crisis? Journal of Economic Surveys, 28(3), 421–448. WINDRUM, P., Fagiolo, G. & Moneta, A. (2007). Empirical validation of agent-based models: Alternatives and prospects. Journal of Artificial Societies and Social Simulation, 10(2), 8: http://jasss.soc.surrey.ac.uk/10/2/8.html. YAKOVENKO, V.M. & Rosser, J.B. (2009). Colloquium: Statistical mechanics of money, wealth, and income. Rev. Mod. Phys., 81, 1703–1725. ZINMAN, J. (2014). Household Debt: Facts, Puzzles, Theories, and Policies. Nber working papers, National Bureau of Economic Research, Inc. [doi:10.3386/w20496]
# Dragon Notes

## News

### A.I. Detects Alzheimer's in Brain Scans Six Years Before a Diagnosis

- Researchers programmed a machine-learning algorithm to diagnose early-stage Alzheimer's disease
- When tested, the algorithm correctly identified 92 percent of patients who developed Alzheimer's disease in the first test set, and 98 percent in the second
- The AI made predictions, on average, 78.8 months before the patient received their final diagnosis
- Early diagnosis is critical, as it allows symptoms to be treated before the damage becomes irreversible
- For full clinical relevance, the researchers say the algorithm needs to be trained on a larger, more diverse patient dataset

D. Smith; UCSF. Jan 2, 2019.
Tuesday, November 25, 2008

New Phrases

I've just come up with a new phrase synonymous with sympathetic magic: Malleable Induced Macroscopic Synchrony. Any of you neo-pagans out there feel free to use it if you want a technical-sounding term for your gibberish.

Thursday, October 23, 2008

Pithy Question

Would you rather have medicine which works whether you think it's working or not, or medicine you think is working, whether or not it actually is? Hint: The two options presented are not the same thing, as some people would like to believe.

Wednesday, September 17, 2008

So Wrong It's Funny

This post is here to serve two purposes. First, I'd like to start recording some of the more comical mistakes I encounter while marking assignments, tests and whatnot. Second, I found something which allows for LaTeX equations to be embedded in a blogger post and I'm testing it out. Obviously, whenever I post something from an assignment I mark I will not include any names or other identifying information.

The question:

$\mbox{Let }x\mbox{ be a real number. For what values of }x\mbox{ is the identity}\\ \log(x^2)=2\log(x)\\ \mbox{ valid? Explain and justify.}$

Two of the answers I received:

$\log(x^2)=2\log(x)\\ \log(x^2)-2\log(x)=0\\ \log(x)(\log(x)-2)=0\\ \mbox{So }\log(x)=0\mbox{ or }\log(x)=2\\ x=1\mbox{ or }x=100$

$\log(x^2)=2\log(x)\\ \log(x)(x)=2\log(x)\\ \log(x)=2\log\\ x=2$

Now, for a little context. The course for which I am marking is intended for first-year university students who want to major in math. The course is designed to teach basic proof techniques, some very elementary number theory, and the basic idea of sets. The professor currently teaching the course is also placing emphasis on communication skills and the ability to express ideas in words as well as equations (something I find astonishingly lacking among most science and engineering undergraduates) and proper, unambiguous use of notation. Clearly she has her work cut out for her, but the reason I bring this up is that these mistakes are only funny (and disheartening) if made by people who should know better; if these same mistakes were made by high school students I would be less likely to find them funny or surprising. I would also like to add to those of you who feel like you might make similar mistakes: If you are not actually planning to focus on a mathematical area, you have very little reason to know how to properly use logarithms in these kinds of situations. You should not feel bad for not seeing the mistakes above unless you actually should understand the math in question. On a lighter note, the LaTeX appears to be working quite well.

Tuesday, July 8, 2008

Video Memory

I just had an idea for one of the worst video games ever. First, some philosophical setup. As a species, we remember things by passing them down the generations first orally, then in pictures, then in writing. Each of us reads the writing of previous generations about events in our past and creates anew our own interpretation of the experiences. In modern times, we have already developed video into another medium in this same vein, allowing a memory to be retained more accurately as more details are directly imparted. We are in the process of developing computer controls in such a way as to create what I believe will be the next stage in inter-generational memory: the interactive virtual experience.
Essentially, it won't be very long before people will be capable of recording their sensory input to digital form in real-time and others will then be able to experience what they did through playing that back directly into their nervous system. One step beyond that, however, is to allow the person experiencing the memory to take an active role in the experience and have the computer relaying the experience judge likely outcomes for actions the person takes. This idea amounts, basically, to full-realism in a completely free-form video game constructed directly from someone's real experience. I'm guessing 50 to 100 years at most before this technology is feasibly in place.

Scaling that idea back to modern technology, it roughly translates into constructing a video game with a real physics engine, detailed psychological AIs for characters other than the player, and with characters and setting drawn from real experience and constructed as true-to-life as possible. Now, if you're still reading this, you're probably thinking "Why would I want to play a video game where I actually have to sit through an hour of driving just to get somewhere an hour away? Why would I want to play something so close to real life when I could just live?" My answer is that some things should be remembered, and it might be worth recreating them as thoroughly as possible in order to do so.

So here's the idea: Auschwitz. Take a high-powered realistic physics engine, construct a full-scale 3D rendering of the death camp as it existed during the war, populate it with guards and prisoners with AIs based on psychological profiles of people who historically were there. There is no goal, no quest, no "good ending". The only purpose of this game is to deliver an experience of what it may have been like to be there. No restrictions on the player - if you want to escape, try and probably get killed in the process. If you want to take on the guards, try and get killed anyway. If you do as you are told, experience the horror of watching almost everyone around you get gassed, shot or worked to death. As I said, one of the worst video games ever. In fact, I doubt it should even be called a game - I envision it more as an interactive cultural memory. Why make a game like this? Because some things should be remembered. Okay, now you can go ahead and tell me how appalled you are at the idea.

Tuesday, February 19, 2008

Reflections on the Brutal Murder of Small Rodents

For quite some time now, there have been a good number of mice living in this house. As with almost everything, I am extremely tolerant of the mild annoyances mice present. I am slightly surprised whenever I see one scurry across the floor and I generally attempt to guide it away from myself and the wires attached to my various electronic devices. I was mildly annoyed upon discovering that they had eaten most of my packets of chicken soup, but I can hardly blame the mice for obeying their instincts and soup is not exactly expensive or difficult to replace. The only thing these mice have done to particularly annoy me is to chew loudly on something in my room while I try to get to sleep. In these circumstances I usually attempt to scare them out of the room, but amazingly this rarely seems to work.
I have grown accustomed to whispering to them when I hear them, often referring affectionately to an individual mouse as "you stupid little shitling" and musing on how, if the mouse were to encounter my foot, while I might sustain a mild injury easily remedied by a bandage and a rabies vaccine, they would have every bone in their tiny body broken and be splattered into a bloody pulp. Of course, this is not something I would ever deliberately do. Chances are, I would not have actually taken the initiative to get any traps until they chewed up something I consider valuable, and perhaps not even then.

Of course, I am not the only one who lives in this house, and one of my housemates indulges in screaming fits whenever she sees one of these mice. She decided, after a failed attempt at poisoning, to get some mouse traps from the landlord and she set them up today. Of five traps the landlord gave us, she set up three and then returned to her parents' house for reading week. Within hours, two of the traps had killed mice, and I haven't been able to find the third. I then set up the remaining two, one of which has also already killed a mouse. Chances are, I will have to ask the landlord for more traps tomorrow.

These are the standard mouse-traps you see everywhere and, ironically, they seem to me to be more humane than the "humane" traps my parents used when I was younger. The first kind of "humane" mousetrap was a sort of cage designed to trap the mouse inside when it tried to eat the cheese so that it could then be released outside alive and healthy. From what I remember, these traps didn't really work at all. Either the trigger wasn't sensitive enough or the mice never entered the trap to begin with. The second kind of "humane" mousetrap I only saw my parents use once because of its effect. It was a small tray of strongly adhesive material designed to stick to the mouse's feet when it went after the peanut in the middle of the tray. This adhesive would not kill the mouse, and it was designed to lose its adhesive properties when soaked in luke-warm water. The idea was that you would take the trapped mouse outside, pour some luke-warm water over it and the mouse would scurry away. Unfortunately, that's not what happened. It being winter, my parents took the trapped mouse outside and poured the warm water into the tray. The mouse struggled, but couldn't get free. They continued to bring more warm water attempting to keep the water's temperature from dropping and to allow the tray to de-adhere, but the mouse continued to remain fixed to the tray. Eventually the mouse froze to death, still struggling to get free.

These standard mousetraps are designed to break the mouse's neck when they go after the cheese. Earlier today I saw one activate - it was very fast. The mouse twitched for a few seconds (which was painful to watch), but it died fairly quickly. All in all, I'd say a quick broken neck is more humane than hours of torture followed by freezing to death.

So yes, friends, I am now a mammal-murderer. However, seeing as I don't actually have a moral difficulty with killing small rodents, why have I been going on for so long about this experience? Essentially, I am going over this because I thought I would not participate in this particular endeavour, but I wound up taking part despite my finding it aesthetically displeasing. I suppose I don't know myself quite as well as I thought.

Monday, February 11, 2008

Skepticism and the Obvious

Once we thought it obvious that the universe was infinite.
After all, how else could it be? If the universe is finite, then that means we'll eventually hit a wall traveling in some direction, but whatever is past that wall must still be part of the universe, right? Relativity turned that idea on its head. Now we are fairly certain that the universe is, in fact, finite in diameter but without any boundaries. Once we thought that time could not have a beginning or an end, after all what would that even mean? From relativity again we learn that in all likelihood time is finite in a similar manner to space. Once we thought it was trivial that a thing is either here or not, but quantum mechanics tossed that idea in the trash. When not being observed, a thing can be in a superposition of "here" and "not here" - not merely that we don't know, but that neither intuitive possibility captures the fact of the matter. What is the lesson from all this? The more obvious a proposition is, the more thorough you must be in proving its validity - never accept "how else could it be?" as an argument. If presented with a list of possibilities, always question the exhaustiveness of that list. Sunday, January 27, 2008 A Truly Horrible Super-Power A week or so ago, I was thinking as I tried to sleep. For some reason, superhero role-playing games came into my head and I started to think of an interesting character to play if I ever get the chance again. The fairly cliché character who is afraid of their own power came to mind, and as usual I decided to put my own twist on it by taking it to real extremes. I wanted to play a character who was not new to their power, no longer surprised by it, but who has been so traumatized by their power's effects that they really hate to use it. Of course, for this to make sense, the power would have to be truly horrific to watch or experience. More difficult than simply an excruciating power, it had to be one which might actually be useful to a party of characters and which could be considered balanced in-game, so this basically eliminated large-scale blanket powers like making everyone in a 5-mile radius suddenly become severely schizophrenic. The power I came up with was as follows: The power is used through application of will on a specific person who is relatively nearby and concentration and line-of-sight must be maintained until it is completed. Upon activation, the target begins having their blood systematically replaced by stomach acid, starting at their venous capillaries near their extremities and moving inward through their veins to their torso and finally their heart. I imagine that this would be quite disgusting and painful to watch. First they would scream in pain as every nerve ending in their body simultaneously began dissolving. The next thing to happen would be their skin becoming loose and sliding down their flesh (I think skin takes a little longer to dissolve than some internal tissues), as skin separates from muscles internally. Next their limbs would fall limp and their bones would begin to disarticulate as tendons are separated from bone and flesh begins to dissolve. About here we would probably begin seeing pieces of the skin dissolve and they would be unable to maintain a standing position. Next, I think, their screaming would stop as the stomach acid reaches the inside of their heart and they die, but their body continues to dissolve over the next few minutes until they are basically left as a pile of bones and pieces of partially dissolved flesh in a pool of blood and stomach acid. 
Like I said, this would probably be pretty traumatizing to watch and any moral human being who knew they were the cause of this would try to avoid it almost pathologically from then on. Of course, I haven't done any medical research to see how long any of this would take or even if I've got the basic order of things right, but since I'm not going to be playing in a superhero game anytime soon, I don't really see the need.
### The fault span of crash failures

Authors: George Varghese, Mahesh Jayaram. Journal of the ACM (JACM), Vol. 47, No. 2 (March 2000), pp. 244–293. Association for Computing Machinery (ACM), New York. ISSN 0004-5411, e-ISSN 1557-735X. Source: ACM Digital Library.

Abstract: A *crashing* network protocol is an asynchronous protocol whose memory does not survive crashes. We show that a crashing network protocol that works over unreliable links can be driven to arbitrary global states, where each node is in a state reached in some (possibly different) execution, and each link has an arbitrary mixture of packets sent in (possibly different) executions. Our theorem considerably generalizes an earlier result, due to Fekete et al., which states that there is no correct crashing Data Link Protocol. For example, we prove that there is no correct crashing protocol for token passing and for many other resource allocation protocols such as *k*-exclusion, and the drinking and dining philosophers problems. We further characterize the reachable states caused by crash failures using reliable non-FIFO and reliable FIFO links. We show that with reliable non-FIFO links any acyclic subset of nodes and links can be driven to arbitrary states. We show that with reliable FIFO links, only nodes can be driven to arbitrary states. Overall, we show a *strict* hierarchy in terms of the set of states reachable by crash failures in the three link models.
# Statement: The specific charge of an alpha particle is twice that of a proton. Explanation: Specific charge is given by e/m

Statement: The specific charge of an alpha particle is twice that of a proton.
Explanation: Specific charge is given by e/m.

A. Statement-1 is true, Statement-2 is true; Statement-2 is the correct explanation for Statement-1
B. Statement-1 is true, Statement-2 is true; Statement-2 is not a correct explanation for Statement-1
C. Statement-1 is true, Statement-2 is false
D. Statement-1 and Statement-2 are both false

Answer: The specific charge of an alpha particle is half that of a proton.
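For reference, here is the quick comparison behind that answer (a worked step added for illustration, taking the alpha particle as carrying charge $2e$ with a mass of roughly four proton masses):

$$\left(\frac{q}{m}\right)_{p} = \frac{e}{m_p}, \qquad \left(\frac{q}{m}\right)_{\alpha} = \frac{2e}{4m_p} = \frac{1}{2}\cdot\frac{e}{m_p}$$

So the specific charge of the alpha particle is half, not twice, that of the proton: Statement-1 is false, while the definition in Statement-2 is correct.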
# Acoustic

Acoustics is the study of pressure waves in a medium such as the atmosphere; these pressure waves are what we perceive as sound. Simply put, acoustics is the study of sound. Its scope extends to higher (ultrasound) and lower (infrasound) frequencies. Acoustics also reveals information about structural vibrations, and in some cases it overlaps with fluid dynamics.

## Acoustic Definition

Acoustics is the scientific study of mechanical waves, especially sound waves. A sound wave is a longitudinal wave produced by vibrations of the medium.

## Acoustic Sounds

Acoustic sound is simply sound in the acoustic domain. It can be wanted sound or noise; noise is unwanted sound, and the study of noise in the acoustic domain is known as environmental noise.

## Acoustic Wave

The one-dimensional acoustic wave equation for the pressure disturbance $p(x,t)$ is

$\frac{\partial^2 p}{\partial x^2} - \frac{1}{c^{2}}\frac{\partial^2 p}{\partial t^2} = 0$

where $c$ is the speed of sound.
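As a quick check of this equation (a minimal sketch added here for illustration, not part of the original page), a plane wave $p(x,t)=\sin(kx-\omega t)$ satisfies it whenever the wave speed is $c=\omega/k$:

```python
import sympy as sp

x, t = sp.symbols("x t")
k, w = sp.symbols("k omega", positive=True)
c = w / k                     # phase speed of the assumed plane wave

p = sp.sin(k * x - w * t)     # trial plane-wave pressure disturbance

# Left-hand side of the acoustic wave equation above.
lhs = sp.diff(p, x, 2) - sp.diff(p, t, 2) / c**2
print(sp.simplify(lhs))       # prints 0, so the plane wave is a solution
```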
# panda3d.core.NurbsSurfaceEvaluator¶ from panda3d.core import NurbsSurfaceEvaluator class NurbsSurfaceEvaluator Bases: ReferenceCount This class is an abstraction for evaluating NURBS surfaces. It accepts an array of vertices, each of which may be in a different coordinate space (as defined by a NodePath), as well as an optional knot vector. Inheritance diagram __init__() → None __init__(param0: NurbsSurfaceEvaluator) → None setUOrder(u_order: int) → None Sets the order of the surface in the U direction. This resets the knot vector to the default knot vector for the number of vertices. The order must be 1, 2, 3, or 4, and the value is one more than the degree of the surface. getUOrder() → int Returns the order of the surface in the U direction as set by a previous call to setUOrder(). setVOrder(v_order: int) → None Sets the order of the surface in the V direction. This resets the knot vector to the default knot vector for the number of vertices. The order must be 1, 2, 3, or 4, and the value is one more than the degree of the surface. getVOrder() → int Returns the order of the surface in the V direction as set by a previous call to setVOrder(). reset(num_u_vertices: int, num_v_vertices: int) → None Resets all the vertices and knots to their default values, and sets the surface up with the indicated number of vertices. You must then call setVertex() repeatedly to fill in all of the vertex values appropriately. getNumUVertices() → int Returns the number of control vertices in the U direction on the surface. This is the number passed to the last call to reset(). getNumVVertices() → int Returns the number of control vertices in the V direction on the surface. This is the number passed to the last call to reset(). setVertex(ui: int, vi: int, vertex: LVecBase3, weight: float) → None Sets the nth control vertex of the surface. This flavor sets the vertex as a 3-d coordinate and a weight; the 3-d coordinate values are implicitly scaled up by the weight factor. setVertex(ui: int, vi: int, vertex: LVecBase4) → None Sets the nth control vertex of the surface, as a vertex in 4-d homogeneous space. In this form, the first three components of the vertex should already have been scaled by the fourth component, which is the homogeneous weight. getVertex(ui: int, vi: int) → LVecBase4 Returns the nth control vertex of the surface, relative to its indicated coordinate space. Return type LVecBase4 getVertex(ui: int, vi: int, rel_to: NodePath) → LVecBase4 Returns the nth control vertex of the surface, relative to the given coordinate space. Return type LVecBase4 setVertexSpace(ui: int, vi: int, space: NodePath) → None Sets the coordinate space of the nth control vertex. If this is not specified, or is set to an empty NodePath, the nth control vertex is deemed to be in the coordinate space passed to evaluate(). This specifies the space as a fixed NodePath, which is always the same NodePath. Also see setting the space as a path string, which can specify a different NodePath for different instances of the surface. setVertexSpace(ui: int, vi: int, space: str) → None Sets the coordinate space of the nth control vertex. If this is not specified, or is set to an empty string, the nth control vertex is deemed to be in the coordinate space passed to evaluate(). This specifies the space as a string, which describes the path to find the node relative to the rel_to NodePath when the surface is evaluated. 
getVertexSpace(ui: int, vi: int, rel_to: NodePath) → NodePath Returns the coordinate space of the nth control vertex of the surface, expressed as a NodePath. Return type NodePath setExtendedVertex(ui: int, vi: int, d: int, value: float) → None Sets an n-dimensional vertex value. This allows definition of a NURBS surface in a sparse n-dimensional space, typically used for associating additional properties (like color or joint membership) with each vertex of a surface. The value d is an arbitrary integer value and specifies the dimension in question for this particular vertex. Any number of dimensions may be specified, and they need not be consecutive. If a value for a given dimension is not specified, it is implicitly 0.0. The value is implicitly scaled by the homogeneous weight value – that is, the fourth component of the value passed to setVertex(). This means the ordinary vertex must be set first, before the extended vertices can be set. getExtendedVertex(ui: int, vi: int, d: int) → float Returns an n-dimensional vertex value. See setExtendedVertex(). This returns the value set for the indicated dimension, or 0.0 if nothing has been set. setExtendedVertices(ui: int, vi: int, d: int, values: PN_stdfloat_const_[], num_values: int) → None Simultaneously sets several extended values in the slots d through (d + num_values - 1) from the num_values elements of the indicated array. This is equivalent to calling setExtendedVertex() num_values times. See setExtendedVertex(). getNumUKnots() → int Returns the number of knot values in the surface in the U direction. This is based on the number of vertices and the order. setUKnot(i: int, knot: float) → None Sets the value of the nth knot. Each knot value should be greater than or equal to the preceding value. If no knot values are set, a default knot vector is supplied. getUKnot(i: int) → float Returns the value of the nth knot. normalizeUKnots() → None Normalizes the knot sequence so that the parametric range of the surface in the U direction is 0 .. 1. getNumVKnots() → int Returns the number of knot values in the surface in the V direction. This is based on the number of vertices and the order. setVKnot(i: int, knot: float) → None Sets the value of the nth knot. Each knot value should be greater than or equal to the preceding value. If no knot values are set, a default knot vector is supplied. getVKnot(i: int) → float Returns the value of the nth knot. normalizeVKnots() → None Normalizes the knot sequence so that the parametric range of the surface in the V direction is 0 .. 1. getNumUSegments() → int Returns the number of piecewise continuous segments in the surface in the U direction. This is based on the knot vector. getNumVSegments() → int Returns the number of piecewise continuous segments in the surface in the V direction. This is based on the knot vector. evaluate(rel_to: NodePath) → NurbsSurfaceResult Returns a NurbsSurfaceResult object that represents the result of applying the knots to all of the current values of the vertices, transformed into the indicated coordinate space. Return type NurbsSurfaceResult output(out: ostream) → None getUKnots() → list getVKnots() → list property u_order Getter Returns the order of the surface in the U direction as set by a previous call to setUOrder(). Setter Sets the order of the surface in the U direction. This resets the knot vector to the default knot vector for the number of vertices. The order must be 1, 2, 3, or 4, and the value is one more than the degree of the surface. 
Return type int property v_order Getter Returns the order of the surface in the V direction as set by a previous call to setVOrder(). Setter Sets the order of the surface in the V direction. This resets the knot vector to the default knot vector for the number of vertices. The order must be 1, 2, 3, or 4, and the value is one more than the degree of the surface. Return type int property u_knots Getter Returns the value of the nth knot. Setter Sets the value of the nth knot. Each knot value should be greater than or equal to the preceding value. If no knot values are set, a default knot vector is supplied. Return type Sequence[float] property v_knots Getter Returns the value of the nth knot. Setter Sets the value of the nth knot. Each knot value should be greater than or equal to the preceding value. If no knot values are set, a default knot vector is supplied. Return type Sequence[float]
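To make the reference above concrete, here is a minimal usage sketch built only from the methods documented on this page; the orders, grid size and vertex heights are arbitrary illustrative values:

```python
from panda3d.core import NurbsSurfaceEvaluator, NodePath, LVecBase3

# A bi-cubic (order 4) surface over a 4x4 grid of control vertices.
surface = NurbsSurfaceEvaluator()
surface.setUOrder(4)
surface.setVOrder(4)
surface.reset(4, 4)                     # 4 control vertices in U and in V

for ui in range(4):
    for vi in range(4):
        # Weight 1.0 gives an ordinary (non-rational) control vertex.
        height = (ui - 1.5) * (vi - 1.5)
        surface.setVertex(ui, vi, LVecBase3(ui, vi, height), 1.0)

# Map the parametric range to 0..1 in both directions.
surface.normalizeUKnots()
surface.normalizeVKnots()

# Evaluate relative to an arbitrary coordinate space.
result = surface.evaluate(NodePath("root"))
print(surface.getNumUSegments(), surface.getNumVSegments())
```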
Showing items 1-20 of 167

• #### Angular analysis of B_d^0 → K∗μ+μ− decays in pp collisions at √s = 8 TeV with the ATLAS detector  (Peer reviewed; Journal article, 2018-10-08) An angular analysis of the decay B_d^0 → K^∗μ^+μ^− is presented, based on proton-proton collision data recorded by the ATLAS experiment at the LHC. The study is using 20.3 fb−1 of integrated luminosity collected during ...

• #### ATLAS search for new phenomena in dijet mass and angular distributions using pp collisions at $\sqrt{s}$=7 TeV  (Peer reviewed; Journal article, 2013-01) Mass and angular distributions of dijets produced in LHC proton-proton collisions at a centre-of-mass energy $\sqrt{s}$=7 TeV have been studied with the ATLAS detector using the full 2011 data set with an integrated ...

• #### Combination of inclusive and differential tt¯ charge asymmetry measurements using ATLAS and CMS data at √s=7 and 8 TeV  (Peer reviewed; Journal article, 2018-04-09) This paper presents combinations of inclusive and differential measurements of the charge asymmetry (AC) in top quark pair (tt¯) events with a lepton+jets signature by the ATLAS and CMS Collaborations, using data from LHC ...

• #### Comprehensive measurements of $t$-channel single top-quark production cross sections at $\sqrt{s} = 7$ TeV with the ATLAS detector  (Peer reviewed; Journal article, 2014-12-11) This article presents measurements of the $t$-channel single top-quark $t$ and top-antiquark $\bar{t}$ total production cross sections $\sigma(tq)$ and $\sigma(\bar{t}q)$, their ratio $R_{t}=\sigma(tq)/\si ...

• #### Constraints on the off-shell Higgs boson signal strength in the high-mass $ZZ$ and $WW$ final states with the ATLAS detector  (Peer reviewed; Journal article, 2015-07) Measurements of the $ZZ$ and $WW$ final states in the mass range above the $2m_Z$ and $2m_W$ thresholds provide a unique opportunity to measure the off-shell coupling strength of the Higgs boson. This paper presents ...

• #### Direct top-quark decay width measurement in the tt¯ lepton+jets channel at √s=8 TeV with the ATLAS experiment  (Peer reviewed; Journal article, 2018-02-15) This paper presents a direct measurement of the decay width of the top quark using tt¯ events in the lepton+jets final state. The data sample was collected by the ATLAS detector at the LHC in proton–proton collisions at a ...

• #### Electron and photon energy calibration with the ATLAS detector using LHC Run 1 data  (Peer reviewed; Journal article, 2014-10-01) This paper presents the electron and photon energy calibration achieved with the ATLAS detector using about 25 fb−1 of LHC proton–proton collision data taken at centre-of-mass energies of √s = 7 and 8 TeV. The reconstruction ...

• #### Evidence for electroweak production of W±W±jj in pp collisions at √s = 8 TeV with the ATLAS detector  (Peer reviewed; Journal article, 2014-10-03) This Letter presents the first study of W±W±jj, same-electric-charge diboson production in association with two jets, using 20.3 fb−1 of proton-proton collision data at √s = 8 TeV recorded by the ATLAS detector at the Large ...

• #### Evidence for the Higgs-boson Yukawa coupling to tau leptons with the ATLAS detector  (Peer reviewed; Journal article, 2015-04-21) Results of a search for $H \to \tau \tau$ decays are presented, based on the full set of proton–proton collision data recorded by the ATLAS experiment at the LHC during 2011 and 2012. The data correspond to integrated ...
• #### Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector  (Peer reviewed; Journal article, 2014-11-10) Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ...

• #### Flavour tagged time dependent angular analysis of the $B_s^{0} \rightarrow J/\psi \phi$ decay and extraction of $\Delta\Gamma_s$ and the weak phase $\phi_s$ in ATLAS  (Peer reviewed; Journal article, 2014-09) A measurement of the $B_s^{0}\rightarrow J/\psi \phi$ decay parameters, updated to include flavour tagging, is reported using $4.9 fb^{-1}$ of integrated luminosity collected by the ATLAS detector from $\sqrt{s}= ...

• #### Jet energy measurement and its systematic uncertainty in proton–proton collisions at √s = 7 TeV with the ATLAS detector  (Peer reviewed; Journal article, 2015-01-15) The jet energy scale (JES) and its systematic uncertainty are determined for jets measured with the ATLAS detector using proton–proton collision data with a centre-of-mass energy of √s = 7 TeV corresponding to an integrated ...

• #### Light-quark and gluon jet discrimination in pp collisions at √s = 7 TeV with the ATLAS detector  (Peer reviewed; Journal article, 2014-08-21) A likelihood-based discriminant for the identification of quark- and gluon-initiated jets is built and validated using 4.7 fb−1 of proton–proton collision data at √s = 7 TeV collected with the ATLAS detector at the LHC. ...

• #### Measurement of $W$ boson angular distributions in events with high transverse momentum jets at $\sqrt{s}=$ 8 TeV using the ATLAS detector  (Peer reviewed; Journal article, 2017-02) The $W$ boson angular distribution in events with high transverse momentum jets is measured using data collected by the ATLAS experiment from proton–proton collisions at a centre-of-mass energy $\sqrt{s}=$ 8 TeV at ...

• #### Measurement of charged-particle spectra in Pb+Pb collisions at √sNN = 2.76 TeV with the ATLAS detector at the LHC  (Peer reviewed; Journal article, 2015-09-09) Charged-particle spectra obtained in Pb+Pb interactions at √sNN = 2.76 TeV and pp interactions at √sNN = 2.76 TeV with the ATLAS detector at the LHC are presented, using data with integrated luminosities of 0.15 nb−1 and ...

• #### Measurement of differential cross sections and W+/W− cross-section ratios for W boson production in association with jets at √s = 8 TeV with the ATLAS detector  (Peer reviewed; Journal article, 2018-05-11) This paper presents a measurement of the W boson production cross section and the W+/W− cross-section ratio, both in association with jets, in proton-proton collisions at √s = 8 TeV with the ATLAS experiment at the Large ...

• #### Measurement of differential cross sections of isolated-photon plus heavy-flavour jet production in pp collisions at √s = 8 TeV using the ATLAS detector  (Peer reviewed; Journal article, 2018-01-10) This Letter presents the measurement of differential cross sections of isolated prompt photons produced in association with a b-jet or a c-jet. These final states provide sensitivity to the heavy-flavour content of the ...
• #### Measurement of differential cross-sections of a single top quark produced in association with a W boson at √s = 13 TeV with ATLAS  (Peer reviewed; Journal article, 2018-03-06) The differential cross-section for the production of a W boson in association with a top quark is measured for several particle-level observables. The measurements are performed using 36.1 fb−1 of pp collision data collected ...

• #### Measurement of dijet azimuthal decorrelations in pp collisions at √s = 8 TeV with the ATLAS detector and determination of the strong coupling  (Peer reviewed; Journal article, 2018-11-07) A measurement of the rapidity and transverse momentum dependence of dijet azimuthal decorrelations is presented, using the quantity RΔϕ. The quantity RΔϕ specifies the fraction of the inclusive dijet events in which the ...

• #### Measurement of distributions sensitive to the underlying event in inclusive Z-boson production in pp collisions at √s = 7 TeV with the ATLAS detector  (Peer reviewed; Journal article, 2014-12-10) A measurement of charged-particle distributions sensitive to the properties of the underlying event is presented for an inclusive sample of events containing a Z-boson, decaying to an electron or muon pair. The measurement ...
Now showing items 1-1 of 1 • #### Net-baryon fluctuations measured with ALICE at the CERN LHC  (Elsevier, 2017-11) First experimental results are presented on event-by-event net-proton fluctuation measurements in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, recorded by the ALICE detector at the CERN LHC. The ALICE detector is well ... Journal article
chart of saponification values for making soap

12: Making Soap - Saponification (Experiment) - Chemistry: Liquid cooking oils originate from corn, peanuts, olives, soybeans, and many other plants. For making soap, all different types of fats and oils can be used – anything from lard to exotic tropical plant oils. Saponification reaction: $\text{Fat} + \text{Lye} → \text{Soap} + \text{Glycerol}$

Saponification Values - Caveman Chemistry (measuring SAP values, making liquid soap): Synthetically weigh 100.00 g of alcoholic KOH into a one-quart glass jar; analytically weigh 20.XX g of melted oil into the jar; screw the lid onto the jar (the lid should have a small hole in it); shake the solution until thoroughly mixed; cure in an oven for one hour at 160°F. ... saponification numbers result in a milder soap. This is a great tool: before you make a recipe, you plug your ingredients into the open fields, hit a button and it will tell you how much lye is needed for the amount of fats you are using. It is recommended that you make your soap with an extra 5-8% fat to be sure that it will be mild.

Lye Calculator Disclaimer: The Handcrafted Soap & Cosmetic Guild is providing this lye calculator to soapmakers free of charge. Every effort has been made to ensure that the SAP values used herein are accurate. However, even in the best case, the SAP values provided by reputable ...

11 Wonderful Essential Oils for Soap Making (and Bonus Chart): Lemon essential oil is a good choice for soap making because of its ability to anchor, especially for soap processed using the saponification method. Best blend partners: lemon will blend with most other essential oils, such as eucalyptus, ginger, lavender, May Chang, as well as other citruses, such as orange and tangerine.

Experiment 13 – Preparation of Soap (Jan 13, 2012), Part 1 – Saponification: 1. Weigh a 150-mL beaker and record the mass. Add about 5 g of a fat or oil, reweigh, and record the mass. Calculate the mass of fat or oil used by subtraction. Record the type of fat or oil you are using. 2. Add 15 ...

Lye Calculation Using a Saponification Chart - Tutorial: A saponification chart is a list of oils and fats and their respective SAP (saponification) values. On technical data sheets saponification values are expressed as milligrams of KOH per gram of oil. They tell you how many milligrams of potassium hydroxide you ...

The Most Popular Fatty Acid Profiles in Soapmaking (Mar 19, 2015): The average percentage of myristic acid in the favorite soap recipes of soapmakers polled rounds in at 7%. Most recipes clocked in at 4% to 7% myristic acid, but there were a few outliers with slightly higher percentages of myristic acid.

Saponification Chart - From Nature With Love: Our saponification values have been gathered primarily from our ...

How Is Soap Made? Learn the Science and the Art of ...: In simple terms, saponification is the name for a chemical reaction between an acid and a base to form a salt. When you make soap using the cold process soap making method, you mix an oil or fat (which is your acid) with lye (which is your base) to form soap (which is a salt). How exactly does this happen?

(PDF) Production and Analysis of Soap using Locally ... (Jul 04, 2016): Mean moisture content was 0.98%, free fatty acids 62.6% (palmitic acid), peroxide value 4.1 meq/kg, iodine value 50.2, saponification value 186 and unsaponifiable matter 0.53. HSGC profiles of ...

Saponification Table: How much lye should you use in order to saponify a specific fat or oil? Use this simple saponification table of the most commonly used ingredients to find out. Multiply the number of grams per fat/oil ingredient by the SAP number below. Remember, most recipes call ...

The Science of Soap Making in a Lab: Making soap doesn't seem like something you'd do in a lab, but it's actually more scientific than you'd think. Saponification is the soap making process, which uses the basic solution lye and different types of fats. The science behind soap making is ...

Preparation of Soap Using Different Types of Oils and ...: The saponification values and iodine values of coconut oil and castor oil were found out, and these values were also found for the blend. It was found that the blend was having ... 2.4 Different types of soap making oils ... 2.5 Castor Oil ...

Soapmaking Oil Chart – Lovin Soap Studio: We've updated our soap making oil chart and have added two more charts to help you better formulate soap recipes! Soap Making Oil Chart: Base Oil, Butter or Fat ... and it adds luxurious conditioning and moisturizing values as well. 5-20%: I typically use 5-15% but occasionally will experiment with using up to 20%. Castor Oil: ...

Saponification. The soapy facts | Realize Beauty (Mar 26, 2010): Saponified soap can at worst damage the hair and at best leave it dull and tricky to comb. Soap for kids: this reaction is far too caustic to make at school, but it is worth demonstrating as part of an acid/base reaction class. However, if you do want to make soap with your kids it is totally doable as long as you remain vigilant.

Soap Oils SAP Saponification Values Factors | Soapmaking ... (February 4, 2011, Steven Cole): The SAP values below are set to 1% excess fat. To calculate how many ounces of lye are needed for the oil that you use, take the ounces and multiply by the factor. ... SAP Value Chart: A-H; Oil, Butter or Other Ingredient; NaOH (Sodium Hydroxide aka Lye) (for hard bar soap) ...

How to Better Understand SoapCalc's Soap Quality Numbers (Jul 11, 2016): SoapCalc's Soap Quality numbers for olive oil, which contains very little saturated fatty acid. Looking at olive oil's fatty acid profile, you see that olive oil contains 0% lauric, 0% myristic, 14% palmitic, and 3% stearic.

LIPIDS: SAPONIFICATION (THE PROPERTIES AND SYNTHESIS ...): The soap you will be making in lab is different than what is purchased commercially in stores. For one thing, commercial bars of soap are often a mixture of soaps and detergents; this soap is a completely vegetable (plant) based soap. Also, in the commercial saponification reaction, the glycerol (or glycerin) that is produced is removed (it ...

Guide to Saponification (SAP) Values in Soap Making – Saponification Chart (sodium hydroxide, NaOH): To use this simple chart, just multiply the weight of each oil (in either grams or ounces) by the value for that oil in the table below. This will give the amount of sodium hydroxide required to fully saponify that amount of that particular oil (in grams or ounces).
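To illustrate that multiplication, here is a small sketch of the calculation; the NaOH SAP factors below are rough, commonly quoted values chosen for illustration only, so check them against a published chart (or a lye calculator) before making a real recipe:

```python
# Approximate NaOH saponification factors (grams of NaOH per gram of oil).
# Illustrative values only - always verify against a trusted SAP chart.
SAP_NAOH = {
    "olive oil": 0.134,
    "coconut oil": 0.178,
    "castor oil": 0.128,
}

recipe_grams = {"olive oil": 500, "coconut oil": 300, "castor oil": 50}

lye_full = sum(grams * SAP_NAOH[oil] for oil, grams in recipe_grams.items())

# A 5-8% lye discount (superfat) is commonly recommended for a milder soap.
superfat = 0.05
print(f"NaOH for full saponification: {lye_full:.1f} g")
print(f"NaOH with {superfat:.0%} superfat: {lye_full * (1 - superfat):.1f} g")
```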
Training an audio keyword spotter with PyTorch by Chris Lovett This tutorial will show you how to train a keyword spotter using PyTorch. A keyword spotter listens to an audio stream from a microphone and recognizes certain spoken keywords. Since it is always listening there is good reason to find a keyword spotter that can run on a very small low-power co-processor so the main computer can sleep until a word is recognized. The ELL compiler makes that possible. In this tutorial you will train a keyword spotter using the speech commands dataset which contains 65,000 recordings of 30 different keywords (bed, bird, cat, dog, down, eight, five, four, go, happy, house, left, marvin, nine, no, off, on, one, right, seven, sheila, six, stop, three, tree, two, up, wow, yes, zero) each about one second long. This is the dataset used to train the models in the speech commands model gallery. The Getting started with keyword spotting on the Raspberry Pi tutorial uses these models. Once you learn how to train a model you can then train your own custom model that responds to different keywords, or even random sounds, or perhaps you want just a subset of the 30 keywords for your application. This tutorial shows you how to do that. Before you begin Complete the following steps before starting the tutorial. What you will need • Laptop or desktop computer with at least 16 GB of RAM. • Optional NVidia Graphics Card that supports CUDA. You will get great results with a GTX 1080 which is commonly used for training neural networks. Overview The picture below illustrates the process you will follow in this tutorial. First you will convert the wav files into a big training dataset using a featurizer. This dataset is the input to the training process which outputs a trained keyword spotter. The keyword spotter can then be verified by testing. Training a Neural Network is a computationally intensive task that takes millions or even billions of floating point operations. That is why you probably want to use CUDA accelerated training. Fortunately, audio models train pretty quickly. On an NVidia 1080 graphics card the 30 keyword speech_commands dataset trains in about 3 minutes using PyTorch. Without CUDA the same training takes over 2.5 hours on an Intel Core i7 CPU. ELL Root After you have installed and built the ELL compiler, you also need to set an environment variable named ELL_ROOT that points to the location of your ELL git repo, for example: [Linux] export ELL_ROOT="~/git/ell" [Windows] set ELL_ROOT=d:\git\ell Subsequent scripts depend on this path being set correctly. Now make a new working folder: mkdir tutorial cd tutorial Installing PyTorch Installing PyTorch with CUDA is easy to do using your Conda environment. If you don't have a Conda environment, see the ELL setup instructions (Windows, Ubuntu Linux, macOS). You may want to create a new Conda environment for PyTorch training, or you can add PyTorch to your existing one. You can create a new environment easily with this command: conda create -n torch python=3.6 Activate it [Linux] source activate torch [Windows] activate torch Installing pyaudio This tutorial uses pyaudio which can be installed using: [Linux] sudo apt-get install python-pyaudio python3-pyaudio portaudio19-dev && pip install pyaudio [Windows] pip install pyaudio [macOS] brew install portaudio && pip install pyaudio Helper Python Code This tutorial uses python scripts located in your ELL git repo under tools/utilities/pythonlibs/audio and tools/utilities/pythonlibs/audio/training. 
When you see a python script referenced below like make_training_list.py, just prefix that with the full path to that script in your ELL git repo. Next you need the speech commands audio data itself. Google crowd sourced the creation of these recordings so you get a nice variety of voices, and released the dataset under the Creative Commons BY 4.0 license. Go ahead and download the speech_commands_v0.01.tar.gz archive and move it into a folder named audio, then unpack it using this Linux command: tar xvf speech_commands_v0.01.tar.gz On Windows you can use the Windows Subsystem for Linux to do the same. Alternatively, you can install 7-zip. 7-zip will install a new menu item so you can right click the speech_commands_v0.01.tar.gz and select "Extract here". The total disk space required for the uncompressed files is about 2 GB. When complete, your audio folder should contain 30 folders plus one named background_noise. You should also see the following additional files: • validation_list.txt - the list of files that make up the validation set • testing_list.txt - the list of files in the testing set Lastly, you will need to create the training_list.txt file containing all the wav files (minus the validation and test sets) which you can do with this command: python make_training_list.py --wav_files audio --max_files_per_directory 1600 copy audio\categories.txt . Where audio is the path to your unpacked speech command wav files. This will also create a categories.txt file in the same folder. This file lists the names of the keywords (directories) found in the audio folder. Copy that file to your working tutorial folder. Note that the command line above includes the option --max_files_per_directory 1600. This option limits the training list to a maximum of 1600 files per subdirectory and will result in a training dataset of around 1.2 GB (using the make_dataset command line options shown below). Feel free to try other numbers here or remove the limit entirely to use every available file for training. You will notice there are not exactly the same number of training files in each subdirectory, but that is ok. Without any limits the full speech_commands training dataset file will be about 1.6 GB and the make_training_list.py script may use up to 6 GB of RAM to get the job done. As you can see you can try different sized training datasets. When you are ready to experiment, figure out which gives the best results: all the training files, or a subset. Create a Featurizer Model As shown in the earlier tutorial the featurizer model is a mel-frequency cepstrum (mfcc) audio transformer which preprocesses audio input, preparing it for use by the training process. This featurizer is created as an ELL model using the make_featurizer command: python make_featurizer.py --sample_rate 16000 --window_size 512 --input_buffer_size 512 --hamming_window --filterbank_type mel --filterbank_size 80 --filterbank_nfft 512 --nfft 512 --log --auto_scale The reason for the --sample_rate 16000 argument is that small low powered target devices might not be able to record and process audio at very high rates. So while your host PC can probably do 96kHz audio and higher just fine, this tutorial shows you how to down sample the audio to something that will run on a tiny target device. The main point being that you will get the best results if you train the model on audio that is sampled at the same rate that your target device will be recording. The --auto_scale option converts raw integer audio values to floating point numbers in the range [-1, 1]. 
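For intuition, that scaling is essentially a division by the int16 full-scale value. Here is a tiny numpy sketch of what --auto_scale amounts to conceptually (not the ELL implementation itself):

```python
import numpy as np

# A few 16-bit PCM samples, as they would come from the microphone.
raw = np.array([0, 16384, -16384, 32767, -32768], dtype=np.int16)

# Scale to floating point in [-1, 1].
scaled = raw.astype(np.float32) / 32768.0
print(scaled)   # approximately [ 0.0, 0.5, -0.5, 0.99997, -1.0 ]
```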
You should see a message saying “Saving featurizer.ell” and if you print this using the following command line: [Linux] $ELL_ROOT/build/bin/print -imap featurizer.ell [Windows] %ELL_ROOT%\build\bin\release\print -imap featurizer.ell -fmt dgml -of graph.dgml then you will see the following nodes: You can now compile this model using the ELL model compiler to run on your PC using the familiar wrap command: [Windows] python %ELL_ROOT%\tools\wrap\wrap.py --model_file featurizer.ell --outdir compiled_featurizer --module_name mfcc [Linux] python$ELL_ROOT/tools/wrap/wrap.py --model_file featurizer.ell --outdir compiled_featurizer --module_name mfcc Then you can build the compiled_featurizer folder using cmake as done in prior tutorials. You will do this a lot so it might be handy to create a little batch or shell script called makeit that contains the following: Windows: mkdir build cd build cmake -G "Visual Studio 16 2019" -A x64 .. cmake --build . --config Release cd .. Linux: #!/bin/bash mkdir build cd build cmake .. make cd .. So compiling a wrapped ELL model is now this simple: pushd compiled_featurizer makeit popd Create the Dataset using the Featurizer Now you have a compiled featurizer, so you can preprocess all the audio files using this featurizer and create a compressed numpy dataset with the result. This large dataset will contain one row per audio file, where each row contains all the featurizer output for that file. The featurizer output is smaller than the raw audio, but it will still end up being a pretty big file, (about 1.2 GB). Of course it depends how many files you include in the set. Remember for best training results the more files the better, so you will use the training_list.txt you created earlier which selected 1600 files per keyword. You need three datasets created from each of the list files in your audio folder using make_dataset as follows: python make_dataset.py --list_file audio/training_list.txt --featurizer compiled_featurizer/mfcc --window_size 40 --shift 40 python make_dataset.py --list_file audio/validation_list.txt --featurizer compiled_featurizer/mfcc --window_size 40 --shift 40 python make_dataset.py --list_file audio/testing_list.txt --featurizer compiled_featurizer/mfcc --window_size 40 --shift 40 Where the audio folder contains your unpacked .wav files. If your audio files are in a different location then simply provide the full path to it in the above commands. Creating the datasets will take a while, about 10 minutes or more, so now is a great time to grab a cup of tea. It will produce three files in your working folder named training.npz, validation.npz and testing.npz which you will use below. Note that make_training_list.py skipped the _background_noise folder. But make_dataset.py has options to use that background noise to randomly mix in with each training word. By default make_dataset.py does not do that. But you can experiment with this and see if it helps or not. Train the Keyword Spotter You can now finally train the keyword spotter using the train_classifier script: python train_classifier.py --architecture GRU --num_layers 2 --dataset . --use_gpu --outdir . This script will use PyTorch to train a GRU based model using the datasets you created earlier then it will export an onnx model from that. The file will be named KeywordSpotter.onnx and if all goes well you should see console output like this: Loading .\testing_list.npz... 
Loaded dataset testing_list.npz and found sample rate 16000, audio_size 512, input_size 80, window_size 40 and shift 40 Loaded dataset training_list.npz and found sample rate 16000, audio_size 512, input_size 80, window_size 40 and shift 40 Loaded dataset validation_list.npz and found sample rate 16000, audio_size 512, input_size 80, window_size 40 and shift 40 Training model GRU128KeywordSpotter.pt Training 2 layer GRU 128 using 46256 rows of featurized training input... RMSprop ( Parameter Group 0 alpha: 0 centered: False eps: 1e-08 lr: 0.001 momentum: 0 weight_decay: 1e-05 ) Epoch 0, Loss 1.624, Validation Accuracy 48.340, Learning Rate 0.001 Epoch 1, Loss 0.669, Validation Accuracy 78.581, Learning Rate 0.001 Epoch 2, Loss 0.538, Validation Accuracy 88.623, Learning Rate 0.001 Epoch 3, Loss 0.334, Validation Accuracy 91.423, Learning Rate 0.001 Epoch 4, Loss 0.274, Validation Accuracy 92.041, Learning Rate 0.001 Epoch 5, Loss 0.196, Validation Accuracy 93.945, Learning Rate 0.001 Epoch 6, Loss 0.322, Validation Accuracy 93.652, Learning Rate 0.001 Epoch 7, Loss 0.111, Validation Accuracy 94.548, Learning Rate 0.001 Epoch 8, Loss 0.146, Validation Accuracy 95.296, Learning Rate 0.001 Epoch 9, Loss 0.109, Validation Accuracy 95.052, Learning Rate 0.001 Epoch 10, Loss 0.115, Validation Accuracy 95.492, Learning Rate 0.001 Epoch 11, Loss 0.116, Validation Accuracy 95.931, Learning Rate 0.001 Epoch 12, Loss 0.064, Validation Accuracy 95.866, Learning Rate 0.001 Epoch 13, Loss 0.159, Validation Accuracy 95.736, Learning Rate 0.001 Epoch 14, Loss 0.083, Validation Accuracy 95.898, Learning Rate 0.001 Epoch 15, Loss 0.094, Validation Accuracy 96.484, Learning Rate 0.001 Epoch 16, Loss 0.056, Validation Accuracy 95.801, Learning Rate 0.001 Epoch 17, Loss 0.096, Validation Accuracy 95.964, Learning Rate 0.001 Epoch 18, Loss 0.019, Validation Accuracy 96.305, Learning Rate 0.001 Epoch 19, Loss 0.140, Validation Accuracy 96.501, Learning Rate 0.001 Epoch 20, Loss 0.057, Validation Accuracy 96.094, Learning Rate 0.001 Epoch 21, Loss 0.025, Validation Accuracy 96.289, Learning Rate 0.001 Epoch 22, Loss 0.037, Validation Accuracy 95.947, Learning Rate 0.001 Epoch 23, Loss 0.008, Validation Accuracy 96.191, Learning Rate 0.001 Epoch 24, Loss 0.050, Validation Accuracy 96.419, Learning Rate 0.001 Epoch 25, Loss 0.010, Validation Accuracy 96.257, Learning Rate 0.001 Epoch 26, Loss 0.014, Validation Accuracy 96.712, Learning Rate 0.001 Epoch 27, Loss 0.044, Validation Accuracy 96.159, Learning Rate 0.001 Epoch 28, Loss 0.011, Validation Accuracy 96.289, Learning Rate 0.001 Epoch 29, Loss 0.029, Validation Accuracy 96.143, Learning Rate 0.001 Trained in 299.81 seconds Training accuracy = 99.307 % Evaluating GRU keyword spotter using 6573 rows of featurized test audio... Saving evaluation results in '.\results.txt' Testing accuracy = 93.673 % saving onnx file: GRU128KeywordSpotter.onnx So here you see the model has trained well and is getting an evaluation score of 93.673% using the testing_list.npz dataset. The testing_list contains files that the training_list never saw before so it is expected that the test score will always be lower than the final training accuracy (99.307%). The real trick is increasing that test score. This problem has many data scientists employed around the world! 
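For intuition about what the trainer is building, here is a minimal PyTorch sketch of a 2-layer GRU classifier over the 80-dimensional featurizer output. This is an illustrative stand-in that mirrors the GRU128 shape, not the actual train_classifier.py implementation:

```python
import torch
import torch.nn as nn

class TinyKeywordSpotter(nn.Module):
    def __init__(self, input_size=80, hidden_units=128, num_keywords=30):
        super().__init__()
        # Two stacked GRU layers, echoing "--architecture GRU --num_layers 2".
        self.gru = nn.GRU(input_size, hidden_units, num_layers=2, batch_first=True)
        self.classifier = nn.Linear(hidden_units, num_keywords)

    def forward(self, x, hidden=None):
        # x: (batch, time, 80) featurized audio; hidden carries GRU state across calls.
        out, hidden = self.gru(x, hidden)
        # Classify from the final time step of the window.
        return self.classifier(out[:, -1, :]), hidden

model = TinyKeywordSpotter()
dummy = torch.randn(8, 40, 80)      # batch of 8 windows, 40 frames, 80 features
logits, state = model(dummy)
print(logits.shape)                 # torch.Size([8, 30])
```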
Importing the ONNX Model

In order to try your new model using ELL, you first need to import it from ONNX into the ELL format as follows:

[Linux] python $ELL_ROOT/tools/importers/onnx/onnx_import.py GRU128KeywordSpotter.onnx
[Windows] python %ELL_ROOT%\tools\importers\onnx\onnx_import.py GRU128KeywordSpotter.onnx

This will generate an ELL model named GRU128KeywordSpotter.ell, which you can now compile using the same technique you used on the featurizer:

[Linux] python $ELL_ROOT/tools/wrap/wrap.py --model_file GRU128KeywordSpotter.ell --outdir KeywordSpotter --module_name model
[Windows] python %ELL_ROOT%\tools\wrap\wrap.py --model_file GRU128KeywordSpotter.ell --outdir KeywordSpotter --module_name model

Then compile the resulting KeywordSpotter project using your new makeit command:

pushd KeywordSpotter
makeit
popd

Testing the Model

You can now take the new compiled keyword spotter for a spin and see how it works. You can measure the accuracy of the ELL model using the testing list; the test_ell_model.py script will do that:

python test_ell_model.py --classifier KeywordSpotter/model --featurizer compiled_featurizer/mfcc --sample_rate 16000 --list_file audio/testing_list.txt --categories categories.txt --reset --auto_scale

This test goes back to the raw .wav file input and re-featurizes each .wav file using the compiled featurizer, processing the files in random order. This is similar to what you will do on your target device while processing microphone input. As a result this test pass will take a little longer (about 2 minutes). You will see every file scroll by, telling you which one passed or failed, with a running pass rate. The last page of output should look something like this:

...
Saving 'results.json'
Test completed in 157.65 seconds
6090 passed, 483 failed, pass rate of 92.65 %
Best prediction time was 0.0 seconds

The final pass rate printed here is 92.65%, which is close to the PyTorch test accuracy of 93.673%. But how will this model perform on a continuous stream of audio from a microphone? You can try this out using the following tool:

[Linux] python $ELL_ROOT/tools/utilities/pythonlibs/audio/view_audio.py --classifier KeywordSpotter/model --featurizer compiled_featurizer/mfcc --categories categories.txt --sample_rate 16000 --threshold 0.8 --auto_scale
[Windows] python %ELL_ROOT%\tools\utilities\pythonlibs\audio\view_audio.py --classifier KeywordSpotter\model --featurizer compiled_featurizer/mfcc --categories categories.txt --sample_rate 16000 --threshold 0.8 --auto_scale

Speak some words from categories.txt slowly and clearly. You will find that the accuracy is not as good: it recognizes the first word you speak, but nothing else. If you run the test_ell_model.py script without the --reset argument, the test is run as one continuous stream of audio with no model reset between each .wav file. In this case you will see the test score drop to about 70%.

So why is this? Remember that the trainer has one row per wav recording, and this helps the trainer know when to reset the GRU node's hidden state (see the init_hidden method). But in live audio, how do you know when one word stops and another starts? Sometimes spoken words blur together. How does ELL then know when to reset the GRU nodes' hidden state? By default ELL does not reset the hidden state, so the GRU state blurs together over time and gets confused, especially if there is no clear silence between consecutive words. So how can you fix this? Well, this is where Voice Activity Detection (VAD) can come in handy.
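Before looking at ELL's built-in solution (described next), it helps to see how simple the underlying idea is. The sketch below is not the ELL VoiceActivityDetectorNode, which, as its tau/threshold parameters below suggest, tracks a smoothed signal level; it is just a bare-bones energy-based detector over 16 kHz frames, written to illustrate the concept of gating the hidden-state reset on silence. The frame size and threshold values are assumptions, and the threshold assumes audio normalized to [-1, 1].

```python
import numpy as np

def simple_vad(signal, frame_size=512, threshold=0.02):
    """Return one 0/1 flag per frame: 1 = speech-like energy, 0 = silence.

    A real detector smooths the level over time and uses separate
    attack/release thresholds; this is the simplest possible illustration.
    """
    flags = []
    for start in range(0, len(signal) - frame_size + 1, frame_size):
        frame = signal[start:start + frame_size].astype(np.float64)
        rms = np.sqrt(np.mean(frame ** 2))
        flags.append(1 if rms > threshold else 0)
    return np.array(flags)

def reset_points(flags):
    """Frames where a reset trigger would fire: the 1 -> 0 transition at the end of a word."""
    return np.where((flags[:-1] == 1) & (flags[1:] == 0))[0] + 1

if __name__ == "__main__":
    t = np.linspace(0, 1, 16000, endpoint=False)
    fake_audio = np.concatenate([0.1 * np.sin(2 * np.pi * 440 * t), np.zeros(16000)])
    flags = simple_vad(fake_audio)
    print("speech frames:", int(flags.sum()), "reset at frames:", reset_points(flags))
```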
ELL actually has a node called VoiceActivityDetectorNode that you can add to the model. Its input is the same featurized input that the classifier uses, and its output is an integer value: 0 if no activity is detected and 1 if there is activity. This output signal can then be piped into the GRU nodes as a “reset_trigger”. The GRU nodes will then reset themselves when they see that trigger change from 1 to 0 (the end of a word). To enable this you will need to edit the GRU128KeywordSpotter.ell file using the add_vad.py script:

python add_vad.py GRU128KeywordSpotter.ell --sample_rate 16000 --window_size 512 --tau_up 1.5 --tau_down 0.09 --large_input 4 --gain_att 0.01 --threshold_up 3.5 --threshold_down 0.9 --level_threshold 0.02

This edits the ELL model, removing the dummy reset triggers on the two GRU nodes and replacing them with a VoiceActivityDetectorNode, so your new GRU128KeywordSpotter.ell contains the detector wired into both GRU reset triggers. Note: you can use the following tool to inspect the resulting graph:

[Linux] $ELL_ROOT/build/bin/release/print -imap GRU128KeywordSpotter.ell -fmt dot -of graph.dot
[Windows] %ELL_ROOT%\build\bin\release\print -imap GRU128KeywordSpotter.ell -fmt dgml -of graph.dgml

You can view graph.dgml using Visual Studio. On Linux you can use the dot format, which can be viewed using GraphViz.

You can now compile this new GRU128KeywordSpotter.ell model using wrap.py as before and try it out. You should see the test_ell_model accuracy increase back up from 70% to about 85%. The VoiceActivityDetector is not perfect on all the audio test samples, especially those with high background noise. It has many parameters, which you can see in add_vad.py; these can be tuned for your particular device to get the best result, and the <ELL_ROOT>/tools/utilities/pythonlibs/audio/vad_test.py tool can help with that.

You can also use the view_audio.py script again and see how it behaves when you speak the 30 different keywords listed in categories.txt. You should notice that it works better now because of the VoiceActivityDetectorNode, whereas previously you had to click “stop” and “record” to reset the model. Now it resets automatically and is able to recognize the next keyword after a small silence. You still cannot speak the keywords too quickly, so this solution is not perfect. Understanding full conversational speech is a different kind of problem that requires bigger models and audio datasets that include whole phrases.

The add_vad.py script takes many parameters that you may need to tune for your particular microphone. To do this, use the vad_test.py tool mentioned above: it takes the featurizer.ell, generates vad.ell models matching the parameters you provide in its dialog, and tests them on a given wav file. Record wav files off the STM32F469-disco of yourself speaking a few words (in a quiet place), run them through the tool, then calibrate the vad.ell model parameters until the VAD output detects words and silence correctly; you are done when the orange VAD signal in the plot nicely frames each word spoken. You may also need to configure the microphone gain on your device. Use --help on the add_vad.py command line to see what each parameter means.

Experimenting

The train_classifier.py script has a number of other options you can play with, including the number of epochs, batch_size, learning_rate, and the number of hidden_units to use in the GRU layers.
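A convenient way to explore these options is a small sweep script that re-runs train_classifier.py with different settings and keeps each run's output side by side. The sketch below is hypothetical: the --hidden_units and --learning_rate flag names are assumptions for illustration (check python train_classifier.py --help for the real option names), and each run writes into its own output folder so the .pt and .onnx files are not overwritten.

```python
import itertools
import subprocess
from pathlib import Path

# Assumed flag names -- verify against "python train_classifier.py --help".
hidden_units = [64, 128, 256]
learning_rates = [1e-3, 5e-4]

for hu, lr in itertools.product(hidden_units, learning_rates):
    outdir = Path(f"runs/gru_{hu}_{lr}")
    outdir.mkdir(parents=True, exist_ok=True)
    cmd = ["python", "train_classifier.py",
           "--architecture", "GRU",
           "--num_layers", "2",
           "--dataset", ".",
           "--use_gpu",
           "--outdir", str(outdir),
           "--hidden_units", str(hu),      # assumed flag name
           "--learning_rate", str(lr)]     # assumed flag name
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
```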
Note that training also has an element of randomness to it, so you will never see the exact same numbers even when you retrain with the exact same parameters. This is due to the Stochastic Gradient Descent algorithm that is used by the trainer. The neural network you just trained is described by the KeywordSpotter class in the train_classifier.py script. You can see the __init__ method of the GRU128KeywordSpotter class creating two GRU nodes and a Linear layer which are used by the forward method as follows: def forward(self, input): # input has shape: [seq,batch,feature] gru_out, self.hidden1 = self.gru1(input, self.hidden1) gru_out, self.hidden2 = self.gru2(gru_out, self.hidden2) keyword_space = self.hidden2keyword(gru_out) result = F.log_softmax(keyword_space, dim=2) # return the mean across the sequence length to produce the # best prediction of which word it found in that sequence. # we can do that because we know each window_size sequence in # the training dataset contains at most one word and a single # word is not spread across multiple training rows result = result.mean(dim=0) return result You can now experiment with different model architectures. For example, what happens if you add a third GRU layer? To do that you can make the following changes to the KeywordSpotter class: 1. Add the construction of the 3rd GRU layer in the __init__ method: self.gru3 = nn.GRU(hidden_dim, num_keywords) 2. Change the Linear layer to take a different number of inputs since the output of gru3 will now be size num_keywords instead of hidden_dim: self.hidden2keyword = nn.Linear(num_keywords, num_keywords) 3. Add a third hidden state member in the init_hidden method: self.hidden3 = None 4. Use this new GRU layer in the forward method right after the gru2. gru_out, self.hidden3 = self.gru3(gru_out, self.hidden3) 5. That’s it! Pretty simple. Now re-run train_classifier as before. Your new model should get an evaluation accuracy that is similar, so the 3rd GRU layer didn’t really help much. But accuracy is not the only measure, you might also want to compare the performance of these two models to see which one is faster. It is often a speed versus accuracy trade off with neural networks which is why you see a speed versus accuracy graph on the ELL Model Gallery. There are many other things you could try. So long as the trainer still shows a decreasing loss across epochs then it should train ok. You know you have a broken model if the loss never decreases. As you can see, PyTorch makes it pretty easy to experiment. To learn more about PyTorch see their excellent tutorials. Hyper-parameter Tuning So far your training has used all the defaults provided by train_classifier, but an important step in training neural networks is tuning all the training parameters. These include learning_rate, batch_size, weight_decay, lr_schedulers and their associated lr_min and lr_peaks, and of course the number of epochs. It is good practice to do a set of training runs that test a range of different values independently testing all these hyper-parameters. You can probably get a full 1% increase in training accuracy by finding the optimal parameters. Cleaning the Data The speech commands dataset contains many bad training files, either total silence, or bad noise, clipped words and some completely mislabelled words. Obviously this will impact your training accuracy. So included in this tutorial is a bad_list.txt file. 
If you copy that to your speech commands folder (next to your testing_list.txt file) and then re-run make_training_list.py with the additional argument --bad_list bad_list.txt, it will create cleaner test, validation and training lists. Re-run the featurization steps with make_dataset.py, retrain your model, and you should now see the test accuracy jump to about 94%. This is a good illustration of the fact that higher quality labelled data can make a big difference in model performance.

Next steps

That's it, you just trained your first keyword spotter using PyTorch and you compiled and tested it using ELL, congratulations! You can now use your new featurizer and classifier in your embedded application, perhaps on a Raspberry Pi, or even on an Azure IoT Dev Kit, as shown in the DevKitKeywordSpotter example. There are also lots of things you can experiment with, as listed above. You can also build your own training sets, or customize the speech commands set by simply modifying the training_list.txt file, then recreate the training_list.npz dataset using make_dataset.py, retrain the model and see how it does.

Another dimension you can play with is mixing noise with your clean .wav files. This can give you a much bigger dataset and a model that performs better in noisy locations. Of course, this depends on what kind of noise you care about. Crowds of people? Industrial machinery? Just the hum of computers and HVAC systems in an office environment? It depends on your application. You can experiment using the noise files included in the speech_commands dataset. There are some more advanced options on the make_dataset.py script to help with mixing noise in with the audio recordings (a small mixing sketch also appears at the end of this tutorial).

The GitHub speech commands gallery contains some other types of models, some based on LSTM nodes, for example. The GRU architecture does well on smaller models, but LSTM hits the highest score when it maximizes the hidden state size.

Troubleshooting

For any PyTorch issues, see https://pytorch.org/.

Exception: Please set your ELL_ROOT environment, as you will be using python scripts from there

If you see this error then you need to follow the ELL setup instructions above: clone the ELL repo, build it, and set an environment variable that points to the location of that repo. This tutorial uses scripts in that repo, and those scripts use the binaries that are built there.
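Returning to the noise-mixing idea from the Next steps section above: make_dataset.py has its own options for this, but if you want to prototype the effect yourself, a mix is just a scaled sum of two aligned signals. The sketch below uses scipy.io.wavfile as the wav reader (an assumption; any reader works), assumes mono 16-bit clips as in the speech commands set, and the example paths are placeholders.

```python
import numpy as np
from scipy.io import wavfile

def mix_noise(clean_path, noise_path, out_path, snr_db=10.0):
    """Mix a background-noise clip into a clean keyword recording at a target SNR (dB)."""
    rate, clean = wavfile.read(clean_path)
    noise_rate, noise = wavfile.read(noise_path)
    assert rate == noise_rate, "resample the noise clip first"

    clean = clean.astype(np.float64)
    noise = noise.astype(np.float64)

    # Loop or trim the noise so it matches the clean clip length.
    reps = int(np.ceil(len(clean) / len(noise)))
    noise = np.tile(noise, reps)[:len(clean)]

    # Scale the noise to hit the requested signal-to-noise ratio.
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10.0)))

    mixed = np.clip(clean + scale * noise, -32768, 32767).astype(np.int16)
    wavfile.write(out_path, rate, mixed)

# Example (paths are placeholders):
# mix_noise("audio/bed/some_recording.wav",
#           "audio/_background_noise/white_noise.wav",
#           "bed_noisy.wav", snr_db=5.0)
```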
HAL: in2p3-00024529, version 1 (http://hal.in2p3.fr/in2p3-00024529)
International Workshop on Topics in Astroparticle and Underground Physics, TAUP 2003, Seattle, United States
Dark matter with HELLAZ (2005)
Dark matter interacting in a pressurized TPC will produce an energy spectrum of recoil nuclei whose end point depends on the atomic mass and the pressure of the gas. These can be varied from He to Xe, and $10^{-2}$ to 20 bar. The threshold depends on the gain of the end cap detector and can reach single electron capability, that is a few eV. HELLAZ has reached that gain with 20 bar He. Parts of this presentation are taken from [J.I. Collar, Y. Giomataris, Nucl. Inst. Meth. 471 (2001) 254].
Subject(s): Physics/High Energy Physics - Experiment
Home OALib Journal OALib PrePrints Submit Ranking News My Lib FAQ About Us Follow Us+ Title Keywords Abstract Author All Search Results: 1 - 10 of 100 matches for " " Page 1 /100 Display every page 5 10 20 Item Guang Yang Physics , 2015, Abstract: Precise measurement of the neutrino mixing angle $\theta_{13}$ is the primary goal of the Double Chooz Experiment (DC), which is located in Chooz, France. The inverse beta decay process provides a unique signature of reactor anti-neutrino interactions, giving prompt signals from positron annihilation and delayed signals from neutron capture by either Gadolinium (Gd) or Hydrogen (H). This paper is dedicated to the latest nH analysis in Double Chooz. Typically, The Gd analysis is primary since fewer background events are involved. However, with accurate estimates of backgrounds and a precise reconstruction of energy, the nH analysis gives a powerful independent measurement of $\theta_{13}$. Physics , 2015, Abstract: Using the Double Chooz detector, designed to measure the neutrino mixing angle $\theta_{13}$, the products of $\mu^-$ capture on $^{12}$C, $^{13}$C, $^{14}$N and $^{16}$O have been measured. Over a period of 489.5 days, $2.3\times10^6$ stopping cosmic $\mu^-$ have been collected, of which $1.8\times10^5$ captured on carbon, nitrogen, or oxygen nuclei in the inner detector scintillator or acrylic vessels. The resulting isotopes were tagged using prompt neutron emission (when applicable), the subsequent beta decays, and, in some cases, $\beta$-delayed neutrons. The most precise measurement of the rate of $^{12}\mathrm C(\mu^-,\nu)^{12}\mathrm B$ to date is reported: $6.57^{+0.11}_{-0.21}\times10^{3}\,\mathrm s^{-1}$, or $(17.35^{+0.35}_{-0.59})\%$ of nuclear captures. By tagging excited states emitting gammas, the ground state transition rate to $^{12}$B has been determined to be $5.68^{+0.14}_{-0.23}\times10^3\,\mathrm s^{-1}$. The heretofore unobserved reactions $^{12}\mathrm C(\mu^-,\nu\alpha)^{8}\mathrm{Li}$, $^{13}\mathrm C(\mu^-,\nu\mathrm n\alpha)^{8}\mathrm{Li}$, and $^{13}\mathrm C(\mu^-,\nu\mathrm n)^{12}\mathrm B$ are measured. Further, a population of $\beta$n decays following stopping muons is identified with $5.5\sigma$ significance. Statistics limit our ability to identify these decays definitively. Assuming negligible production of $^{8}$He, the reaction $^{13}\mathrm C(\mu^-,\nu\alpha)^{9}\mathrm{Li}$ is found to be present at the $2.7\sigma$ level. Limits are set on a variety of other processes. Physics , 2015, Abstract: The Double Chooz collaboration presents a measurement of the neutrino mixing angle $\theta_{13}$ using reactor $\overline{\nu}_{e}$ observed via the inverse beta decay reaction in which the neutron is captured on hydrogen. This measurement is based on 462.72 live days data, approximately twice as much data as in the previous such analysis, collected with a detector positioned at an average distance of 1050m from two reactor cores. Several novel techniques have been developed to achieve significant reductions of the backgrounds and systematic uncertainties. Accidental coincidences, the dominant background in this analysis, are suppressed by more than an order of magnitude with respect to our previous publication by a multi-variate analysis. These improvements demonstrate the capability of precise measurement of reactor $\overline{\nu}_{e}$ without gadolinium loading. 
Spectral distortions from the $\overline{\nu}_{e}$ reactor flux predictions previously reported with the neutron capture on gadolinium events are confirmed in the independent data sample presented here. A value of $\sin^{2}2\theta_{13} = 0.095^{+0.038}_{-0.039}$(stat+syst) is obtained from a fit to the observed event rate as a function of the reactor power, a method insensitive to the energy spectrum shape. A simultaneous fit of the hydrogen capture events and of the gadolinium capture events yields a measurement of $\sin^{2}2\theta_{13} = 0.088\pm0.033$(stat+syst). Charles E. Lane Physics , 2008, Abstract: The Double Chooz experiment returns to the site of the Chooz experiment with a pair of detectors for a differential neutrino flux measurement, providing sensitivity to sin^2(2theta13) > 0.03. Reaching this goal requires significant improvements in systematic uncertainties, based on the experience with previous reactor neutrino experiments. Daniel M. Kaplan Physics , 2006, DOI: 10.1063/1.2402699 Abstract: There is broad consensus in the worldwide physics community as to the need for a new reactor-neutrino experiment to measure or limit the neutrino mixing angle $\theta_{13}$. The Double Chooz Experiment, planned for operation in the years 2008-2011, will search for values of $\sin^2{2\theta_{13}}$ down to $\approx$0.03. This will be the first new information on $\theta_{13}$ in over a decade and will cover most of the remaining parameter space. A quick and relatively inexpensive project is made possible by the existing neutrino laboratory at the Chooz site. J. V. Dawson Physics , 2009, Abstract: The Double Chooz experiment is the first of the next wave of reactor experiments searching for a non-vanishing value of the mixing angle theta_13. The experimental concept and detector design are presented, and the most pertinent backgrounds are discussed. Operation of the far detector is expected to begin by the end of 2009. Installation of the near detector will occur in 2010. Double Chooz has the capacity to measure sin^2(2theta_13) to 3 sigma if sin^2(2theta_13) >0.05 or exclude sin^2 (2theta_13) down to 0.03 at 90% for Delta m_31^2 = 2.5 x 10^-3 eV^2 with three years of data with both near and far detectors. Statistics , 2014, DOI: 10.1007/JHEP10(2014)032 Abstract: The Double Chooz experiment measures the neutrino mixing angle $\theta_{13}$ by detecting reactor $\bar{\nu}_e$ via inverse beta decay. The positron-neutron space and time coincidence allows for a sizable background rejection, nonetheless liquid scintillator detectors would profit from a positron/electron discrimination, if feasible in large detector, to suppress the remaining background. Standard particle identification, based on particle dependent time profile of photon emission in liquid scintillator, can not be used given the identical mass of the two particles. However, the positron annihilation is sometimes delayed by the ortho-positronium (o-Ps) metastable state formation, which induces a pulse shape distortion that could be used for positron identification. In this paper we report on the first observation of positronium formation in a large liquid scintillator detector based on pulse shape analysis of single events. The o-Ps formation fraction and its lifetime were measured, finding the values of 44$\%$ $\pm$ 12$\%$ (sys.) $\pm$ 5$\%$ (stat.) and $3.68$ns $\pm$ 0.17ns (sys.) $\pm$ 0.15ns (stat.) respectively, in agreement with the results obtained with a dedicated positron annihilation lifetime spectroscopy setup. C. 
Palomares Physics , 2009, Abstract: The Double Chooz experiment will use the electron antineutrinos produced by the Chooz nuclear power station to search for a non-vanishing value of the Theta_13 neutrino mixing angle. Double Chooz will be the first of a new generation of neutrino experiments using identical detectors at different distances from the neutrino source to reduce the systematic errors due to the uncertainties on the neutrino flux and to the detector acceptance. The far detector is expected to be operative by the beginning of 2010. Installation of the near detector will occur in 2010. I. Gil-Botella Physics , 2007, DOI: 10.1088/1742-6596/110/8/082007 Abstract: The Double Chooz reactor neutrino experiment will be the next detector to search for a non vanishing theta13 mixing angle with unprecedented sensitivity, which might open the way to unveiling CP violation in the leptonic sector. The measurement of this angle will be based in a precise comparison of the antineutrino spectrum at two identical detectors located at different distances from the Chooz nuclear reactor cores in France. Double Chooz is particularly attractive because of its capability to measure sin2(2theta13) to 3 sigmas if sin2(2theta13) > 0.05 or to exclude sin2(2theta13) down to 0.03 at 90% C.L. for Dm2 = 2.5 x 10-3 eV2 in three years of data taking with both detectors. The construction of the far detector starts in 2008 and the first neutrino results are expected in 2009. The current status of the experiment, its physics potential and design and expected performance of the detector are reviewed. Physics , 2012, DOI: 10.1088/1748-0221/8/01/T01003 Abstract: Modern precision neutrino experiments like Double Chooz require a highly efficient trigger system in order to reduce systematic uncertainties. The trigger and timing system of the Double Chooz experiment was designed according to this goal. The Double Chooz trigger system is driven by the basic idea of triggering on multiple thresholds according to the total visible energy and additionally triggering on the number of active photomultiplier tubes (PMTs) in the detector. To do so, the trigger system continuously monitors the analogue signals from all PMTs in the detector. The amplitudes of these PMT-signals are summed for groups of certain PMTs (group signals) and for all PMTs (sum signal), respectively. The group signals are discriminated by two thresholds for each input channel and four thresholds for the sum signal. The resulting signals are processed by the trigger logic unit which is implemented in a FPGA. In addition to the proper trigger, the trigger system provides a common clock signal for all subsequent data acquisition systems to guarantee a synchronous readout of the Double Chooz detectors. The present design of the system provides a high flexibility for the applied logic and settings, making it useful for experiments other than Double Chooz. The Double Chooz trigger and timing system was installed and commissioned in 2011. This article describes the hardware of the trigger and timing system. Furthermore the setup, implemented trigger logic and performance of the trigger and timing system for the Double Chooz experiment is presented. Page 1 /100 Display every page 5 10 20 Item
# I. Introduction Automated identification of forum posts into categories (such as question, answer, feedback and off-topic posts) can help in summarizing threads and allows for efficient information retrieval. Previous approaches to this problem can be classified into supervised and unsupervised classes. Supervised approaches [2, 3, 5] perform this classification task adequately. However, their success comes at a great cost: a large amount of labelled data is required for that level of performance. With larger datasets and ever increasing forum-membership, labelling quickly becomes infeasible. The alternate approaches [1, 6, 7] do away with labelled data, opting for an unsupervised solution. This approach often corresponds to a decrease in performance. In this study, we explored novel statistical techniques for automatically clustering forum posts into dialogue acts using a semi-supervised approach. Our work on the unsupervised classification algorithm is discussed elsewhere. # II. Approach Our semi-supervised algorithm expands on previous work. Barzilay and colleagues [1] proposed an unsupervised approach involving a Hidden Markov Model (HMM) at the sentence level, tailored to match clusters of sentences to particular topics. Others [4] improved the model by introducing structural features, along with a Gaussian Mixture Model (GMM) for emission probabilities. Here, we propose a Hidden Markov Model that incorporates both structural and textual features. Furthermore, we explored the inclusion of emission probabilities from the HMM represented by a Gaussian Mixture Model. Both models were implemented in a semi-supervised fashion. More generally, we believe that a Hidden Markov Model is an appropriate choice when trying to represent sequential data, as it could implicitly factor in human knowledge (e.g., a solution can’t come before a question), and the GMM is said to help reduce topical clustering, which is a problem in unsupervised techniques. Here is a step-by-step description of the semi-supervised approach: 1. Vectorize all posts by means of word n-gram frequency counts and feature occurrences. 2. Cluster vectors that have a given gold label (semi-supervised aspect). 3. Construct a Hidden Markov Model (each cluster obtained in step 2 corresponds to a hidden state, and each post corresponds to an observation from the given state). Run Expectation-Maximization Algorithm: 1. Expectation Step: 1. Construct an n-gram+Feature language model for each state or fit a GMM for each state. This will be used to calculate emission probabilities of a post. 2. Estimate the initial state probabilities given the observed state frequency counts. 2. Maximization Step: 1. Run the Viterbi algorithm to obtain the most likely state sequence, and HMM parameters. In order to compare our novel semi-supervised approaches, we constructed a fully supervised approach. Following a proven approach by Catherine and colleagues [2], we implemented a fully supervised Support Vector Machine (SVM) to use as an approximation of the upper limit on dialogue act classification performance. To do this we trained a Weka SVM:SMO (Sequential Minimal Optimization) classifier on both n-grams and features. # III. 
Analysis The following evaluation measures were used: ${\textit Precision:=} \frac{\text{# Actual C posts predicted as C }}{\text{# Posts predicted as C}}$ ${\textit Recall :=} \frac{\text{# C posts predicted as C }}{\text{# Actual C posts}}$ ${\text F_1 {\it measure:=} }\frac{2 \times P \times R}{P + R}$ The category-wise evaluation measures for the described techniques are listed in Table 1. As expected, the fully supervised technique outperforms the semi-supervised techniques. However the semi-supervised techniques perform relatively well, with the HMM performing at a similar level to the fully supervised method. The methods perform adequately in most categories with the exception of Clarification and Clarification Request, both of which suffer from a lack of training examples. # IV. Conclusion The results of our study suggest that semi-supervised techniques are promising: they achieve a respectable middle ground between the low cost of unsupervised techniques and the high performance of fully supervised techniques. For future work, we hope to explore higher order Markov chains, incorporating the ability to learn longer-range dependencies between the categories. Our data experimentation has also emphasized another hurdle in the forum-post classification problem: posts can contain multiple dialogue acts (e.g., a given post can have both a Solution to a Problem, and contains Feedback to another Solution). The model has no intuition about this; we suggest that summarization might be an important technique to employ to retain the overall meaning of the post, while cutting out parts (dialogue acts) that are not representative. #### Acknowledgements I am grateful to Krish Perumal and Professor Graeme Hirst for their help and insight during the research process, and their critical reading of the abstract. This work was completed with data provided by VerticalScope Inc. The research was supported by NSERC and VerticalScope. We thank Afsaneh Fazly for comments and insight that greatly improved the work completed. #### References [1] R. Barzilay and L. Lee., "Catching the Drift: Probabilistic Content Models, with Applications to Generation and Summarization," in Proc. of HLT-NAACL, 2004, pp. 113–120. [2] R. Catherine et al., "Does Similarity Matter? The Case of Answer Extraction from Technical Discussion Forums," in Proc. of the 24th Int. Conf. on Computational Linguistics (COLING), 2012, pp. 175–184. [3] L. Hong and B.D. Davison., "A classification–based approach to question answering in discussion boards," in Proc. of 32nd Int. ACM SIGIR Conf. on Research and development in information retrieval, 2009, pp. 171-178. [4] S. Joty et al., "Unsupervised Modeling of Dialog Acts in Asynchronous Conversations," in Proc. of Int. Joint Conf. on Artificial Intelligence (IJCAI), 2011, pp. 1807-1813. [5] S. Kim et al., "Tagging and Linking Web Forum Posts," in Proc. of the 14th Conf. on Computational Natural Language Learning (CoNLL), 2010, pp. 192-202. [6] A. Ritter et al., "Unsupervised Modeling of Twitter Conversations," in The 2010 Annual Conf. of the North American Chapter of the Association for Computational Linguistics, 2010, pp. 172-180. [7] Z. Qu and Y. Liu., "Finding Problem Solving Threads in Online Forum," in Proc. of 5th Int. Joint Conf. on Natural Language Processing (IJCNLP), 2011, pp. 1413-1417.
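As a purely illustrative companion to the approach described in Section II, the sketch below shows the skeleton of the semi-supervised idea: parameters seeded from gold-labelled posts, a transition matrix over dialogue-act states, and Viterbi decoding over a thread. It is a toy stand-in with add-one-smoothed bag-of-words multinomial emissions rather than the n-gram+feature or GMM emissions used in the actual models, and it omits the EM re-estimation loop.

```python
import numpy as np

STATES = ["question", "answer", "feedback", "off-topic"]

def train_seed_model(labelled_threads, vocab):
    """Estimate start, transition and (naive bag-of-words) emission probabilities
    from gold-labelled posts -- the semi-supervised seeding step."""
    S, V = len(STATES), len(vocab)
    start = np.ones(S)            # add-one smoothing throughout
    trans = np.ones((S, S))
    emit = np.ones((S, V))
    for thread in labelled_threads:
        prev = None
        for text, label in thread:
            s = STATES.index(label)
            if prev is None:
                start[s] += 1
            else:
                trans[prev, s] += 1
            for w in text.lower().split():
                if w in vocab:
                    emit[s, vocab[w]] += 1
            prev = s
    return (np.log(start / start.sum()),
            np.log(trans / trans.sum(axis=1, keepdims=True)),
            np.log(emit / emit.sum(axis=1, keepdims=True)))

def viterbi(posts, log_start, log_trans, log_emit, vocab):
    """Most likely dialogue-act sequence for an unlabelled thread of posts."""
    def loglike(text, s):
        return sum(log_emit[s, vocab[w]] for w in text.lower().split() if w in vocab)
    T, S = len(posts), len(STATES)
    dp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    for s in range(S):
        dp[0, s] = log_start[s] + loglike(posts[0], s)
    for t in range(1, T):
        for s in range(S):
            scores = dp[t - 1] + log_trans[:, s]
            back[t, s] = int(np.argmax(scores))
            dp[t, s] = scores[back[t, s]] + loglike(posts[t], s)
    path = [int(np.argmax(dp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [STATES[s] for s in reversed(path)]

vocab = {w: i for i, w in enumerate(
    "how do i fix error try reinstalling thanks that worked".split())}
labelled = [[("how do i fix this error", "question"),
             ("try reinstalling", "answer"),
             ("thanks that worked", "feedback")]]
log_start, log_trans, log_emit = train_seed_model(labelled, vocab)
print(viterbi(["how do i fix error", "try reinstalling it", "thanks"],
              log_start, log_trans, log_emit, vocab))
# expected output: ['question', 'answer', 'feedback']
```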
Evidence of a dynamically evolving Galactic warp

@article{Poggio2020EvidenceOA,
  title={Evidence of a dynamically evolving Galactic warp},
  author={Eloisa Poggio and Ronald Drimmel and Ren{\'e} Andrae and Coryn A. L. Bailer-Jones and Morgan Fouesneau and Mario Gilberto Lattanzi and Richard L. Smart and Alessandro Spagna},
  journal={Nature Astronomy},
  year={2020},
  volume={4},
  pages={590-596}
}

Published 22 December 2019 • Physics • Nature Astronomy

In a cosmological setting, the disk of a galaxy is expected to continuously experience gravitational torques and perturbations from a variety of sources, which can cause the disk to wobble, flare and warp [1,2]. Specifically, the study of galactic warps and their dynamic nature could reveal key information on the formation history of galaxies and the mass distribution of their haloes. Our Milky Way presents a unique case study for galactic warps, thanks to detailed knowledge of its stellar…
The “Geithner Put”, part 2

(If you are curious about the term “Geithner Put”, it is a reference to the Greenspan Put.  Although it is more literal, since non-recourse loans place a floor on an investor’s potential losses, just like a put option does.)

In Part 1, I gave an example of how a private investor could buy some loans for $8400, ultimately realize $5000 on them, and still earn a 16.7% profit.  Milo Minderbinder would be proud.  The loser would be the FDIC, which is interesting because the FDIC is financed not by the taxpayer but by the banking industry — or at least, it has been so far.  So even if the FDIC needs to tap their new $500 billion credit line to make good on tons of bad loans, in theory they will eventually repay the Treasury by exacting higher insurance premia from the banking industry.  Thus Part 1 could be read as a transfer of wealth from good banks to bad banks and private equity.  Assuming Treasury actually demands repayment on the credit line and does not just write it off…

Part 2 of the Geithner Put is called the Legacy Securities Program.  Here again, the Treasury provides an example:

Step 1: Treasury will launch the application process for managers interested in the Legacy Securities Program.
Step 2: A fund manager submits a proposal and is pre-qualified to raise private capital to participate in joint investment programs with Treasury.
Step 3: The Government agrees to provide a one-for-one match for every dollar of private capital that the fund manager raises and to provide fund-level leverage for the proposed Public-Private Investment Fund.
Step 4: The fund manager commences the sales process for the investment fund and is able to raise $100 of private capital for the fund. Treasury provides $100 equity co-investment on a side-by-side basis with private capital and will provide a $100 loan to the Public-Private Investment Fund. Treasury will also consider requests from the fund manager for an additional loan of up to $100 to the fund.
Step 5: As a result, the fund manager has $300 (or, in some cases, up to $400) in total capital and commences a purchase program for targeted securities.
Step 6: The fund manager has full discretion in investment decisions, although it will predominately follow a long-term buy-and-hold strategy. The Public-Private Investment Fund, if the fund manager so determines, would also be eligible to take advantage of the expanded TALF program for legacy securities when it is launched.

This may sound similar to Part 1, with the Treasury providing the leverage instead of the FDIC…  But it is actually completely different.

The key concept is the non-recourse loan.  If Treasury wants to hire a handful of money managers, ask them to raise private capital, match the capital raised, and then let them lever up 3:2 or 2:1, that is not a subsidy in the same way as Part 1.  In this case, the loans are full recourse with respect to all of the assets run by that manager.  So it is not trivial to construct an offensive example, and this structure by itself is not so bad.

No, the bad part is this bit which appears a few paragraphs earlier:

Expanding TALF to Legacy Securities to Bring Private Investors Back into the Market

… stuff elided …

• Funding Purchase of Legacy Securities: Through this new program, non-recourse loans will be made available to investors to fund purchases of legacy securitization assets.
Eligible assets are expected to include certain non-agency residential mortgage backed securities (RMBS) that were originally rated AAA and outstanding commercial mortgage-backed securities (CMBS) and asset-backed securities (ABS) that are rated AAA. Whoops, there are those “non-recourse loans” again. The TALF is the Fed’s new$1 trillion program to provide non-recourse loans against new asset-backed securities.  Part 2 of the Geithner Put extends this program to existing securities.  (Note the phrasing “originally rated AAA”.  We are definitely talking about toxic assets here.) So the Treasury provides equity investment and loans for fund managers to purchase assets, and then the Fed provides the subsidy via non-recourse loans against those assets.  The degree of leverage here is determined by the “haircut” applied to the assets; a 10% haircut corresponds to 9:1 leverage, for instance.  Search for “collateral haircuts” in the TALF FAQ to get an idea of the numbers.  Although bear in mind those are for the current TALF, and we will have to wait to see the haircuts for the Geithner Put extension. Apparently, we are left with private equity firms and bankers being able to fleece the FDIC and the Fed via abusing the non-recourse loans, with the Treasury/taxpayer participating in the upside of the fleecing.  Which is fine, I guess, if you believe the FDIC and Fed are themselves good for the losses; i.e., that the losses will not ultimately be placed on the taxpayer.  Color me skeptical, especially with regard to the Fed. But I do give them points for creativity. 22 comments to The “Geithner Put”, part 2 • vvvviking About those haircuts. Harkens me back to other entities that took MBS with “appropriate” haircuts: the FHLBs. They took a lot of Option ARM AAAs from WaMu and Countrywide at pretty generous haircuts, and still have losses on them. Put it another way: when these Option ARMs were originated, the AAA loss coverages were in the 10% range (at most, many were single digits: 7-8%). Two significant Option ARM portfolio purchases have happened in the past 6 months: JPM buying WaMu, and WFC buying WB. In both cases, the buyer wrote down the Option ARM portfolios by 25 – 30%, or about 3X the AAA LC that the rating agencies were assigning. Since the Fed is really supposed to be taking a “super senior” position, the haircut on AAA 2006-2007 vintage securities should be in the 40% range. Wanna bet the haircuts will be nowhere near that level? • knowtheory Apparently, we are left with private equity firms and bankers being able to fleece the FDIC and the Fed via abusing the non-recourse loans, with the Treasury/taxpayer participating in the upside of the fleecing. Which is fine, I guess, if you believe the FDIC and Fed are themselves good for the losses; i.e., that the losses will not ultimately be placed on the taxpayer. Color me skeptical, especially with regard to the Fed. But I do give them points for creativity. Okay, genuinely have no idea whether this is a sane suggestion or not, but why can’t the FDIC turn around and impose a progressively graded insurance rates on large complicated financial institutions? That’d have two effects, 1) recoup the insurance fund/risk from whatever the banks were making via this plan, over a longer more gradual time span, and 2) provide a disincentive to being large and complicated (aka a systemic risk). 
I’m guessing this is kind of a dream world fantasy, since my understanding is that the FDIC doesn’t have the authority to collect from investment banks (is that right?) or the shadow banking system that orgs like AIG were running. I’m just trying to figure out if this is genuinely just a give away, or if there is a world, some world out there in which this makes sense, and we aren’t screwing the little guy some how (whether it’s the tax payers, or small banks which didn’t do anything wrong). • jpm Nemo, Your example (and others) are assuming that the buyer and seller’s interest are not related. This is not the situation at hand. For example: Citi will invest $1B with a fund that bids on the assets. The gov’t will provide another (say)$9B to match Citi’s $1B as a nonrecourse loan. Citi will then provide personal incentives for the fund managers to bid 100 cents on the dollar for an BBB MBS tranche that is currently trading for 1 cent on the dollar. Naturally, the market is valuing this tranche at 1 cent because there is no hope of being paid back, and the tranche will eventually be a total loss. But when that total loss occurs, Citi now only loses 10 cents on the dollar because the gov’t just put up the other 90 cents. (above stolen from here: http://tinyurl.com/cl5kdv ) • DTM First, the Summary of Terms for the Securities portion states: “Each Fund Manager will have the option to obtain for each Fund secured non-recourse loans from Treasury (“Treasury Debt Financing”) in an aggregate amount of up to 50% of a Fund’s total equity capital.” http://www.treas.gov/press/releases/reports/legacy_securities_terms.pdf So I think those are in fact non-recourse loans coming from the Treasury. Second, I noted in a comment to Part One that you appeared to be omitting the Guarantee Fees from your analysis. I’ll just note here that you appear to be omitting any default risk premium that the Treasury would include in the interest rate for its non-recourse loans. The point is essentially the same: it is possible there would be no net subsidy provided by these non-recourse loans, if the Treasury did in fact charge a sufficient risk premium to cover its losses on the loans which go into default. • snoopy Wow, Nemo, you’re getting a lot of comments. Anyone have an idea of how big the toxic pool is? What is the 10’s of trillions number that I’ve seen tossed around? To get a handle around those kinds of losses is going to take years. So at least they are getting started. We are definitely not anywhere close to the end of the handouts. They are gonna keep coming back for more. They came when the Dow was at 10000, they came when the Dow was at 7000, they will come again when the Dow is at 5000. • Why would anyone assume a legitimate auction? A bank that bought its own asset for its current carrying value would offload at least 85% of the downside risk of the asset, thanks to the beauty of the non-recourse loan. Perhaps – perhaps – the Treasury wouldn’t stand for it, but a more complex special purpose vehicle would almost certainly get by them. Indeed, depending on the payoffs a fund with a highly levered position in a bank’s equity (say, far out of the money options) could bid a high price just so the bank could announce a successful sale and consider the ultimate capital loss on the asset to be a minor marketing expense. • snoopy Here’s an old article which estimates the CDS mess at 60T. 
http://www.dailymarkets.com/economy/2008/10/07/the-60-trillion-dollar-nightmare-of-credit-default-swaps/ • DTM — Thank you for your responses, both here and in Part 1. Yes, I have ignored the fees there and also the “risk premia” here since I suspect they will be immaterial. For one thing, if the fees and/or premia were going to be enough to cover the losses, private money could already offer a similar deal for profit, and there is no evidence anybody is doing so. In addition, it is my belief that the entire purpose of this plan is to help to recapitalize the banking system, and fees and/or premia sufficient to cover the losses would undermine that goal. I freely admit I could be wrong, and I look forward to seeing the actual numbers emerge. • DTM Nemo, Private partners could not offer the same terms with an expected profit if they had a substantially higher cost of capital than the government, which is very likely the case these days. In other words, the hypothetical non-subsidy explanation for these deals would be that through these partnerships the government is sharing access to its lower cost of capital with its private partners, and in return getting help pricing the assets. Anyway, my primary purpose was just to point out that the structure of these deals does not necessarily imply a subsidy. Rather, to conclude there is a subsidy you must add additional assumptions–assumptions which may well be warranted. But in these discussions I think it is worth making those assumptions explicit and then explaining the basis for them. • DTM By the way, I should note that we probably won’t know for sure whether or not the government has charged profitable fees and default risk premiums in exchange for its guarantees and non-recourse loans respectively for quite a while, basically not until it is fairly clear exactly what losses on those guarantees and loans the government is actually going to realize. In that sense the “actual numbers” needed to confirm or disconfirm various possible assumptions won’t be emerging for quite some time. • billyblog Notice who is being left out of all of this – and bulldozed over in the process. The poor stiffs who took out those mortgages in the first place. Sure, some will play jingle mail, and others will be greedy operators who got caught attempting one flip too many. But most will be decent folks who really will bend every effort to scrape through and save the house, even if it has negative equity for a while. What happens to them once the Geithner Put is rolled out? Well, there was a time when there might have been some hope for them to go into bankruptcy and get their loans renegotiated. (Oh yeah, I forgot, banks around the country are doing this every day — voluntarily. Why there was actually a reported sighting of a mortgage renegotiation last Thursday in Roswell. Or was that a flying saucer?) In other words, there used to be some hope, however faint, that we might get the lenders, at whatever generational level, also to take a haircut, and this time in relation to the consumers and not just in relation to other lenders. But once we go to Geithner’s Put, that ain’t gonna happen. “Harumph, Harumph, Mr. Small Guy, on this Medusa, yeah, some of us bankers and hedge fund managers will end up cannibalizing one another. But that’s OK, you won’t have to watch it. Because we will have long since thrown you overboard. Indeed, we may be able to churn this musical chairs asset swapping long enough so that some of us actually come out big winners. 
But that will be after Sheriff Tim has put your and the Missus’ furniture out on the lawn, and we get somebody into your old house who can finally make the payments, albeit at a lower level that you, you sap, contracted for. Remember, those contracts are sacred – except when you’re too big, or too dear to the hearts of Tim and Ben, to fail.” And trust me, though Timmy won’t be the physical Sheriff, he will be cutting so many side deals to insulate these lenders from any risk in relation to a shared burden with the mortgage holders, that he will be the hammer that makes sure that the home-no-longer-owner is left even more out in the cold than he is now. And pick a number between 1 and 10 for how transparent those back room deals will be, with 10 being most transparent? Would you believe minus 4, à la twisting Chris Dodd’s arm – but apparently not that hard – to expunge any so-called “populist” elements from the Stimulus Bill? Or à la AIG finally being forced to cough up the information that for all these months it has been laundering our tax dollars to Goldman Sachs and many of its “deserving” friends both here and abroad the great “haircut” rate of dollar for dollar? Yup, the Treasury Department will be the new owners’ muscle, making sure that the full force of the Government ensures that the bankers optimize, even at a discount. And to hell with the people trying to hold on to those homes. Don’t they realize that this is all about swapping assets in the pit, not about having a roof over your head? Remember when Hank Paulson was badgered by Congress to use some of the TARP funds to help out the mortgage holders? Remember when Hank’s answer to that plea was to flip the bird to Congress? Timmy was at his side then, and Timmy’s new program is simply a wonky way of obfuscating the social reality that that this whole thing is ultimately about real people, real people who are about to get really squashed so that the financiers can go back to playing their games – in the baleful sense in which Jon Stewart recently excoriated them. And watch how quickly the secondary market develops for these assets. Why, how long before we have credit default swaps – or their functional equivalents – for these asset purchases effectively guaranteed on the down side by the taxpayer? And then, of course, we’ll need a bailout to bailout the bailout of the bailout, and so on ad indefinitum. This wouldn’t have happened if the banks had been nationalized, because that would have put the whole matter into a political context, where there would have had to have been give and take from both sides and, well, just a smidgen of acknowledgement that we have government and other institutions ultimately for the sake of the general social welfare, not for the sake of securing Vikram Pandit’s right to do a$10m makeover of Citibank’s executive offices. But not in Tim’s universe, where only bankers and hedge fund managers, and other varieties of really “creative” people exist. And is there anything more risible than the suggestion that the banks will end up footing this bill through premiums paid to the FDIC, even though, alas, so many good people will have to be thrown under the bus along the way to maintain the sanctity of these “public/private” contracts that Tim is about to enable? Really, how can anyone make that suggestion with a straight face? • diek You know what? This whole thing is getting surreal. 
Notice that *everyone* has an opinion on what the best solution is, and they are all getting more vocal about them, and they are all different. In other words, nobody has a clue. The only right thing to do is set a strategy with flexibility, quick feedback, and make adaptations as the feedback comes in. Nothing else has a chance. Geithner’s plan has a lot of avenues of getting capital into banks and other institutions. I hope he has consulted with bank managers, Congress, etc., so that this plan will be acceptable to them, as it does no good to have a perfect plan if nobody follows it. The right solution is going to take many strange detours and will look pretty messy before it’s done. Guaranteed. • chris My Question: Given that billions of dollars are involved, I’d imagine that the government would release their complicated mathematical analysis of all this. Have they? ——————————————————————————————————- (I haven’t seen any government analysis, so let me do one.) I’m an applied mathematician. But my area is not economics. Let me see what I can come up with. Let’s make it simple and take a look at the simplest example. Simple Example *************************************************** No loan/no leverage. 50-50 private/government split. Face value = $100.$84 winning bid. Private investor holds assets to maturity. The holder of the original asset, e.g. a bank, loses $100 –$84 = $16 (compared to the face value). The private investor and the government each put up$42. If the assets pay $84 + x then the government and the private bid winner each gain or lose (except processing costs, etc.) x/2 dollars, with the maximum value of x =$16 = bank loss, and the minimum value of x = -$84 = negative winning bid. In this example, we see that one effect of the plan is to force the government to purchase assets at a price determined by the auction’s winning bid (whether the winning bid is rational or not.) The risk/reward is equally shared by the government and the private investor. In some sense, this simple example is like the government putting in a buy order (capped at face value) for 1/2 of what ever is being sold. This will have the effect of increasing the winning bid (as the government is buying 1/2 of the assets, and price increases as demand increases). Moreover, since x = [the dollars that asset pays out] – [winning bid] the effect is that x decreases as the winning bid increases. This benefits the bank, as x = bank loss, at the expense of the government and the private investor. ******** Next lets factor in leverage and non recourse government backed loans. Once we factor in non-recourse loans providing leverage, the model is complicated. However, it is clear that the amount of the winning bid will increase further as private investors will reap benefits even if they sometimes bid a little bit high, since the government is taking the big downside gamble, but only a 50-50 upside. In this case, the benefits to the private investor and the original asset holder (the bank) will come at the expense of the government. The one model in which this is a good plan, from the government’s perspective is if the plan itself has a feed back loop effect, which makes the [amount that the assets pay out] increase, which will offset the decrease in x, which is caused by the increase in the winning bid. Summery. The banks are getting a nice bailout, as compared to what they would otherwise get for their toxic assets, especially as they can choose not to sell their assets if they don’t like the price. 
The private investors, since they get to decide how much they will pay, and since they will be bidding with non-recourse leveraged dollars, should also get a nice deal. It is bad to say, but the government is not getting a good deal in all this. If the government were a private business, everybody who made this deal would get fired, I think. To be honest, I am in over my head here and I am sure a better analysis using standard models is possible. But this extra-simplified example is, I think, sufficient to understand how the program is structured, and it avoids having to model the expected number of dollars the assets will pay out, etc., which in turn avoids arguments over any distribution used. Given that billions of dollars are involved, I’d imagine that the government would release their complicated mathematical analysis of all this. Have they? If not, going along with the plan is like buying a pig in a poke.

• chris

Correction: I wrote: “This benefits the bank, as x = bank loss.” Actually, I meant to write: this benefits the bank (asset holder), as [bank loss] = $100 – [winning bid].

• snowman

Putting aside the problem I have with a taxpayer bailout of illiquid assets, I have doubts this program will work (per Geithner’s definition: get banks to lend cheap money again): there isn’t enough investor money out there to make it work.

1) The big investors will shy away because they’ve already been burned. Pension funds, endowments, etc. will not put their money into this. Pension assets, for instance, make up about 60% of all assets invested. They are never going to get involved in this scheme, no matter the haircut. Several huge players have recently changed their charters to ensure not one penny sits with a hedge fund, for instance. Anything that smells, tastes, or looks like MBS won’t go with their constituents. So from the get-go, a huge portion of the investor base needed to make a market work and create liquidity is already a no-go.

2) The hedgies/PE guys aren’t sitting on top of a ton of cash either; their traditional sources of money (see point 1) have dried up. Many of the biggest (it’s the 80/20 rule) have started to diversify into either long-term value, or trying (with little success) quant strategies. It is no surprise over 110 funds have imploded in the past 18 months. Add to this the Uncle Sam quid pro quo to the PE/hedgies: “we’ll pay you to invest, but now you have to open your kimono”… don’t expect a rush to line up for the program.

3) There is huge uncertainty about what exactly is in these portfolios, i.e. the physical assets, and the ability to repay. Are they empty 200-unit condos in Naples? Are they 3-bedroom houses in Teaneck NJ? What do the cash flows look like? If I were an investor I wouldn’t trust any figure I got from the bank’s book. I would have to grab my valuator, see the sites, talk to the owners, etc. In other words, there is a big disconnect between what the banks think they have and what they really have. It’s similar to the commercial real estate disaster in the early 90’s; the investor had to go to the office buildings, see what kind of plumbing they had, interview the tenants, etc., because the banks had no real clue. Given the huge swath of real estate we are talking about, the task is almost impossible, and I doubt investors (those with real money) will buy and hold without significant due diligence.

4) This stuff won’t attract much retail investment.
For one, it’s got radioactive labels printed all over it; two, the man on the street won’t get hosed once again unless he really understands what this is (see point above); and three, disposable income isn’t exactly flourishing these days. There simply isn’t enough money around, and anything there is goes to paying down debt and general savings. The list goes on. Geithner has to do something, but I expect a weak fizzle.

• doug

Nemo, are you posting to yourself? Just kidding. I have never seen so many that could predict the future with so little uncertainty…. Diek, right on the money. I agree.

• inthecheapseats

Nemo, thanks for the good posts. In the context of billyblog and diek, whom I sympathize with completely, the thing that is bothering me the most about the Geithner plan is this: I don’t see the criteria by which an outcome could be judged a success for the taxpayer. The plan is so complex that for the intelligent, interested lay citizen, NO conceivable outcome will demonstrably convince him/her that the taxpayer didn’t get screwed again to the profit of the saved banks, even in the scenario in which everything goes as Geithner and Obama would wish. Therefore Main Street will continue, with just cause IMHO, to be very cynical about Wall Street, and a sizeable chunk of confidence will be withheld from Wall Street for years. As much as I want to give Geithner the benefit of the doubt for the good of the country, I’m stuck with Krugman’s conclusion: despair that this is not going to be the fix the economy needs, and that it runs a good chance of permanently derailing Obama’s agenda. Does anyone know whether or not Treasury, or anyone else, has constructed a scoreboard by which the taxpayer can objectively monitor who’s winning and who’s losing as the months roll by?

• Xacto

Perhaps the whole plan here is to find politically palatable and “sneaky” ways to allow the Fed to print money and get away with it. I.e., there are two “outs”: 1) Inflation kicks in and the worst-case scenarios don’t happen because the underlying asset values get rescued by inflation. 2) The worst-case scenario happens and the Fed allows its loans to the FDIC to go unpaid (or paid at a 0% rate over 100 years) – i.e. money was just printed to make this happen.

• Bill Courtney

It looks to me like this is a plan by Treasury to sell derivatives. (Ssshhh! Don’t say that too loudly. We were told that evil, fancy-pants financial instruments were what got us into this mess.) The subsidy looks like half of a call option with no expiration date. (So more like a warrant than an option, but let’s continue.) The investor gets half the option and Treasury gets the other half. The nominal value of the underlying asset is the auction price. The strike price is 85% of whatever price is set by the auction. Thus, the call is in-the-money by 15% of the auction price. The premium (price of the call option) is the 15% that is not covered by the non-recourse loan. Note that there is NO TIME PREMIUM. (That is, there’s no EXPLICIT time premium.) If the price of the underlying asset falls and stays below the strike price (by defaulting, perhaps, or being settled for a very few cents on the dollar), the investor walks away, just as with a call. Else, the investor can take the difference (half the difference, actually, since he’s partnered 50-50 with Treasury) between the selling price and the strike price. Just as with a call.
Recognizing that there is no time premium being charged by Treasury means that the original investors could turn around and make a sale in a secondary market (if such is established) and MAKE A KILLING. Of course, in an efficient market, the initial auction will increase the price of the underlying asset by just the amount necessary to equal what the ‘correct’ asset price should be, plus enough extra to account for the missing time premium. That is, the price will include an implicit time premium. Of course this will work; when has the commons (excuse me, I meant, “when has the market”) ever behaved tragically?

• snowman

To inthecheapseats: The Treasury will appoint asset managers who will issue monthly performance reports. Probably will fall to the usual suspects, Bank of New York, JPMChase and a couple of others. Their reporting is pretty good, so we’ll get to see. That is, if anyone shows up to the party…… Acid test: would you invest in this (remember – the assets to be sold will be decided by the bank, so don’t think you are getting any stuff they are sure about)?

• What is to keep an investment manager from hedging some toxic asset portfolio with something that has a big upside when the bank selling the toxins is cleaned up? The hedge could be a bunch of call options on the stock, but can also be some of the bank debt that currently sells at very distressed levels. My point is that the investment manager can protect himself and make money even if he overpays for the toxic portfolio that goes bust. In fact, if this were the trading strategy, the IM would want to pay as much as possible and get the worst junk off the bank’s balance sheet. On the other hand, the Treasury does not hedge itself and can lose all its investment. Actually, in the case of Citibank, BoA and others, the Treasury holds preferred stock, equity shares or both. With Citi, the government owns 40% of the equity and probably a good deal of preferreds (I have to check). Therefore, if Citi were cleaned up, the Treasury too would make a killing. Imagine the glory that Geithner would bask in for engineering such a brilliant plan that makes a profit for the government! So who would be the losers in this case? Well, the FDIC and the Fed (with the non-recourse loans). The small banks would bear some of the burden through the higher FDIC insurance premiums. As for the losses of the Fed, I have no idea if those would need to be paid or whether they would simply be written off. Maybe someone can shed some light on this matter. If the losses on the Fed balance sheet are written off, then it means that they are truly printing money and devaluing everyone that holds US currency. If this is what the Treasury was thinking, it would be pretty devious but clever nevertheless.

• What Nemo describes is the subsidy hidden in the Geithner plan announced on March 23, 2009. My paper, “The Put Problem with Buying Toxic Assets” at http://ssrn.com/abstract=1343625, argues that insolvent banks won’t sell their toxic assets without a big subsidy.
Title: Observation and studies of jet quenching in PbPb collisions at $\sqrt{S_{NN}}$ = 2.76 TeV
Author: Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.; de Wolf, E.A.; Janssen, X.; Mucibello, L.; Roland, B.; Rougny, R.; Selvaggi, M.; van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; et al.
Faculty/Department: Faculty of Sciences, Physics
Publication type: article
Publication: Lancaster, Pa, 2011
Subject: Physics
Source (journal): Physical Review C: Nuclear Physics. Lancaster, Pa, 1970-2015
Volume/pages: 84 (2011), no. 2, p. 1-26
ISSN: 0556-2813; 1089-490X
ISI: 000293841600001
Carrier: E-only publication
Target language: English (eng)
Affiliation: University of Antwerp
Abstract: Jet production in PbPb collisions at a nucleon-nucleon center-of-mass energy of 2.76 TeV was studied with the Compact Muon Solenoid (CMS) detector at the LHC, using a data sample corresponding to an integrated luminosity of 6.7 μb−1. Jets are reconstructed using the energy deposited in the CMS calorimeters and studied as a function of collision centrality. With increasing collision centrality, a striking imbalance in dijet transverse momentum is observed, consistent with jet quenching. The observed effect extends from the lower cutoff used in this study (jet pT = 120 GeV/c) up to the statistical limit of the available data sample (jet pT ≈ 210 GeV/c). Correlations of charged particle tracks with jets indicate that the momentum imbalance is accompanied by a softening of the fragmentation pattern of the second most energetic, away-side jet. The dijet momentum balance is recovered when integrating low transverse momentum particles distributed over a wide angular range relative to the direction of the away-side jet.
Full text (open access): https://repository.uantwerpen.be/docman/irua/1f323f/90539ad8.pdf
# 4. Machine learning for credit card fraud detection

Credit card fraud detection (CCFD) is like looking for needles in a haystack. It requires finding, out of millions of daily transactions, which ones are fraudulent. Due to the ever-increasing amount of data, it is now almost impossible for a human specialist to detect meaningful patterns from transaction data. For this reason, the use of machine learning techniques is now widespread in the field of fraud detection, where information extraction from large datasets is required [Car18, DP15, LJ20, PP19].

Machine Learning (ML) is the study of algorithms that improve automatically through experience [Bon21, FHT01]. ML is closely related to the fields of Statistics, Pattern Recognition, and Data Mining. At the same time, it emerges as a subfield of computer science and artificial intelligence and gives special attention to the algorithmic part of the knowledge extraction process. ML plays a key role in many scientific disciplines and its applications are part of our daily life. It is used for example to filter spam email, for weather prediction, in medical diagnosis, product recommendation, face detection, fraud detection, etc [Bis06, DP15].

The ability of ML techniques to effectively address the challenges raised by CCFD has led to a large and growing body of research in the last decade. As reported in Fig. 1, thousands of papers related to this topic have been published between 2010 and 2020, with about 1500 papers published in 2020 alone.

Fig. 1. Number of published articles on the topic of machine learning and credit card fraud detection between 2010 and 2020. Source: Google Scholar.

This section aims at providing an overview of this body of recent research, by summarising the main research challenges, and the key machine learning concepts that can be used to address them.

## 4.1. Recent surveys

To get a picture of the current state of research on ML for CCFD, we searched Google Scholar for all reviews and surveys made on this topic in the last five years. Using the following boolean search: ("machine learning" OR "data mining") AND "credit card" AND "fraud detection" AND (review OR survey) and restricting the search period from 2015 to 2021, we identified ten reviews/surveys which we report in the following table.

| Title | Date | Reference |
| --- | --- | --- |
| A survey of credit card fraud detection techniques: Data and technique oriented perspective | 2016 | [ZAM+16] |
| A survey of machine-learning and nature-inspired based credit card fraud detection techniques | 2017 | [AA17] |
| A survey on credit card fraud detection using machine learning | 2018 | [PC18] |
| A state-of-the-art review of machine learning techniques for fraud detection research | 2018 | [SKK18] |
| Detection of credit card fraud: State of art | 2018 | [SSB18] |
| A survey on different data mining & machine learning methods for credit card fraud detection | 2018 | [PL18] |
| A systematic review of data mining approaches to credit card fraud detection | 2018 | |
| A Comprehensive Survey on Machine Learning Techniques and User Authentication Approaches for Credit Card Fraud Detection | 2019 | [YAG19] |
| Credit Card Fraud Detection: A Systematic Review | 2019 | [PP19] |
| Credit card fraud detection using machine learning: A survey | 2020 | [LJ20] |

A set of ten surveys in five years can be considered high.
The fact that so many surveys were published in such a short period (in particular for the five surveys published in 2018) reflects the rapid evolution of the topic of ML for CCFD and the need that teams of independent researchers felt in synthesizing the state of research in this field. Given the common goal of these surveys, it is worth noting that a high degree of redundancy can be found in terms of content. In particular, they all emphasize a common set of methodologies and challenges, that we present in the next two sections. We first cover the baseline methodology, that is, the common workflow that is typically followed in papers dealing with the use of ML techniques to address CCFD. We then summarize the challenges that characterize this topic.

## 4.2. Baseline methodology - Supervised learning

A wide number of ML techniques can be used to address the problem of CCFD. This is directly reflected by the huge amount of published papers on the topic in the last decade. Despite this large volume of research work, most of the proposed approaches follow a common baseline ML methodology [Bis06, FHT01, PL18], which we summarize in Fig. 2.

Fig. 2. ML for CCFD: Baseline methodology followed by most of the proposed approaches in the recent surveys on the topic.

In credit card fraud detection, data typically consists of transaction data, collected for example by a payment processor or a bank. Transaction data can be divided into three groups [AA17, LJ20, VVBC+15]:

- Account-related features: They include for example the account number, the date of the account opening, the card limit, the card expiry date, etc.
- Transaction-related features: They include for example the transaction reference number, the account number, the transaction amount, the terminal (i.e., POS) number, the transaction time, etc. From the terminal, one can also obtain an additional category of information: merchant-related features such as its category code (restaurant, supermarket, …) or its location.
- Customer-related features: They include for example the customer number, the type of customer (low profile, high profile, …), etc.

In its simplest form, a payment card transaction consists of any amount paid to a merchant by a customer at a certain time. A set of historical transaction data may be represented as a table such as illustrated in Fig. 3. For fraud detection, it is also generally assumed that the legitimacy of all transactions is known (that is, whether the transaction was genuine or fraudulent). This is usually represented by a binary label, with a value of 0 for a genuine transaction, and a value of 1 for fraudulent transactions.

Fig. 3. Example of transaction data represented as a table. Each row corresponds to a transaction from a customer to a terminal. The last variable is the label, which indicates whether the transaction was genuine (0) or fraudulent (1).

Two stages can be distinguished in the design of an ML-based fraud detection system. The first stage consists of building a prediction model from a set of labeled historical data (Fig. 2, upper part). This process is called supervised learning since the label of the transactions (genuine or fraudulent) is known. In the second stage, the prediction model obtained from the supervised learning process is used to predict the label of new transactions (Fig. 2, lower part).
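To make the tabular representation of Fig. 3 concrete, the following minimal sketch builds a toy labeled transaction table with pandas. The column names (TX_DATETIME, CUSTOMER_ID, TERMINAL_ID, TX_AMOUNT, TX_FRAUD) and the values are illustrative assumptions, not a prescribed schema.

```python
# Toy example (illustrative column names and values) of labeled transaction
# data represented as a table, as in Fig. 3.
import pandas as pd

transactions = pd.DataFrame(
    {
        "TX_DATETIME": pd.to_datetime(
            ["2020-04-01 07:19:05", "2020-04-01 07:19:10", "2020-04-01 07:20:47"]
        ),
        "CUSTOMER_ID": [4961, 2, 4128],       # customer-related feature
        "TERMINAL_ID": [3412, 1365, 8737],    # terminal/merchant-related feature
        "TX_AMOUNT": [81.51, 146.00, 64.49],  # transaction-related feature
        "TX_FRAUD": [0, 0, 1],                # label: 0 = genuine, 1 = fraudulent
    }
)

print(transactions)
print("Fraud ratio:", transactions["TX_FRAUD"].mean())
```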
Formally, a prediction model is a parametric function with parameters $$\theta$$, also called a hypothesis, that takes an input $$x$$ from an input domain $$\mathcal{X}\subset \mathbb{R}^n$$, and outputs a prediction $$\hat{y}=h(x,\theta)$$ over an output domain $$\mathcal{Y} \subset \mathbb{R}$$ [Car18, DP15]:

$h(x,\theta): \mathcal{X} \rightarrow \mathcal{Y}$

The input domain $$\mathcal{X}$$ usually differs from the space of raw transaction data for two reasons. First, for mathematical reasons, most supervised learning algorithms require the input domain to be real-valued, that is, $$\mathcal{X} \subset \mathbb{R}^n$$, which requires transforming transaction features that are not real numbers (such as timestamps, categorical variables, etc.). Second, it is usually beneficial to enrich transaction data with other variables that may improve the detection performance of the prediction model. This process is referred to as feature engineering (also known as feature transformation, feature extraction, or data preprocessing).

For fraud detection, the output domain $$\mathcal{Y}$$ is usually the predicted class for a given input $$x$$, that is $$\mathcal{Y}=\{0,1\}$$. Given that the output class is binary, these prediction models are also called binary classifiers. Alternatively, the output may also be expressed as a fraud probability, with $$\mathcal{Y}=[0,1]$$, or more generally as a risk score, with $$\mathcal{Y} = \mathbb{R}$$, where higher values express higher risks of fraud.

The training (or building) of a prediction model $$h(x,\theta)$$ consists of finding the parameters $$\theta$$ that provide the best performance. The performance of a prediction model is assessed using a loss function, which compares the true label $$y$$ to the predicted label $$\hat{y}=h(x,\theta)$$ for an input $$x$$. In binary classification problems, a common loss function is the zero/one loss function $$L_{0/1}$$, which assigns a loss equal to one in the case of a wrong prediction, and zero otherwise:

$\begin{align} L_{0/1}: \mathcal{Y} \times \mathcal{Y} &\rightarrow \{0,1\} \\ (y,\hat{y}) &\mapsto \begin{cases} 1, & \text{if } y \ne \hat{y}\\ 0, & \text{if } y = \hat{y} \end{cases} \end{align}$

Note: The zero/one loss function is a standard loss function for binary classification problems. It is however not well suited for credit card fraud detection problems, due to the high class imbalance (many more genuine than fraudulent transactions).

Estimating the performance of a fraud detection system is a non-trivial issue, which will be covered in depth in [Chapter 4](Performance_Metrics). To obtain a fair estimate of a prediction model's performance, an important methodological practice, known as validation, is to evaluate the performance of a prediction model on data that were not used for training. This is achieved by splitting the dataset, before training, into a training set and a validation set. The training set is used for the training of the prediction model (that is, to find the parameters $$\theta$$ that minimize the loss on the training set). Once the parameters $$\theta$$ have been fixed, the loss is estimated with the validation set, which gives a better estimate of the performance that the prediction model is expected to have on future (and unseen) transactions.

Note: Particular care must be taken in practice when splitting the dataset into training and validation sets, due to the sequential nature of credit card transactions, and the delay in fraud reporting. These issues will be addressed in detail in Chapter 5.
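As a concrete (if oversimplified) illustration of the training/validation workflow described above, the sketch below fits a decision tree on synthetic data and estimates its average zero/one loss on a held-out validation set. It assumes scikit-learn is available and uses a random split purely for illustration; as the note above points out, real transaction data would call for a temporal split.

```python
# Minimal sketch of the supervised learning workflow: train a prediction
# model h(x, theta) on a training set, then estimate its zero/one loss on
# a validation set. Synthetic data; a random split is used only for
# illustration (real transactions call for a temporal split).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import zero_one_loss

rng = np.random.default_rng(0)
n = 10_000
X = rng.normal(size=(n, 5))                                    # inputs x in R^n
y = (X[:, 0] + rng.normal(scale=2.0, size=n) > 3).astype(int)  # binary labels

# Split into training and validation sets
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Training: find the parameters theta (here, the tree's split rules)
# that minimize the loss on the training set
h = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Validation: estimate the average zero/one loss on unseen data
y_pred = h.predict(X_valid)
print("Average 0/1 loss on the validation set:", zero_one_loss(y_valid, y_pred))
```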
The supervised learning procedure typically consists of training a set of prediction models and estimating their performances using the validation set. At the end of the procedure, the model that is assumed to provide the best performance (that is, the lowest loss on the validation set) is selected, and used for providing predictions on new transactions (see Fig. 2).

A wide range of methods exists for designing and training prediction models. This partly explains the large research literature on ML for CCFD, where papers usually focus on one or a couple of prediction methods. The survey from Priscilla et al. in 2019 [PP19] provides a good overview of the machine learning methods that have been considered for the problem of CCFD. Their survey covered close to one hundred research papers, identifying for each paper which ML techniques were used, see Fig. 4.

Fig. 4. Usage frequency of ML techniques in CCFD. Source: Priscilla et al., 2019 [PP19]. References given in the table are in [PP19].

The classification of learning techniques into ‘high-level’ categories is not a simple exercise since there often exist methodological, algorithmic, or historical connections among them. Priscilla et al. chose to divide approaches into four groups: supervised learning, unsupervised learning, ensemble learning, and deep learning. It could be argued that ensemble learning and deep learning are part of supervised learning since they require labels to be known. Also, deep learning and neural networks can be considered to be part of the same category.

Covering all ML techniques is out of scope for this book. Rather, our goal is to provide a reference and reproducible framework for CCFD. We decided, based on our research works, to cover five types of methods: logistic regression (LR), decision trees (DT), random forests (RF), boosting, and neural networks/deep learning (NN/DL). LR and DT were chosen due to their simplicity and interpretability. RF and boosting were chosen since they are currently considered to be state-of-the-art in terms of performance. NN/DL methods were chosen since they provide promising research directions.

## 4.3. Overview of challenges

ML for CCFD is a notoriously difficult problem. We summarise below the challenges commonly highlighted in the reviews on the topic.

Class imbalance: Transaction data contain much more legitimate than fraudulent transactions: the percentage of fraudulent transactions in a real-world dataset is typically well under 1%. Learning from imbalanced data is a difficult task since most learning algorithms do not handle well large differences between classes. Dealing with class imbalance requires the use of additional learning strategies like sampling or loss weighting, a topic known as imbalanced learning.

Concept drift: Transaction and fraud patterns change over time. On the one hand, the spending habits of credit card users are different during weekdays, weekends, vacation periods, and more generally evolve over time. On the other hand, fraudsters adopt new techniques as the old ones become obsolete. These time-dependent changes in the distributions of transactions and frauds are referred to as concept drift. Concept drift requires the design of learning strategies that can cope with temporal changes in statistical distributions, a topic known as online learning.
The concept drift problem is accentuated in practice by the delayed feedbacks (see the section Credit card fraud detection system).

Near real-time requirements: Fraud detection systems must be able to quickly detect fraudulent transactions. Given the potentially high volume of transaction data (millions of transactions per day), classification times as low as tens of milliseconds may be required. This challenge closely relates to the parallelization and scalability of fraud detection systems.

Categorical features: Transactional data typically contain numerous categorical features, such as the ID of a customer, a terminal, the card type, and so on. Categorical features are not well handled by machine learning algorithms and must be transformed into numerical features. Common strategies for transforming categorical features include feature aggregation, graph-based transformation, or deep-learning approaches such as feature embeddings.

Sequential modeling: Each terminal and/or customer generates a stream of sequential data with unique characteristics. An important challenge of fraud detection consists in modeling these streams to better characterize their expected behaviors and detect when abnormal behaviors occur. Modeling may be done by aggregating features over time (for example, keeping track of the mean frequency or transaction amounts of a customer), or by relying on sequential prediction models (such as hidden Markov models or recurrent neural networks, for example).

Class overlap: The last two challenges can be associated with the more general challenge of overlapping between the two classes. With only raw information about a transaction, distinguishing between a fraudulent and a genuine transaction is close to impossible. This issue is commonly addressed using feature engineering techniques, which add contextual information to raw payment information.

Performance measures: Standard measures for classification systems, such as the mean misclassification error or the AUC ROC, are not well suited for detection problems due to the class imbalance issue, and the complex cost structure of fraud detection. A fraud detection system should be able to maximize the detection of fraudulent transactions while minimizing the number of incorrectly predicted frauds (false positives). It is often necessary to consider multiple measures to assess the overall performance of a fraud detection system. Despite its central role in the design of a fraud detection system, there is currently no consensus on which set of performance measures should be used.

Lack of public datasets: For obvious confidentiality reasons, real-world credit card transactions cannot be publicly shared. There exists only one publicly shared dataset, which was made available on Kaggle [Kag16] by our team in 2016. Despite its limitations (only two days of data, and obfuscated features), the dataset has been widely used in the research literature, and is one of the most upvoted and downloaded on Kaggle. The scarcity of datasets for fraud detection is also true with simulated data: no simulator or reference simulated datasets are yet available. As a result, most research works cannot be reproduced, making it impossible for independent researchers to compare different techniques.
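Two of the challenges above, class imbalance and the choice of performance measures, can be illustrated with a short scikit-learn sketch on synthetic data: accuracy looks excellent even when fraud is essentially never caught, a threshold-free measure such as average precision (AUC-PR) is more informative, and class weighting is one simple way to account for the imbalance during training. This is an illustrative sketch only, not the evaluation methodology developed later in the book.

```python
# Sketch (synthetic data) of two challenges discussed above: class imbalance
# and the choice of performance measures. Assumes scikit-learn is installed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, average_precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 50_000
X = rng.normal(size=(n, 4))
# Roughly 0.5% positive (fraud) rate, mimicking a strong class imbalance
y = ((X[:, 0] + X[:, 1] > 3.5) & (rng.random(n) < 0.8)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
print("Fraud ratio in the data:", y.mean())

for weights in (None, "balanced"):  # "balanced" reweights the rare class in the loss
    clf = LogisticRegression(class_weight=weights, max_iter=1000).fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]  # fraud probabilities (risk scores)
    print(
        f"class_weight={weights}: "
        f"accuracy={accuracy_score(y_te, clf.predict(X_te)):.4f}, "
        f"average precision={average_precision_score(y_te, scores):.4f}"
    )

# Always predicting 'genuine' would already reach ~99.5% accuracy here,
# which is why accuracy alone says little about fraud detection performance.
```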
# quaternions?

I am trying to use quaternions for my animations, but I can't really get them working correctly. I made a plugin for Maya, which gets the quaternion from the item's transformation matrix. Most of the time this seems to work fine, but especially if I try to blend between multiple animations, it sometimes gives very weird results. I think the problem has something to do with the fact that you can represent each rotation by 2 different quaternions. I did a little test with a cube which rotates only around its x axis. This is a part of the quaternion keyframes I got, and there is indeed a flip of signs:

<key t = "5" value = ".7330, -.6801, 0, 0"/>
<key t = "6" value = ".6234, -.7818, 0, 0"/>
<key t = "7" value = "-.4999, .8660, 0, 0"/>
<key t = "8" value = "-.3653, .9308, 0, 0"/>

Is there a way to get the quaternions in a way that those sign flips won't occur? This is my matrix to quaternion code:

Quaternion Matrix4::ToQuaternion() const
{
    float trace = m[0][0] + m[1][1] + m[2][2];
    if(trace > 0)
    {
        float root = sqrtf(trace + 1);
        Quaternion ret;
        ret.w = root * .5f;
        root = .5f / root;
        ret.x = (m[2][1] - m[1][2]) * root;
        ret.y = (m[0][2] - m[2][0]) * root;
        ret.z = (m[1][0] - m[0][1]) * root;
        ret.Normalize();
        return ret;
    }
    else
    {
        const int next[] = {1, 2, 0};
        int i = 0;
        if(m[1][1] > m[0][0]) i = 1;
        if(m[2][2] > m[i][i]) i = 2;
        int j = next[i];
        int k = next[j];

        float root = sqrtf(m[i][i] - m[j][j] - m[k][k] + 1);
        Quaternion ret;
        ret.vec[i] = root * .5f;
        root = .5f / root;
        ret.w = (m[k][j] - m[j][k]) * root;
        ret.vec[j] = (m[j][i] + m[i][j]) * root;
        ret.vec[k] = (m[k][i] + m[i][k]) * root;
        ret.Normalize();
        return ret;
    }
}

Quote: Is there a way to get the quaternions in a way that those sign flips won't occur?

Negating the quaternion when necessary to keep the w coordinate always positive should work OK.

Quote: Original post by Fruny
Quote: Is there a way to get the quaternions in a way that those sign flips won't occur?
Negating the quaternion when necessary to keep the w coordinate always positive should work OK.

Actually, no (from experience trying this with real data sets). What you need to do is make sure the angle between consecutive quaternions is acute. If q0 and q1 are those quaternions, if Dot(q0,q1) >= 0 then interpolate q0 and q1. If Dot(q0,q1) < 0 then interpolate q0 and -q1. In my own applications, I preprocess quaternion sequences so that the dot product is always nonnegative, thus avoiding the dot-product test at run time.

Quote: Original post by Dave Eberly
Quote: Original post by Fruny
Quote: Is there a way to get the quaternions in a way that those sign flips won't occur?
Negating the quaternion when necessary to keep the w coordinate always positive should work OK.
Actually, no (from experience trying this with real data sets). What you need to do is make sure the angle between consecutive quaternions is acute. If q0 and q1 are those quaternions, if Dot(q0,q1) >= 0 then interpolate q0 and q1. If Dot(q0,q1) < 0 then interpolate q0 and -q1. In my own applications, I preprocess quaternion sequences so that the dot product is always nonnegative, thus avoiding the dot-product test at run time.

Thanks a lot man, this works perfectly.
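Dave Eberly's suggestion above (interpolate q0 with -q1 whenever Dot(q0, q1) < 0, or preprocess the keyframe sequence once so consecutive quaternions always have a non-negative dot product) can be sketched in a few lines. The snippet below is an illustrative Python/NumPy version of that preprocessing, not code from the thread; it assumes quaternions are stored as (w, x, y, z) arrays.

```python
# Illustrative sketch (Python/NumPy) of the preprocessing suggested above:
# flip the sign of a keyframe quaternion whenever its dot product with the
# previous key is negative, so interpolation never takes the "long way".
import numpy as np

def make_keys_consistent(keys):
    """keys: (N, 4) array of unit quaternions (w, x, y, z) in keyframe order."""
    keys = np.asarray(keys, dtype=float).copy()
    for i in range(1, len(keys)):
        # q and -q represent the same rotation; keep the sign that makes the
        # angle to the (already adjusted) previous key acute.
        if np.dot(keys[i - 1], keys[i]) < 0.0:
            keys[i] = -keys[i]
    return keys

keys = np.array([
    [ 0.7330, -0.6801, 0.0, 0.0],  # t = 5
    [ 0.6234, -0.7818, 0.0, 0.0],  # t = 6
    [-0.4999,  0.8660, 0.0, 0.0],  # t = 7  (exported with flipped sign)
    [-0.3653,  0.9308, 0.0, 0.0],  # t = 8
])
print(make_keys_consistent(keys))  # keys at t = 7 and t = 8 come out negated
```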
# Mesh rendering performance optimisation

I'm working on a libgdx implementation of mesh terrain and running into some performance issues. The terrain is represented by a number of mesh tiles; each mesh is made of vertices laid onto a 2D plane. The implementation of the meshes is done via libgdx Mesh, each of which is cached after its initial generation. I use GL3.0 and therefore the vertices are handled via VertexBufferObjectWithVAO which, as I understand it, should allow GPU caching. Meshes are indexed.

Aiming to optimise performance, I have tried to increase the number of vertices in each mesh (while keeping the same overall amount of vertices), but weirdly the performance gets worse rather than improving.

Question 1: any possible reasons why, given the same total number of vertices, the scenario with the lower amount of meshes (#3 below) is slower than the scenarios with a higher number of meshes?

Question 2: based on the OPENGL pipeline summarised below, is it correct to assume that VBOs are being transferred to the GPU once and then drawn via GPU memory reference?

Performance comparison

1. 1,600 meshes * 3,042 vert (4.8M vertices) -> 131 FPS
2. 625 meshes * 11,250 vert (4.5M vertices) -> 132 FPS
3. 100 meshes * 45,000 vert (4.5M vertices) -> 113 FPS

Hardware Details

GTX660 2GB. Memory utilisation during test: 70% in all scenarios. Vertex allocation memory impact seems to be negligible compared to textures.

OPENGL pipeline

From API TRACE, this is the frame life-cycle in summary:

mesh generation (one off)
glGenBuffers()
glGenVertexArrays()

render (every frame)
glClear(GL_COLOR_BUFFER_BIT)
glEnable(GL_BLEND)
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
glUseProgram(...)
glUniformMatrix4fv(...)
glUniform1f(...)
glActiveTexture(...)
glBindTexture(...)
...
glBindVertexArray(...)
glBindBuffer(...)
glDrawElements(...) [for each mesh]

*edited to clarify that I have tried to group the vertices into a smaller number of meshes to reduce the number of draw calls
**edited to provide more data and streamline questions.

• Please edit to clarify exactly what you mean by increasing the size of the mesh (i.e. increasing the vertex count, increasing the distance between vertices, etc). – Pikalek Feb 14 at 16:00
• Are you using a Vertex Only draw call, or are you also using Indices? The reason I ask is that for a uniform grid, you will usually end up calculating the same vertex up to 6 times in the vertex buffer, as you will have 6 triangles sharing the same point. Also, depending on the size of the triangles, you will start getting pipeline issues if the triangles become either the same size as or smaller than your resolution. LOD helps, and using index buffers in this case also assists LOD (you can have a number of LOD levels associated to your mesh). – ErnieDingo Feb 14 at 21:44
• @ErnieDingo Meshes are indexed with shared vertices between triangles. – Adunato Feb 15 at 12:19
# De-Lifting Lemma, does it hold? [closed]

Let $\sigma$ denote an independent simultaneous substitution. Now I wonder if the following holds:

If $\Gamma \vartriangleright (A\ (\sigma\ \tau))\ \rho$ then there are $\psi$, $\phi$ such that $\Gamma \vartriangleright (A\ \sigma)\ \psi$ and $\psi\ \phi = \tau\ \rho$.

Is this true in Prolog?

Best Regards

-

## closed as not a real question by Andres Caicedo, Dan Petersen, Ryan Budney, S. Carnahan♦ Oct 22 '11 at 9:32

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center. If this question can be reworded to fit the rules in the help center, please edit the question.

I did not downvote. But your question provides no motivation or evidence —apart from your wondering— and all the notation used is undefined. While it is of course not necessary that everything be understandable to everyone here, people will express with their votes whether they thought the question (and answers) were useful for them in one of a myriad different ways, or not. There is no courage involved in these matters, this is not the far west! – Mariano Suárez-Alvarez Oct 22 '11 at 0:16

I am closing the question because the text provides essentially no context. You may interpret this as a certain success, if you wish. – S. Carnahan Oct 22 '11 at 9:32

By the way, I strongly recommend against deleting comments in a conversation where other people made a good-faith effort to reply. – S. Carnahan Oct 22 '11 at 9:35

I have posted a delete request here: tea.mathoverflow.net/discussion/1182/… – Countably Infinite Oct 22 '11 at 21:40

OK, so $\Gamma$ is a set of universal Horn sentences, and $A\sigma$ is a conjunction of atomic formulas. In theory, Prolog is supposed to find substitutions $\rho$ which make the query provable from $\Gamma$, and moreover, such that any other such substitution is less general than one of those which are output. (Essentially, for each admissible propositional skeleton of a resolution proof, it outputs the most general unifier that makes it a proof.) In this model, the property holds: since $(A\sigma\tau)\rho=(A\sigma)(\tau\rho)$ is entailed by $\Gamma$, $\tau\rho$ must factor through one of the substitutions, call it $\psi$, output for $A\sigma$.

However, this model does not describe actual Prolog, which is neither sound nor complete from the logical point of view. It is not sound, because due to the lack of “occurs check”, most implementations will happily unify terms that are not unifiable, thereby proving formulas that are not provable. I will ignore this problem, as any Prolog program whose result depends on the presence or absence of occurs check is invalid (not conforming to the language standard).

Prolog is also incomplete, as it uses a deterministic proof search strategy which may get lost in a cycle before having a chance of finding a valid proof, or the substitution we are looking for. This makes the “de-lifting” property fail. Here’s an example (using the notation from the comments): $\sigma=[\\ ]$, $A=p(X)$, $\tau=[X=a]$, $\rho=[\\ ]$

?- listing(p).
p(f(A)) :- p(f(A)).
p(a).

Yes
?- X=a, p(X).
X = a
Yes
?- p(X).

(The second query enters an infinite loop.)

Here is another example, where the query is answered, but the needed substitution is never output:

?- listing(p).
p(b).
p(f(A)) :- p(A).
p(a).

Yes
?- X=a, p(X).
X = a
Yes
?- p(X).
X = b ; X = f(b) ; X = f(f(b)) ; X = f(f(f(b))) ; X = f(f(f(f(b)))) ; X = f(f(f(f(f(b))))) ; ... - Then I really don’t understand your notation. Would you care to clarify your question, starting with specifying what each of the undefined symbols ($\Gamma$, $A$, ...) stands for? And what does “Prolog search with substitution” mean? –  Emil Jeřábek May 23 '11 at 19:14 You know, it would really improve matters if you reformulated the original question in a clear, coherent and unambiguous way, instead of this game of having people second-guess what you think and then point out where they failed. People are notoriously incapable of reading other people’s minds. I find it a bit rude that you expect other to do all the work for you, and you can’t be bothered even to properly express what is it that you actually want. Having said that, the last version of your question seems to be already positively answered by the first two paragraphs of my answer. –  Emil Jeřábek May 30 '11 at 11:57 A unifier is, by definition, a substitution which makes the entailment hold. A complete set of unifiers is, by definition, a set of unifiers such that every other unifier is a substitution instance of one of them. The abstract model of Prolog is that it outputs a complete list of unifiers. So, the property you want holds essentially by definition, there is nothing to prove. Do you get it? –  Emil Jeřábek Oct 19 '11 at 11:57 Do you understand the difference between “there exists” and “for all”? I wrote that $\tau\rho\le\psi_i$ for some $i$, not for all $i$. A unifier of a query $A$ is a substitution $\tau$ such that $A\tau$ is provable from the initial set of Horn clauses (the program), which I also have already written above. I have no idea what “general sense” are you talking about now, but the “de-lifting” lemma is trivial. –  Emil Jeřábek Oct 19 '11 at 17:38 I imagine there were other comments, that were later deleted? –  Mariano Suárez-Alvarez Oct 22 '11 at 0:11