At Goldman, Partners Are Made, and Unmade

On Wall Street, becoming a partner at Goldman Sachs is considered the equivalent of winning the lottery. This fall, in a secretive process, some 100 executives will be chosen to receive this golden ticket, bestowing rich pay packages and an inside track to the top jobs at the company.

What few outside Goldman know is that this ticket can also be taken away. As many as 60 Goldman executives could be stripped of their partnerships this year to make way for new blood, people with firsthand knowledge of the process say. Inside the firm, the process is known as “de-partnering.” Goldman does not disclose who is no longer a partner, and many move on to jobs elsewhere; some stay, telling few of their fate.

“I have friends who have been de-partnered who are still there, and most people inside think they are still partners,” said one former Goldman executive, who spoke only on the condition of anonymity. “It is something you just don’t talk about.”

Goldman has roughly 35,000 employees, but only 375 or so partners. The former Treasury Secretaries Henry M. Paulson Jr. and Robert E. Rubin, and former Gov. Jon S. Corzine of New Jersey, now chief executive of the financial firm MF Global, were all partners. It can take years to make partner, and being pushed from the inner circle can be wrenching.

“Being partner at Goldman is the pinnacle of Wall Street; if you make it, you are considered set for life,” said Michael Driscoll, a visiting professor at Adelphi University and a senior managing director at Bear Stearns before that firm collapsed in 2008. “To have it taken away would just be devastating to an individual. There is just no other word for it.”

The financial blow can be substantial as well. Executives stripped of partnership would retain their base salary, roughly $200,000, but their bonuses could be diminished, potentially costing them millions of dollars in a good year.
While gaining the coveted status of partner, and then losing it, is certainly not unheard-of at private financial and law firms on Wall Street, Goldman’s partnership process stands out for its size and intricacy. Goldman weeds out partners because it is worried that if the partnership becomes too big, it will lose its cachet and become less of a motivational tool for talented up-and-comers, people involved in the process say. If too many people stay, it creates a logjam. The average tenure of a partner is about eight years, in part because of natural attrition and retirements. Goldman insiders also point to what they call an “up-and-out” culture, which leads to active management of the partnership pool.

[Photo: The new Lower Manhattan headquarters for Goldman Sachs, where window views will be at a premium. Credit: Rob Bennett for The New York Times]

The process of vetting new candidates for partner and deciding which existing partners must go began in earnest in recent weeks, according to people with knowledge of the process, which takes place every two years. They spoke on the condition of anonymity. The 2010 partners will most likely be announced in November.

Candidates are judged on many qualities, primarily their financial contribution to the firm. But lawyers and risk managers — who are not big revenue producers — can also make it to the inner circle. The executives responsible for running the partner process this year are the vice chairmen, J. Michael Evans, Michael S. Sherwood and John S. Weinberg; the head of human resources, Edith W. Cooper; and the bank’s president, Gary D. Cohn.

Goldman typically removes 30 or so partners every two years, said those people who described the process. The number is expected to be significantly higher this year because fewer senior executives have left the firm as a sluggish economy and uncertain markets limit their opportunities elsewhere. Removing partners like this is unique to Goldman among publicly traded firms.
When companies go public, they shed the private partnership system, and ownership of the company is transferred to shareholders. Goldman’s ownership was also transferred to shareholders, but it created a hybrid partner model as an incentive for employees.

Those whom Goldman does not want to keep are likely to be quietly told in the coming weeks. Each situation is handled differently, the people with knowledge of the process say. Some partners are given time to find other jobs outside the firm. Others are told they will not be made partner and are asked to consider what they want to do next within the company. While Goldman is on track to remove many more executives than usual, the process is in its early stages and no final decisions have been made, these people caution. A Goldman spokesman declined to comment on how it selects and removes partners.

The process is at the heart of Goldman’s culture, a way for the firm to reward and retain top talent. Goldman was one of the last of the big Wall Street partnerships to go public, selling shares in 1999. When it was private, the partners were the owners, sharing in the profits, and in some cases having to put in money to shore up losses. To retain that team spirit as a public company, Goldman continued to name partners. In 1999, there were 221.

Yet there are differences from past practices. When Goldman was a private partnership, it was rare that a partner would be asked to leave.

[Photo: Former partners at the firm include, from top, the former Treasury Secretaries Henry M. Paulson Jr. and Robert E. Rubin, and New Jersey’s former governor, Jon S. Corzine. Credit: from top, Andrew Harrer/Bloomberg News (2); David Goldman for The New York Times]

“Once you made partner, you typically retired as a partner,” said another former Goldman executive who used to be involved in the process.
“If we asked someone to leave, it was because we had really screwed up and the person wasn’t pulling their weight.”

It has been a rough year for Goldman. In July, it paid $550 million to settle civil fraud accusations that it had duped clients by selling mortgage securities while failing to make critical disclosures. The firm did not admit or deny guilt.

Still, even in the worst of years, the chance to ascend into the private partnership at Goldman is a huge honor. Candidates can be up for partnership two or even three times before finally being chosen. Partners get investment opportunities not offered to other employees, and are typically the highest paid at the firm. Goldman will even book tables for them at fashionable New York restaurants.

A big payday is not guaranteed, however. When the firm does not do well, partners tend to bear the brunt of it. Top Goldman executives did not receive bonuses in 2008, the peak of the financial crisis. But in 2007, a banner year for Goldman, the firm set aside $20.19 billion for compensation and benefits, and its chief executive, Lloyd C. Blankfein, took home $68.5 million in stock and cash.

Candidates for partner are vetted by current partners. The review process is known inside Goldman as “cross-ruffing,” a reference to a maneuver in bridge. A few hundred people are typically nominated within the whole company, and the number is eventually whittled down to about 100. Each department compiles a list of potential candidates, with photos and performance reviews. Partners in another department review it. Candidates are not interviewed, and in many cases are unaware they are even up for partner.

When final decisions are made, it is usually Mr. Blankfein who breaks the good news to the new partners. Few candidates ever find out why they missed the cut. And Goldman announces only inductees, not those who have been removed. Still, there may be a few telltale signs this year.
Goldman recently moved to a new building, just steps away from the Hudson River in Lower Manhattan. Outer offices are hard to come by, and typically given only to partners. Goldman insiders are already speculating that de-partnered executives who decide to stay will have to give up their window view.

A version of this article appears in print on September 13, 2010, on page A1 of the New York edition with the headline: At Goldman, Partners Are Made, and Unmade.
Introduction {#s1}
============

The claustrum is present in nearly all mammalian lineages (Kowianski et al., [@B24]), but its behavioral functions have not been elucidated because of its unusual geometry. Relatively narrow with a long rostrocaudal extent, the claustrum is difficult to study with standard lesion or recording techniques in a behavioral paradigm. Neuronal tracing techniques, however, have revealed many aspects of claustral circuitry, and most views about claustral functions are based on its cortical connectivity (Edelstein and Denaro, [@B14]; Crick and Koch, [@B13]; Smythies et al., [@B44]), which includes several unique interhemispheric projections (Minciacchi et al., [@B31]; Li et al., [@B26]; Sloniewski et al., [@B39]; Sadowski et al., [@B37]).

Using physiology-based tracing techniques in rats, we recently reported that the M1 whisker (M1-Wh) region projects strongly to the contralateral claustrum, but only weakly to the ipsilateral claustrum (Alloway et al., [@B4]; Colechio and Alloway, [@B12]; Smith and Alloway, [@B41]; Smith et al., [@B43]). While the M1-Wh region does not receive reciprocal feedback projections from the contralateral claustrum, it is strongly innervated by the ipsilateral claustrum. By contrast, claustral connections with the M1 forelimb regions are comparatively sparse and are exclusively ipsilateral. In addition, the whisker region in S1 barrel cortex is innervated by the ipsilateral claustrum even though S1 cortex does not project to the claustrum in either hemisphere. These findings are significant because exploratory whisking is an active sensory process that requires attention and is bilaterally coordinated for the purpose of acquiring tactile information about the spatial features of the local environment (Towal and Hartmann, [@B48]; Mitchinson et al., [@B32]).
By comparison, rodent forelimb movements are rarely if ever used to perceive the spatial features of three-dimensional space, but are mainly concerned with supporting and moving the body through space. The discovery of an interhemispheric claustrum-based pathway that connects the cortical regions that process whisker-related information prompted us to hypothesize that the claustrum should have similar circuit connections with the visual system. Like whisking behavior, exploratory eye movements require attention and are concerned with actively acquiring visual information to perceive a broad spatial region (Chelazzi et al., [@B11]; Andrews and Coppola, [@B6]; Wallace et al., [@B51]).

In rats the claustrum receives a few projections from visual area 18b, but virtually none from area 17 (Miller and Vogt, [@B29]; Carey and Neal, [@B10]). The ventral part of the rat claustrum projects to visual cortex (Li et al., [@B26]; Sadowski et al., [@B37]), but whether the claustrum has afferent or efferent connections with the frontal eye field (FEF) remains unknown. Indeed, no data indicate whether the rat claustrum is part of a disynaptic interhemispheric circuit that could coordinate the FEF and primary visual cortex (V1) areas. Therefore, to test this hypothesis, we injected anterograde and retrograde tracers into physiologically-defined sites in FEF and V1. We compared the results, along with unreported data from our previous rat study (Smith and Alloway, [@B41]), to tracing data accessible from the Allen Mouse Brain Connectivity Atlas. Our findings indicate that the claustrum is part of an interhemispheric circuit that enables the FEF in one hemisphere to transmit the same information to the V1 and FEF cortical areas in the other hemisphere.

Materials and methods {#s2}
=====================

Anatomical tracing experiments were performed on three adult male Sprague-Dawley rats (Charles River) weighing 300--350 g.
All procedures conformed to National Institutes of Health standards and were approved by Penn State University\'s Institutional Animal Care and Use Committee.

Animal surgery
--------------

Rats were initially anesthetized via intramuscular (IM) injection of a mixed solution of ketamine HCl (40 mg/kg) and xylazine (12 mg/kg). Additional IM injections of atropine methyl nitrate (0.5 mg/kg) to limit bronchial secretions, dexamethasone sodium phosphate (5 mg/kg) to reduce brain swelling, and enrofloxacin (2.5 mg/kg) to prevent infection were given before intubating the trachea through the oral cavity and ventilating the rat with oxygen. After placing the animal in a stereotaxic instrument, its heart rate, respiratory rate, end-tidal carbon dioxide, and blood oxygen were monitored (Surgivet) throughout the experimental procedure. Body temperature was regulated by a rectal probe attached to a homeothermic blanket placed on the dorsal side of the animal; a hot water blanket was placed underneath the rat as well. Ophthalmic ointment was applied to prevent corneal drying. After injecting bupivacaine into the scalp, a midline incision was performed to visualize the cranium, and a ground screw was inserted into a craniotomy over the cerebellum. Craniotomies were also made over motor cortex (1--3 mm rostral, 0.5--3 mm lateral to bregma) and visual cortex (5--7 mm caudal, 3--5 mm lateral to bregma) in both hemispheres according to coordinates in Paxinos and Watson ([@B34]).

Intracranial microstimulation
-----------------------------

Intracranial microstimulation (ICMS) was performed under ketamine-xylazine anesthesia to map the sites in motor cortex that produce forepaw, whisker, or eye movements. Following microstimulation mapping, the anesthetic state was maintained with \~1% isoflurane. Cortical stimulation was administered through \~1 MΩ saline-filled glass pipettes. Both short (80-ms, 250 Hz) and long (1-s, 100 Hz) pulse trains were administered.
A biphasic constant current source (Bak Electronics, BSI-2) was used to test current levels of 10--250 μA to identify the lowest threshold at each site capable of eliciting a movement. Stimulation was conducted at multiple sites in each animal so that tracer injections could be centralized within the target region to avoid tracer leakage into surrounding representations. The stereotaxic coordinates that evoked movements were similar to previous reports (Hall and Lindholm, [@B18]; Neafsey et al., [@B33]; Hoffer et al., [@B19]; Brecht et al., [@B9]; Haiss and Schwarz, [@B17]). Electrodes were positioned orthogonal to the pial surface and inserted to depths (\~1 mm) that correspond to layer V, which contains corticobulbar and corticospinal neurons. The electrode was initially placed 2--3 mm lateral to the midline to identify the forepaw representation (M1-Fp). More medial sites (1--2 mm lateral) evoked brief whisker retractions (M1-Re) during 80-ms stimulation trains. At the most medial coordinates (\~1 mm lateral), the electrode was advanced deeper to determine the motor representations in the medial bank of frontal cortex. At sites located 1.5--3.0 mm rostral, stimulation at depths 1.5--2.5 mm below the pial surface evoked eye movements visible to the naked eye. Further caudally, 1-s long train stimulation evoked repetitive rhythmic whisker movements at M1 (M1-RW) sites located 0.5--1.7 mm rostral to bregma. Whisker movements at M1-RW sites were frequently bilateral (Haiss and Schwarz, [@B17]). Both FEF and M1-RW are located deep in the medial bank of frontal cortex, but they have distinct domains along the rostrocaudal axis.

Extracellular neuronal recordings
---------------------------------

To identify sites in primary visual cortex (V1), the same electrodes used for ICMS mapping were used to record neuronal activity in visual cortex.
After disconnecting the electrode from the constant current source, it was connected to the headstage of a Dagan amplifier (Model 2200) so that extracellular discharges could be amplified, bandpass filtered (300--3000 Hz), and monitored with an oscilloscope and acoustic speaker. Electrodes were placed at stereotaxic coordinates (5.0--7.0 mm caudal to bregma, 2.0--4.0 mm lateral) that correspond to V1 (Paxinos and Watson, [@B34]), and were advanced \~400 μm into the brain to reach layer IV. Neuronal responses to visual stimulation were tested by manipulating a handheld blue LED in different directions over the ipsilateral and contralateral eyes to identify responsive areas corresponding to the monocular or binocular regions of V1. Because this procedure may not distinguish V1 from adjacent visual areas, injection sites in V1 were verified by cytoarchitectonic criteria (see Results).

Tracer injections
-----------------

Tracers were injected either iontophoretically or by pressure. For anterograde tracing, 15% solutions of FluoroRuby (FR; D-1817, Invitrogen) or biotinylated dextran amine (BDA; D-7135, Invitrogen) in 0.01 M phosphate buffered saline (PBS) were used. For retrograde tracing, 2% solutions of True Blue chloride (TB; T-1323, Invitrogen) or FluoroGold (FG; H-22845, Fluoro-Chrome) were used. The FEF received iontophoretic injections of BDA or FG from glass pipettes (\~30 μm tip). A retention current (-7.0 μA) was used to limit tracer leakage while advancing the pipette to its injection depth, where the retention current was turned off and positive current pulses of 2--5 μA (7 s on/off duty cycle) were applied for 10--20 min to eject the tracer at two depths separated by 300 μm. In one rat, a mixture of FG and BDA was iontophoretically ejected in FEF. Visual cortex received pressure injections of FR or TB from Hamilton syringes in which glass pipettes (\~50 μm diameter tips) were cemented on the end of the needle.
A summary of the tracer injections is in Table [1](#T1){ref-type="table"}.

###### **Summary of tracer injections from current study and previously published data (Smith and Alloway, [@B41])**.

| **Case** | **Left hemisphere**  | **Right hemisphere**  |
|----------|----------------------|-----------------------|
| TI-14    | V1 (FR)              | FEF (FG/BDA), V1 (TB) |
| TI-15    | FEF (BDA), V1 (FR)   | FEF (FG), V1 (TB)     |
| TI-16    | FEF (BDA), V1 (FR)   | FEF (FG), V1 (TB)     |
| CL-01    | M1-Re (FR)           | M1-Re (FG)            |
| CL-02    | M1-Re (FR)           | M1-Re (FG)            |
| CL-03    | M1-Fp (FR)           | M1-Fp (FG)            |
| CL-04    | M1-Fp (FR)           | M1-Fp (FG)            |
| CL-05    | M1-Re (FR)           | M1-Re (FG)            |
| CL-06    | M1-Fp (FR)           | M1-Fp (FG)            |
| CL-21    | M1-RW (FR)           | M1-RW (FG)            |
| CL-22    | M1-RW (FR)           | M1-RW (FG)            |
| CL-23    | M1-RW (FR)           | M1-RW (FG)            |

*Anterograde tracers: BDA, biotinylated dextran amine; FR, FluoroRuby*. *Retrograde tracers: FG, FluoroGold; TB, True Blue chloride*.

Following tracer injections, the skin was sutured and treated with antibiotic ointment. Each animal received additional doses of atropine, dexamethasone, and enrofloxacin. Animals were returned to single-housed cages for a 7--10 day survival period to allow for tracer transport.

Histology
---------

Rats were deeply anesthetized with IM injections of ketamine (80 mg/kg) and xylazine (18 mg/kg) and perfused transcardially with heparinized saline, 4% paraformaldehyde, and 4% paraformaldehyde with 10% sucrose. Brains were removed and stored in 4% paraformaldehyde and 30% sucrose at 4°C until saturated. All brains were sectioned bilaterally into 60-μm slices using a freezing microtome, with a slit in the left hemisphere (ventral to the rhinal fissure) to allow proper orientation when mounting. Serially-ordered sections were divided into three series. The first series was mounted on gelatin-coated slides, and then dried and stained with thionin acetate to reveal cytoarchitecture.
The second series was processed to visualize BDA using a heavy metal-enhanced horseradish peroxidase immunohistochemical reaction as previously described (Kincaid and Wilson, [@B23]; Smith et al., [@B42]). Briefly, sections were first washed in 0.3% H~2~O~2~ to degrade endogenous enzyme activity, rinsed in two 0.3% Triton X-100 (TX-100) washes, and then incubated for 2 h in an avidin-biotin horseradish peroxidase solution mixed in 0.3% TX-100. Sections were then washed twice in 0.1 M PBS and incubated in 0.05% DAB, 0.0005% H~2~O~2~, 0.05% NiCl~2~, and 0.02% CoCl~2~ in 0.1 M tris buffer (pH = 7.2) for 10 min. Two subsequent washes in 0.1 M PBS stopped the reaction. Following immunohistochemistry to visualize BDA, sections were mounted on gelatin-coated slides, dried overnight, dehydrated in ethanol, cleared in xylene, and coverslipped with Cytoseal. The third series was directly mounted, dried, dehydrated, defatted, and coverslipped to visualize the fluorescent tracers alone.

Anatomical analysis
-------------------

All tissue was inspected with an Olympus BH-2 microscope equipped for both brightfield and fluorescent microscopy. Terminals labeled with BDA were visualized with brightfield illumination, whereas TB and FG labeling were visualized with a near-UV filter (11000v2; Chroma Technologies), and a TRITC filter (41002, Chroma Technologies) was used for FR labeling. Labeled soma and terminal synapses were plotted and digitally reconstructed using optical transducers attached to the microscope stage (MDPlot, Accustage). For anterograde tracers, beaded varicosities on the axonal terminals were plotted because they represent en passant synapses (Voight et al., [@B50]; Kincaid and Wilson, [@B23]). For retrograde tracers, only labeled cells with dendrites were plotted. Digital photomicrographs of brightfield and fluorescent labeling were acquired with a Retiga EX CCD digital camera mounted on the BH-2 microscope.
Additional images were obtained with an Olympus FV1000 laser scanning confocal microscope using a 60× oil immersion objective. For TB (405 nm excitation, 410--460 nm emission) and FG (405 nm excitation, 520--600 nm emission), sections were scanned sequentially to demonstrate both single- and double-labeled neurons and were then merged to produce a composite image.

Quantitative analysis of tracer reconstructions was performed using MDPlot software (version 5.1; Accustage). Analysis of the claustrum was confined to sections that contained the striatum because more rostral levels do not contain the claustrum-associated Gng2 protein (Mathur et al., [@B28]). After the sections were plotted, a grid of 50 μm^2^ bins was superimposed on the reconstructions. Bins containing at least four labeled terminals and one labeled neuron were classified as containing overlapping tracer labeling. Analyses of BDA-FG and BDA-TB overlap were performed separately. The number of overlapping bins was expressed as a percentage of the total number of bins that contained tracer labeling. Statistical analysis was performed using Origin software (version 8.0; Origin Lab). Because BDA processing diminishes the intensity of fluorescence, the third series, which was processed for fluorescence but not BDA, was used to count FG- and TB-labeled and double-labeled neurons.

In addition to our own neuroanatomical tracing experiments, corticoclaustral connectivity in mice was analyzed by accessing data in the Allen Mouse Brain Connectivity Atlas. The analyzed cases were chosen based on the Allen Brain Institute\'s designation of cortical injection site area. We chose homologous cytoarchitectonic regions and confirmed the functional representation of these regions based on labeling patterns in subcortical structures (see Results).
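The bin-based overlap analysis lends itself to a short sketch. The following is an illustrative reimplementation, not the authors' MDPlot analysis: the 50-μm bins (treated here as 50 μm on a side) and the four-terminal/one-soma thresholds come from the text, while the function name and the synthetic coordinates are invented for the example.

```python
from collections import Counter

import numpy as np

def overlap_percentage(terminal_xy, soma_xy, bin_um=50.0,
                       min_terminals=4, min_somata=1):
    """Percent of labeled grid bins that show terminal-soma overlap."""
    # Assign each plotted point (in microns) to a square grid bin.
    t_counts = Counter(tuple(b) for b in
                       np.floor(np.asarray(terminal_xy) / bin_um).astype(int))
    s_counts = Counter(tuple(b) for b in
                       np.floor(np.asarray(soma_xy) / bin_um).astype(int))
    # A bin is 'overlapping' when it holds at least min_terminals labeled
    # varicosities and at least min_somata labeled cell bodies.
    overlap = {b for b, n in t_counts.items()
               if n >= min_terminals and s_counts.get(b, 0) >= min_somata}
    # Express overlap relative to every bin containing any tracer labeling.
    labeled = set(t_counts) | set(s_counts)
    return 100.0 * len(overlap) / len(labeled) if labeled else 0.0
```

Running the BDA-FG and BDA-TB comparisons separately, as in the text, just means calling the function once per terminal/soma tracer pairing.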
Results {#s3}
=======

To compare the claustral connections with FEF and V1, three rats received different anterograde and retrograde tracers in the FEF and V1 of the left and right hemispheres (Table [1](#T1){ref-type="table"}). Combining different tracer injections in the same animal allowed us to quantify tracer overlap in the claustrum bilaterally and determine the relative strength of corticoclaustral and claustrocortical connections with FEF and visual cortex. In the first rat, a combined solution of FG and BDA was iontophoretically injected into the FEF of the right hemisphere, whereas FR and TB were separately injected into V1 of the left and right hemispheres, respectively. In the other two rats, BDA and FR were separately injected into respective sites in FEF and V1 of the left hemisphere, whereas FG and TB were separately injected into respective sites in FEF and V1 of the right hemisphere.

Projections from FEF
--------------------

In agreement with previous reports (Neafsey et al., [@B33]; Brecht et al., [@B9]; Haiss and Schwarz, [@B17]), cortical sites that evoked eye movements were consistently found at coordinates in the cingulate (Cg) cortex. As shown by Figures [1](#F1){ref-type="fig"}, [2](#F2){ref-type="fig"}, tracer injections at these sites were largely confined to Cg cortex, but some tracer occupied the most medial part of the medial agranular (med-AGm) cortex. While FEF is rostral to the M1 sites that evoke rhythmic whisking movements, both FEF and M1-RW reside in Cg and, possibly, the most medial part of AGm (med-AGm) as defined by cytoarchitectonic criteria.

![**Case TI-16 demonstrates that the FEF projects to claustrum and other forebrain regions. (A)** Nissl-stained section through the lateral agranular (AGl), medial agranular (AGm), and cingulate (Cg) cortices. **(B)** Deposit of biotinylated dextran amine (BDA) at an M1 site in Cg cortex that evoked eye movements and produced labeled terminals in the contralateral Cg cortex.
**(C--F)** The BDA deposit produced labeled terminals in the dorsomedial neostriatum (NS) and ventral claustrum (vCLA) in the left **(C,D)** and right **(E,F)** hemispheres. **(G)** Nissl-stained section of thalamus used to identify BDA-labeled projections **(G′)** in the anterior medial (AM), interanteromedial (IAM), mediodorsal (MD), reuniens (Re), ventromedial (VM), and ventroanterior (VA) nuclei. Box corresponds to **(I)**. **(H--J)** Terminal labeling was densest in the AM and MD nuclei. Boxes in **(I)** indicate **(H,J)**. ec, external capsule; lv, lateral ventricle; sm, stria medullaris; AV, anteroventral; AD, anterodorsal; CM, centromedial; Rt, reticular nucleus; VL, ventrolateral. Numbers in **(B--G)** indicate distance from bregma in millimeters. Scale bars: 500 μm in **(A,G)**; 250 μm in **(C,I)**; 100 μm in **(H)**.](fnsys-08-00093-g0001){#F1}

![**Corticoclaustral projections from FEF and V1 in case TI-15. (A--B′)** Left hemisphere injections of BDA in FEF **(A,A′)** and FluoroRuby (FR) in V1 **(B,B′)** of the same rat. BDA labeling appeared bilaterally in vCLA **(D,F)**, but was noticeably denser on the contralateral side **(F)**. Sparse FR labeling was apparent only in the ipsilateral vCLA **(D′)**. Boxes in **(C)** and **(E)** correspond to **(D,D′)** and **(F,F′)**, respectively. Red arrowheads demarcate the dorsal claustrum (dCLA) from the vCLA. Black and white arrowheads denote common blood vessels. Scale bars: 500 μm in **(A)**; 250 μm in **(C)**; 100 μm in **(D)**.](fnsys-08-00093-g0002){#F2}

Many BDA-labeled projections from FEF terminated in visual cortex and brainstem regions such as the dorsomedial superior colliculus, periaqueductal gray, oculomotor complex, and the pontine reticular formation.
These results corroborate studies that placed the rodent FEF in the Cg/med-AGm region on the basis of its connections with oculomotor-related nuclei in the brainstem (Leichnetz et al., [@B25]; Stuesse and Newman, [@B47]; Bosco et al., [@B8]; Guandalini, [@B16]). Labeled projections from FEF terminated in the contralateral Cg and other forebrain structures in both hemispheres, including the dorsomedial neostriatum, ventral claustrum (vCLA), and thalamus (see Figure [1](#F1){ref-type="fig"}). These patterns are similar, but not identical, to projections from the M1 whisker regions (Alloway et al., [@B2], [@B4]). While projections from FEF terminate more medially in the neostriatum than those from M1-Wh, both motor regions project to numerous thalamic regions including the anteromedial (AM), interanteromedial (IAM), paracentral (PC), centrolateral (CL), parafascicular (Pf), reuniens (Re), ventral anterior (VA), and ventromedial (VM) nuclei. The FEF also projects to the ipsilateral mediodorsal (MD) nucleus and, to a lesser extent, to the contralateral MD (Figures [1G′--J](#F1){ref-type="fig"}), and these projections to MD appear homologous to the FEF projections in primates (Stanton et al., [@B46]; Sommer and Wurtz, [@B45]).

Inspection of the claustrum in both hemispheres revealed dense projections from the contralateral FEF. As seen in both Figures [1](#F1){ref-type="fig"}, [2](#F2){ref-type="fig"}, BDA injections in FEF produced dense terminal labeling in a large part of the contralateral vCLA, but produced noticeably weaker labeling in a smaller area of the ipsilateral vCLA. These corticoclaustral projections from FEF are remarkably similar to the pattern of corticoclaustral projections that originate from the M1 whisker regions (Smith and Alloway, [@B41]).

Projections from V1
-------------------

We used physiology, cytoarchitecture, and demarcations in the Paxinos and Watson ([@B34]) atlas to confirm the injections in V1.
Cytoarchitecturally, V1 is characterized by a prominent granular layer IV, which is not present in the surrounding medial and lateral secondary visual cortices, areas 18a and 18b (Miller and Vogt, [@B30]). Substantial amounts of transported tracer in the lateral geniculate nucleus (LGN) of the thalamus further confirmed our injections into V1 (data not shown).

Examination of the claustrum in both hemispheres revealed very sparse projections from V1 cortex. In fact, as shown in Figure [2](#F2){ref-type="fig"}, the few labeled projections from V1 to the claustrum that could be detected were generally located in the ipsilateral hemisphere. By contrast, projections from V1 were observed in several subcortical structures, and many of these overlapped with the projections from FEF. Labeled projections from FEF and V1 overlapped ipsilaterally in the dorsomedial neostriatum, superior colliculus, and the PC, CL, and lateroposterior (LP) thalamic nuclei. Non-overlapping projections from V1 and FEF appeared in the laterodorsal (LD) thalamus, the dorsal zona incerta (ZI), and the basal pontine nuclei, in which labeled projections from FEF were observed on both sides of this structure.

Claustral projections to FEF and V1
-----------------------------------

Injections of FG in FEF and TB in V1 of the same hemisphere produced a dense population of labeled soma in the vCLA of the ipsilateral hemisphere. As shown in Figure [3](#F3){ref-type="fig"}, FG- and TB-labeled neurons were intermingled in the ipsilateral vCLA, but very few labeled neurons appeared in the contralateral vCLA. A small proportion (7.0 ± 1.2%, mean ± s.e.m.) of the labeled soma were double-labeled, as shown in confocal images (see Figures [3C,E](#F3){ref-type="fig"}). Because TB is not easily visualized and is not transported as efficiently as FG, this quantitative measurement of double-labeled neurons probably underestimates the proportion of claustral neurons that project to both FEF and V1.
![**The ventral claustrum projects to both FEF and V1. (A--B′)** Fluorogold (FG) deposit in FEF and True Blue (TB) in V1 of the right hemisphere for the same case as in Figure [2](#F2){ref-type="fig"}. **(C)** Confocal image of two faintly double-labeled neurons in the left vCLA. **(D)** Reconstruction of labeled soma in the vCLA. Yellow and blue dots are FG- and TB-labeled soma, respectively, and green dots are double-labeled neurons. Black arrows indicate areas in **(C,E)**. **(E)** Confocal image of multiple retrogradely-labeled neurons in the right CLA. Red arrowheads in **(C,E)** indicate double-labeled neurons; white arrowheads indicate TB-labeled neurons. Scale bars: 500 μm in **(A)**; 50 μm in **(C)**; 1 mm in **(D)**.](fnsys-08-00093-g0003){#F3}

Nonetheless, our plotted reconstructions illustrate partially overlapping populations of FG- and TB-labeled neurons in vCLA. Double-labeled neurons dominated the center of the labeled population (see Figure [3D](#F3){ref-type="fig"}), and the presence of these neurons indicates that vCLA sends divergent projections to both FEF and V1 in the ipsilateral hemisphere, as reported previously in cats (Minciacchi et al., [@B31]). This result is similar to our previous observations indicating that the claustrum sends divergent projections to the S1 and M1 whisker regions in the ipsilateral hemisphere (Smith et al., [@B43]).

The TB and FG injections also produced intermingled labeled neurons, including double-labeled cells, in several other subcortical regions. Populations of FG- and TB-labeled neurons were intermingled in the ipsilateral intralaminar nuclei (PC, CL, IAM, Pf) and bilaterally in the Re nucleus, which occupies the midline of the thalamus. Prominent labeling, including dual-labeled cells, also appeared in the lateral preoptic area.
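The double-labeled proportion reported above can be read as the number of dual-labeled somata expressed as a fraction of all labeled somata, averaged across animals. A minimal sketch under that assumption (the per-category counts and the function name are invented for illustration; the study's actual counting used the third tissue series, and the exact denominator convention is our assumption, not stated in the text):

```python
def double_labeled_pct(n_fg_only, n_tb_only, n_double):
    """Double-labeled somata as a percent of all labeled somata.

    Assumes each labeled cell is counted once: FG-only, TB-only,
    or double-labeled (visible under both filter sets).
    """
    total = n_fg_only + n_tb_only + n_double
    return 100.0 * n_double / total

# Hypothetical counts for one animal (not the study's data):
print(round(double_labeled_pct(n_fg_only=250, n_tb_only=120, n_double=28), 1))
```

Because TB under-reports, as the text notes, the returned percentage is a lower bound on the true fraction of dual-projecting claustral neurons.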
Cortico-claustro-cortical circuit connections
---------------------------------------------

Tracer overlap in the claustrum was quantified for the two cases (TI15 and TI16) in which both FEF and V1 were bilaterally injected. In these cases, BDA (FEF) and FR (V1) were deposited on the left side while FG (FEF) and TB (V1) were injected on the right side (Table [1](#T1){ref-type="table"}). As shown in Figure [4B](#F4){ref-type="fig"}, labeling from all four tracers occupied a compact region in the vCLA, spanning no more than 500 μm^2^ within each coronal section. The labeled region was divided into 50-μm^2^ bins, and a bin was classified as terminal-soma overlap only if it contained at least four labeled varicosities and at least one labeled soma. This represents the same standard that we used previously to assess cortico-claustro-cortical connectivity (Smith and Alloway, [@B41]). ![**Projections from the left M1-FEF terminate in the right claustrum, which projects to FEF and V1 in the right hemisphere. (B)** Reconstructions of claustral labeling for the same case depicted in Figures [2](#F2){ref-type="fig"}, [3](#F3){ref-type="fig"}. BDA- and FR-labeled terminals shown as black and red dots, respectively; TB- and FG-labeled soma shown as blue- and gold-filled circles, respectively. **(A,C)** Overlap analysis in the left **(A)** and right **(C)** claustrum shows black bins, which contain at least four BDA-labeled varicosities; gold bins, which contain at least one FG-labeled soma; and white bins, which contain both an FG-labeled soma and four BDA-labeled varicosities. **(D,E)** Overlap analysis of BDA-labeled terminals and TB-labeled soma using the same threshold criteria for the bins as in **(A,C)**. Percentages represent fraction of total bins that are colored white in the claustrum of this specific section (i.e., terminal-soma overlap). Scale bars: 1 mm in **(B)**. 
Bin sizes: 50 μm^2^ in **(A,C--E)**.](fnsys-08-00093-g0004){#F4} Anterograde tracer injections in the left FEF produced dense terminal labeling in the right claustrum. This terminal labeling surrounded the labeled soma produced by retrograde tracer injections in the right FEF. Our overlap analysis indicated that nearly half of the labeled bins in the right claustrum contained both tracers (see Figure [4C](#F4){ref-type="fig"}). By comparison, terminal-somal overlap in the left claustrum was virtually absent owing to a paucity of labeling from either tracer (Figure [4A](#F4){ref-type="fig"}). When terminal-soma overlap across all claustral sections was calculated, the proportion of all labeled bins that contained both BDA-labeled terminals and FG-soma (i.e., terminal-somal overlap) was much larger contralateral to the FEF-BDA injection (43.3 ± 4.6%) than ipsilaterally (5.3 ± 4.3%). Hence, the FEF projects mainly to the contralateral claustrum, which then projects to the FEF in that hemisphere to create an interhemispheric cortico-claustro-cortical circuit between the FEF regions in the two hemispheres. A similar pattern was found for the connections between FEF and the contralateral V1 region. As shown by the section reconstructed in Figures [4D,E](#F4){ref-type="fig"}, terminal-somal overlap was 17.5% in the claustrum contralateral to the FEF-BDA injection but was 0% in the ipsilateral claustrum. When terminal-somal overlap was calculated for all sections through the claustrum that contained labeled bins, the proportion of bins that contained overlap was larger in the claustrum contralateral to the FEF-BDA injection (37.2 ± 3.0%) than in the ipsilateral claustrum (11.4 ± 1.2%). These findings clearly demonstrate the presence of an interhemispheric cortico-claustro-cortical circuit in which the FEF transmits information disynaptically to the contralateral V1 by means of its projections to the contralateral claustrum. 
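The bin-based overlap measure described above can be sketched in a few lines of code. This is only a minimal illustration of the counting logic under stated assumptions (square 50-μm bins, hypothetical coordinates), not the authors' analysis software; the thresholds follow the text (at least four varicosities and at least one soma per bin):

```python
from collections import defaultdict

BIN_SIZE = 50          # bin edge length in μm (the text's 50-μm^2 bins)
MIN_VARICOSITIES = 4   # threshold for a bin to count as terminal-labeled
MIN_SOMA = 1           # threshold for a bin to count as soma-labeled

def overlap_percentage(varicosity_xy, soma_xy):
    """Percentage of labeled bins containing both >= MIN_VARICOSITIES
    varicosities and >= MIN_SOMA somata (terminal-soma overlap)."""
    def binned_counts(points):
        counts = defaultdict(int)
        for x, y in points:
            counts[(int(x // BIN_SIZE), int(y // BIN_SIZE))] += 1
        return counts

    var_bins = {b for b, n in binned_counts(varicosity_xy).items()
                if n >= MIN_VARICOSITIES}
    soma_bins = {b for b, n in binned_counts(soma_xy).items()
                 if n >= MIN_SOMA}
    labeled = var_bins | soma_bins
    if not labeled:
        return 0.0
    return 100.0 * len(var_bins & soma_bins) / len(labeled)

# Hypothetical coordinates (μm): four varicosities and one soma share a bin.
varicosities = [(10, 10), (20, 15), (30, 40), (45, 5), (120, 300)]
somata = [(25, 25), (400, 400)]
print(overlap_percentage(varicosities, somata))  # one overlap bin of two labeled bins
```

Averaging this percentage across all claustral sections of a case yields the section-wise means (e.g., 43.3 ± 4.6%) reported in the text.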
Retrograde confirmation of corticoclaustral projections
-------------------------------------------------------

Our anterograde tracing results indicate that V1 sends very weak projections to the claustrum, whereas FEF sends dense projections to vCLA. To confirm this finding, we inspected data from our previous study in which we injected FG into the claustrum (see Figure 9 in Smith and Alloway, [@B41]). In that case, the contralateral frontal cortex contained many retrogradely-labeled neurons in the Cg and medial AGm regions (see Figures 10, 11 in Smith and Alloway, [@B41]), but no labeled neurons were observed in the S1 barrel region of either hemisphere. In the occipital region, however, separate populations of FG-labeled neurons were found ipsilaterally (data not reported previously). As indicated by Figure [5](#F5){ref-type="fig"}, a few labeled neurons appeared in layer VI of primary visual cortex (V1) and lateral secondary visual cortex (V2l), but many more labeled neurons were observed in the medial part of the secondary visual cortex (V2m), which is consistent with previous reports (Miller and Vogt, [@B29]; Carey and Neal, [@B10]). ![**Few neurons in visual cortex project to the claustrum. (A)** Reconstruction of an FG deposit in claustrum depicted in Figure 9 of Smith and Alloway ([@B41]). **(B,C)** Nissl-stained section through primary visual (V1), medial secondary visual (V2m), and retrosplenial cortices. **(D)** Plotted location of FG-labeled neurons in an adjacent section. Inset in **(D)** indicates location of **(E)**. **(F,G)** Successive magnifications of retrogradely-labeled soma in V2m. **(H)** Digital reconstructions of FG-labeled neurons in visual cortex. Numbers indicate caudal distance from bregma. 
Scale bars: 250 μm in **(A,C--E)**; 1 mm in **(B,H)**; 100 μm in **(F)**; 50 μm in **(G)**.](fnsys-08-00093-g0005){#F5} Functional topography of claustral connections with motor cortex ---------------------------------------------------------------- The claustral connections with FEF in the present study were compared to the claustral connections for the M1 whisker (M1-RW, M1-Re) and M1 forepaw (M1-Fp) representations that we characterized previously (Smith and Alloway, [@B41]). Figure [6](#F6){ref-type="fig"} shows the rostrocaudal distribution of claustral labeling produced by injecting retrograde (Figure [6A](#F6){ref-type="fig"}) or anterograde (Figure [6B](#F6){ref-type="fig"}) tracers into these four motor regions (summary of injections in Table [1](#T1){ref-type="table"}). Statistical analysis revealed significant effects for injection location and hemispheric labeling for both anterograde (Injected area: *F* = 34.2; *p* \< 0.00001; Hemispheric labeling: *F* = 25.1; *p* \< 0.00001) and retrograde injections (Injected area: *F* = 10.5; *p* \< 0.00001; Hemispheric labeling: *F* = 95.1; *p* \< 0.00001). ![**Distribution of mean terminal and cell counts from tracer injections in different parts of rat motor cortex. (A)** Labeled terminal counts in the ipsilateral and contralateral claustrum from BDA and FR injections in FEF and in the rhythmic whisking (M1-RW), whisker retraction (M1-Re), and forepaw (M1-Fp) regions from a previous study (Smith and Alloway, [@B41]). **(B)** Labeled cell counts in the claustrum of both hemispheres from FG injections in the same regions. 
Symbols in each line graph represent the mean counts per section from three tracer injection cases; error bars represent standard error of the mean.](fnsys-08-00093-g0006){#F6} The FEF, M1-RW, and M1-Re regions all project significantly more strongly to the contralateral than to the ipsilateral claustrum (FEF, paired *t* = 2.46, *p* \< 0.05; M1-RW, paired *t* = 6.26, *p* \< 0.000001; M1-Re, paired *t* = 5.59, *p* \< 0.00001). Following retrograde tracer injections into these motor regions, however, the number of labeled neurons is much larger ipsilaterally (FEF, paired *t* = 7.34, *p* \< 0.0000001; M1-RW, paired *t* = 6.46, *p* \< 0.000001; M1-Re, paired *t* = 5.38, *p* \< 0.00001). By comparison, labeling produced by tracer injections in M1-Fp is extremely weak in both directions and is present almost entirely on the ipsilateral side (anterograde labeling, *t* = 6.04, *p* \< 0.000001; retrograde labeling, *t* = 5.60, *p* \< 0.00001). The retrograde labeling patterns observed in the claustrum in the present study were compared with three cases of retrograde labeling in the claustrum (CL01, CL05, and CL21) that were illustrated previously in Figures 1--3 of Smith and Alloway ([@B41]). These comparisons indicate that the claustrum has a distinct functional topography. As shown in Figure [7](#F7){ref-type="fig"}, each ICMS-defined and tracer-injected motor region is linked to a specific part of the claustrum. The FG injections in FEF (case TI-15) and in M1-RW (case CL21), which occupy the Cg/med-AGm region, produced retrograde labeling in the deepest parts of the vCLA. The FG injection in M1-Re (case CL05), which is centered in AGm, produced labeling in the middle of vCLA. Finally, an FG injection in M1-Fp (case CL03), which occupies AGl, revealed labeled neurons mainly in the dCLA. These data indicate that the claustrum has a topographic organization in which the medial to lateral extent of M1 cortex is represented ventral to dorsal in the claustrum. 
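The hemispheric comparisons above rest on paired t-tests over matched per-section counts. As a rough sketch of that computation with made-up counts (the real data appear in Figure 6; this is not the authors' analysis code):

```python
import math
import statistics

def paired_t(x, y):
    """Paired t statistic for matched samples x and y
    (here, contralateral vs. ipsilateral counts from the same sections)."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample SD of the paired differences
    return mean_d / (sd_d / math.sqrt(n))

# Hypothetical labeled-terminal counts per coronal section for one case:
contra = [34, 29, 41, 36, 27, 38]  # contralateral claustrum
ipsi = [12, 8, 15, 10, 9, 14]      # ipsilateral claustrum
print(round(paired_t(contra, ipsi), 2))
```

A large positive t on (contra, ipsi) pairs corresponds to the "projects more strongly contralaterally" result; for the retrograde counts, the sign reverses because labeled somata dominate ipsilaterally.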
![**Representative examples illustrating the topography of labeled neurons in the claustrum produced by tracer injections in FEF, M1-RW, M1-Re, and M1-Fp. (A)** Reconstructions of the M1 tracer injections in the current study (case TI-15) and in cases CL-03, CL-05, and CL-21 illustrated in Figures 1--3 of Smith and Alloway ([@B41]). **(B)** Reconstruction of FG-labeled neurons in the claustrum from each injection in **(A)**. Scale bars: 500 μm in **(A)**; 250 μm in **(B)**.](fnsys-08-00093-g0007){#F7} These claustrum subdivisions are defined not only by the specificity of their inputs from motor cortex, but also by their projections to different sensory regions. Comparison of the retrograde labeling in the present study with that from our previous report (Smith et al., [@B43]) indicates that vCLA projects to both FEF and V1, whereas the middle of the claustrum projects to both M1-Re and S1-Wh.

Corticoclaustral projections in mice
------------------------------------

We inspected the corticoclaustral connections in the Allen Mouse Brain Connectivity Atlas ([@B1]), and focused on cases with tracer injections in S1, V1, Cg, AGm, and AGl. We examined mouse cases in which the tracers filled all cortical layers and the injection locations appeared equivalent to our injection sites as determined by the surrounding anatomical landmarks. Finally, we analyzed whether the terminal labeling patterns in the forebrain and brainstem matched the patterns seen in our rat experiments to assure functional homology with our data. We observed, for example, that Cg injections in mice produced labeling in the dorsomedial neostriatum, nucleus MD in the thalamus, dorsomedial superior colliculus, and the ocular motor complex in the midbrain. This pattern of labeling is completely consistent with the patterns that we observed when anterograde tracers were deposited in the FEF (Cg cortex) of rats. 
Likewise, tracer injections in AGm or AGl of mice produced subcortical labeling patterns that are consistent with our anterograde tracer injections at sites where ICMS evoked movements of the whiskers or forelimb (Alloway et al., [@B2], [@B4], [@B3]). Qualitative inspection of the injections in the S1-Wh (Experiment\#: 126908007, 127866392) region matched our previous finding that rat S1 does not project to the claustrum (Smith et al., [@B43]). Mice that received injections in V1 showed sparse labeling in the ipsilateral claustrum (Experiment\#: 113887162, 100141599), which corresponds to our findings when the rat V1 region is injected. Finally, as shown in Figure [8](#F8){ref-type="fig"}, injections into Cg (Figures [8A--C](#F8){ref-type="fig"}; Experiment\# 112514202), AGm (Figures [8D--F](#F8){ref-type="fig"}; Experiment\# 141603190), and AGl (Figures [8G--I](#F8){ref-type="fig"}, Experiment\# 141602484) display patterns of interhemispheric corticoclaustral labeling that are highly similar to our results in the rat. In mice, as in rats, the majority of coronal sections containing the claustrum indicate that the AGm and Cg regions project more strongly to the contralateral than to the ipsilateral claustrum (confirming findings by Mao et al., [@B27]), whereas AGl has sparse connections with the claustrum in either hemisphere. ![**Corticoclaustral projections from Cg, AGm, and AGl in mice**. Images of AAV injections and subsequent labeling acquired from the Allen Mouse Brain Connectivity Atlas ([@B1]). Center panels show images of labeling from representative AAV tracer injections in Cg **(B)**, AGm **(E)**, and AGl **(H)**. Hyperlinks connect to the complete data sets on the Allen Institute website. In each case, labeling appears in the contralateral cortex as well as bilaterally in the striatum and claustrum. **(A,D,G)** correspond to insets of the claustrum in the left hemisphere of center panels. 
**(C,F,I)** likewise correspond to insets of the claustrum in the right hemisphere of center panels. Scale bars: 250 μm in **(A)**; 1 mm in **(B)**.](fnsys-08-00093-g0008){#F8} Discussion {#s4} ========== By placing different anterograde and retrograde tracers in FEF and V1 of both hemispheres, this study revealed several new findings about the functional organization of the rat claustrum. Most significantly, rat FEF sends dense projections to the contralateral claustrum, but sends relatively weak projections to the ipsilateral claustrum. The claustrum receives weak projections from ipsilateral V1 but is not innervated by the contralateral V1. When different retrograde tracers are injected into FEF and V1 of the same hemisphere, many intermingled and double-labeled neurons appear in the ipsilateral, but not the contralateral, claustrum. These results indicate that the claustrum is part of an interhemispheric circuit for transmitting information from FEF to separate visuomotor regions in the other hemisphere. Our previous work shows that the claustrum has a parallel set of circuit connections with the M1 and S1 whisker regions (Smith and Alloway, [@B41]; Smith et al., [@B43]). Collectively, these findings indicate that the claustrum has a role in the interhemispheric transmission of certain types of sensorimotor information. While the claustrum receives dense interhemispheric projections from cortical motor regions that regulate movements of the whiskers and eyes, the M1 limb regions send very weak projections to the claustrum and only within the same hemisphere. These differences in the density of corticoclaustral projections from different parts of M1 are also apparent in the Allen Mouse Brain Connectivity Atlas. A summary of these functional differences in claustral connectivity is illustrated in Figure [9](#F9){ref-type="fig"}. ![**Circuit diagram of interhemispheric sensorimotor cortico-claustro-cortical circuits in rats**. 
Projection strengths are indicated by line thickness (see legend).](fnsys-08-00093-g0009){#F9} Our last major finding is that the claustrum has a well-defined functional topography along its dorsoventral axis. The M1 forepaw region projects to the dorsal claustrum, the M1 whisker region projects to the middle claustrum, and the FEF region projects to the vCLA. Likewise, the S1 forelimb, the S1 whisker, and the V1 regions receive projections from the dorsal, middle, and vCLA, respectively.

Visuomotor claustrum circuitry
------------------------------

Corticoclaustral projections from the frontal and occipital cortices differ both qualitatively and quantitatively. The FEF projects densely to the contralateral claustrum, but only weakly to the ipsilateral claustrum. Visual cortex sends some projections to the ipsilateral claustrum, but these originate mainly from V2m, which also projects to FEF and the ventral superior colliculus, regions known for controlling saccadic eye movements (Wang and Burkhalter, [@B52]; Wang et al., [@B54]). The corticoclaustral projections from both FEF and V2m originate from layer V, which is significant because this layer contains corticobulbar motor output neurons. These facts indicate that rat vCLA has a role in processing information concerned with eye movements. The lack of reciprocal projections between certain cortical areas and the claustrum provides some clues about the function of the claustrum. While FEF projects strongly to the contralateral claustrum, the claustrum projects ipsilaterally to FEF but does not send feedback projections to the contralateral FEF. Likewise, the connections between the claustrum and primary visual cortex are not reciprocal. The claustrum projects strongly to ipsilateral V1, but reciprocal projections from V1 to the claustrum are practically nil (Miller and Vogt, [@B29]; Carey and Neal, [@B10]). 
After placing different retrograde tracers in FEF and V1 of the same hemisphere, we observed many double-labeled neurons in the ventral part of the ipsilateral claustrum, and this indicates that identical information is transmitted from vCLA to both FEF and V1. When these divergent claustral projections to V1 and FEF are considered with the relative weakness of corticoclaustral feedback projections in the same hemisphere, the emerging circuit suggests that the claustrum is important for coordinating V1 and FEF processing in the same hemisphere. Functional topography in the claustrum -------------------------------------- Our studies demonstrate that rat claustrum has a well-defined functional topography. In a previous report we showed that the M1 forelimb region is linked to dCLA, whereas the M1 whisker region is connected to vCLA (Smith and Alloway, [@B41]). The present study extends this work by showing that visuomotor cortical areas are connected to the most ventral part of vCLA. We have observed intraclaustral connections along the rostrocaudal, but not the dorsoventral axis (Smith and Alloway, [@B41]). This anisotropic organization of intraclaustral connectivity is consistent with the segregation of unimodal responses in different subregions of the primate claustrum (Remedios et al., [@B36]). Theoretical function of interhemispheric sensorimotor claustral circuits ------------------------------------------------------------------------ We recently injected different retrograde tracers into S1 and M1, and we observed many double-labeled neurons in the claustrum (Smith et al., [@B43]). In the present study we observed many double-labeled claustral neurons after injecting different retrograde tracers into FEF and V1. These results are consistent with Type A and B claustral neurons that were previously reported in the brains of rats and cats (Minciacchi et al., [@B31]). 
Selective placement of different retrograde tracers in cortex of the same animals has revealed claustral neurons that innervate both ipsilateral and contralateral frontal regions (Type C neurons), and others that innervate the contralateral frontal and ipsilateral occipital regions (Type D neurons). Other studies on a variety of mammalian species have identified specific claustral regions that project to sensory and motor cortical areas, including divergent projections to the S1 and S2 cortices (Li et al., [@B26]; Sadowski et al., [@B37]; Jakubowska-Sadowska et al., [@B21]). The presence of double-labeled neurons demonstrates that the claustrum conveys the same information to separate, but functionally-related cortical areas. While the exact nature of the information that is transmitted to FEF and V1 (or to the M1 and S1 whisker regions) remains unknown, claustral divergence provides a mechanism for ensuring simultaneous processing of the same information in separate cortical regions. Our findings in two different sensorimotor systems indicate that dense interhemispheric projections to the claustrum originate from motor regions in frontal cortex. Mounting evidence indicates that these frontal regions (Cg, AGm) in the rat are involved not only in motor control, but also in directed-attention and memory-guided orienting behaviors (Reep and Corwin, [@B35]; Erlich et al., [@B15]; Boly et al., [@B7]). Transmission of attention-related motor signals to the claustrum is supported by the fact that several intralaminar thalamic nuclei also have connections with the claustrum. Many tracing studies have reported that the claustrum receives non-reciprocal projections from the centromedial, CL, PC, and Pf nuclei (Kaufman and Rosenquist, [@B22]; Sloniewski et al., [@B40]; Vertes et al., [@B49]; Alloway et al., [@B5]). 
Substantial evidence implicates these intralaminar nuclei in attention and conscious perception (Hudetz, [@B20]), and these connections suggest that the claustrum is involved in dispersing attention-dependent signals during the conscious state. Consistent with our past work (Smith and Alloway, [@B41]; Smith et al., [@B43]), the present study supports our hypothesis that claustral connections enable interhemispheric transmission of certain types of modality-specific information to widely-separated cortical areas. By transmitting information from the frontal cortex in one hemisphere to parietal and occipital regions in the other hemisphere, the claustrum provides an interhemispheric route that extends beyond the other callosal projections that interconnect corresponding sites in both hemispheres. Ocular saccades and whisking are rapid movements involved in the active acquisition of visual and somesthetic information from both sides of the body. These movements are purposeful, they require a conscious state, and they are dynamically modulated by sensory inputs. In addition to callosal connections between corresponding cortical areas in the two hemispheres, the claustrum provides a node for transmitting attention-dependent sensorimotor signals from one frontal region to multiple sensorimotor regions in the other hemisphere. This is especially relevant when attention is directed toward improving the acquisition and interpretation of sensory inputs that may come from a broad expanse of extra-personal space. Indeed, studies in the human visual system have indicated that callosal connections and subcortical circuits are involved in interhemispheric visuomotor integration, "promoting a unified experience of the way we perceive the visual world and prepare our actions" (Schulte and Muller-Oehring, [@B38]). 
In our view, the claustrum facilitates interhemispheric corticocortical transmission so that multiple sensorimotor cortical regions can work together to produce a stable global percept out of the rapidly shifting sensory information coming in from sensors on both sides of the head. Conflict of interest statement ------------------------------ The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. This work was supported by NIH grant NS37532 awarded to Kevin D. Alloway. [^1]: Edited by: Brian N. Mathur, University of Maryland School of Medicine, USA [^2]: Reviewed by: Preston E. Garraghty, Indiana University, USA; Helen Sherk, University of Washington, USA [^3]: This article was submitted to the journal Frontiers in Systems Neuroscience.
/**
 * Copyright (C) 2016 Hyphenate Inc. All rights reserved.
 * <p>
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 * http://www.apache.org/licenses/LICENSE-2.0
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.htmessage.fanxinht.utils;

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.List;
import java.util.regex.Pattern;

import android.app.Activity;
import android.app.ActivityManager;
import android.app.ActivityManager.RunningTaskInfo;
import android.app.AlertDialog;
import android.app.ProgressDialog;
import android.content.Context;
import android.graphics.Rect;
import android.net.ConnectivityManager;
import android.net.NetworkInfo;
import android.os.Build;
import android.text.TextUtils;
import android.util.DisplayMetrics;
import android.util.Log;
import android.view.View;
import android.view.ViewTreeObserver;
import android.widget.TextView;
import android.widget.Toast;

import com.alibaba.fastjson.JSONException;
import com.alibaba.fastjson.JSONObject;
import com.htmessage.fanxinht.HTApp;
import com.htmessage.fanxinht.HTConstant;
import com.htmessage.fanxinht.R;
import com.htmessage.fanxinht.acitivity.addfriends.invitefriend.ContactInfo;
import com.htmessage.fanxinht.acitivity.main.servicecontacts.ServiceUser;
import com.htmessage.fanxinht.domain.User;
import com.github.promeg.pinyinhelper.Pinyin;
import com.jrmf360.rplib.JrmfRpClient;
import com.jrmf360.rplib.http.model.BaseModel;
import com.jrmf360.tools.http.OkHttpModelCallBack;

public class CommonUtils {

    private static final String TAG = "CommonUtils";
    private static Toast toast;
private static ProgressDialog dialog; /** * check if network avalable * * @param context * @return */ public static boolean isNetWorkConnected(Context context) { if (context != null) { ConnectivityManager mConnectivityManager = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE); NetworkInfo mNetworkInfo = mConnectivityManager.getActiveNetworkInfo(); if (mNetworkInfo != null) { return mNetworkInfo.isAvailable() && mNetworkInfo.isConnected(); } } return false; } /** * check if sdcard exist * * @return */ public static boolean isSdcardExist() { if (android.os.Environment.getExternalStorageState().equals(android.os.Environment.MEDIA_MOUNTED)) return true; else return false; } static String getString(Context context, int resId) { return context.getResources().getString(resId); } /** * get top activity * @param context * @return */ public static String getTopActivity(Context context) { ActivityManager manager = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE); List<RunningTaskInfo> runningTaskInfos = manager.getRunningTasks(1); if (runningTaskInfos != null) return runningTaskInfos.get(0).topActivity.getClassName(); else return ""; } /** * set initial letter of according user's nickname( username if no nickname) * * @param * @param user */ public static void setUserInitialLetter(User user) { final String DefaultLetter = "#"; String letter = DefaultLetter; if (!TextUtils.isEmpty(user.getNick())) { letter = Pinyin.toPinyin(user.getNick().toCharArray()[0]); user.setInitialLetter(letter.toUpperCase().substring(0, 1)); if (isNumeric(user.getInitialLetter()) || !check(user.getInitialLetter())) { user.setInitialLetter("#"); } return; } if (letter == DefaultLetter && !TextUtils.isEmpty(user.getUsername())) { letter = Pinyin.toPinyin(user.getUsername().toCharArray()[0]); } user.setInitialLetter(letter.substring(0, 1)); if (isNumeric(user.getInitialLetter()) || !check(user.getInitialLetter())) { user.setInitialLetter("#"); } } /** * set 
initial letter of according user's nickname( username if no nickname) * * @param * @param user */ public static void setServiceInitialLetter(ServiceUser user) { final String DefaultLetter = "#"; String letter = DefaultLetter; if (!TextUtils.isEmpty(user.getNick())) { letter = Pinyin.toPinyin(user.getNick().toCharArray()[0]); user.setInitialLetter(letter.toUpperCase().substring(0, 1)); if (!check(user.getInitialLetter()) || isNumeric(user.getInitialLetter())) { user.setInitialLetter(DefaultLetter); } return; } if (letter == DefaultLetter && !TextUtils.isEmpty(user.getUsername())) { letter = Pinyin.toPinyin(user.getUsername().toCharArray()[0]); } user.setInitialLetter(letter.substring(0, 1)); if (!check(user.getInitialLetter()) || isNumeric(user.getInitialLetter())) { user.setInitialLetter(DefaultLetter); } } /** * set initial letter of according user's nickname( username if no nickname) * * @param * @param user */ public static void setContactsInfoInitialLetter(ContactInfo user) { final String DefaultLetter = "#"; String letter = DefaultLetter; if (!TextUtils.isEmpty(user.getName())) { letter = Pinyin.toPinyin(user.getName().toCharArray()[0]); user.setLetter(letter.toUpperCase().substring(0, 1)); if (isNumeric(user.getLetter()) || !check(user.getLetter())) { user.setLetter("#"); } return; } if (letter == DefaultLetter && !TextUtils.isEmpty(user.getName())) { letter = Pinyin.toPinyin(user.getName().toCharArray()[0]); } user.setLetter(letter.substring(0, 1)); if (isNumeric(user.getLetter()) || !check(user.getLetter())) { user.setLetter("#"); } } /** * set initial letter of according user's nickname( username if no nickname) * * @param * @param user */ public static void setUserTeamLetter(User user) { final String DefaultLetter = "0"; String letter = DefaultLetter; String info = user.getUserInfo(); JSONObject userJson = JSONObject.parseObject(info); String teamId = userJson.getString("teamId"); if (!TextUtils.isEmpty(teamId)) { letter = teamId; 
user.setInitialLetter(letter); return; } } public static String getDuration(Context context, String rel_time, String now_time) { if (TextUtils.isEmpty(now_time)) { if (!TextUtils.isEmpty(rel_time)) { String showTime = rel_time.substring(0, rel_time.lastIndexOf(":")); return showTime; } return "时间错误"; } String backStr = ""; SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"); Date d1 = null; Date d2 = null; try { d1 = format.parse(rel_time); d2 = format.parse(now_time); // 毫秒ms long diff = d2.getTime() - d1.getTime(); long diffMinutes = diff / (60 * 1000) % 60; long diffHours = diff / (60 * 60 * 1000) % 24; long diffDays = diff / (24 * 60 * 60 * 1000); if (diffDays != 0) { if (diffDays < 30) { if (1 < diffDays && diffDays < 2) { backStr = context.getString(R.string.yesterday); } else if (1 < diffDays && diffDays < 2) { backStr = context.getString(R.string.The_day_before_yesterday); } else { backStr = String.valueOf(diffDays) + context.getString(R.string.Days_ago); } } else { backStr = context.getString(R.string.long_long_ago); } } else if (diffHours != 0) { backStr = String.valueOf(diffHours) + context.getString(R.string.An_hour_ago); } else if (diffMinutes != 0) { backStr = String.valueOf(diffMinutes) + context.getString(R.string.minutes_ago); } else { backStr = context.getString(R.string.just); } } catch (Exception e) { e.printStackTrace(); } return backStr; } public static int getSupportSoftInputHeight(Activity activity) { Rect r = new Rect(); activity.getWindow().getDecorView().getWindowVisibleDisplayFrame(r); int screenHeight = activity.getWindow().getDecorView().getRootView().getHeight(); int softInputHeight = screenHeight - r.bottom; if (Build.VERSION.SDK_INT >= 20) { // When SDK Level >= 20 (Android L), the softInputHeight will contain the height of softButtonsBar (if has) softInputHeight = softInputHeight - getSoftButtonsBarHeight(activity); Log.d("observeSoftKeyboard---9", String.valueOf(getSoftButtonsBarHeight(activity))); } if 
(softInputHeight < 0) { Log.w("EmotionInputDetector", "Warning: value of softInputHeight is below zero!"); } return softInputHeight; } private static int getSoftButtonsBarHeight(Activity activity) { DisplayMetrics metrics = new DisplayMetrics(); activity.getWindowManager().getDefaultDisplay().getMetrics(metrics); int usableHeight = metrics.heightPixels; if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR1) { activity.getWindowManager().getDefaultDisplay().getRealMetrics(metrics); } int realHeight = metrics.heightPixels; if (realHeight > usableHeight) { return realHeight - usableHeight; } else { return 0; } } //键盘显示监听 public static void observeSoftKeyboard(final Activity activity, final OnSoftKeyboardChangeListener listener) { final View decorView = activity.getWindow().getDecorView(); decorView.getViewTreeObserver().addOnGlobalLayoutListener(new ViewTreeObserver.OnGlobalLayoutListener() { int previousKeyboardHeight = -1; @Override public void onGlobalLayout() { Rect rect = new Rect(); decorView.getWindowVisibleDisplayFrame(rect); int displayHeight = rect.bottom - rect.top; int height = decorView.getHeight(); int keyboardHeight = height - rect.bottom; if (Build.VERSION.SDK_INT >= 20) { // When SDK Level >= 20 (Android L), the softInputHeight will contain the height of softButtonsBar (if has) keyboardHeight = keyboardHeight - getSoftButtonsBarHeight(activity); } if (previousKeyboardHeight != keyboardHeight) { boolean hide = (double) displayHeight / height > 0.8; listener.onSoftKeyBoardChange(keyboardHeight, !hide, this); } previousKeyboardHeight = height; } }); } public interface OnSoftKeyboardChangeListener { void onSoftKeyBoardChange(int softKeybardHeight, boolean visible, ViewTreeObserver.OnGlobalLayoutListener onGlobalLayoutListener); } public static User Json2User(JSONObject userJson) { User user = new User(userJson.getString(HTConstant.JSON_KEY_HXID)); user.setNick(userJson.getString(HTConstant.JSON_KEY_NICK)); 
user.setAvatar(userJson.getString(HTConstant.JSON_KEY_AVATAR));
        user.setUserInfo(userJson.toJSONString());
        CommonUtils.setUserInitialLetter(user);
        return user;
    }

    public static JSONObject User2Json(User user) {
        JSONObject jsonObject = new JSONObject();
        String userInfo = user.getUserInfo();
        try {
            if (userInfo != null) {
                jsonObject = JSONObject.parseObject(userInfo);
            }
        } catch (JSONException e) {
            Log.d("JSONUtil----->>", "User2Json error");
        }
        return jsonObject;
    }

    public static boolean isChinese(String str) {
        char[] chars = str.toCharArray();
        boolean isGB2312 = false;
        for (int i = 0; i < chars.length; i++) {
            byte[] bytes = ("" + chars[i]).getBytes();
            if (bytes.length == 2) {
                int[] ints = new int[2];
                ints[0] = bytes[0] & 0xff;
                ints[1] = bytes[1] & 0xff;
                // Two-byte characters in the GB2312 range count as Chinese
                if (ints[0] >= 0x81 && ints[0] <= 0xFE && ints[1] >= 0x40 && ints[1] <= 0xFE) {
                    isGB2312 = true;
                    break;
                }
            }
        }
        return isGB2312;
    }

    // True if the first character is an ASCII letter
    public static boolean test(String s) {
        char c = s.charAt(0);
        int i = (int) c;
        return (i >= 65 && i <= 90) || (i >= 97 && i <= 122);
    }

    // True if the first character is an ASCII letter
    public static boolean check(String fstrData) {
        char c = fstrData.charAt(0);
        return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z');
    }

    public static boolean isNumeric(String str) {
        Pattern pattern = Pattern.compile("[0-9]*");
        return pattern.matcher(str).matches();
    }

    /**
     * Short toast
     *
     * @param context
     * @param msg
     */
    public static void showToastShort(Context context, String msg) {
        if (toast == null) {
            toast = Toast.makeText(context, msg, Toast.LENGTH_SHORT);
        } else {
            toast.setText(msg);
        }
        toast.show();
    }

    /**
     * Short toast
     *
     * @param context
     * @param msg
     */
    public static void showToastShort(Context context, int msg) {
        if (toast == null) {
            toast = Toast.makeText(context, msg, Toast.LENGTH_SHORT);
        } else {
            toast.setText(msg);
        }
        toast.show();
    }

    /**
     * Long toast
     *
     * @param context
     * @param msg
     */
    public static void showToastLong(Context context, String msg) {
        if (toast == null) {
            toast = Toast.makeText(context,
msg, Toast.LENGTH_LONG);
        } else {
            toast.setText(msg);
        }
        toast.show();
    }

    /**
     * Long toast
     *
     * @param context
     * @param msg
     */
    public static void showToastLong(Context context, int msg) {
        if (toast == null) {
            toast = Toast.makeText(context, msg, Toast.LENGTH_LONG);
        } else {
            toast.setText(msg);
        }
        toast.show();
    }

    /**
     * Confirmation dialog
     *
     * @param context
     * @param title
     * @param content
     * @param listener
     */
    public static void showAlertDialog(Activity context, String title, String content, final OnDialogClickListener listener) {
        AlertDialog.Builder builder = new AlertDialog.Builder(context);
        View dialogView = View.inflate(context, R.layout.layout_alert_dialog_delete, null);
        TextView tv_delete_people = (TextView) dialogView.findViewById(R.id.tv_delete_people);
        View view_line_dialog = dialogView.findViewById(R.id.view_line_dialog);
        TextView tv_delete_title = (TextView) dialogView.findViewById(R.id.tv_delete_title);
        TextView tv_cancle = (TextView) dialogView.findViewById(R.id.tv_cancle);
        TextView tv_ok = (TextView) dialogView.findViewById(R.id.tv_ok);
        tv_delete_title.setText(title);
        tv_delete_people.setText(content);
        builder.setView(dialogView);
        final AlertDialog dialog = builder.show();
        tv_ok.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                dialog.dismiss();
                listener.onPriformClock();
            }
        });
        tv_cancle.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                dialog.dismiss();
                listener.onCancleClock();
            }
        });
    }

    public interface OnDialogClickListener {
        void onPriformClock();

        void onCancleClock();
    }

    public static void showDialogNumal(Context context, String msg) {
        dialog = new ProgressDialog(context);
        dialog.setMessage(msg);
        dialog.show();
    }

    public static void cencelDialog() {
        if (dialog != null) {
            dialog.dismiss();
        }
    }

    /**
     * Update the JRMF red packet backend info
     *
     * @param activity
     * @param nick
     * @param avatar
     */
    public static void upDateRedAvatarUrl(Activity activity, String nick, String avatar) {
        if (TextUtils.isEmpty(nick)) {
            nick =
HTApp.getInstance().getUsername();
        }
        if (TextUtils.isEmpty(avatar)) {
            avatar = HTApp.getInstance().getUserAvatar();
        }
        JrmfRpClient.updateUserInfo(activity, HTApp.getInstance().getUsername(), HTApp.getInstance().getThirdToken(), nick, avatar, new OkHttpModelCallBack<BaseModel>() {
            @Override
            public void onSuccess(BaseModel baseModel) {
                boolean success = baseModel.isSuccess();
                if (success) {
                    Log.d(TAG, "----JRMF red packet info updated successfully");
                } else {
                    Log.d(TAG, "----failed to update JRMF red packet info");
                }
            }

            @Override
            public void onFail(String s) {
                Log.d(TAG, "----failed to update JRMF red packet info");
            }
        });
    }

    public interface onUpdateListener {
        void success();

        void failed();
    }
}
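The `getDuration` helper above depends on Android's `Context` for string resources, which makes it awkward to exercise in isolation. Below is a minimal, framework-free sketch of the same elapsed-time logic; the class name `DurationText` and the English literals standing in for the `R.string` resources are illustrative assumptions:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class DurationText {
    // Framework-free version of getDuration: same millisecond arithmetic,
    // but plain string literals instead of Android string resources.
    public static String duration(String relTime, String nowTime) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        try {
            long diff = fmt.parse(nowTime).getTime() - fmt.parse(relTime).getTime();
            long minutes = diff / (60 * 1000) % 60;
            long hours   = diff / (60 * 60 * 1000) % 24;
            long days    = diff / (24 * 60 * 60 * 1000);
            if (days != 0) {
                if (days >= 30) return "long ago";
                if (days == 1)  return "yesterday";
                if (days == 2)  return "the day before yesterday";
                return days + " days ago";
            }
            if (hours != 0)   return hours + " hours ago";
            if (minutes != 0) return minutes + " minutes ago";
            return "just now";
        } catch (ParseException e) {
            return "";
        }
    }

    public static void main(String[] args) {
        System.out.println(duration("2024-05-10 12:00:00", "2024-05-12 13:30:00"));
    }
}
```

Because `days` is a whole number of 24-hour periods, the buckets must be tested with `==` rather than open ranges such as `1 < days && days < 2`, which can never hold for a `long`.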
1 B.R. 212 (1979) In re Stephen A. HOLLOCK, Individually and trading as S & R T.V. Electronics, Bankrupt. PIERCE-PHELPS, INC. and First Pennsylvania Banking & Trust Company, Appellees, v. Stephen A. HOLLOCK, Individually and trading as S & R T.V. Electronics, Appellant. Bankruptcy No. 76-201, Civ. No. 79-548. United States District Court, M.D. Pennsylvania. September 13, 1979. *213 Charles A. Shea III, Shea, Shea & Caputo, Wilkes-Barre, Pa., for appellant. John Q. Durkin, Nogi, O'Malley & Harris, Scranton, Pa., for appellees. MEMORANDUM AND ORDER RICHARD P. CONABOY, District Judge. Stephen A. Hollock, individually and trading as S & R T.V. Electronics, filed a Voluntary Petition in Bankruptcy on February 20, 1976. Pursuant to Rules 701 and 712, Pierce-Phelps, Inc. and First Pennsylvania Banking and Trust Company brought an adversary proceeding against Hollock to determine the dischargeability of a debt which he owed to them. A hearing on the matter was held before Bankruptcy Judge Gibbons, and he determined that the Plaintiffs Pierce-Phelps and First Pennsylvania Bank were entitled to have their debt excepted from the order of discharge. Hollock has appealed that determination to this *214 court pursuant to Bankruptcy Rule 801. Since we find that the facts as found by the Bankruptcy Judge are amply supported by the record and do support his conclusions of law, we will affirm the determination of the Bankruptcy Judge. The indebtedness which is the subject of this action is the result of a floor plan financing agreement which was entered into by Pierce-Phelps as supplier, First Pennsylvania Bank as financing agent and Hollock as dealer. The bankrupt applied for this plan in June of 1973. He submitted credit applications and financial statements in support of his application. None of these initial statements were false or misleading in any way.
Hollock's application was approved, and on July 2, 1973, he executed a Dealer Floor-Plan Agreement and Signatory Authorization with First Pennsylvania. This contract granted him a secured line of credit in the amount of $20,000. Under the dealer floor-plan arrangement, Pierce-Phelps supplied equipment to Hollock, and was paid for the items delivered by the bank. When Hollock subsequently sold an item he was to forward his payment for it immediately to the bank. Pierce-Phelps employed checkers, who went to the individual dealers each month to check the inventory against their records, and determine that proper payments were being made. If the checker found that a piece of equipment was not in the inventory, and payment for it had not been forwarded to the bank, he was to collect payment before leaving the dealer. Hollock maintained his place of business in his home, and stored his equipment in his basement. During the seven months prior to December of 1974, it became increasingly difficult for the checker assigned to Hollock to perform his duties, because the cartons were stored in an inaccessible area. Hollock helped the checker by reading the serial numbers from the cartons to him, while the checker compared them to his list. After the physical count was completed, both the checker and Hollock would sign and verify the report before it went back to Pierce-Phelps. In December of 1974, Hollock informed Pierce-Phelps' credit manager that he had "cheated" on the reports that he had made with the checker. He had called model and serial numbers from empty cartons, even though some of the items had in fact been missing since July of 1974. The total value of all the missing items was determined to be $9,150.32. Section 17(a) of the Bankruptcy Act, 11 U.S.C. § 35(a), excepts certain debts from discharge in bankruptcy, including: "(2) . . . 
liabilities for obtaining money or property by false pretenses or false representations, or for obtaining money or property on credit or obtaining an extension or renewal of credit in reliance upon a materially false statement in writing respecting his financial condition made or published or caused to be made or published in any manner whatsoever with intent to deceive." The party seeking to declare a debt nondischargeable under this section must prove the following elements: "`(1) the debtor made the representations; (2) that at the time he knew they were false; (3) that he made them with the intention and purpose of deceiving the creditor; (4) that the creditor relied on such representations and (5) that the creditor sustained the alleged loss and damage as the proximate result of the representations having been made.'" Sweet v. Ritter Finance Co., 263 F.Supp. 540, 543 (W.D.Va.1967). Collier on Bankruptcy, 14th Edition Paragraph 17.16 p. 1641 discusses this provision: "A false representation within the meaning of clause (2) may consist of a false financial statement made to the creditor for the purpose of obtaining property on credit terms. (citing cases) Although case law had reached this result prior to the 1960 amendment, (citing cases) the addition of the following clause in § 17a(2) removed any doubt on the subject: (citing cases) `or for obtaining money or property on credit or obtaining an extension or renewal *215 of credit in reliance upon a materially false statement in writing respecting his financial condition made or published or caused to be made or published in any manner whatsoever with intent to deceive. . . .' (citing cases) But a statement which was true when given, will not constitute a false representation because of a subsequent change in the debtor's affairs, unless the statement constitutes a continuing representation because of the debtor's duty to inform the creditor of changes in his (the debtor's) status. (citing cases)." 
The Bankruptcy Judge, based on the above provisions of law, found that the inventory statements in question were "continuing representations" by the debtor, that the creditors relied on them to their detriment, that the debtor acknowledged that the statements were false, and made for the purpose of deceiving the creditors, and that as a result of their reliance on these representations, the creditors suffered a loss of $9,150.32. Therefore, he excepted this debt from discharge. In reviewing the determinations of a Bankruptcy Judge, the District Court must accept the Bankruptcy Judge's findings of fact unless they are clearly erroneous, giving due regard to the opportunity of the Bankruptcy Judge to judge the credibility of the witnesses. See Bankruptcy Rule 810. The clearly erroneous standard does not apply to questions of law, and the Bankruptcy Judge's legal conclusions may not be approved without our independent determination of the legal questions. In re Gilchrist, 410 F.Supp. 1070, 1075 (E.D.Pa.); 2A Collier Paragraph 39.28 at 1532-1533. The debtor's objections to the findings of the Bankruptcy Judge frame three issues for our consideration: (1) whether the misrepresentations made by the debtor were subsequent to the extension of credit, and therefore have no effect on the dischargeability of this as a pre-existing debt; (2) whether the misstatements and resulting incorrect inventory reports were "continuing representations" on which the creditor was entitled to rely; and (3) whether the creditors sustained their burden of proof. The debtor's first objection is based on the principle that the fraud necessary to except a debt from discharge must have existed when the debt was created, and that any subsequent fraudulent conduct is inconsequential. He argues that the original extension of credit here took place before any false statements were made, and that his subsequent misrepresentations therefore do not bar this debt from discharge. 
In support of this, the debtor relies on several cases which we find to be inapposite because they deal with debts created by a single contract, extension of credit or payment due, and allegations of fraud subsequent to the time the credit was extended.[1] No cases are cited where there is an ongoing relationship or course of dealing between the parties, as in the case at bar. Their agreement contemplated a flexible line of credit, subject to periodic evaluation, which placed on the debtor a duty to make continuing representations in the form of monthly inventory reports to keep the creditor informed of his status. Therefore, we cannot agree that this case is governed by the principle suggested by the debtor and is not subject to discharge on that basis. Debtor next contends that the inventory reports were not continuing representations made by him because they were made by employees of the creditor, and were not his statements. *216 The checkers employed by the creditors were operating according to the following instructions: "It is your important responsibility to physically determine whether or not items still being financed are on the dealer's floor. A mere checking of our list against the dealer's records is not sufficient, nor should the job be delegated to the dealer or one of his employees. . . . Our letter to the dealer, welcoming him to the Plan, has told him that you will check for us, and asked the dealer to provide you with every cooperation so that you can do your job easily and with the least possible disturbance to the dealer." The debtor contends that the checker was not acting according to his instructions when he allowed the debtor to assist him in making the inventory check, and that the creditor therefore is not entitled to rely on these reports. However, the checker's instructions do allow him to enlist the cooperation of the dealer, which in this instance is just what occurred. 
Copies of the forms submitted at the hearing indicated that they were signed by the debtor and the checker, and that the debtor knew that they would then be forwarded to Pierce-Phelps as a part of the floor-plan financing system. Under these circumstances, we find no error in the Bankruptcy Judge's determination that these constituted "continuing representations" of the debtor, on which the creditors were entitled to rely. The last contention of the debtor is that his creditors have failed to sustain their burden of proof. However, our examination of the record shows that the testimony offered supports the finding of the Bankruptcy Court. There was uncontradicted testimony presented to show that the debtor made false representations to the creditors by submitting false inventory reports; that he knew these reports were false when made; that he did so for the purpose of deceiving his creditors and delaying his payments and obtaining an extension of further credit; that the creditors relied on these representations, and as a result of this reliance suffered a loss in the amount of $9,150.32. The creditors did establish all the elements necessary to except a debt from discharge under Section 17(a) of the Bankruptcy Act. The determination of the Bankruptcy Judge will be affirmed. ORDER NOW, this 13th day of September, 1979, the Order of the Bankruptcy Court is hereby affirmed, and the debt owed by the bankrupt herein to the Appellees herein in the sum of $9,150.32 is excepted from the order of discharge in these proceedings. NOTES [1] See e.g. 
In re Schalm, CCH Bankruptcy Law Reporter Paragraph 66,811 (S.D.N.Y., 1978) (Cash and letters of credit obtained by general partner from limited partners were not excepted from discharge by later false representations); In re Ducote, CCH Bankruptcy Law Reporter Paragraph 66,691 (W.D.La., 1978) (False statement submitted to loan company after loan was approved does not except that loan from discharge); In re Langdon, CCH Bankruptcy Law Reporter Paragraph 66,117 (S.D.N.Y. 1977) (Purchaser of mobile home from bankrupt seller sought to have down payment returned as a non-dischargeable debt, and court refused, because there was no fraudulent intent alleged at the inception of the contract).
Contents
========

1. Introduction
2. Adenylate kinase isoform-based energetic and metabolic signaling network
   2.1. Adenylate kinase isoforms in the nucleus
   2.2. Adenylate kinase isoforms in cell motility and nucleotide pool homeostasis
   2.3. Adenylate kinase localization and interacting partners
3. Adenylate kinase catalyzed β-phosphoryl transfer and cell energy economy
4. Adenylate kinase and AMP signaling: an integrated metabolic monitoring and signal communication system
5. AMP as universal fuel consumption and low energy signal, mediator of drug action and therapeutic agent
6. Adenylate kinase and AMP signaling networks in body energy sensing
7. Adenylate kinase never rests: from altered energetic signaling to immunodeficiency, cell motility defects, reticular dysgenesis and sensorineural deafness
8. Summary

1. Introduction
===============

Recent studies provide new evidence of the unique energetic, metabolic monitoring and signaling role played by the ubiquitous enzyme adenylate kinase, which catalyzes the nucleotide phosphoryl exchange reaction 2ADP ↔ ATP + AMP, critical in cell life \[[@b1-ijms-10-01729]--[@b11-ijms-10-01729]\]. Historically, the function of adenylate kinase has been ascribed to *de novo* adenine nucleotide synthesis and cell energy economy through regulation of nucleotide ratios in different intracellular compartments and AMP-sensitive metabolic enzymes \[[@b1-ijms-10-01729],[@b12-ijms-10-01729]--[@b17-ijms-10-01729]\]. Adenylate kinase has been intensely studied, including its genetics and gene polymorphism, tissue and developmental expression, intracellular distribution and structure-function relationship \[[@b2-ijms-10-01729],[@b3-ijms-10-01729],[@b14-ijms-10-01729],[@b16-ijms-10-01729]--[@b25-ijms-10-01729]\]. This exemplifies adenylate kinase as a model protein for nucleotide binding folds and phosphoryl transfer catalysis \[[@b19-ijms-10-01729],[@b24-ijms-10-01729],[@b25-ijms-10-01729]\].
The energetic role of adenylate kinase has gained particular significance following the discovery that this enzyme facilitates transfer and utilization of γ- and β-phosphoryls in the ATP molecule through a chain of sequential reactions \[[@b1-ijms-10-01729],[@b26-ijms-10-01729]--[@b28-ijms-10-01729]\]. The adenylate kinase-catalyzed energy transfer shuttle and ligand conduction circuit concept \[[@b1-ijms-10-01729],[@b5-ijms-10-01729],[@b29-ijms-10-01729]--[@b31-ijms-10-01729]\] ([Figure 1](#f1-ijms-10-01729){ref-type="fig"}) is supported by biochemical and physiological studies, phosphoryl exchange measurements using ^18^O labeling, and gene-knockout studies \[[@b4-ijms-10-01729],[@b5-ijms-10-01729],[@b8-ijms-10-01729],[@b15-ijms-10-01729],[@b27-ijms-10-01729]--[@b42-ijms-10-01729]\], and is broadly used to explain energetic signaling mechanisms in heart and skeletal muscles \[[@b5-ijms-10-01729],[@b15-ijms-10-01729],[@b42-ijms-10-01729]--[@b44-ijms-10-01729]\], in hormone secretion \[[@b45-ijms-10-01729]--[@b47-ijms-10-01729]\], organ failure \[[@b4-ijms-10-01729],[@b31-ijms-10-01729],[@b48-ijms-10-01729]--[@b50-ijms-10-01729]\], tumor development \[[@b51-ijms-10-01729]--[@b53-ijms-10-01729]\], energy support of the cell nucleus \[[@b30-ijms-10-01729],[@b34-ijms-10-01729],[@b54-ijms-10-01729],[@b55-ijms-10-01729]\], as well as in sperm and cell motility \[[@b8-ijms-10-01729],[@b56-ijms-10-01729]--[@b59-ijms-10-01729]\].
Muscles of AK1 knockout mice, with one less phosphotransfer chain, display lower energetic efficiency, slower relaxation kinetics and a faster drop in contractility upon ischemia, associated with compromised myocardial-vascular crosstalk, AMP and adenosine generation, impaired metabolic signal communication to the membrane metabolic sensor, the ATP-sensitive potassium channel (K-ATP), and distorted signaling to the energy-sensing AMP-activated protein kinase (AMPK) \[[@b4-ijms-10-01729],[@b5-ijms-10-01729],[@b1-ijms-10-01729],[@b32-ijms-10-01729],[@b33-ijms-10-01729],[@b35-ijms-10-01729],[@b39-ijms-10-01729]--[@b42-ijms-10-01729]\]. The unique ability of the ^18^O-assisted ^31^P-NMR technique to monitor adenylate kinase phosphotransfer and AMP signal dynamics in intact tissues, together with measurements of phosphotransfer rates through creatine kinase and glycolytic/glycogenolytic circuits and responses of metabolic sensors, provides further insights into an integrated cellular energetic, metabolic monitoring and energy sensing interface \[[@b5-ijms-10-01729],[@b30-ijms-10-01729],[@b37-ijms-10-01729],[@b44-ijms-10-01729],[@b60-ijms-10-01729]\]. Due to a unique property of the catalyzed reaction, adenylate kinase is recognized as a sensitive reporter of the cellular energy state, translating small changes in the balance between ATP and ADP into relatively large changes in AMP concentration \[[@b1-ijms-10-01729],[@b2-ijms-10-01729],[@b5-ijms-10-01729],[@b29-ijms-10-01729],[@b62-ijms-10-01729]\]. This enables enzymes and metabolic sensors that are affected by AMP to respond with higher sensitivity and fidelity to stress signals \[[@b33-ijms-10-01729],[@b35-ijms-10-01729],[@b42-ijms-10-01729],[@b52-ijms-10-01729],[@b60-ijms-10-01729]--[@b63-ijms-10-01729]\].
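The amplification effect described above can be made concrete with a small numerical sketch. Assuming the adenylate kinase reaction stays near equilibrium with an apparent equilibrium constant of about 1 and a conserved total adenine nucleotide pool (the class name, Keq value and millimolar concentrations below are illustrative assumptions, not measured values):

```java
public class AkEquilibrium {
    // Near-equilibrium adenylate kinase reaction: [ATP][AMP] = Keq * [ADP]^2.
    // Keq ~= 1 is an assumed round value for illustration.
    public static final double KEQ = 1.0;

    // AMP concentration implied by the equilibrium for given ATP and ADP (mM).
    public static double amp(double atp, double adp) {
        return KEQ * adp * adp / atp;
    }

    // Given a fixed total adenine pool and a new ATP level, split the
    // remainder between ADP and AMP so the equilibrium still holds:
    // solves ADP^2 = atp * (pool - atp - ADP), a quadratic in ADP.
    public static double ampAfterAtpDrop(double pool, double atp) {
        double adp = (-atp + Math.sqrt(atp * atp + 4 * atp * (pool - atp))) / 2;
        return pool - atp - adp;
    }

    public static void main(String[] args) {
        double atp = 5.0, adp = 0.5;              // resting levels, mM (illustrative)
        double amp0 = amp(atp, adp);               // ~0.05 mM at rest
        double pool = atp + adp + amp0;            // conserved adenine pool
        double amp1 = ampAfterAtpDrop(pool, 4.5);  // ATP falls by just 10%
        System.out.printf("AMP: %.3f -> %.3f mM (%.1f-fold rise)%n",
                amp0, amp1, amp1 / amp0);
    }
}
```

With these numbers a 10% drop in ATP more than triples the equilibrium AMP level, which is the sense in which small ATP/ADP shifts become large AMP signals.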
Recent data further indicate that adenylate kinase-mediated intracellular AMP signaling is coupled with a number of AMP-responsive elements including metabolic sensors, AMP-sensitive metabolic enzymes and adenosine signaling \[[@b5-ijms-10-01729],[@b30-ijms-10-01729],[@b33-ijms-10-01729],[@b42-ijms-10-01729],[@b47-ijms-10-01729],[@b61-ijms-10-01729]--[@b65-ijms-10-01729]\]. By catalyzing nucleotide exchange and AMP signaling, adenylate kinase regulates the activity of glycolytic and glycogenolytic enzymes and provides an integrative node for both pathways to respond rapidly to fluctuating energy demands \[[@b5-ijms-10-01729],[@b30-ijms-10-01729]\]. Adenylate kinase-generated AMP is emerging as a potential metabolic signal whose intracellular and circulatory levels determine the balance of peripheral organ energy supply between glucose, glycogen and fat, thus regulating food intake, hormonal state, sleep, hibernation and body energy sensing in conjunction with hypothalamic AMP-activated protein kinase (AMPK), K-ATP channels and adenosine metabolic signaling cascades \[[@b4-ijms-10-01729],[@b5-ijms-10-01729],[@b30-ijms-10-01729],[@b63-ijms-10-01729]--[@b69-ijms-10-01729]\]. Thus, such integrated energetic and metabolic signaling roles place adenylate kinase as a hub within the metabolic regulatory system, coordinating components of the cellular bioenergetics network. A distinct role for adenylate kinase is emerging in cell motility, differentiation and mechanoelectrical signal transduction. Adenylate kinase has been implicated in spermatozoa motility by facilitating ATP delivery from midpiece mitochondria to remote ATPases in the tail \[[@b20-ijms-10-01729]\].
It was demonstrated that through interaction with the anchoring protein Oda5p adenylate kinase provides local ATP for the dynein ATPase, ensuring that both high-energy phosphate bonds of ATP are efficiently utilized at the major site of power production of the microtubule motors involved in diverse cellular movements \[[@b57-ijms-10-01729],[@b70-ijms-10-01729]\]. A recent study has demonstrated that a mutation in adenylate kinase 7 (AK7) underlies a primary ciliary dyskinesia phenotype in chronic obstructive pulmonary disease \[[@b7-ijms-10-01729]\]. Moreover, van Horssen et al. \[[@b8-ijms-10-01729]\] demonstrated that cytoskeleton-based cell motility can be modulated by spatial repositioning of adenylate kinase 1 (AK1) enzymatic activity, providing local ATP supply and 'on-site' fueling of the actomyosin machinery. More significantly and unexpectedly, mutations in the mitochondrial AK2 (adenylate kinase 2) gene have been identified in individuals affected with reticular dysgenesis, the most severe form of inborn severe combined immunodeficiency (SCID), which is associated with sensorineural deafness, a process where nucleotide signaling, cell motility and ciliary functions are involved \[[@b9-ijms-10-01729],[@b10-ijms-10-01729]\]. Knockdown of zebrafish *ak2* led to aberrant leukocyte development, stressing the critical role of AK2, the major isoform in this cell type, in leukocyte differentiation \[[@b10-ijms-10-01729]\]. It is increasingly recognized that systemic integration of complementary energetic and metabolic signaling networks ensures cellular energy homeostasis and an adequate response to a broad range of functional activities and stress challenges \[[@b5-ijms-10-01729],[@b44-ijms-10-01729]\].
A network and circuit view of the cellular energetic system allows for new perspectives leading to a comprehensive understanding of disease conditions associated with disturbances in energy metabolism, metabolic monitoring and signaling, and metabolic sensor response \[[@b5-ijms-10-01729],[@b71-ijms-10-01729],[@b72-ijms-10-01729]\]. The purpose of this review is to summarize recent evidence regarding the role of the adenylate kinase phosphotransfer circuit in facilitating intracellular energetic communication, metabolic monitoring and AMP signal transduction. We highlight that due to its intrinsic catalytic properties adenylate kinase is able to monitor and sense cell and body energetic imbalances caused by physical activity, inadequate oxygenation or nutrient supply. By generating and transmitting metabolic signals to a number of AMP/nucleotide-sensitive cellular and extracellular components, adenylate kinase is a primary player in adjusting cellular energetics, food intake, substrate transport and vascular blood flow to direct nutrient and oxygen delivery and maintain energy homeostasis.

2. Adenylate Kinase Isoform-Based Energetic and Metabolic Signaling Network
===========================================================================

The existence of multiple isoforms of an enzyme usually relates to their different intracellular distribution and kinetic properties according to the local metabolic requirements ([Figure 2](#f2-ijms-10-01729){ref-type="fig"}) \[[@b8-ijms-10-01729],[@b13-ijms-10-01729],[@b14-ijms-10-01729],[@b51-ijms-10-01729],[@b56-ijms-10-01729],[@b57-ijms-10-01729],[@b73-ijms-10-01729]\]. Following the discovery of adenylate kinase more than six decades ago \[see [@b57-ijms-10-01729]\], three major isoforms, AK1, AK2 and AK3, have been identified, which are localized in the cytosol, mitochondrial intermembrane space and matrix, respectively \[[@b13-ijms-10-01729],[@b14-ijms-10-01729],[@b73-ijms-10-01729]\].
They differ in molecular weight, structure, kinetic properties and nucleotide specificity \[[@b19-ijms-10-01729],[@b25-ijms-10-01729],[@b74-ijms-10-01729]\]. AK1 and AK2 specifically bind AMP and favor binding to ATP over other nucleotide triphosphates, while AK3 is a GTP:AMP phosphotransferase specific for the phosphorylation of intramitochondrial AMP, but can only use GTP or ITP as a substrate \[[@b13-ijms-10-01729],[@b75-ijms-10-01729]\]. Within the AK family there are several conserved regions, including a P-loop, AMP- and MgATP/MgADP-binding domains and a lid domain \[[@b19-ijms-10-01729],[@b25-ijms-10-01729]\]. Adenylate kinase isoforms can form dimers and higher molecular order structures \[[@b76-ijms-10-01729],[@b77-ijms-10-01729]\]. Phosphorylation of adenylate kinase has not been detected, but acetylation and myristoylation, which could facilitate binding of adenylate kinase to cell membranes, mitochondria or the nucleus, have been demonstrated \[[@b56-ijms-10-01729],[@b73-ijms-10-01729],[@b78-ijms-10-01729]\]. The recently discovered glutathionylation of adenylate kinase could confer regulation of its activity by the cellular redox state \[[@b79-ijms-10-01729]\]. Different polymorphic subforms of AK1 (AK1-1 and AK1-2) and splice variants of AK2 (AK2A-D) have been found, which have distinct electrophoretic mobility and kinetic properties, e.g. AK2A (26.5 kDa) and AK2B (25.6 kDa) \[[@b2-ijms-10-01729],[@b5-ijms-10-01729],[@b80-ijms-10-01729],[@b81-ijms-10-01729]\]. AK1-2 occurs only in Caucasian populations and is common among hemophilia-A patients \[[@b81-ijms-10-01729]\]. A recent study indicates that the negative effect of smoking on birth weight is more marked in AK1-1 mothers than in AK1-2 carriers, suggesting that zygotes carrying the AK1-2 allele are more protected from damaging factors \[[@b82-ijms-10-01729]\].
Interestingly, the specific activity of AK1-2 is about 3.5 times lower than that of AK1-1, whereas the Michaelis constants do not differ between the allelozymes \[[@b80-ijms-10-01729]\]. Sequence analysis showed that Glu-123 in AK1-1 is exchanged for Gln-123 in AK1-2. The mitochondrial AK2-1(A) subform was found in about 30% of bipolar manic depression syndrome (MDS) patients, but in no case of unipolar MDS or controls. Also, tissue-specific adenylate kinase isoforms AK4 and AK5 have been cloned, which have mitochondrial matrix and cytosolic localization, respectively ([Figure 2](#f2-ijms-10-01729){ref-type="fig"}) \[[@b2-ijms-10-01729],[@b23-ijms-10-01729]\]. AK4 protein levels are increased in cultured cells exposed to hypoxia and in animal models of neurodegenerative diseases \[[@b83-ijms-10-01729]\]. Although AK4 is enzymatically inactive, it retains nucleotide binding capability, interacts with the mitochondrial ADP/ATP translocator and serves a stress-responsive protein function promoting cell survival and proliferation \[[@b83-ijms-10-01729]\]. Both AK4 and AK3 are among the hypoxia-inducible factor 1 (HIF-1) regulated genes promoting cell survival \[[@b84-ijms-10-01729],[@b85-ijms-10-01729]\]. AK5 was detected in human pancreatic beta-cells and was implicated in regulation of the K-ATP channel \[[@b47-ijms-10-01729]\], while the appearance of autoantibodies to AK5 in refractory limbic encephalitis patients carries a poor prognosis \[[@b86-ijms-10-01729]\]. More recently, the existence of an additional *AK1* gene product, the p53-inducible membrane-bound myristoylated AK1β, has been reported and implicated in p53-dependent cell-cycle arrest and nucleotide exchange in the submembrane space \[[@b51-ijms-10-01729],[@b73-ijms-10-01729],[@b87-ijms-10-01729]\]. In this context, the gene encoding AK1 is down-regulated during tumor development, which could be associated with lower AK1β levels and cell cycle disturbances \[[@b88-ijms-10-01729]\].
AK1β has also been demonstrated to be associated with the nuclear envelope \[[@b73-ijms-10-01729]\], and proteomic studies have identified AK1β in epithelium microvilli \[[@b89-ijms-10-01729]\], suggesting a role in energy support of nuclear and epithelial transport processes.

2.1. Adenylate kinase isoforms in the nucleus
---------------------------------------------

The AK6 isoform has been localized to the cell nucleus, where energy provision and nucleotide channeling into DNA synthesis play a critical role in processing genetic information \[[@b34-ijms-10-01729],[@b54-ijms-10-01729]\]. However, there is still controversy regarding the AK6 isoform, which is also known as TAF9 RNA polymerase II possessing ATPase activity \[[@b90-ijms-10-01729]\], suggesting that other adenylate kinase isoforms (AK1 and AK5) can also subserve nuclear energetic needs ([Figure 2](#f2-ijms-10-01729){ref-type="fig"}) \[[@b13-ijms-10-01729],[@b34-ijms-10-01729],[@b73-ijms-10-01729]\]. Knockdown of AK6 slows growth and development of *C. elegans* \[[@b91-ijms-10-01729]\], while in yeast a point mutation in the Fap7 gene, an analog of AK6, reduces growth on glucose \[[@b92-ijms-10-01729]\]. Another nuclear protein, Rad50, a member of the DNA repair RAD50/MRE11/NBS1 protein complex (RMN), which is essential for sensing and signaling from DNA double-strand breaks, bears, in addition to ATP binding and hydrolysis, an adenylate kinase activity required for efficient tethering between different DNA molecules \[[@b93-ijms-10-01729]\]. A mutation affecting the adenylate kinase activity of Rad50, necessary for DNA tethering, also abolishes the formation of viable spores \[[@b93-ijms-10-01729]\].
Mutations in the genes that encode Nbs1 and Mre11, which are part of the RMN complex, are responsible for the human radiation-sensitivity disorders Nijmegen breakage syndrome (NBS) and ataxia-telangiectasia-like disorder (ATLD), which are characterized by defective checkpoint responses and high levels of chromosomal abnormalities \[[@b94-ijms-10-01729]\]. Interestingly, the Nijmegen breakage syndrome gene product, Nbs1, is required for the enzymatic activities of Rad50 and for whole RMN complex function, including partial unwinding of a DNA duplex and efficient cleavage of fully paired hairpins \[[@b95-ijms-10-01729]\]. Thus, adenylate kinase and adenylate kinase activity-possessing proteins play a significant role in the energetics of the cell nucleus, which is separated from the major ATP-generating processes in the cytosol.

2.2.. Adenylate kinase isoforms in cell motility and nucleotide pool homeostasis
--------------------------------------------------------------------------------

The large molecular weight isoform AK7 is associated with cell motility and other processes \[[@b5-ijms-10-01729]\]. Recently, a high level of expression of the AK7 isoform has been demonstrated in bronchial epithelium and appears to be associated with ciliary function \[[@b96-ijms-10-01729]\]. AK7 is a differentiation marker of kinocilia-bearing cells \[[@b97-ijms-10-01729]\], and mutation in the AK7 gene is associated with human ciliary dyskinesia \[[@b7-ijms-10-01729]\]. In this regard, Foxj1, a forkhead transcription factor necessary for ciliogenesis, induces the adenylate kinase isoforms AK5 and AK7 along with genes whose products comprise dynein arms \[[@b98-ijms-10-01729]\]. A yet-to-be-named protein with adenylate kinase domains, encoded by the chromosome 9 open reading frame 98 (C9orf98) gene, is associated with cell migration and nucleotide exchange \[[@b99-ijms-10-01729]\].
In this regard, sea urchin embryo cilia and sperm flagella use a high-molecular weight adenylate kinase with triplicated catalytic domains to power swimming by ciliary and flagellar movement \[[@b100-ijms-10-01729]\]. Thus, multiple adenylate kinase isoforms create a phosphotransfer network that serves specific needs for energetics and metabolic signaling in different cellular compartments. Heart muscle harbors about 30 -- 40% of its adenylate kinase activity in mitochondria, particularly in the intermembrane space \[[@b26-ijms-10-01729],[@b74-ijms-10-01729]\]. The mitochondrial AK2 isoform has the highest affinity (lowest Km) for AMP (≤ 10 μM) among AMP-metabolizing enzymes and is highly concentrated in the narrow intermembrane space \[[@b18-ijms-10-01729],[@b26-ijms-10-01729],[@b74-ijms-10-01729]\]. Virtually all the AMP reaching mitochondria is converted to ADP and channeled into oxidative phosphorylation, maintaining a low cytosolic AMP concentration \[[@b21-ijms-10-01729],[@b35-ijms-10-01729],[@b41-ijms-10-01729],[@b101-ijms-10-01729]\]. In such a way, adenylate kinase tunes cytosolic AMP signals and guards the cellular adenine nucleotide pool \[[@b1-ijms-10-01729],[@b41-ijms-10-01729],[@b102-ijms-10-01729],[@b103-ijms-10-01729]\]. During intense physical activity or metabolic stress, such as ischemia, the AMP concentration rises, turning on other AMP-metabolizing enzymes such as AMP deaminase and 5'-nucleotidase, which produce IMP and adenosine \[[@b60-ijms-10-01729],[@b104-ijms-10-01729]\]. In this regard, a marked elevation of mitochondrial AK2 activity has been demonstrated in hypertrophy in response to increased energy demand and the necessity to maintain the cellular adenine nucleotide pool \[[@b105-ijms-10-01729]\]. In Drosophila, AK2 is essential for survival and circadian rhythm formation, and the lack of the AK2 gene causes significant growth suppression \[[@b2-ijms-10-01729]\].
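The affinity hierarchy described above can be made concrete with a simple Michaelis-Menten sketch. Only AK2's Km (≤ 10 μM) comes from the text; the low-affinity comparison value of 500 μM is an illustrative assumption standing in for enzymes such as AMP deaminase, not a measured constant:

```python
def mm_rate(s, vmax, km):
    """Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

KM_AK2 = 10.0       # uM, high-affinity mitochondrial AK2 (Km <= 10 uM, from the text)
KM_LOW_AFF = 500.0  # uM, illustrative low-affinity AMP-metabolizing enzyme (assumption)

for amp in (2.0, 200.0):  # illustrative resting vs. stress-elevated [AMP], uM
    f_ak2 = mm_rate(amp, 1.0, KM_AK2)      # fraction of Vmax reached by AK2
    f_low = mm_rate(amp, 1.0, KM_LOW_AFF)  # fraction of Vmax for low-affinity enzyme
    print(f"[AMP] = {amp:5.1f} uM: AK2 at {f_ak2:.0%} of Vmax, "
          f"low-affinity enzyme at {f_low:.0%}")
```

At low resting AMP the high-affinity AK2 already operates at an appreciable fraction of its maximal rate while the low-affinity enzyme is essentially idle; only when AMP rises under stress do the other AMP-metabolizing enzymes engage, consistent with the sequence described above.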
AK2 is downregulated in keloid disease, a fibroproliferative dermal tumor that develops as a result of deregulated wound healing \[[@b106-ijms-10-01729]\]. Expression of adenylate kinase isoforms increases in response to muscle exercise, hypoxia, and metabolic stress \[[@b107-ijms-10-01729],[@b108-ijms-10-01729]\]. Also, muscle exercise performance correlates with adenylate kinase activity, signifying that this enzyme is an integral part of cellular energetic homeostasis \[[@b108-ijms-10-01729]\]. In addition, decreased expression and activity of AK1 have been found in a mouse model of muscular dystrophy (mdx mice), suggesting a direct relationship between lack of dystrophin and alteration of AK1 inducing an energetic deficit \[[@b109-ijms-10-01729]\]. A significant increase in AK1 in obese patients could indicate an imbalance in AMP signaling in this metabolic syndrome \[[@b110-ijms-10-01729]\].

2.3.. Adenylate kinase localization and interacting partners
------------------------------------------------------------

Although the major compartments of adenylate kinase isoform localization are known from cellular fractionation, the intimate intracellular anatomy of adenylate kinase distribution has only been studied by immunocytochemistry in neuronal cells and skeletal muscle \[[@b74-ijms-10-01729],[@b111-ijms-10-01729],[@b112-ijms-10-01729]\]. In skeletal muscle myofibrils, adenylate kinase is localized in linear arrays along with creatine kinase and glycolytic enzymes \[[@b111-ijms-10-01729]\]. Similar localization of GFP-tagged AK1 was detected in neonatal cardiomyocytes \[[@b76-ijms-10-01729]\]. Such a sequential arrangement of adenylate kinase molecules could provide a bidirectional phosphorelay that links ATP-generating with ATP-consuming and ATP-sensing processes \[[@b1-ijms-10-01729],[@b5-ijms-10-01729]\].
Another study using tagged proteins indicates that AK1β-EGFP is mainly localized on the plasma membrane, whereas AK1-EGFP is distributed throughout the cell except for trace amounts in the nuclear membrane and some vesicles \[[@b87-ijms-10-01729]\]. Adenylate kinase was found to co-purify, and presumably interact, with glycolytic enzymes and to associate with myofibrils and cellular and mitochondrial membranes \[[@b21-ijms-10-01729],[@b26-ijms-10-01729],[@b33-ijms-10-01729],[@b51-ijms-10-01729],[@b73-ijms-10-01729],[@b74-ijms-10-01729],[@b78-ijms-10-01729],[@b113-ijms-10-01729]\]. Adenylate kinase was also found to be engaged in intimate functional/structural interactions with the sarcolemmal K~ATP~ channel, a major metabolic sensor \[[@b33-ijms-10-01729],[@b46-ijms-10-01729],[@b47-ijms-10-01729]\]. More recently, a phosphotransfer enzyme anchoring protein, FHL2, was discovered in heart muscle \[[@b76-ijms-10-01729]\]. This protein positions adenylate kinase and other phosphotransfer enzymes close to ATP utilization sites in myofibrils, ensuring that both high-energy phosphate bonds of ATP are efficiently utilized. Mutation of the FHL2 protein is associated with cardiomyopathy \[[@b76-ijms-10-01729]\]. Another anchoring protein, Oda5p, holds adenylate kinase in proximity to the dynein arm, ensuring efficient utilization of both high-energy phosphate bonds of ATP at the major site of power production of the microtubule motors involved in diverse cellular movements \[[@b57-ijms-10-01729],[@b70-ijms-10-01729]\]. In epididymal spermatozoa, adenylate kinase AK1 interacts with the sperm-associated protein P25b within cholesterol- and sphingolipid-enriched membrane domains \[[@b114-ijms-10-01729]\]. Such association and localization are necessary for peri-membrane space ATP homeostasis and for sperm maturation and fertilization \[[@b114-ijms-10-01729]\].
A direct interaction between adenylate kinase and several enzymes of the dNTP synthase complex, as well as nucleoside diphosphate kinase, was demonstrated using protein affinity chromatography and immunoprecipitation \[[@b115-ijms-10-01729]\]. These results identified adenylate kinase as a specific component of the nuclear dNTP synthase complex, where it facilitates the surveillance of nucleotide ratios and the synthesis of nucleotides necessary for error-free DNA replication. Although the topological positioning of specific proteins is crucial in energetic and signaling processes, the full interactome of adenylate kinase isoforms in cardiac muscle and other tissues is still unknown. In summary, the molecular diversity and intracellular arrangement of adenylate kinase isoforms serve as a prototype of an energetic and metabolic monitoring network in which the localization of enzymes, their kinetic properties and their ability to communicate between different compartments play a critical role.

3.. Adenylate Kinase-Catalyzed β-Phosphoryl Transfer and Energy Economy
=======================================================================

Adenylate kinase's unique energetic function allows the utilization of the second high-energy bond, that of the β-phosphoryl, in the ATP molecule, thereby doubling the energetic potential, a property not shared by other phosphotransfer systems \[[@b1-ijms-10-01729],[@b5-ijms-10-01729],[@b12-ijms-10-01729],[@b20-ijms-10-01729],[@b26-ijms-10-01729],[@b29-ijms-10-01729]\]. This catalytic function of adenylate kinase is particularly important in tissues with high and fluctuating energy demands, as well as those under metabolic stress \[[@b1-ijms-10-01729],[@b2-ijms-10-01729],[@b5-ijms-10-01729]\]. It is also important for the intimate energy supply of specific cellular processes such as ciliary, flagellar or cytoskeleton-based motility \[[@b8-ijms-10-01729],[@b20-ijms-10-01729],[@b29-ijms-10-01729]--[@b31-ijms-10-01729]\].
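A toy bookkeeping sketch (not from the source; pool sizes are arbitrary) shows how the 2 ADP → ATP + AMP reaction nearly doubles the number of usable high-energy phosphoryls: each round of ATP hydrolysis leaves ADP, which adenylate kinase recycles into fresh ATP until only a single ADP remains.

```python
def usable_phosphoryls(n_atp, with_ak):
    """Count high-energy phosphoryls extractable from n_atp ATP molecules."""
    bonds, atp, adp = 0, n_atp, 0
    while atp > 0:
        bonds += atp  # hydrolyze every ATP to ADP, using the gamma-phosphoryl
        adp += atp
        atp = 0
        if with_ak and adp >= 2:
            # adenylate kinase: 2 ADP -> ATP + AMP, freeing the beta-phosphoryl
            atp, adp = adp // 2, adp % 2
        else:
            break
    return bonds

print(usable_phosphoryls(8, with_ak=False))  # 8  (gamma-phosphoryls only)
print(usable_phosphoryls(8, with_ak=True))   # 15 (approaches 2n as n grows)
```

Without adenylate kinase, n ATP yield n phosphoryls; with it, the yield approaches 2n, which is the "doubling of the energetic potential" referred to above.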
A concerted action of both cytosolic and mitochondrial adenylate kinase isoforms is required to facilitate high-energy phosphoryl delivery to cellular ATPases and feedback signal communication to mitochondrial respiration within structurally organized enzymatic modules and networks (see [Figure 1](#f1-ijms-10-01729){ref-type="fig"}) \[[@b1-ijms-10-01729],[@b26-ijms-10-01729],[@b29-ijms-10-01729],[@b31-ijms-10-01729]\]. In these networks, a series of rapidly equilibrating reactions catalyzed by adenylate kinase provides the driving force for high-energy phosphoryl flux \[[@b1-ijms-10-01729],[@b5-ijms-10-01729],[@b30-ijms-10-01729]\]. In addition, adenylate kinase, coupled with the creatine kinase and glycolytic pathways, communicates adenine nucleotide flux changes generated by cellular ATPases to metabolic sensors \[[@b5-ijms-10-01729],[@b29-ijms-10-01729],[@b33-ijms-10-01729],[@b37-ijms-10-01729]\]. In such a way, phosphotransfer reactions synchronize electrical and mechanical activities with energy supply processes, which is fundamental for optimal function of the heart \[[@b44-ijms-10-01729],[@b116-ijms-10-01729]\]. Adenylate kinase is one of the principal components in the generation of metabolic oscillations, sustaining dynamic fluctuations of adenine nucleotide ratios \[[@b5-ijms-10-01729],[@b29-ijms-10-01729],[@b117-ijms-10-01729],[@b118-ijms-10-01729]\]. In this regard, the term excitable "adenylate kinase medium" has been proposed \[[@b119-ijms-10-01729]\] to emphasize the significance of this enzyme in conveying energetic and metabolic signals \[[@b5-ijms-10-01729],[@b33-ijms-10-01729]\]. These functions render adenylate kinase essential to the integrated cellular phosphotransfer network sustaining an efficient and vibrant cell energetic economy.
Further insights and key support for the current understanding of metabolic signaling networks in their full complexity have come with the development of new methodologies \[[@b1-ijms-10-01729],[@b4-ijms-10-01729],[@b15-ijms-10-01729],[@b31-ijms-10-01729],[@b44-ijms-10-01729],[@b60-ijms-10-01729]\]. High-energy phosphoryl fluxes through adenylate kinase, captured with ^18^O-assisted ^31^P-NMR, correlate tightly with the performance of the myocardium under various conditions of stress load \[[@b120-ijms-10-01729]\]. This indicates that adenylate kinase, along with other phosphotransfer reactions, constitutes an indispensable route directing the flow of high-energy phosphoryls between cellular ATPases and the ATP production machinery in mitochondria \[[@b44-ijms-10-01729]\]. Labeling studies indicate that in intact tissues the highest adenylate kinase-catalyzed β-phosphoryl phosphotransfer flux is in the kidney, where it approximates 98% of γ-ATP turnover, followed by the liver (80%), the heart (15 -- 22%) and contracting (10 -- 17%) or resting (3 -- 5%) skeletal muscles, suggesting an important role of adenylate kinase in tissue energy homeostasis \[[@b5-ijms-10-01729],[@b30-ijms-10-01729],[@b41-ijms-10-01729]\]. Specifically, the total adenylate kinase enzymatic capacity measured in vitro in murine heart is about 6 mM/s, while the phosphotransfer flux in vivo is 0.2--0.3 mM/s; the flux increases with functional load, can reach 1 mM/s in hypoxia and is expected to rise even more in ischemia \[[@b5-ijms-10-01729]\]. For comparison, the phosphotransfer capacities of the creatine kinase and glycolytic pathways are in the range of 6--10 mM/s and 2--3 mM/s, respectively \[[@b5-ijms-10-01729]\].
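The head-room implied by these flux numbers is a one-line calculation (the capacities and fluxes are the figures quoted above; the resting flux is taken as the midpoint of the 0.2--0.3 mM/s range):

```python
ak_capacity = 6.0      # mM/s, total AK capacity measured in vitro (murine heart)
ak_flux_rest = 0.25    # mM/s, midpoint of the 0.2-0.3 mM/s in vivo range
ak_flux_hypoxia = 1.0  # mM/s, flux reached under hypoxia

print(f"resting utilization: {ak_flux_rest / ak_capacity:.1%}")     # ~4.2%
print(f"hypoxic utilization: {ak_flux_hypoxia / ak_capacity:.1%}")  # ~16.7%
print(f"reserve factor at rest: {ak_capacity / ak_flux_rest:.0f}x")
```

Even under hypoxia the enzyme runs well below its measured in vitro capacity, so the system retains a large phosphotransfer reserve for stress responses.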
In this regard, the activities of adenylate kinase isoforms and intracellular free AMP levels tightly correlate with tissue respiration rates \[[@b16-ijms-10-01729],[@b121-ijms-10-01729]\], and expression of adenylate kinase isoforms increases in response to muscle exercise, hypoxia, and metabolic stress \[[@b107-ijms-10-01729],[@b108-ijms-10-01729]\]. Indeed, adenylate kinase remains active in its ATP-regenerating and -transferring role as long as ADP is available and the enzyme is not inhibited by a build-up of AMP \[[@b12-ijms-10-01729],[@b13-ijms-10-01729]\]. Adenylate kinase phosphotransfer flux is markedly suppressed by high glucose in insulin-secreting cells, reducing adenylate kinase-mediated AMP signaling to the K-ATP channel and AMPK, two key regulators of hormone secretion \[[@b45-ijms-10-01729],[@b122-ijms-10-01729]\]. There is a reciprocal compensatory relationship between adenylate kinase and creatine kinase phosphotransfers that safeguards cellular energy economy: reduction of creatine kinase activity promotes high-energy phosphoryl transfer through the adenylate kinase system in creatine kinase-knockout muscles or under hypoxic stress \[[@b1-ijms-10-01729],[@b15-ijms-10-01729],[@b28-ijms-10-01729],[@b36-ijms-10-01729],[@b60-ijms-10-01729]\]. The condensed mitochondrial structures and the very narrow intracristal space in the living cell pose diffusional limitations for nucleotide exchange \[[@b29-ijms-10-01729]\]. Adenylate kinase (AK2) in the intermembrane space appears necessary both to conduct the ADP stimulatory signal through the adenine nucleotide translocator to matrix ATP-synthases and to export ATP produced by oxidative phosphorylation \[[@b2-ijms-10-01729],[@b21-ijms-10-01729],[@b29-ijms-10-01729]\]. Disruption of the adenylate kinase gene impedes ATP export and mitochondria-cytosolic communication in yeast \[[@b101-ijms-10-01729]\].
Muscles of AK1 knockout mice display lower energetic efficiency, slower relaxation kinetics and cannot sustain low ADP levels under a functional load despite the presence of active creatine kinase, mitochondrial oxidative phosphorylation and glycolytic/glycogenolytic ATP-regenerating pathways, indicating disruption of the coherent energetic network and a blunted response to metabolic signals \[[@b15-ijms-10-01729],[@b40-ijms-10-01729]--[@b42-ijms-10-01729]\]. Genetic disruption of both cytosolic M-CK- and AK1-catalyzed phosphotransfer pathways compromises intracellular metabolic communication and energetic efficiency, reducing the cellular capability to maintain total ATP turnover under functional load \[[@b39-ijms-10-01729]\]. These new methodologies and transgenic gene manipulations provide the opportunity to decipher regulatory mechanisms that underlie cardiac and skeletal muscle bioenergetic homeostasis. Taken together, studies of adenylate kinase gene-knockout models have opened new perspectives for further understanding how cellular energetic and metabolic signaling networks integrate with genetic, biosynthetic, membrane-electrical and receptor-mediated signal transduction events \[[@b2-ijms-10-01729],[@b5-ijms-10-01729],[@b32-ijms-10-01729],[@b33-ijms-10-01729],[@b35-ijms-10-01729],[@b39-ijms-10-01729]\]. Gene-knockout studies also revealed a remarkable plasticity of the cellular phosphotransfer system, whereby deficiency in an individual enzyme is compensated through remodeling of the whole energetic network at the enzymatic, architectural and genomic levels \[[@b15-ijms-10-01729],[@b36-ijms-10-01729],[@b123-ijms-10-01729],[@b124-ijms-10-01729]\]. Unexpectedly, and contrary to the common belief that adenylate kinase promotes nucleotide degradation, the AK1-deficient heart was less able to maintain nucleotide pools under metabolic stress \[[@b32-ijms-10-01729],[@b35-ijms-10-01729]\].
Also, in failing hearts adenylate kinase activity tightly correlates with higher cellular adenine nucleotide content \[[@b49-ijms-10-01729],[@b50-ijms-10-01729]\], indicating a new function for adenylate kinase in safeguarding the cellular nucleotide pool by rephosphorylating AMP back to ADP and ATP. Emphasizing the significance of intact energetic and metabolic signaling, AK1 deficiency is associated with a range of compensatory changes in glycolytic, glycogenolytic and mitochondrial metabolism and corresponding gene expression to support energy metabolism \[[@b4-ijms-10-01729],[@b15-ijms-10-01729],[@b32-ijms-10-01729],[@b35-ijms-10-01729],[@b39-ijms-10-01729],[@b123-ijms-10-01729]\]. In this regard, one of the major findings resulting from adenylate kinase and creatine kinase knockout studies was the discovery that glycolytic/glycogenolytic enzymes have the ability to provide a network capacity for transferring and distributing high-energy phosphoryls \[[@b1-ijms-10-01729],[@b5-ijms-10-01729],[@b36-ijms-10-01729]\]. The function of the adaptor protein DRAL/FHL-2, which anchors adenylate kinase, creatine kinase and glycolytic enzymes to sites of high energy consumption in myofibrils \[[@b76-ijms-10-01729]\], further highlights the significance of the topological arrangement and integration of the intracellular phosphotransfer network in matching cellular energetic needs. In summary, adenylate kinase-facilitated high-energy phosphoryl transfer and coordination between cellular sites of ATP consumption and ATP generation are essential for safeguarding the cellular nucleotide pool and energy economy. Although significant progress has been made, there are still significant unanswered questions concerning adenylate kinase physiology, especially regarding the energetic role of mitochondrial intermembrane AK2 and matrix AK3, nuclear AK6, ciliary AK7 and other tissue-specific adenylate kinase isoforms.

4..
Adenylate Kinase and AMP Signaling: An Integrated Metabolic Monitoring and Signal Communication System
==========================================================================================================

Growing evidence indicates the significance of metabolic monitors that directly sense the cellular energy state and respond to metabolic imbalances by generating and delivering signaling molecules to metabolic sensors and effectors to produce a regulatory response \[[@b1-ijms-10-01729],[@b2-ijms-10-01729],[@b5-ijms-10-01729],[@b52-ijms-10-01729],[@b62-ijms-10-01729]\]. Central to metabolic monitoring is the enzyme adenylate kinase, which constantly reads the cellular adenine nucleotide balance and, in case of disparity, generates AMP signals and facilitates their delivery to a number of AMP-sensitive components, including those in the glycolytic and glycogenolytic pathways, and to metabolic sensors and effectors such as K-ATP channels and AMPK, which adjust tissue energy and body hormonal state ([Figure 3](#f3-ijms-10-01729){ref-type="fig"}) \[[@b5-ijms-10-01729],[@b29-ijms-10-01729],[@b37-ijms-10-01729],[@b61-ijms-10-01729]\]. Both K-ATP channels and AMPK can regulate cellular energy balance by managing Ca^2+^ influx and by phosphorylating targeted proteins \[[@b5-ijms-10-01729],[@b44-ijms-10-01729]\]. Due to a unique property of the adenylate kinase-catalyzed reaction, a small decrease in ATP levels results in a large increase in AMP, making the latter a sensitive indicator, and therefore a suitable signaling molecule, of cellular energetic status \[[@b1-ijms-10-01729],[@b5-ijms-10-01729],[@b62-ijms-10-01729]\]. In recent years, AMP signaling has emerged as one of the most versatile systems in the regulation of diverse cellular processes \[[@b5-ijms-10-01729],[@b72-ijms-10-01729]\].
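The amplification effect can be illustrated numerically. Assuming the adenylate kinase reaction stays near equilibrium with a mass-action ratio [ATP][AMP]/[ADP]² ≈ 1 and a fixed total adenine nucleotide pool (the pool size and ATP values below are illustrative, not measurements from the source):

```python
from math import sqrt

def equilibrium_species(atp, total):
    """[ADP] and [AMP] at AK equilibrium with K = [ATP][AMP]/[ADP]^2 = 1.

    From AMP = ADP^2/ATP and ATP + ADP + AMP = total, ADP solves the quadratic
    ADP^2 + ATP*ADP - ATP*(total - ATP) = 0.
    """
    adp = (-atp + sqrt(atp**2 + 4.0 * atp * (total - atp))) / 2.0
    amp = adp**2 / atp
    return adp, amp

TOTAL = 10.0  # mM, total adenine nucleotide pool (illustrative)
_, amp_rest = equilibrium_species(9.8, TOTAL)     # well-energized cell
_, amp_stress = equilibrium_species(8.82, TOTAL)  # ATP lowered by 10%

print(f"10% ATP drop -> {amp_stress / amp_rest:.0f}-fold rise in AMP")
```

With these numbers a 10% fall in ATP produces a roughly 32-fold rise in AMP, which is why AMP, rather than ATP itself, serves as the sensitive reporter of cellular energetic status.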
Importantly, AMP signals must be integrated and tuned to an appropriate level, since both insufficient and excess AMP signaling, whether due to metabolic or hormonal state or to mutations in metabolic sensors, are associated with disease conditions \[[@b5-ijms-10-01729],[@b29-ijms-10-01729],[@b37-ijms-10-01729],[@b61-ijms-10-01729],[@b62-ijms-10-01729],[@b72-ijms-10-01729],[@b125-ijms-10-01729]\]. As a result of adenylate kinase-mediated metabolic monitoring and AMP signaling ([Figure 3](#f3-ijms-10-01729){ref-type="fig"}), the activity of ATP-generating pathways is increased while ATP consumption is decreased. In particular, adenylate kinase phosphotransfer directly couples to K-ATP channels, facilitating the translation of metabolic signals critical in adjusting cellular excitability-dependent functions in response to demand \[[@b1-ijms-10-01729],[@b33-ijms-10-01729],[@b37-ijms-10-01729],[@b46-ijms-10-01729],[@b47-ijms-10-01729],[@b61-ijms-10-01729]\]. In the intracellular environment, where diffusion is restricted, reactions tend to depend strongly on local rather than global concentrations of metabolites \[[@b37-ijms-10-01729]\]. In this way, adenylate kinase-catalyzed AMP signal generation and nucleotide exchange in the intimate "sensing zone" of metabolic sensors regulate the dynamics and frequency of ligand switching in order to facilitate the decoding of cellular information \[[@b5-ijms-10-01729],[@b30-ijms-10-01729],[@b44-ijms-10-01729]\].
Indeed, intracellular measurements using ^18^O-assisted ^31^P NMR and mass spectrometric techniques indicate that adenylate kinase is uniquely situated to tune the magnitude of the AMP signal: its phosphotransfer displays only a fraction of total capacity, is compartmentalized and is not universally at equilibrium, so it can differentially promote both AMP signal generation and AMP rephosphorylation \[[@b1-ijms-10-01729],[@b4-ijms-10-01729],[@b28-ijms-10-01729],[@b31-ijms-10-01729],[@b36-ijms-10-01729],[@b38-ijms-10-01729],[@b45-ijms-10-01729],[@b60-ijms-10-01729],[@b120-ijms-10-01729],[@b126-ijms-10-01729]\]. In this way, adenylate kinase, through negative and positive feedback loops, governs adenine nucleotide and glycolytic oscillations, providing a dynamic component for facilitated intracellular energetic signal communication \[[@b5-ijms-10-01729],[@b29-ijms-10-01729],[@b117-ijms-10-01729],[@b119-ijms-10-01729]\]. Moreover, through a series of spatially linked enzymatic reactions, adenylate kinase facilitates propagation of nucleotide signals in the intracellular and extracellular space, thus coordinating the response of metabolic sensors and nucleotide/nucleoside receptor signaling \[[@b1-ijms-10-01729],[@b5-ijms-10-01729],[@b64-ijms-10-01729],[@b65-ijms-10-01729]\]. Therefore, adenylate kinase provides sustained communication between cytosolic and nuclear processes, coordinating cellular energetic and genomic events \[[@b30-ijms-10-01729],[@b34-ijms-10-01729]\], and between myocardium and vasculature, regulating coronary flow and oxygen delivery \[[@b4-ijms-10-01729]\]. An adenylate kinase-induced increase in AMP promotes the generation of adenosine, a powerful metabolic signaling and cardioprotective agent \[[@b60-ijms-10-01729],[@b104-ijms-10-01729]\]. AMP also triggers AMPK activation, an essential signaling module in cellular adaptation to stress \[[@b42-ijms-10-01729],[@b52-ijms-10-01729],[@b62-ijms-10-01729],[@b127-ijms-10-01729]\].
In the heart, the significance of AMPK is suggested by the findings that AMPK activity is increased in ischemia and preconditioning, and that AMPK agonists protect myocardium against ischemic injury \[[@b127-ijms-10-01729],[@b128-ijms-10-01729]\]. Hypoxia or metabolic stress increases adenylate kinase flux, inducing AMP generation and downstream AMP/adenosine signaling events important in cardioprotection \[[@b1-ijms-10-01729],[@b59-ijms-10-01729],[@b63-ijms-10-01729],[@b104-ijms-10-01729]\]. Adenylate kinase deficiency compromises metabolic signal reception by metabolic sensors, such as K-ATP channels and AMPK, producing a stress-vulnerable phenotype \[[@b33-ijms-10-01729],[@b42-ijms-10-01729]\]. In this regard, new data suggest that hydrolysis of AMP by CD73 on graft-resident or circulating cells regulates endothelial barrier function, diminishes transendothelial leukocyte trafficking and mitigates the inflammatory and immune response of cardiac transplantation via the A(2B) adenosine receptors \[[@b129-ijms-10-01729]\]. Thus, adenylate kinase-mediated AMP signaling through metabolic sensors and nucleoside receptors is an integral part of the cellular stress-response system. Adenylate kinase does not act alone in performing cellular metabolic monitoring. Recent evidence indicates that the interaction between adenylate kinase and creatine kinase phosphorelays determines metabolic signal communication to the K-ATP channel and mediates energetic remodeling in preconditioned and failing hearts \[[@b29-ijms-10-01729],[@b33-ijms-10-01729],[@b34-ijms-10-01729],[@b37-ijms-10-01729],[@b41-ijms-10-01729],[@b44-ijms-10-01729],[@b116-ijms-10-01729]\].
Under normal conditions creatine kinase suppresses adenylate kinase phosphotransfer by scavenging cellular ADP \[[@b28-ijms-10-01729]\], thus maintaining the K-ATP channels in the closed state with low metabolic signaling through the AK → AMP → AMPK and AK → AMP → adenosine axes \[[@b33-ijms-10-01729],[@b52-ijms-10-01729],[@b61-ijms-10-01729],[@b105-ijms-10-01729],[@b116-ijms-10-01729]\]. However, hypoxia or metabolic stress diminishes creatine kinase flux and increases adenylate kinase flux, inducing AMP generation and downstream AMP/adenosine signaling events that adjust cellular energy balance \[[@b1-ijms-10-01729],[@b60-ijms-10-01729],[@b63-ijms-10-01729],[@b126-ijms-10-01729]\]. Indeed, deletion of the AK1 gene shifts this balance towards the creatine kinase system, compromising energetic signal communication \[[@b4-ijms-10-01729],[@b33-ijms-10-01729],[@b35-ijms-10-01729],[@b39-ijms-10-01729]\]. Accordingly, AK1 deficiency blunts metabolic signal reception by metabolic sensors, such as K-ATP channels and AMPK, compromising their ability to withstand energetic stress \[[@b33-ijms-10-01729],[@b42-ijms-10-01729]\]. Underscoring the significance of phosphotransfer redistribution in metabolic signaling is the observation that altered adenylate kinase and creatine kinase phosphotransfer activities are associated with hypertrophy, abnormal vascular tone and hypertension \[[@b105-ijms-10-01729],[@b130-ijms-10-01729],[@b131-ijms-10-01729]\]. The role of adenylate kinase in integrating signaling pathways is further indicated by the recent demonstration that AMP-stimulated AMPK regulates the vascular response to hypoxia and nitric oxide-dependent vasorelaxation, as well as excitation of the oxygen-sensing carotid body, which is critical for detection of O~2~ deficit in the bloodstream and adjustment of the breathing pattern \[[@b132-ijms-10-01729]--[@b134-ijms-10-01729]\].
In addition, adenylate kinase appears to be the major phosphotransfer system in extraocular muscle, where, together with creatine kinase, it regulates precise eye movements \[[@b135-ijms-10-01729]\]. A more recent study suggests that intracellular and extracellular adenylate kinase plays an important role in nucleotide energetic signaling regulating actin assembly-disassembly involved in cell movement and chemotaxis \[[@b136-ijms-10-01729]\]. Thus, the balance between phosphotransfer systems and subsequent signaling events determines the outcome of metabolic regulation of muscle contractility, electrical activity and vascular tone. In recent years, adenylate kinase-mediated extracellular AMP and ADP signaling has gained additional significance due to its involvement in high-density lipoprotein (HDL) endocytosis, in signaling through adenosine and AMP-specific receptors \[[@b137-ijms-10-01729]\] and through AMP- and other metabolite-sensing RNA riboswitches \[[@b138-ijms-10-01729],[@b139-ijms-10-01729]\], as well as in cell differentiation, tumor suppression and regulation of vascular tone \[[@b5-ijms-10-01729],[@b30-ijms-10-01729],[@b45-ijms-10-01729],[@b51-ijms-10-01729],[@b53-ijms-10-01729],[@b55-ijms-10-01729],[@b64-ijms-10-01729],[@b65-ijms-10-01729],[@b140-ijms-10-01729],[@b141-ijms-10-01729]\]. In this regard, the metabolic monitoring system in prokaryotic cells utilizes RNA structures embedded at the 5′ ends of mRNAs to sense particular metabolic cues and regulate the encoded genes \[[@b142-ijms-10-01729]\]. By sensing the energy status of muscle cells and regulating gene expression, adenylate kinase and downstream AMP signaling are critical regulators of mitochondrial biogenesis through the AMPK-PGC-1 signaling cascade, thereby increasing the energy-transducing capacity of the cell \[[@b138-ijms-10-01729],[@b143-ijms-10-01729]\].
Moreover, adenylate kinase through AMP signaling regulates glycolytic and glycogenolytic pathways, conveying information regarding increased energy demand \[[@b5-ijms-10-01729]\]. Regulation of glycogen metabolism is of primary importance in muscle energetics, including that of the heart \[[@b5-ijms-10-01729],[@b144-ijms-10-01729]\]. In particular, defective AMP and AMPK signaling is a primary cause of imbalance in glycogen metabolism and associated disease conditions \[[@b145-ijms-10-01729]\]. Adenylate kinase metabolic monitoring also plays a significant role in plant tissues \[[@b146-ijms-10-01729]\]. Modulation of plastidial adenylate kinase activity, and consequently of AMP and adenine nucleotide levels, significantly increases starch yield and amino acid biosynthesis \[[@b147-ijms-10-01729],[@b148-ijms-10-01729]\], while overexpression of adenylate kinase in yeasts markedly increases ATP production \[[@b149-ijms-10-01729]\]. Thus, adenylate kinase, by providing ATP β-phosphoryls for energetic needs and by regulating nucleotide ratios and AMP signaling, adjusts the efficiency of energy metabolism, the capacity of ATP-generating reactions and the activity of biosynthetic processes. Intracellular energetic and metabolic signal communication can employ several different mechanisms, ranging from facilitated diffusion to ligand conduction, depending on cell type, specific compartment, and the structural organization and topological arrangement of enzymatic networks \[[@b29-ijms-10-01729],[@b30-ijms-10-01729]\]. While the spatial heterogeneity and directionality of an enzyme-catalyzed process are not important under well-mixed conditions *in vitro,* they become very important in highly organized living matter \[[@b150-ijms-10-01729]\].
The cluster organization of enzymes and the high rate of unidirectional phosphoryl exchange in phosphotransfer systems would promote ligand conduction and signal communication over cellular distances \[[@b1-ijms-10-01729],[@b29-ijms-10-01729],[@b30-ijms-10-01729]\]. Adenylate kinase facilitates mitochondria-nuclear energetic communication \[[@b34-ijms-10-01729]\] and, within the nucleus, embedded in organized enzymatic complexes of nucleotide-metabolizing and phosphotransfer enzymes, is involved in maintaining proper nuclear dNTP ratios and facilitates channeling of nucleotides into the DNA replication machinery \[[@b115-ijms-10-01729]\]. Imbalances in dNTP ratios affect the fidelity of DNA replication and repair, leading to acquired mutations \[[@b30-ijms-10-01729],[@b115-ijms-10-01729]\]. Thus, adenylate kinase surveys nucleotide ratios necessary for error-free DNA replication, while another nuclear protein, Rad50, harboring adenylate kinase activity, participates in DNA double-strand break repair \[[@b93-ijms-10-01729]\]. Despite recent advances in the metabolic signaling field, more studies are necessary to elucidate the mechanisms that link adenylate kinase phosphotransfer flux, AMP signal dynamics and the response of metabolic sensors (AMPK, K-ATP), as well as the significance of the AK → AMP → AMP-sensors signaling system in heart physiology, such as in shaping the heart force-frequency relationship, the Frank-Starling response, and cardioprotection \[[@b5-ijms-10-01729],[@b44-ijms-10-01729]\]. At the molecular level, studies of multienzyme complexes including adenylate kinase would shed light on the intimate mechanisms of metabolic sensing. Co-localization of components in signal transduction cascades is critical for the directionality and specificity of the signaling response. The presence of adenylate kinase in close proximity to AMPK would facilitate metabolic signal transduction and create a favorable energetic environment for the phosphorylation of targeted proteins.
The significance and downstream mechanisms by which the AK → AMP → AMPK signaling axis regulates the cell cycle, developmental potential, and timing of differentiation are just starting to be elucidated. Thus, although more studies are needed, adenylate kinase-mediated metabolic monitoring and downstream AMP signaling are increasingly recognized among the major homeostatic mechanisms in a number of cells, critical in the regulation of diverse cellular processes and the stress response.

5.. AMP as Universal Fuel Consumption and Low Energy Signal, Mediator of Drug Action and Therapeutic Agent
==========================================================================================================

Metabolic signals regulate and integrate many vital functions throughout the human body, including energy homeostasis, blood pressure, heart performance, food intake, hormonal status and brain performance \[[@b30-ijms-10-01729],[@b71-ijms-10-01729]\]. Recent evidence suggests the general importance of adenylate kinase-mediated AMP signaling in appetite, wakefulness and sleep control and in hormonal, food and antidiabetic drug actions, which are coupled to alterations of cellular AMP levels and associated signaling \[[@b4-ijms-10-01729],[@b5-ijms-10-01729],[@b46-ijms-10-01729],[@b47-ijms-10-01729],[@b55-ijms-10-01729],[@b62-ijms-10-01729]--[@b70-ijms-10-01729],[@b86-ijms-10-01729],[@b138-ijms-10-01729],[@b151-ijms-10-01729]\]. AMP signaling also plays an important role in the hypoxic response, immune function and taste reception \[[@b60-ijms-10-01729],[@b85-ijms-10-01729],[@b129-ijms-10-01729],[@b137-ijms-10-01729],[@b139-ijms-10-01729]\]. Adenylate kinase integrates and tunes AMP signals coming from different sources and delivers them to metabolic sensors to elicit a regulatory response ([Figure 4](#f4-ijms-10-01729){ref-type="fig"}). 
Reduced or increased adenylate kinase activity would distort the integration of AMP signals and, depending on tissue metabolic phenotype and the intensity of AMP-generating reactions, could have either a positive or a negative impact on the activity of metabolic sensors \[[@b42-ijms-10-01729],[@b152-ijms-10-01729]\]. Recent evidence suggests that ingestion of fructose, a major constituent of sugar and high-fructose corn syrup, causes an increase in cellular AMP and the AMP/ATP ratio, leading to activation of AMPK \[[@b68-ijms-10-01729],[@b153-ijms-10-01729]\]. In contrast to glucose, which usually lowers cellular AMP levels and inhibits AMPK, fructose activates hypothalamic AMPK and increases food consumption ([Figure 4](#f4-ijms-10-01729){ref-type="fig"}) \[[@b68-ijms-10-01729]\]. Compared with glucose-sweetened beverages, consumption of fructose-sweetened beverages with meals elevates postprandial plasma triglycerides and lowers 24-h insulin and leptin profiles in normal-weight women \[[@b154-ijms-10-01729]\]. Fructose metabolism bypasses the rate-limiting step of the glucose pathway, and fructose is therefore metabolized much more quickly than glucose. Excessive fructose intake depletes cellular ATP by trapping inorganic phosphate required for ATP resynthesis, consequently inducing nucleotide degradation and increasing plasma uric acid and allantoin levels \[[@b155-ijms-10-01729]\]. These metabolic alterations induced by fructose may play an important role in the epidemic of metabolic syndrome and may contribute to the development of diabetes and cardiovascular disease \[[@b156-ijms-10-01729]\]. A diet high in fructose can give rise to hyperlipidemia, insulin resistance and hypertension \[[@b157-ijms-10-01729]\]. Similarly, ethanol consumption and subsequent acetate metabolism cause AMP accumulation in liver and other tissues, resulting in increased AMP and adenosine signaling and blood flow \[[@b158-ijms-10-01729],[@b159-ijms-10-01729]\]. 
Excess ethanol consumption can also cause nucleotide degradation and depletion of the cellular adenine nucleotide pool \[[@b160-ijms-10-01729]\]. However, short-term and limited exposure to fructose and ethanol could be beneficial in stimulating energy metabolism and metabolic signaling, thus "AMPing" your body \[[@b153-ijms-10-01729],[@b161-ijms-10-01729]\]. In this regard, the cardioprotection induced by both fructose and ethanol preconditioning stimuli could be related to augmented AMP signaling \[[@b162-ijms-10-01729],[@b163-ijms-10-01729]\]. Normal human blood AMP levels are in the 10--20 μM range, of which only a fraction is free due to binding to serum proteins \[[@b164-ijms-10-01729]\]. Intracellular and blood AMP levels are increased during physical activity and are sensitive indicators of myocardial ischemia, rising within minutes after the insult \[[@b165-ijms-10-01729],[@b166-ijms-10-01729]\]. Adenylate kinase isoforms provide fine tuning of intracellular, interstitial and circulating AMP levels owing to their different kinetic properties and localization ([Figure 5](#f5-ijms-10-01729){ref-type="fig"}). Deficiency of the AK1 isoform reduces the AMP signal stress response \[[@b4-ijms-10-01729],[@b32-ijms-10-01729],[@b33-ijms-10-01729],[@b35-ijms-10-01729],[@b42-ijms-10-01729]\]; however, at the basal level it could result in higher AMP signaling due to compromised tune-up mechanisms \[[@b152-ijms-10-01729]\]. Circulating AMP is emerging as a molecular mediator of hibernation and of the constant-darkness effect, switching mice from a glucose-burning, fat-storing state to a fat-burning, glucose-conserving lethargy \[[@b66-ijms-10-01729]\]. In hibernating mammals, AMP originating, at least in part, from brown fat also down-regulates the seasonally-dependent proliferation of the thymus \[[@b167-ijms-10-01729]\]. In addition, overworked brains release adenosine, usually originating from AMP, to slow cell activity and trigger sleep \[[@b168-ijms-10-01729],[@b169-ijms-10-01729]\]. 
These data strongly support the notion that AMP and adenosine play a key role as endogenous modulators of wakefulness and sleep, which fits with our proposed role of phosphotransfer reactions in regulating brain activity and information processing \[[@b30-ijms-10-01729]\]. Available evidence suggests that growth factors, which alter cellular energy metabolism, and hormones, such as adipocyte-derived leptin and adiponectin, activate AMPK through local or temporal regulation of AMP levels \[[@b170-ijms-10-01729]--[@b172-ijms-10-01729]\]. Leptin alters muscle adenine nucleotide homeostasis (decreases ATP) and increases AMP dynamics (inferred from elevated AMP and IMP levels), followed by activation of AMPK \[[@b173-ijms-10-01729]--[@b175-ijms-10-01729]\]. This could also be due to increased expression of uncoupling proteins (UCP), mitochondrial carriers that dissipate the electrochemical gradient across the mitochondrial inner membrane \[[@b176-ijms-10-01729],[@b177-ijms-10-01729]\]. Mild uncoupling of mitochondria shifts the cellular nucleotide balance and, owing to the monitoring function of adenylate kinase, AMP levels would rise, triggering metabolic signaling cascades \[[@b176-ijms-10-01729],[@b177-ijms-10-01729]\]. Interestingly, AMP by itself, through interaction with ANT, can produce uncoupling that could facilitate respiration and reduce ROS production \[[@b178-ijms-10-01729]\], which could be beneficial during ischemia-reperfusion. AMP signaling also plays a significant role in cellular senescence. Proteome analysis has shown that AK1 protein is markedly increased in senescent skeletal muscle fibers \[[@b179-ijms-10-01729]\], and the lifespan of worms can be extended by the addition of copies of the AMPK gene and by chronic activation of AMPK, as is seen on calorie-restricted diets \[[@b180-ijms-10-01729]\]. 
Indeed, AMP/ATP ratios are severalfold higher in senescent fibroblasts than in young fibroblasts, and this is accompanied by a marked elevation in AMPK activity \[[@b181-ijms-10-01729],[@b182-ijms-10-01729]\]. This could be viewed as a compensatory measure to cope with the declining capacity of energy metabolism during aging. AMP and AMPK signaling is critical in cell differentiation, in maintaining cell polarity and completing normal cell division, as well as in the induction of meiotic maturation in mammalian oocytes \[[@b183-ijms-10-01729]--[@b185-ijms-10-01729]\]. AMP, directly or through AMPK, plays a physiological role in modulating the activity of the cystic fibrosis transmembrane conductance regulator (CFTR) in polarized epithelial cells \[[@b186-ijms-10-01729],[@b187-ijms-10-01729]\]. The CFTR nucleotide-binding folds possess an intrinsic adenylate kinase activity which could facilitate metabolic signal reception \[[@b61-ijms-10-01729],[@b188-ijms-10-01729]\]. Thus, AMP signaling plays a critical role in altered hormonal and energetic states, including cell differentiation, maintenance of polarity and senescence. AMP is also an important mediator of insulin and metabolic protein kinase Akt signaling. It has been proposed that protein kinase Akt mediates the inhibitory effects of insulin on AMPK \[[@b189-ijms-10-01729]\]. Since Akt does not directly phosphorylate AMPK, changes in Akt activity induced by insulin can regulate AMPK through changes in the cellular AMP/ATP ratio. Indeed, it has been demonstrated that Akt activation reduces the cellular AMP/ATP ratio, causing a decline in AMPK activity \[[@b190-ijms-10-01729]\]. Insulin- and Akt-mediated inhibition of AMPK can be overcome by metformin, which is known to act on complex I of the mitochondrial respiratory chain, causing an increase in AMP levels \[[@b189-ijms-10-01729],[@b191-ijms-10-01729]\]. In this regard, free fatty acids, whose activation generates AMP, prevent AMPK inhibition by insulin \[[@b192-ijms-10-01729]\]. 
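The common thread in these observations is that AMPK activity tracks the cellular AMP/ATP ratio. As a rough illustration only, the sensor response can be sketched with a Hill-type curve; the functional form and the constants below are assumptions for illustration, not measured AMPK kinetics:

```python
# Toy AMPK response curve: fractional activation rises with AMP/ATP.
# The Hill form, k_half and hill coefficient are illustrative assumptions.

def ampk_activity(amp_atp_ratio, k_half=0.05, hill=2.0):
    """Fractional AMPK activation (0..1) as a function of the AMP/ATP ratio."""
    r = amp_atp_ratio ** hill
    return r / (k_half ** hill + r)

# Hypothetical ratios in the spirit of the young vs. senescent comparison:
young = ampk_activity(0.02)      # low AMP/ATP: AMPK mostly inactive
senescent = ampk_activity(0.10)  # severalfold higher AMP/ATP: mostly active
print(young, senescent)
```

A sigmoidal response of this general shape captures why a modest (severalfold) shift in AMP/ATP can move the sensor from a mostly inactive to a mostly active state.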
Thus, the insulin-Akt signaling axis can expand its range of metabolic effects by tuning AMP signals and the activity of AMPK. Most importantly, recent studies indicate that the pharmacological actions of the popular antidiabetic drugs metformin and the thiazolidinediones, and of cholesterol-lowering statins, are related to their ability to alter cellular AMP levels and consequently AMPK activity \[[@b151-ijms-10-01729],[@b193-ijms-10-01729]--[@b195-ijms-10-01729]\]. Time course studies revealed that troglitazone-induced increases in the phosphorylated forms of AMPK and ACC are paralleled by an increase in the AMP-to-ATP ratio \[[@b193-ijms-10-01729]\]. A similar increase in AMP is seen following incubation of cells with rosiglitazone \[[@b196-ijms-10-01729]\]. Moreover, livers of rats treated with resveratrol, a constituent of red grapes and red wine, show a strong tendency toward AMPK activation, as well as increased phosphorylation of two downstream indicators of its activity \[[@b197-ijms-10-01729]\]. Besides inhibiting HMG-CoA reductase, statins preserve CD39/ATPDase activity in thrombin-treated endothelial cells \[[@b198-ijms-10-01729]\]. Preservation of adenine nucleotide metabolism may directly contribute to the observed anti-thrombotic and anti-inflammatory actions of statins. In this regard, metformin, a biguanide compound from French lilac, and more recently extracts from bitter melon, a traditional Chinese medicine, activate AMPK and exert their glucose-lowering effects through mild interference with the efficiency of energy metabolism, resulting in changes in intracellular nucleotide dynamics and AMP levels \[[@b151-ijms-10-01729],[@b194-ijms-10-01729],[@b199-ijms-10-01729]\]. By activating AMP signaling, metformin also improves cardiac function after ischemia in rats \[[@b200-ijms-10-01729],[@b201-ijms-10-01729]\]. 
Interestingly, metformin increases glucose utilization and lactate production even in cells with a dominant-negative mutant form of AMPK (DN-AMPK) \[[@b202-ijms-10-01729]\], suggesting the existence of AMPK-independent pathways of metabolic signaling, including direct effects of AMP on other metabolic sensors and enzymes of energetic pathways. Recent interest in the clinical use of adenosine as an "adenosine flush" and in reperfusion therapy is also associated with AMP signaling, activation of AMPK and replenishment of cellular ATP levels \[[@b203-ijms-10-01729]--[@b205-ijms-10-01729]\]. Adenosine, besides signaling through adenosine receptors, can be taken up by cells and phosphorylated to AMP, initiating metabolic signaling cascades \[[@b104-ijms-10-01729],[@b206-ijms-10-01729]\]. Subsequent conversion of AMP to ADP and ATP by reactions involving adenylate kinase, together with ATP synthesis in mitochondrial oxidative phosphorylation and glycolysis, would replenish the cellular ATP and total adenine nucleotide pool which is diminished during ischemic insult \[[@b104-ijms-10-01729],[@b206-ijms-10-01729]\]. Thus, adenosine and AMP have pleiotropic metabolic signaling and energetic effects which could be further explored in reperfusion therapy. Last but not least, AMP, through interaction with taste receptors, has a bitterness-suppressing quality that allows taste buds to interpret food as "sweeter" \[[@b207-ijms-10-01729]\]. This makes lower-calorie food products more palatable. Recently AMP has been approved by the FDA as a 'Bitter Blocker' additive to foodstuffs (Linguagen Corp.). AMP is found in a wide range of natural foods, including breast milk. Calcium compounds in breast milk are usually bitter, and thus breast milk may be a natural system using bitter blockers. AMP also inhibits behavioral and electrophysiological responses of mice to bitter tastants \[[@b207-ijms-10-01729]\]. 
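The salvage sequence just described (adenosine uptake and phosphorylation, adenylate kinase interconversion, and oxidative rephosphorylation) can be sketched as a simple bookkeeping exercise. The unit stoichiometric steps and the starting amounts below are illustrative assumptions; the point is only that one salvaged adenosine expands the adenine nucleotide pool by one:

```python
# Illustrative sketch (arbitrary units) of the salvage route:
# adenosine is phosphorylated to AMP, adenylate kinase converts
# AMP + ATP into 2 ADP, and oxidative phosphorylation regenerates ATP.

def adenosine_kinase(pool):
    # adenosine + ATP -> AMP + ADP
    pool["adenosine"] -= 1; pool["ATP"] -= 1
    pool["AMP"] += 1; pool["ADP"] += 1

def adenylate_kinase(pool):
    # AMP + ATP -> 2 ADP (AK reaction running toward ADP)
    pool["AMP"] -= 1; pool["ATP"] -= 1; pool["ADP"] += 2

def oxidative_phosphorylation(pool):
    # ADP + Pi -> ATP (inorganic phosphate not tracked here)
    pool["ADP"] -= 1; pool["ATP"] += 1

pool = {"adenosine": 1, "AMP": 0, "ADP": 0, "ATP": 4}
nucleotides_before = pool["AMP"] + pool["ADP"] + pool["ATP"]

adenosine_kinase(pool)
adenylate_kinase(pool)
oxidative_phosphorylation(pool)
oxidative_phosphorylation(pool)

# One salvaged adenosine enlarges the adenine nucleotide pool by one,
# and oxidative phosphorylation restores ATP to its starting level:
assert pool["AMP"] + pool["ADP"] + pool["ATP"] == nucleotides_before + 1
assert pool["ATP"] == 4
```

This moiety-conserved bookkeeping is why adenosine supplementation can help rebuild a nucleotide pool diminished by ischemic degradation, beyond its receptor-mediated signaling.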
A number of AMP-containing medications are used for nutritional supplementation and for treating dietary shortage or imbalance and disease conditions \[[@b208-ijms-10-01729]\]. Nucleotides such as AMP affect a number of immune functions, including the reversal of malnutrition- and starvation-induced immunosuppression due to adenine nucleotide shortage, and the enhancement of T-cell maturation and function and natural killer cell activity \[[@b208-ijms-10-01729]\]. However, AMP by itself has immunosuppressive properties. In fact, mosquito and fly saliva contains AMP and adenosine as vasodilatative agents, which also have immunosuppressant activity facilitating pathogen or parasite transmission \[[@b209-ijms-10-01729]\]. To this end, secreted and cell-associated adenylate kinase has been identified as a virulence factor in a number of pathogens, affecting nucleotide signaling and host immune defense \[[@b210-ijms-10-01729]\]. Owing to this immunomodulatory function, a promising therapeutic potential for the AMP analog AICAR (ZMP) exists in the treatment of multiple sclerosis and other Th1 cell-mediated inflammatory diseases such as psoriasis and arthritis \[[@b211-ijms-10-01729]\]. Thus, AMP is emerging as a pivotal metabolic signal conveying information about food consumption, hormonal and energy metabolism status, a regulator of brain activity associated with wakefulness and appetite control, as well as a mediator of drug action and a therapeutic agent.

6.. Adenylate Kinase and AMP Signaling Networks in Body Energy Sensing
======================================================================

Sensing body energy level and the corresponding mental and physical strength is important for humans and animals, and this property has conferred key survival and evolutionary advantages \[[@b5-ijms-10-01729],[@b30-ijms-10-01729],[@b72-ijms-10-01729],[@b212-ijms-10-01729]--[@b216-ijms-10-01729]\]. 
Here we further advance the hypothesis that body energy sensing is mediated in part by adenylate kinase, which conveys information about the adenine nucleotide pool status and thus the overall energy balance \[[@b5-ijms-10-01729],[@b30-ijms-10-01729]\]. The general concept, proposed previously by H. Taegtmeyer \[[@b217-ijms-10-01729]\], is that systemic metabolic communication and energy transfer arise from the interaction of a series of moiety-conserved cycles operating through tissues and the circulation. Phosphotransfer reactions have emerged as principal signal generators and coupling relays between cellular metabolism and metabolic sensors \[[@b5-ijms-10-01729],[@b29-ijms-10-01729],[@b30-ijms-10-01729],[@b44-ijms-10-01729],[@b218-ijms-10-01729]\]. In particular, adenylate kinase, as a generator and modulator of metabolic signals, is a critical player integrating the complex AMPK and K-ATP channel signaling cascades in the regulation of hormone secretion, blood flow, stress response, and muscle and brain functional activity \[[@b1-ijms-10-01729],[@b4-ijms-10-01729],[@b5-ijms-10-01729],[@b30-ijms-10-01729],[@b33-ijms-10-01729],[@b34-ijms-10-01729],[@b37-ijms-10-01729],[@b47-ijms-10-01729]\]. A case in point is myocardial-vascular metabolic signal communication governed by phosphotransfer redistribution, metabolic cycles and relays culminating in a metabolic sensor and functional response ([Figure 6](#f6-ijms-10-01729){ref-type="fig"}) \[[@b4-ijms-10-01729]\]. As brain function specifically depends on glucose metabolism, new spatial and network representations of the glycolytic pathway \[[@b5-ijms-10-01729]\] allow new perspectives on, and understanding of, energetic signal communication and glucose sensing in neurons \[[@b30-ijms-10-01729],[@b212-ijms-10-01729]\]. 
Body energy sensing depends on information relayed and distributed to a central network of metabolic sensing neurons through hard-wired neural connections and by metabolic and hormonal signals from the periphery coming through the blood and interstitial fluids \[[@b212-ijms-10-01729],[@b214-ijms-10-01729]\]. The brain has a specialized set of neurons that integrate many of these signals and produce a regulatory response. These neurons, first described as "glucose sensing", are really broad-range cellular metabolic sensors that have transporters, metabolite-sensing kinases, ion channels, and receptors that allow them to sense and interpret signals coming from the periphery \[[@b214-ijms-10-01729]\]. Metabolic sensing neurons are integrated into a network that links them to afferent and efferent pathways involved in the control of energy homeostasis \[[@b212-ijms-10-01729]\]. Recent studies indicate that besides satiety hormonal signals, such as leptin and insulin, a myriad of metabolic inputs are generated and sensed by the brain, including glucose, fructose, fatty acids and their metabolites, amino acids, Krebs cycle intermediates and nucleotides, through specialized receptors and signaling cascades \[[@b212-ijms-10-01729]--[@b216-ijms-10-01729]\]. In many cases, reception or metabolism of these signals by sensing neurons results in an altered intracellular ATP/AMP ratio and altered behavior of metabolic sensors such as AMPK, K-ATP channels and other regulators of firing activity \[[@b33-ijms-10-01729],[@b72-ijms-10-01729],[@b213-ijms-10-01729],[@b219-ijms-10-01729],[@b220-ijms-10-01729]\]. This integrated information is then used not only to guide the choices the animal makes about the amount of fuel to take in from the environment but also to determine how stored fuels should be distributed and metabolized by various peripheral tissues. 
Molecular defects or a rise in the threshold for sensing catabolic signals from the periphery by specialized metabolic sensing neurons is one of the causes of obesity \[[@b212-ijms-10-01729]--[@b214-ijms-10-01729]\]. Metabolic sensing neurons are regulated by glucose through mechanisms requiring either the K-ATP channel or the Na^+^/glucose cotransporter SGLT3, which is a glucose-sensing rather than glucose-transporting molecule \[[@b212-ijms-10-01729]--[@b216-ijms-10-01729]\]. In glucose-excited neurons, the decrease in ATP and the ATP-to-ADP ratio leads to activation of K-ATP channels and plasma membrane hyperpolarization, and thus a reduction in neuronal firing activity. In glucose-inhibited neurons, the reduced ATP-to-ADP ratio induces closure of chloride channels and/or a reduction in the activity of the Na-K pump, depolarization of the plasma membrane, activation of voltage-sensitive Ca^2+^ channels, and synaptic neurotransmitter secretion. Changes in energy expenditure are achieved by regulating hormonal status, sympathetically mediated thermogenesis and cellular energy sensors such as AMPK, which switches off ATP-consuming pathways and activates ATP-regenerating pathways in response to cellular fuel shortage \[[@b212-ijms-10-01729]--[@b214-ijms-10-01729]\]. Recent studies show that the activity of AMPK in the brain is increased in response to low levels of cellular fuels and negative energy balance and, contrary to peripheral tissues, is decreased by leptin \[[@b72-ijms-10-01729],[@b213-ijms-10-01729],[@b219-ijms-10-01729]\]. Furthermore, reduced activity of AMPK in the hypothalamus reduces food intake and body weight \[[@b213-ijms-10-01729],[@b219-ijms-10-01729]\]. Circulating leptin levels give the brain input regarding energy storage so it can regulate appetite and metabolism. 
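The two firing patterns described above can be summarized in a deliberately simplified toy model in which the ATP-to-ADP ratio stands in for glucose availability; the threshold value and the boolean firing outputs are assumptions for illustration, not a biophysical neuron model:

```python
# Toy model of the two metabolic sensing neuron types described above.
# When ATP/ADP falls, K-ATP channels open and hyperpolarize
# glucose-excited neurons; glucose-inhibited neurons respond oppositely.

def glucose_excited_firing(atp_adp_ratio, threshold=10.0):
    """Fires when ATP/ADP is high (K-ATP channels closed -> depolarized)."""
    return atp_adp_ratio >= threshold

def glucose_inhibited_firing(atp_adp_ratio, threshold=10.0):
    """Fires when ATP/ADP is low (Cl- channels close / Na-K pump slows)."""
    return atp_adp_ratio < threshold

# High glucose -> high ATP/ADP: the excited neuron fires, the inhibited
# neuron is silent; at low glucose the pattern reverses.
assert glucose_excited_firing(20.0) and not glucose_inhibited_firing(20.0)
assert glucose_inhibited_firing(4.0) and not glucose_excited_firing(4.0)
```

The complementary thresholds are the essential design: the same intracellular energy signal is read out with opposite signs by the two neuron populations, giving the network both "fuel present" and "fuel absent" channels.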
Although leptin is a circulating signal that reduces appetite, obese people in general have an unusually high circulating concentration of leptin and develop leptin resistance \[[@b212-ijms-10-01729]--[@b214-ijms-10-01729]\]. Among other factors, consumption of high amounts of fructose causes leptin resistance \[[@b221-ijms-10-01729]\]. This could be due to an excess of AMP signaling in the periphery, induced by both leptin and fructose, which is conveyed to the brain as a false "low energy" signal, forcing it to increase food consumption. AMP and other peripheral metabolic signals apparently can overcome inhibitory leptin signaling circuits in the hypothalamus \[[@b68-ijms-10-01729],[@b153-ijms-10-01729],[@b175-ijms-10-01729]\]. In this regard, a significant increase in the adenylate kinase isoform AK1 and other phosphotransfer enzymes in obese/overweight and morbidly obese women further indicates an imbalance in metabolic signaling in this metabolic syndrome \[[@b116-ijms-10-01729]\]. Labeling studies of adenine nucleotide β-phosphoryls indicate that adenylate kinase is active in the brain and that its phosphotransfer rate is increased by drugs improving cerebral circulation and memory dysfunction \[[@b222-ijms-10-01729]\]. Similarly, photic stimulation increases adenylate kinase-mediated ATP β-phosphoryl turnover in photoreceptors, suggesting a role in the energetics of these specialized cells \[[@b223-ijms-10-01729]\]. Other studies indicate that adenylate kinase isoforms in the brain may contribute to neuronal maturation and regeneration \[[@b23-ijms-10-01729],[@b224-ijms-10-01729]\]. Activation of the melanocortin system, which is involved in the regulation of appetite, metabolism and body weight, increases expression of adenylate kinase AK1 in the hypothalamus \[[@b225-ijms-10-01729]\]. 
AMPK, a master metabolic sensor present in the hypothalamus, responds to adenylate kinase-integrated AMP levels and plays a critical role in hormonal and nutrient-derived anorexigenic and orexigenic signaling \[[@b62-ijms-10-01729],[@b72-ijms-10-01729],[@b226-ijms-10-01729]\]. Adenylate kinase AK2 is among 14 genes mapped in quantitative trait loci for body weight and abdominal fat weight \[[@b227-ijms-10-01729]\]. Recent proteome studies revealed that adenylate kinase AK2 may be an important regulator involved in the anti-lipid and antioxidant effects of tomato paste \[[@b228-ijms-10-01729]\]. The down-regulation of AK1 expression by hyperglycemia in the pancreas may contribute to the defective coupling of glucose metabolism to K-ATP channel activity in type 2 diabetes \[[@b48-ijms-10-01729]\]. Information processing in the brain and energy sensing take place not only intracellularly but also on extracellular surfaces and between different types of cells through intercellular connections \[[@b30-ijms-10-01729],[@b229-ijms-10-01729],[@b230-ijms-10-01729]\]. In fact, brain ecto-adenylate kinase is an integral part of the synaptosomal ATP-metabolizing enzyme cascades regulating ATP, AMP and adenosine signaling \[[@b231-ijms-10-01729]\]. Similarly, adenylate kinase, by regulating nucleotide exchange and signaling at peripheral nerve endings, can convey information regarding the energy state of particular tissues, thus contributing to body energy sensing. In other cell types, ecto-adenylate kinase provides a mechanism for propagation of nucleotide-based signals along the cellular surface, thus coordinating multiple receptor-mediated signaling events \[[@b64-ijms-10-01729],[@b65-ijms-10-01729]\]. Extracellular AMP induces bronchoconstriction in suspected asthmatic patients and is used in disease diagnostics as a bronchial provocation test \[[@b232-ijms-10-01729]\]. 
Both ATP and adenosine signaling, which can be modulated by adenylate kinase, are critical in synaptic transmission and for normal brain function \[[@b233-ijms-10-01729]\]. Thus, the systemic adenylate kinase → AMP → AMPK signaling network represents a new modality in body energy balance. In different tissues, adenylate kinase activity depends on the nutritional and hormonal state \[[@b234-ijms-10-01729],[@b235-ijms-10-01729]\]. Adenylate kinase phosphotransfer flux is markedly suppressed by high glucose in insulin-secreting cells, reducing adenylate kinase-mediated AMP signaling to the K-ATP channel and AMPK, two regulators of hormone secretion \[[@b45-ijms-10-01729],[@b122-ijms-10-01729]\]. In humans, deficiency of the AK1 isoform is associated with mental retardation, psychomotor impairment and congenital anemia \[[@b236-ijms-10-01729]\]. In this regard, a strong correlation has been observed between fasting, higher expression of the adenylate kinase AK3 isoform and UCP3, and increased activity of AMPK and fatty acid oxidation \[[@b237-ijms-10-01729]\], suggesting an interrelated signaling cascade. Although appreciation of the significance of energetic and metabolic signaling is growing, little is known regarding the regulation of adenylate kinase phosphotransfer under different metabolic, hormonal and functional states, and how such regulation affects the ability of the cell to generate and respond to energetic stress-related signals. A newly emerging modality in extracellular and intracellular nucleotide signaling and information processing is the cAMP → AMP → adenosine/AMPK pathway, sequentially connecting cAMP- and AMP-response elements \[[@b238-ijms-10-01729]\]. In this pathway, cAMP signaling is followed by conversion of cAMP by phosphodiesterases to AMP, which activates the AMP-signaling cascade \[[@b183-ijms-10-01729]\]. 
The role of adenylate kinase in this system is envisioned to be the propagation of AMP metabolic signals along the membrane surface or within the cytosolic space and the nuclear compartment where AMPK resides \[[@b1-ijms-10-01729],[@b5-ijms-10-01729],[@b65-ijms-10-01729],[@b239-ijms-10-01729]\]. Consequently, production of adenosine from AMP by 5′-nucleotidase could stimulate adenosinergic signaling pathways \[[@b240-ijms-10-01729]\]. Thus, this kind of integration of cyclic nucleotide → nucleotide → nucleoside signaling provides a means of coordinating diverse signaling events and facilitating information transfer from one subsystem to another. Understanding the principles governing integration and synchronization of metabolic sensors with cellular metabolism is important for the regulation of cellular energetic and ionic homeostasis as well as hormonal balance and food intake \[[@b37-ijms-10-01729],[@b61-ijms-10-01729],[@b241-ijms-10-01729]--[@b243-ijms-10-01729]\]. Growing evidence and the increasing number of discovered metabolic signaling cascades indicate that these systems are essential in vital cellular processes, integrating gene expression, metabolism and the response to stress; moreover, their defects are associated with diseases under the wide umbrella of "metabolic syndrome" \[[@b33-ijms-10-01729],[@b62-ijms-10-01729],[@b243-ijms-10-01729],[@b244-ijms-10-01729]\]. In this regard, recent studies indicate that metabolites linked to glucose, such as succinate and alpha-ketoglutarate, can signal through specific G-protein-coupled receptors which are also present in neurons \[[@b245-ijms-10-01729],[@b246-ijms-10-01729]\]. Glucose usually decreases adenylate kinase phosphotransfer flux and intracellular AMP levels, and consequently adenosine signaling, but it can increase Krebs cycle substrate levels through anaplerosis \[[@b1-ijms-10-01729],[@b246-ijms-10-01729]\]. 
Succinate and alpha-ketoglutarate also have a signaling function in the cell nucleus, regulating DNA methylation, which has been implicated in obesity \[[@b247-ijms-10-01729]\]. Taken together, emerging data indicate that the coupling of "energetic" phosphotransfer enzymes with phosphoryl-transferring protein kinase cascades and metabolite-sensitive ion channels, transporters and receptors comprises a unified intracellular and transcellular energetic and metabolic signal transduction matrix capable of processing, integrating and delivering cellular information regarding energy state (see [Figure 6](#f6-ijms-10-01729){ref-type="fig"}). Within this network, adenylate kinase and AMP signaling throughout the intracellular compartments, extracellular space and body fluids comprise a major metabolic monitoring and body energy sensing node, capable of transducing and distributing signals to metabolic sensors to adjust energy metabolism, fuel selection, food intake and functional response.

7.. Adenylate Kinase Never Rests: From Altered Energetic Signaling to Immunodeficiency, Cell Motility Defects, Reticular Dysgenesis and Sensorineural Deafness
==============================================================================================================================================================

A thoughtful notion that "adenylate kinase never rests" was expressed by P.B. Detwiler more than a decade ago when commenting on the paper entitled "Adenine nucleoside diphosphates block adaptation of mechano-electrical transduction in hair cells" \[[@b248-ijms-10-01729]\]. In that paper, P.G. Gillespie and A.J. Hudspeth demonstrated that mechano-electrical signal transduction and adaptation by hair cells depend on adenine nucleotides and the adenylate kinase reaction \[[@b248-ijms-10-01729]\]. 
Indeed, the past decade has further revealed that the dynamic behavior of the adenylate kinase reaction governs many intracellular and extracellular nucleotide signaling processes and that adenylate kinase mutations cause severe human disease phenotypes \[[@b1-ijms-10-01729],[@b2-ijms-10-01729],[@b8-ijms-10-01729]--[@b11-ijms-10-01729],[@b22-ijms-10-01729],[@b30-ijms-10-01729],[@b102-ijms-10-01729]\]. Adenylate kinase, in conjunction with other phosphotransfer reactions, is involved in metabolic signaling to membrane ion channels regulating cell ionic balance and electrical activity, and in energy supply to distant ATPases powering spermatozoa and other cell motility-related processes \[[@b20-ijms-10-01729],[@b29-ijms-10-01729],[@b31-ijms-10-01729],[@b33-ijms-10-01729],[@b44-ijms-10-01729],[@b47-ijms-10-01729]\]. The large molecular weight isoform AK7 is unusual in the adenylate kinase family; it contains a Dpy-30 motif involved in protein dimerization and is associated with cell motility and other processes \[[@b2-ijms-10-01729],[@b5-ijms-10-01729],[@b96-ijms-10-01729]\]. Mutations in the evolutionarily conserved Ak7 gene result in animals presenting with pathological signs characteristic of primary ciliary dyskinesia (PCD), including ultrastructural ciliary defects and decreased ciliary beat frequency in respiratory epithelium \[[@b7-ijms-10-01729]\]. Ak7 appears to be a marker for cilia with (9 + 2) microtubular organization, critical in motility, morphogenesis, cell division and immune response \[[@b7-ijms-10-01729],[@b97-ijms-10-01729]\]. In humans, PCD is a genetically and phenotypically heterogeneous disorder, characterized by progressive development of bronchiectasis, inflammation, and features characteristic of chronic obstructive pulmonary disease \[[@b7-ijms-10-01729]\]. Thus, these results suggest that mutations of the human Ak7 gene may underlie a subset of genetically uncharacterized PCD cases. More recently, the elegant work from B. 
Wieringa's laboratory, examining adenylate kinase spatial positioning within the cell by utilizing different artificial location tags, demonstrates that cytoskeleton-based cell motility can be modulated by spatial repositioning of adenylate kinase enzymatic activity to provide local ATP supply and ADP scavenging capacity \[[@b8-ijms-10-01729]\]. These results are corroborated by the use of another heterodimer-inducing approach for transient translocation of AK1 to specific cellular sites under conditions of constant global AK1 activity in the cell \[[@b8-ijms-10-01729]\]. Thus, adenylate kinase functions as a coupling factor between local ATP supply and the regulation of actomyosin and other molecular motor behavior, which is central to cell shape changes and cell motility. Similarly, creatine kinase-mediated ATP supply fuels actin-based events in phagocytosis and is required for thrombin receptor signaling to the cytoskeleton \[[@b249-ijms-10-01729],[@b250-ijms-10-01729]\]. In this regard, the dynamics of actin and myosin in brain presynaptic and postsynaptic regions play a critical role in brain activity and memory formation, and consume the major fraction of the total energy in neurons \[[@b251-ijms-10-01729],[@b252-ijms-10-01729]\]. It remains to be determined whether adenylate kinase provides energy support for the motor proteins involved in neuronal trafficking and memory consolidation. Interestingly, in humans, genetic adenylate kinase deficiency or losses of adenylate kinase from the brain after surgery are associated with compromised intellectual function \[[@b253-ijms-10-01729],[@b254-ijms-10-01729]\]. Conversion of mechanical stimuli to electrical signals in the stereocilia of inner ear hair cells is associated with Ca^2+^ dynamics and movement of myosin motors, similar to many mechanoelectrical and ciliary motility systems \[[@b7-ijms-10-01729],[@b248-ijms-10-01729]\]. 
The energy supply and nucleotide-based signaling that coordinate the behavior of such systems are provided by phosphotransfer enzymes, including adenylate kinase \[[@b1-ijms-10-01729],[@b8-ijms-10-01729],[@b29-ijms-10-01729],[@b30-ijms-10-01729]\]. The creatine kinase circuit is essential for high-sensitivity hearing, as demonstrated by hearing loss in creatine kinase knockout mice \[[@b255-ijms-10-01729]\]. Recently, biallelic mutations in the mitochondrial adenylate kinase gene AK2 have been identified in individuals affected with reticular dysgenesis \[[@b9-ijms-10-01729]\]. Human reticular dysgenesis is the most severe form of the inborn severe combined immunodeficiencies (SCID) and is associated with sensorineural deafness. This disease is characterized by the absence of granulocytes and an almost complete deficiency of lymphocytes in peripheral blood. Mutations in the AK2 gene result in absent or strongly decreased AK2 protein expression \[[@b9-ijms-10-01729]\]. Restoration of AK2 expression in the bone marrow cells of individuals with reticular dysgenesis overcomes neutrophil differentiation arrest, underlining its specific requirement in the development of a restricted set of hematopoietic lineages. Moreover, AK2 is specifically expressed in the stria vascularis region of the inner ear, which may provide an explanation for the sensorineural deafness in these individuals \[[@b9-ijms-10-01729]\]. Interestingly, immunohistochemistry indicates that AK2 is present within the lumen of the stria vascularis capillaries, suggesting that it could function as an ecto-enzyme; however, more detailed studies at the cellular level are required, and different antibodies against AK2 and other AK isoforms should be tested. Since almost all cells contain mitochondria and intermembrane AK2, the negative result of intracellular AK2 immunohistochemistry could be related to inaccessibility of intermembrane proteins to antibodies.
It is known that ecto-adenylate kinase plays an important role in extracellular nucleotide signaling; however, it is unusual to detect a mitochondrial isoform outside the cell \[[@b64-ijms-10-01729],[@b65-ijms-10-01729]\]. An intriguing possibility exists that, besides nucleotide signaling, mucosa-embedded AK2 has specific mechanometabolic signal transduction functions and behavior similar to that described in smart hydrogels containing adenylate kinase \[[@b256-ijms-10-01729]\]. A recent proteome study indicates that adenylate kinase is present in the extracellular mucus \[[@b257-ijms-10-01729]\]. However, it remains to be determined whether AK2 mutation alters the adenylate kinase system in primary mechanoelectrical signal-transducing hair cells, which have abundant mitochondria. Our data indicate that both AK1 and AK2 associate with the mitotic spindle, which has a microtubular organization (9 + 2) similar to cilia, apparently to power the cell division cycle and chromosome disjunction. Previous studies have shown that disruption of the analogous adenylate kinase gene Aky2 in yeast, which encodes the mitochondrial intermembrane and partially cytosolic AKY2 protein, halts ATP export from mitochondria \[[@b99-ijms-10-01729]\]. Thus, the AK2 isoform, usually confined within the mitochondrial intermembrane space, may have important extramitochondrial functions. That the gene encoding the mitochondrial energy metabolism-related enzyme adenylate kinase 2 (AK2) is mutated in individuals with reticular dysgenesis is supported by simultaneously published independent evidence \[[@b10-ijms-10-01729]\]. Knockdown of the zebrafish ak2 gene leads to aberrant leukocyte development, stressing the evolutionarily conserved role of AK2 in leukocyte differentiation \[[@b10-ijms-10-01729]\]. Mononuclear cells obtained from bone marrow of healthy donors lacked AK1, whereas AK2 was readily detectable.
These results indicate that leukocytes may be susceptible to defects caused by the lack of AK2, as they do not express AK1 in sufficient amounts to compensate for AK2 functional deficits. Previous studies have linked AK1 mutations to severe nonspherocytic hemolytic anemia, in some cases associated with mental retardation and psychomotor impairment \[[@b3-ijms-10-01729],[@b236-ijms-10-01729]\]. In this regard, deficiency of another metabolic enzyme, adenosine deaminase (ADA), accounts for approximately 17% of all SCIDs and 50% of all autosomal recessive SCIDs \[[@b258-ijms-10-01729]\]. The metabolic basis of this immunodeficiency is likely related to accumulation of the ADA substrates adenosine and 2′-deoxyadenosine, which kill T and B cells through mechanisms involving accumulation of dATP and induction of apoptosis \[[@b258-ijms-10-01729]\]. Whether nucleotide metabolism is altered in AK2 mutant cells remains to be determined. Taken together, these observations suggest that reticular dysgenesis is the first example of a human immunodeficiency syndrome causally linked to a defective phosphotransfer enzyme involved in nucleotide signaling and mitochondrial energy metabolism. In this regard, absence or reduction of AK2 protein would interfere with mitochondrial bioenergetics and mitochondria-nucleus energetic signal communication, which could compromise implementation of the leukocyte developmental program \[[@b9-ijms-10-01729],[@b30-ijms-10-01729],[@b34-ijms-10-01729]\]. Development of the mitochondrial network and establishment of metabolic circuits are part of developmental programming and the execution of cell differentiation sequences \[[@b259-ijms-10-01729],[@b260-ijms-10-01729]\]. A recent study indicates that energy demand signaling gradients arising in the cell would allow propagation of information on local energy consumption over distances, through nucleotide-based signaling conveying positional information to mitochondria \[[@b261-ijms-10-01729]\].
In this way, responding to energy demand gradients, mitochondria can pattern the cytoplasm over length scales suited to provide an energy supply continuum and convey morphogenetic information in large cells and tissues \[[@b259-ijms-10-01729],[@b261-ijms-10-01729]\]. Thus, AK2 deficiency can disrupt the flow of developmental information governing cell differentiation. Adenylate kinase supports the energetics and motility of flagella-bearing parasites, and secreted adenylate kinase is a major virulence factor in a number of pathogenic bacteria \[[@b5-ijms-10-01729],[@b20-ijms-10-01729],[@b56-ijms-10-01729],[@b57-ijms-10-01729],[@b210-ijms-10-01729]\]. Recently, a novel myristoylated AK2 isoform has been discovered in P. falciparum, the parasite causing severe tropical malaria \[[@b11-ijms-10-01729]\]. This modification significantly enhances the stability of the kinase and apparently could be used for targeting the enzyme to membranes or other specific cellular sites, similar to AK1, another myristoylated adenylate kinase isoform \[[@b11-ijms-10-01729],[@b51-ijms-10-01729]\]. The association of myristoylated AK2 with the disease-causing clone suggests that it could serve as a therapeutic target to fight malaria. A large specialized network of six adenylate kinase isoforms exists in the unicellular flagellated parasite Trypanosoma cruzi, the causative agent of Chagas' disease \[[@b59-ijms-10-01729]\]. This parasite apparently has developed a sophisticated phosphotransfer network in which adenylate kinase acts in concert with arginine kinase and nucleoside diphosphate kinase to support the invasive phenotype \[[@b262-ijms-10-01729]\]. Importantly, mutation of the adenylate kinase gene renders pathogens avirulent \[[@b210-ijms-10-01729]\], suggesting new ways to "silence" otherwise deadly bacteria. Also, nonstructural protein 4B (NS4B) from hepatitis C virus, which is absolutely required for viral propagation, was found to possess adenylate kinase-like activity \[[@b263-ijms-10-01729]\].
Adenylate kinase 2 is highly upregulated by IFN-alpha and IL-15 stimulation in natural killer (NK) cells, suggesting a role in innate immune defense \[[@b264-ijms-10-01729]\]. In this regard, due to its unique catalytic properties and stability, adenylate kinase is used as an enzyme amplification system in ATP biosensors for detecting bacterial contamination in the food industry, defense and health care, improving sensitivity levels up to several thousandfold (Celsis, AKuScreen). A recent study demonstrates that mitochondrial AK2 is directly involved in the induction of apoptosis through formation of an AK2-FADD-caspase-10 complex and that downregulation of AK2 attenuates etoposide- or staurosporine-induced apoptosis in human cells \[[@b6-ijms-10-01729]\]. Significantly, downregulation of cytosolic AK1 transcription with siRNA increases apoptosis in pancreatic cancer cells \[[@b265-ijms-10-01729]\]. Thus, adenylate kinase plays a significant role in the decision between cellular life and death and could be a target in the treatment of infectious disease and cancer. In summary, we highlight here new and exciting developments in multi-faceted adenylate kinase biology, revealing the significance of mutations and modifications of this never-resting phosphotransfer enzyme in energy support of cell motility, disease pathogenesis and regulation of cell differentiation and apoptosis. 8. Summary =========== Metabolic signals regulate and integrate many vital functions throughout the human body, including energy homeostasis, blood pressure, heart performance, food intake, hormonal status and brain performance. Growing evidence indicates the significance of metabolic monitors that directly sense the cellular energy state and respond to imbalances by generating and delivering signaling molecules to metabolic sensors to produce a regulatory response.
Adenylate kinase-mediated metabolic monitoring and downstream AMP signaling (AK → AMP → AMP-sensors) play a critical role in the regulation of diverse cellular processes and serve as a primary stress-response pathway. By signaling to a number of AMP/nucleoside-sensitive cellular and extracellular components, adenylate kinase senses cellular energetic imbalances caused by physical activity or inadequate oxygenation or nutrient supply, and generates and transmits feedback signals to adjust cellular energetics, substrate transport and vascular blood flow to facilitate oxygen and nutrient delivery. Adenylate kinase phosphotransfer dynamics regulate many diverse intracellular and extracellular nucleotide signaling processes, including excitation-contraction coupling, hormonal secretion, cell and ciliary motility, nuclear transport, energetics of the cell cycle, DNA synthesis and break repair, and developmental programming. Moreover, adenylate kinase-generated and -modulated cellular, interstitial and blood AMP levels are emerging as potential metabolic signals associated with body energy sensing, sleep, hibernation, vascular flow and food intake. AMP is a mediator of antidiabetic drug action and has growing importance as a therapeutic agent. Within the integrated phosphotransfer network, adenylate kinase is essential for the integration and synchronization of metabolic sensors with the dynamics of cellular metabolism, which is critical for regulation of the genetic, energetic, electrical and signal transduction processes determining cell viability and functional activity. As such, adenylate kinase and AMP signaling components dispersed throughout intracellular compartments, extracellular spaces and body fluids comprise a major metabolic monitoring and body energy sensing node, transducing and distributing signals to metabolic sensors and thus conveying information about body energy and fuel usage status.
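The sensitivity of this AK → AMP → AMP-sensors cascade can be made explicit with the near-equilibrium adenylate kinase reaction, a standard textbook relation shown here for illustration (it is not derived in this review):

```latex
% Adenylate kinase maintains 2 ADP <=> ATP + AMP near equilibrium
% in the cytosol, with an equilibrium constant close to unity:
\[
  2\,\mathrm{ADP} \rightleftharpoons \mathrm{ATP} + \mathrm{AMP},
  \qquad
  K_{\mathrm{AK}} = \frac{[\mathrm{ATP}][\mathrm{AMP}]}{[\mathrm{ADP}]^{2}} \approx 1
\]
% Solving for AMP exposes the quadratic dependence on ADP:
\[
  [\mathrm{AMP}] \approx K_{\mathrm{AK}}\,\frac{[\mathrm{ADP}]^{2}}{[\mathrm{ATP}]}
\]
```

Because [ADP] enters quadratically while [ATP] is buffered at a much higher concentration, a small fractional fall in ATP produces a disproportionately large fractional rise in AMP, which is one reason AMP, rather than ATP or ADP, serves as the sensitive low-energy signal read by AMP-responsive metabolic sensors.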
Moreover, evidence is mounting regarding the direct relationship between defects in adenylate kinase and AMP metabolic signaling and human diseases such as heart failure, hypertrophic cardiomyopathy, diabetes, obesity, hemolytic anemia, reticular dysgenesis, ciliary dyskinesia, cancer and neurodegeneration. Thus, adenylate kinase, previously considered a regular housekeeping enzyme, is a critical player in metabolic monitoring and the systemic integration of different signaling pathways that ensure cellular energy homeostasis and an adequate response to a broad range of functional, environmental and stress challenges. This work was supported by grants from the National Institutes of Health, the Marriott Heart Disease Research Program and the Marriott Foundation. A.T. holds the Mayo Clinic Marriott Family Professorship in Cardiovascular Research. The authors thank Dr. C. Folmes for critical reading of the manuscript and valuable suggestions. ![Adenylate kinase shuttle facilitates transfer of ATP β- and γ-phosphoryls from generation to utilization sites.\ Adenylate kinase (AK), present in mitochondrial and myofibrillar compartments, enables the transfer and makes available the energy of two high-energy phosphoryls, the β- and the γ-phosphoryls of a single ATP molecule. In this case, AMP signals feed back to mitochondrial respiration, amplified by the generation of two molecules of ADP at the mitochondrial intermembrane site. Within the intracellular environment of a cardiomyocyte, the transfer of ATP and AMP between ATP-production and ATP-consumption sites may involve multiple, sequential phosphotransfer relays that result in a flux wave propagating along clusters of adenylate kinase molecules (lower panel). Handling of substrates by a "bucket-brigade" or ligand conduction mechanism facilitates metabolic flux without apparent changes in metabolite concentrations. AK1 and AK2 -- cytosolic and mitochondrial AK isoforms, respectively. i.m. and o.m.
-- inner and outer membranes, respectively. Modified from \[[@b5-ijms-10-01729]\] with permission.](ijms-10-01729f1){#f1-ijms-10-01729} ![Adenylate kinase isoform network and intracellular localization.\ Adenylate kinase isoforms are coded by separate genes, KAD1 -- KAD7, localized to different chromosomes. The corresponding proteins AK1 -- AK7 define separate adenylate kinase isoforms with different molecular weights, kinetic properties and intracellular localization. The AK1 isoform mostly consists of the ADK domain, which is characteristic of the whole protein family. The AK1β splice variant has an additional myristoylation domain that targets the protein to the plasma membrane. The proteins Rad50 and C9orf98, which possess ADK domains and activity, have specific cellular functions. The AK2, AK3 and AK4 isoforms have a flexible lid domain which closes over the site of phosphoryl transfer upon ATP binding to prevent water accessibility. A short form of the lid domain also exists in AK1, AK5 and AK6. Within the network, adenylate kinase proteins distribute high-energy phosphoryls (\~ P) and communicate AMP signals.](ijms-10-01729f2){#f2-ijms-10-01729} ![Adenylate kinase metabolic monitoring system.\ Adenylate kinase reads the cellular energy state and generates, tunes and communicates AMP signals to metabolic sensors. In this way adenylate kinase conveys information about the adenine nucleotide pool status and, thus, the overall energy balance. In response to AMP signals, metabolic sensors reduce ATP-consuming and activate ATP-generating pathways to adjust energy metabolism and functional activity and to increase fuel and oxygen supply.](ijms-10-01729f3){#f3-ijms-10-01729} ![AMPing up and down --- integration of cellular AMP signals by adenylate kinase.\ Adenylate kinase integrates AMP metabolic signals produced or downregulated during exercise, stress response and food consumption, and during changes in hormonal balance or mitochondrial coupling state.
Adenylate kinase relays deliver AMP signals to metabolic sensors and, by catalyzing nucleotide exchange in the intimate "sensing zone" of metabolic sensors, facilitate decoding of cellular information.](ijms-10-01729f4){#f4-ijms-10-01729} ![Regulation of intracellular AMP levels.\ Cytosolic adenylate kinase (AK1) is the major AMP generator, while the mitochondrial AK2 isoform, due to its low Km(AMP), is the major AMP sequestration and tune-up mechanism. AMP is also generated during free fatty acid and amino acid activation, during adenosine rephosphorylation by adenosine kinase (ADK), during IMP reamination and by cyclic nucleotide phosphodiesterase (PDE). Oxygen deprivation and intense muscle contraction increase AMP removal through the adenosine and IMP pathways catalyzed by 5'-nucleotidase (5'-NT) and AMP-deaminase (AMPD). Defects in mitochondrial metabolism would reduce the AMP tuning capacity of AK2 and, in fact, can reverse the reaction towards AMP generation. The metabolically active AMP pool, estimated at about 10--20%, is in dynamic equilibrium with bound and/or compartmentalized AMP \[[@b1-ijms-10-01729],[@b5-ijms-10-01729]\].](ijms-10-01729f5){#f5-ijms-10-01729} ![Myocardial-vascular metabolic signaling as a paradigm of global energy sensing.\ Metabolic signal transduction cascades initiated by phosphotransfer redistribution between adenylate kinase (AK) and creatine kinase (CK) govern the AMP/adenosine (Ado) cycle and the response of metabolic sensors (the ATP-sensitive potassium channel, K-ATP, and the AMP-activated protein kinase, AMPK). Hypoxia or metabolic stress diminishes CK flux and increases AK flux, inducing AMP generation and subsequent AMP/adenosine signaling events. Adenosine/AMP signals delivered to vascular tissue through intercellular and paracellular pathways induce signaling through A(2A) adenosine receptors, AMPK and K-ATP channels. AMPK activates eNOS, inducing NO/cGMP signaling, and could regulate K-ATP channels.
Collectively, A(2A)AR, AMPK, eNOS and K-ATP signaling converge on contractile protein, Ca^2+^ and membrane potential regulation, critical determinants of vascular tone. Dipyridamole, an adenosine uptake inhibitor, disrupts the Ado-AMP cycle and the tuning of adenosine signals, thus potentiating the vascular response. Modified from \[[@b4-ijms-10-01729]\] with permission.](ijms-10-01729f6){#f6-ijms-10-01729}
There are several reasons for Cain’s rise — not least among them the sad state of the GOP presidential field for 2012. But I’ll focus on just two: (1) Herman Cain understands how white conservatives think, and (2) he knows just what they want to hear from someone like him. Cain has cleaved to, and returned again and again to, a longstanding conservative belief when it comes to African Americans and other people of color who don’t vote Republican: Those colored folks just don’t know what’s good for ’em. It’s a simple answer, as I wrote in 2005. It occurred to me that I’d heard this kind of stuff before, most recently in the comments resulting from a brouhaha that recently broke out over the portrayal of certain black Republicans. It’s the same basic rhetoric I’ve heard in just about every discussion I’ve been involved in over why there aren’t more black Republicans. My point has always been that Republicans — like other predominantly white organizations — spend more time asking why more black people aren’t joining them than they do asking themselves why they aren’t attracting more black supporters. In other words, they avoid the reality that the reason they don’t attract more Black supporters is that they don’t address — and aren’t seen as addressing — the needs and concerns of many in Black communities. The analysis never gets further than that because it would probably undermine their current base of power. So every discussion I’ve had ends up with the other side’s argument boiling down to this: the reason more blacks don’t support the Republican party is that they don’t know what’s good for them. That’s the nice way of putting it. The more blunt way of putting it would be much closer to the way the conservative blogger above put it. Because they are dumb. The Blacks who don’t vote Republican are dumb… (This phenomenon isn’t limited to race, by the way.
The same can apply to any predominantly homogenous organization that seeks to diversify its ranks, fails to do so, and then wonders why. Nor are liberals or progressives immune. We should keep this in mind when we wonder why we don’t have more support among white working-class Americans, many of whom joined the ranks of the birthers and tea partiers.) It’s an easy out, because it doesn’t require Republicans to address their own agenda, let alone change it. It’s easier to ask “Why don’t more Blacks vote Republican?” and answer “Because they don’t know any better,” “Because they don’t know what’s good for them,” or “Because they’re brainwashed.” Case closed. It’s easy, because you don’t need to do anything else, except perhaps bemoan their failure to “wise up” and “join us.” There’s no work and no change required on your part. Self-examination, on the other hand, is hard. It’s harder to ask “How are we failing to address the concerns of fill-in-the-group effectively, so that they will naturally want to join us?”, because the answer may challenge and require you to change some of your assumptions. You have to put yourself in the position of the “Other,” and then work harder to find ways to address their concerns in the context of your values. It’s even harder because failure is then your fault — not theirs. As Anson Asaka points out, if Republicans asked themselves the second question, they’d have to consider that maybe African Americans have some good reasons for not voting Republican. African Americans are not brainwashed. The majority of black people support the Democratic Party because it is in their interest to do so, at this point in time. Democratic administrations enacted major civil rights legislation ending Jim Crow. Democrats supported and continue to support affirmative action. Democratic presidents have appointed judges and Attorneys General who have defended civil rights. The opposite is true of Republican administrations.
The Democrats were the first major political party to nominate an African American for President. The Democrats were the first party to appoint an African American as a Supreme Court justice. Most black elected officials are Democrats. Many African Americans hold key positions and wield substantial influence in the Democratic Party. That is not true with respect to the Republican Party. In response to the enactment of civil rights legislation, Dixiecrats left the Democratic Party and fled to the Republican Party. To win over Southern segregationists, the Republican Party adopted the Southern strategy and became hostile to civil rights, workers’ rights and welfare. Instead of being the Party of Lincoln, the Republican Party became the party of Strom Thurmond and Jesse Helms. In effect, the Republican Party became the new White Citizens’ Council. It is no coincidence that Republicans are at the front line defending symbols of the racist Confederate past. So far, only one of Perry’s GOP rivals has commented on N-WordheadGate: Herman Cain. Asked yesterday about the story, Cain, the only black Republican in the race, lashed out at Perry. “Since Governor Perry has been going there for years to hunt, I think that it shows a lack of sensitivity for a long time of not taking that word off of that rock and renaming the place,” Cain said on This Week. On Fox News Sunday, Cain added that there “isn’t a more vile, negative word than the N-word and for him to leave it there as long as he did before, I hear, that they finally painted over it, is just plain insensitive to a lot of black people in this country.” Cain’s reaction is certainly understandable. Anyone could find the revelations offensive, and Cain is a black man who grew up in the segregated South. And yet, as Michael Tomasky points out today, it’s Cain, not Perry, who could be damaged the most by this story. To understand why, you have to consider that there are two things Republicans hate more than anything.
One is being accused of racism, which has happened with increasing frequency since Barack Obama became president and, if you ask Republicans, is never, ever justified. Two is unfair treatment by the allegedly biased mainstream media. So among Republicans, the widespread response to the Post story was not, “wow, Rick Perry messed up.” It was, “the liberal media is smearing another Republican as a racist!” It’s in this context that the backlash has occurred. Cain wasn’t expressing reasonable grievances — he was “piling on” and legitimizing a sleazy political attack. The Daily Caller’s Matt Lewis writes this morning, “Cain’s comments were — at best — premature — and at worst, highly irresponsible. It was a cheap shot, and, perhaps a signal that Cain is willing to play the race card against a fellow Republican when it benefits him.” Over at the conservative blog Red State, Erick Erickson says the story is “a slander Herman Cain is picking up and running with as a way to get into second place.” Cain held his tongue when the audience booed a gay soldier at a Republican debate. But he was the only Republican candidate to criticize Rick Perry over (a) the name of his hunting camp and (b) his failure to change it. Big mistake. “Niggerhead” itself did little to harm Perry’s standing among GOP voters. (Immigration and Perry’s beyond-inept debate performance did that.) Calling Perry out over “Niggerhead” did a good deal of damage to Herman Cain. Bet he won’t do that again. The context of Cain’s remark and the backlash against it might also explain why Cain in particular has drawn little African American support, and why his candidacy would probably not boost the number of Blacks voting Republican. Flip the context, and what many African Americans see in Cain is the same thing they see when they look at the likes of Clarence Thomas — a Black man who has lived long enough, and is old enough, that he ought to know better.
Because they know better, and know that he really knows better, most respond to Cain with a familiar two-word dismissal: “Negro, please.” …Some blacks like Cain have gotten a small piece of the economic pie, and have markedly increased their political reach and standing. This makes it even easier to buy Cain’s line, and to get mad at those that don’t and accuse them of screaming racism whenever anything goes wrong. However, Cain knows, but would never dare publicly admit, the tormenting facts that countless studies, surveys, reports, investigations, lawsuits, court challenges, and mountains of EEOC complaints have irrefutably documented. Blacks are still two and three times more likely to be unemployed than whites and trapped in segregated neighborhoods, and their kids attend disgracefully failing, mostly segregated public schools. Young Black males and females are far more likely to be murdered, to suffer HIV/AIDS affliction, to be racially profiled by police, imprisoned, placed on probation or parole, or permanently barred in many states from voting because of felony convictions, much more likely to receive the death penalty, especially if their victims are white, and more likely to be victims of racially motivated violence than whites. Research studies show that whites with a felony record are more likely to be hired in some places than college-educated blacks. Cain would never purse his lips to acknowledge the stark fact that middle-class blacks like himself, who reaped the biggest gains from the civil rights struggles, often find that the new suburban neighborhoods they move to become re-segregated and soon look like the old neighborhoods they fled.
They are ignored by cab drivers, followed by clerks in stores, and left fuming at restaurants because of poor or no service; they find that more and more of their sons and daughters are cut out of scholarships and student support programs at universities because of the demolition of affirmative action, and they are denied bank loans for their businesses and homes. Cain could easily find himself bypassed, on his way to an important business meeting, by a fearful cab driver who didn’t watch Fox News and didn’t know who Cain was. In fact, just a week before Cain cavalierly blew off the corrosive bars of racism that still shackle millions of blacks as “no big deal,” Cain huffed at the revelation of his GOP presidential rival Rick Perry’s “Niggerhead” rock. Cain quickly corrected his memory lapse, got back on script, and shrugged it off as much ado about nothing. What Herman Cain is doing is nothing new. It was a method of survival for generations of our ancestors, because knowing how some white people thought, telling them what they want to hear, and allowing them to believe what they wanted or needed to believe — about you, and about themselves — could make the difference between life and death. It could mean the difference between a job and survival, or no job (as recently illustrated on the big screen in The Help). At other times, papering over uncomfortable truths helped maintain the appearance of peace and order, and sustain relationships that at least looked happy on the surface. Millions of our ancestors learned these lessons and passed them down to their children. Those lessons came down through the generations, and are still learned today, because they are still necessary today. It is also, and always has been, a way to get ahead, to “get over”, by using what you know to gain an advantage. Cain has certainly done that, going from a little-known candidate to one whose name appears regularly in the media.
He’s done so based in part on his strengths — he’s an effective speaker, with a simple message that the GOP base loves — and in part on what he understands about conservatives and why it makes him useful to Republicans. Of course, this isn’t the first time that Cain has attacked Obama for his blackness, or lack thereof. At the beginning of this year, he told a group of Republicans that “they”—the liberal media—are “scared” that a “real black man might run against Barack Obama.” Likewise, in an interview with New York Magazine this summer, Cain doubled down on his remarks, telling the magazine that Obama is not a “strong black man” in terms that he identifies with. That Cain presents himself as more blackity-black than Barack Obama is just part of his persona. What’s striking about it all is his choice of audience. With the exception of his interview with New York Magazine, Cain saves these remarks for white, Republican audiences. I’d be shocked if this weren’t deliberate. Conservatives hate accusations of racism and are more vocal about those than they are about actual instances of discrimination against racial minorities. With his upbringing in the segregated South and an accent that shows it, Herman Cain stands as the perfect weapon against anyone who questions the racial egalitarianism of conservatives. To borrow a line I used yesterday, Cain offers “absolution from racial guilt and a unique chance to turn the tables on liberals who accuse the right of racism.” Herman Cain is a great speaker, but that’s not the reason he received a standing ovation from the crowd at the Values Voter Summit after denying any anger over Jim Crow. Indeed, this quote—from an attendee at the summit—says it all: “I don’t give him a chance, but it would be interesting.
At least, no one would call him a racist.” Cain learned a valuable lesson when he dared criticize Rick Perry’s “insensitivity” in failing for decades to change the name of his hunting camp to something less offensive than “Niggerhead.” It’s an old lesson, and a simple one: “Remember your place, boy.” It may be the key to his future success in the GOP, if he remembers the narrow role he is allowed to play, and sticks to the script. Herman Cain knows the rules, and has always known them. His temporary lapse over “Niggerhead” was merely a refresher course, to remind him of what he already knows. For example, he may be leading the Republican pack at the moment, but we will not see Herman Cain on the stage at the end of the GOP convention, thanking his party for its faith in him and promising victory in November — and Herman Cain knows it. Not even if he wins every primary and leads Mitt Romney in every poll from now until the last primary will Herman Cain be the Republican nominee. Cain may be leading the pack now, in part because he’s also got a tax message that resonates deeply with the GOP base, even if it makes the Republican establishment nervous. But Cain himself said that the last debate made it look like he and Romney were the top two candidates. In another party, it might be so. But it won’t be so in the GOP. No matter how many polls he wins, Cain will never be more than a “second tier” GOP candidate. Period. And he knows it. They may not like Romney. They may not even want to elect him. But as much as conservatives like Herman Cain, they don’t want to elect him even more than they don’t want to elect Romney. Cain may be the right’s new “Black Friend,” and a great guy, but Romney — even Perry — is “acceptable” to Republicans as president in a way that a Herman Cain won’t be for a very long time to come. Maybe Cain understands that, and isn’t running for the top of the ticket anyway.
If he’s very, very lucky, and remembers his lessons well, we could witness him accepting the number 2 spot, where his job will be to support the ideas of the guy at the top of the ticket. You know, one of the white guys he’s beating out right now. (Take your pick.) And if that day comes, Herman Cain will say it’s not about race. He’ll continue to say that throughout his term as VP, every time the media asks him. You know, as part of a news article about Vice President Cain hunting with President Perry — at “Niggerhead,” of course.

About Terrance Heath

Terrance Heath is the Online Producer at Campaign for America's Future. He has been a blogging and social media consultant for a number of organizations and agencies. He is a prominent activist on LGBT and HIV/AIDS issues.
Data is available at the NIH funded Systems Biology Center website: <http://stmc.health.unm.edu/tools-and-data/> under the Download Data Here link after the Letendre K. et al. reference. Also available for download from: <https://www.dropbox.com/sh/bmab4vnrdxn7ond/AAAluhRS9jor1v46C_c_76bQa?dl=0>

Introduction {#sec001}
============

Search has been extensively studied in biology, particularly in ecology, to understand how animals search for food, mates and prey. The pattern of movement by searching agents affects search efficiency in a variety of biological contexts \[[@pcbi.1004818.ref001]--[@pcbi.1004818.ref003]\]. Optimal foraging theory suggests that animals, including social animals such as ants and bees, have evolved strategies to individually or collectively maximize food intake in minimal time \[[@pcbi.1004818.ref004]\].

Similar to foraging animals, T cells of the immune system search for targets to mount an immune response. T cells are a critical immune effector, required to clear viral infections and to help B cells produce antibody. In order to initiate an effective immune response, naïve T cells must encounter and sample dendritic cells (DCs) bearing cognate antigen in lymph nodes (LNs). In the absence of infection, T cells continuously enter and exit LNs, interacting with DCs. Upon infection, DCs present cognate antigen and provide stimulatory signals leading to T cell activation. T cell-DC interactions are required for naïve T cells to survive, activate and eventually clear infection as well as maintain immune memory \[[@pcbi.1004818.ref005]--[@pcbi.1004818.ref007]\]. T cell activation is promoted by repeated sampling of nearby DCs \[[@pcbi.1004818.ref008]\], while at the same time T cells explore the entire population of DCs for rare antigen indicative of infection. This presents T cells with an optimization problem in which T cells must balance *thoroughness* and *extent* of search. 
This requires that many T cells search across a broad *extent*, contacting many DCs quickly, a process similar to optimal foraging in animals. Simultaneously, T cell search is sometimes *thorough*, repeatedly sampling in a small area \[[@pcbi.1004818.ref008]\]. Both of these factors contribute to the overall rate at which T cells encounter DCs within LNs, which is a critical component of organismal fitness impacting the overall timing of the immune response. Relatively little quantitative analysis has been done to describe how T cells move in LNs or how that movement affects the rate at which T cells encounter DCs. Initial studies to understand the type of T cell motion in LNs from pioneering two-photon imaging of naïve T cells suggested that T cells move using a simple diffusive random walk, analogous to Brownian motion of molecules \[[@pcbi.1004818.ref009],[@pcbi.1004818.ref010]\]. Following these studies, computational modeling of T-DC interactions have often used simple diffusive random walks to represent T cell behavior \[[@pcbi.1004818.ref011],[@pcbi.1004818.ref012]\]. However, subsequent studies have not precisely described T cell motion in LNs, so it is unclear whether diffusive random walks are appropriate models for T cell movement. Optimal random search strategies have been extensively studied in ecology, and ecological models of movement may be useful for characterizing T cell motility and search efficiency. Brownian motion, Lévy walks, and correlated random walks (CRWs, also called persistent random walks), have been proposed as idealized biological search models \[[@pcbi.1004818.ref013]\], but careful quantitative analysis is required to understand how well search models characterize T cell motility and search efficiency \[[@pcbi.1004818.ref014]\]. 
Brownian motion is often referred to as a simple random walk and is characterized by movement with uniformly distributed turning angles and small fixed step sizes relative to the time resolution of observation \[[@pcbi.1004818.ref010],[@pcbi.1004818.ref015]--[@pcbi.1004818.ref018]\]. Qualitative similarities between Brownian motion and the movement of microorganisms resulted in simple random motion being used as a dominant model of cell motion \[[@pcbi.1004818.ref019]\]. Brownian motion results in diffusive movement in which distance travelled is proportional to the square root of time. In two dimensions this results in a normal distribution of speeds, and in three dimensions it results in a Maxwell distribution of speeds \[[@pcbi.1004818.ref020]\]. Lévy walks exist between ballistic (or straight directional) motion at one extreme and Brownian motion at the other. In contrast to Brownian motion, the step lengths of Lévy searchers fit a power law distribution with most step lengths being small, but with a heavy-tail, that is, a decreasing probability of larger steps and a non-zero probability of steps of any length \[[@pcbi.1004818.ref002],[@pcbi.1004818.ref013]\]. Lévy walks have been used to model animal movement, for example, in albatross, ant, aphid and human foraging, and more recently, T cells in the brain \[[@pcbi.1004818.ref002],[@pcbi.1004818.ref021]--[@pcbi.1004818.ref024]\]. Both Brownian and Lévy walks assume that the direction of search at each step is drawn from a uniform distribution and is independent of previous steps (i.e. is isotropic and Markovian). CRWs on the other hand use fundamentally different mechanisms to model similar patterns of motion that tend to persist in direction over time. CRWs depend on the distribution of turning angles between successive steps leading to directional persistence. In search modelled by CRWs, the current direction of motion probabilistically influences future step directions \[[@pcbi.1004818.ref013]\]. 
On relatively short time scales, Lévy walks and CRW may be difficult to distinguish since they both produce superdiffusive motion \[[@pcbi.1004818.ref025]\], that is, displacement that increases faster than the square root of time. Compared to diffusive movement, superdiffusion increases search *extent* and decreases search *thoroughness*. Despite the fact that many search strategies are well-characterized, there has been no systematic analysis of T cell motion in LNs. The lack of clarity in empirical studies has led to T cell motility being modelled using Brownian motion \[[@pcbi.1004818.ref018]\], Lévy walks \[[@pcbi.1004818.ref024]\], and correlated random walks (CRW) \[[@pcbi.1004818.ref008],[@pcbi.1004818.ref026]\], or a combination of movement patterns \[[@pcbi.1004818.ref027]\]. Recently, Harris et al. showed that the movement of T cells in *Toxoplasma gondii* infected brain tissue fits a Lévy walk resulting in superdiffusion and efficient detection of protozoan targets \[[@pcbi.1004818.ref024]\]. It is not clear if Lévy movement has not previously been found in LN because such movement does not occur there, or simply because it had not been looked for. The lack of precise quantitative understanding of T cell motion in LNs leads to inconsistent models and limits our ability to determine how T cell motility affects the efficiency with which T cells encounter DCs. In this study, we analyze T cell search behavior in LNs using two-photon microscopy. We begin our analysis with traditional statistical methods that describe the velocities, step lengths, displacement, and turning angles taken by naïve T cells searching for DCs. We then extend these analyses to more accurately and comprehensively describe motility patterns, including using maximum likelihood estimates (MLE) to fit experimental data. Our study statistically analyzes T cell search strategies in LNs, and uses multiple efficiency metrics that measure the spatial thoroughness and extent of T cell search. 
We then directly quantify the contribution of different types of motion to the efficiency of T cell search. Additionally, by comparing T cell movement to the patterns generated by null models of random motion, interesting non-random interactions between T cells and their environment become apparent, suggesting that T cells adapt movement in response to environmental cues. Our null models reveal hot spots that are visited more frequently than can be explained by chance. Our results suggest that even a precise characterization of T cell movement based on the assumption of random movement does not fully capture the complexity of T cell movement in the LN environment.

Results {#sec002}
=======

Movement of naïve T cells in lymph nodes is superdiffusive, not Brownian {#sec003}
------------------------------------------------------------------------

Two-photon microscopy (2PM) has been used extensively to study the movement of T cells in intact lymph nodes \[[@pcbi.1004818.ref015],[@pcbi.1004818.ref016],[@pcbi.1004818.ref018],[@pcbi.1004818.ref028],[@pcbi.1004818.ref029]\]. We isolate bulk primary T cells from LNs of naïve C57Bl/6 animals, fluorescently label T cells with dyes, reintroduce labeled T cells into recipient mice, and then use 2PM to image labeled T cells in intact explanted LNs of recipients (see [Materials and Methods](#sec009){ref-type="sec"} for further details). We track cells for up to 10 minutes and include all motile cells in observation windows. We eliminate tracks with total track length shorter than 17μm or that show squared displacement less than 300μm^2^ (≈ 17μm x 17μm), as described previously by Letendre et al. \[[@pcbi.1004818.ref030]\]. The data analyzed here are from 5,891 individual T cell tracks from 41 fields from 12 experiments. We group those 41 fields into 7 datasets, each dataset containing fields imaged using frame rates within one second of each other. 
This allows us to combine data across fields when performing analyses, such as velocity autocorrelation, that depend on the frame rate. We observe T cell velocities and motility coefficients largely in agreement with those previously published \[[@pcbi.1004818.ref009],[@pcbi.1004818.ref016],[@pcbi.1004818.ref030],[@pcbi.1004818.ref031]\]. We calculate the diffusion coefficient using the unweighted average method \[[@pcbi.1004818.ref032],[@pcbi.1004818.ref033]\]. T cells move with a mean speed (95% confidence interval) of 5.81 ± 0.024 μm/min, a median speed of 4.22 μm/min, and a motility coefficient D = 19.2 ± 0.534 μm^2^/min, calculated from a linear fit to the MSD of 5,185 tracks (out of 5,891 tracks filtered for *r*^*2*^ \> 0.8). The motility coefficient is calculated using a linear model fit to the first 25% of each displacement curve and for positions not exceeding the 10 min track time. Displacement is commonly used as a first step to assess whether movement is consistent with a Lévy walk or Brownian motion (sample tracks in [S1 Fig](#pcbi.1004818.s002){ref-type="supplementary-material"})\[[@pcbi.1004818.ref024],[@pcbi.1004818.ref031]\]. We determine the displacement of individual T cells over time. [Fig 1A](#pcbi.1004818.g001){ref-type="fig"} shows the mean squared displacement (MSD) of one of the 7 datasets, as well as example tracks with lower ([Fig 1B](#pcbi.1004818.g001){ref-type="fig"}) and higher ([Fig 1C](#pcbi.1004818.g001){ref-type="fig"}) *r*^*2*^ values. We then calculate the linear fit to the log-log-transformed data. Logarithmically transforming data before applying a linear regression is a common way to measure the exponent of a power-law relationship between dependent and independent variables \[[@pcbi.1004818.ref034]\]. Log-log-transformed Lévy walks produce displacement exponents, *α*, between 1 and 2 \[[@pcbi.1004818.ref035]\]. 
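The displacement-exponent analysis described above can be sketched in a few lines of Python; this is a minimal illustration (the function and variable names are ours, not from the study's code):

```python
import numpy as np

def displacement_exponent(track, dt):
    """Estimate the displacement exponent alpha of a single track.

    track: (T, 3) array of positions (um); dt: frame interval (min).
    MSD(t) grows as t**alpha, so alpha is the slope of a linear fit to
    log(MSD) vs. log(t). alpha = 1 is Brownian (diffusive),
    1 < alpha < 2 is superdiffusive (the Levy window), alpha = 2 is ballistic.
    """
    track = np.asarray(track, dtype=float)
    lags = np.arange(1, len(track))
    # MSD at lag k: mean squared displacement over all pairs k frames apart
    msd = np.array([np.mean(np.sum((track[k:] - track[:-k]) ** 2, axis=1))
                    for k in lags])
    x, y = np.log(lags * dt), np.log(msd)
    alpha, intercept = np.polyfit(x, y, 1)
    # r^2 of the log-log fit, used to filter tracks that no single
    # exponent describes well
    r2 = 1.0 - np.var(y - (alpha * x + intercept)) / np.var(y)
    return alpha, r2
```

For a perfectly straight constant-speed track this recovers *α* = 2 (ballistic), matching the upper edge of the window discussed above.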
We calculate the distribution of *α* for all T cell tracks and find that 56% of T cells have a displacement exponent *α* falling in the expected window for a Lévy walk ([Fig 1D](#pcbi.1004818.g001){ref-type="fig"}). Only 28.3% of cell tracks are subdiffusive (*α* \< 1), and the remaining tracks (15.6%) have a best-fit displacement exponent indicative of accelerating motion (*α* \> 2). Because low *r*^2^ values of linear fits to log-log-transformed data may indicate that the data are not well-described by any displacement exponent, we repeat the analysis on data sets restricted to *r*^2^ values \> 0.5, which discards 33% of all tracks, and *r*^2^ \> 0.75, discarding 50% of all tracks (see [S2 Fig](#pcbi.1004818.s003){ref-type="supplementary-material"} for figures with different *r*^2^ filters). Increasing *r*^2^ filtering decreases the fraction of cells in the subdiffusive window, but the qualitative message remains the same: T cells demonstrate heterogeneous behavior, with some displacements consistent with subdiffusive, Brownian, ballistic and even accelerating motion, but the majority of T cells are superdiffusive but sub-ballistic. [Fig 1D](#pcbi.1004818.g001){ref-type="fig"} shows the histogram of *α* for tracks with an *r*^2^ \> 0.8, other *r*^2^ thresholds are shown in [S2 Fig](#pcbi.1004818.s003){ref-type="supplementary-material"}, including all tracks with no filtering in [S2A Fig](#pcbi.1004818.s003){ref-type="supplementary-material"}. ![T cells move in lymph nodes with some features of a Lévy walk.\ Lévy walks are characterized by particular power law exponents of mean squared displacement (MSD) and step length distribution. (A, bottom) Observed T cell MSD vs. time. The dashed line is the linear regression with slope *α* = 1.41 indicating superdiffusion. (A, top) The number of data points in the MSD calculation. (B) Example displacements for a single T cell track with *r*^2^ = 0.52, and (C) with *r*^2^ = 0.93. 
(D) Histogram of *α* for individual tracks with *r*^2^ \> 0.8 (see [S2 Fig](#pcbi.1004818.s003){ref-type="supplementary-material"} for other *r*^2^ thresholds) with labels indicating the range of values of *α* consistent with Brownian, Lévy and ballistic motion. (E) Empirical complementary cumulative distribution function (CCDF) of all 145,731 step lengths for all 5,077 cells. The *x*-axis is all possible distances less than the maximum observed, the *y*-axis is the probability that an observed step length exceeds a particular value of x. The dashed line (offset for clarity) with slope 4.05 is the best fit to the power law tail of the CCDF which includes only 6.15% of the steps \[[@pcbi.1004818.ref036]\]. The line with slope 1.19 is the best fit to all data. (F,G) Examples of step length distributions and MLE fits for tracks with 49% and 93% of the track in the tail. (H) Percentage of tracks in the Lévy region for *μ* and *α* power law exponents and their intersection. Data are included when the *r*^2^ \> 0.5 for *α* and at least 50% (left histogram) or 70% (right histogram) of the track steps are retained in fitting the power law tail.](pcbi.1004818.g001){#pcbi.1004818.g001} Naïve T cell movement in LNs is not consistent with a Lévy walk {#sec004} --------------------------------------------------------------- While displacement analysis suggests many T cells are consistent with a Lévy walk, another defining feature of Lévy walks is that the inverse power law complementary cumulative distribution function (CCDF) for step lengths has an exponent, *μ*, between 1 and 3. Therefore, we analyzed T cell step lengths for the *μ* exponent. We define a *step* to be the resultant of a velocity subsequence in which each T cell velocity vector deviates by no more than 15° from the previous vector and a *step length* is the distance covered by a step. 
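The step definition above (velocity vectors merged while successive directions deviate by no more than 15°) can be sketched as follows; the implementation details are our own, assuming the cell moves between every pair of frames:

```python
import numpy as np

def step_lengths(track, max_turn_deg=15.0):
    """Merge successive displacement vectors into 'steps'.

    A step continues while each velocity vector deviates by no more than
    max_turn_deg from the previous one; the step length is the magnitude
    of the resultant displacement of the merged vectors.
    """
    v = np.diff(np.asarray(track, dtype=float), axis=0)  # frame-to-frame vectors
    steps, start = [], 0
    for i in range(1, len(v)):
        cosang = np.dot(v[i], v[i - 1]) / (
            np.linalg.norm(v[i]) * np.linalg.norm(v[i - 1]))
        angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if angle > max_turn_deg:          # sharp turn: close the current step
            steps.append(np.linalg.norm(v[start:i].sum(axis=0)))
            start = i
    steps.append(np.linalg.norm(v[start:].sum(axis=0)))  # final step
    return np.array(steps)
```

A straight run of any length collapses into a single long step, while a 90° turn splits the trajectory into two steps, which is what makes this resultant-based definition sensitive to directional persistence.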
[Fig 1E](#pcbi.1004818.g001){ref-type="fig"} shows that a power law fit to the population of T cell step lengths is only valid if almost 94% of the data are excluded from the analysis (see [Materials and Methods](#sec009){ref-type="sec"}: Distribution fitting). The resulting best-fit *μ* exponent for the remaining 6% of the power law tail is 4.05 ([Fig 1E](#pcbi.1004818.g001){ref-type="fig"}). The curvilinearity and the poor fit, as well as the *μ* value, all indicate that a Lévy walk is not a good description of T cell motility. On average, 51% of the data must be excluded in order to obtain a maximum likelihood (MLE) power law fit (see [Fig 1F and 1G](#pcbi.1004818.g001){ref-type="fig"} for example tracks with low and high percentages of steps in the power law tail; see [S3A and S3D Fig](#pcbi.1004818.s004){ref-type="supplementary-material"} for histograms of *μ* using other goodness of fit (GoF) threshold values; and see [S2](#pcbi.1004818.s003){ref-type="supplementary-material"} and [S3](#pcbi.1004818.s004){ref-type="supplementary-material"} Figs for additional analysis.) We determined the number of tracks that satisfy both 1\< *μ* \<3 and 1\< *α* \<2. Setting our GoF filtering criteria to require that at least 70% of the data per track is retained in calculating the exponent *μ* ([Fig 1F](#pcbi.1004818.g001){ref-type="fig"}), and that the *r*^2^ statistic for the power law exponent *α* is at least 0.7, we find that only 5.5% of all T cell tracks fit both criteria for a Lévy walk ([Fig 1H](#pcbi.1004818.g001){ref-type="fig"}). We note that the tracks excluded when filtering by *r*^2^ and those filtered by the percent of track in the power law tail both tend to be subdiffusive. For any filtering criteria the vast majority of T cell tracks are not Lévy walks ([S3E Fig](#pcbi.1004818.s004){ref-type="supplementary-material"}). 
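The tail fit behind the *μ* exponent can be sketched with the standard continuous power-law MLE (in the style of Clauset et al.); this is a minimal version for a fixed, pre-chosen tail cutoff `xmin`, whereas a full analysis would also select `xmin` by goodness of fit:

```python
import numpy as np

def powerlaw_tail_fit(x, xmin):
    """Continuous power-law MLE for the tail x >= xmin.

    For p(x) ~ x**(-mu) on x >= xmin, the MLE is
    mu = 1 + n / sum(log(x_i / xmin)).
    Returns mu and the fraction of the data retained in the tail --
    the quantity the filtering criteria above are based on.
    """
    x = np.asarray(x, dtype=float)
    tail = x[x >= xmin]
    mu = 1.0 + len(tail) / np.sum(np.log(tail / xmin))
    return mu, len(tail) / len(x)
```

On synthetic data drawn from a true power law the estimator recovers the generating exponent; on the observed step lengths, by contrast, a valid fit is only obtained after discarding most of the distribution, which is the paper's argument against a Lévy walk.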
To further analyze T cell motion, we quantify speeds (T cell displacement between consecutive frames multiplied by the frame rate) of all T cell tracks ([Fig 2A and 2C](#pcbi.1004818.g002){ref-type="fig"}) and find that in LNs T cell speeds range from 6.5×10^−4^ μm/s to 0.9 μm/s ([Fig 2A](#pcbi.1004818.g002){ref-type="fig"}). We fit experimentally derived speeds ([Fig 2A](#pcbi.1004818.g002){ref-type="fig"}) and step lengths ([Fig 2B](#pcbi.1004818.g002){ref-type="fig"}) to idealized probability distributions. We use parametric distributions because they are associated with well-known generative processes; for example, the Gaussian distribution is produced by the cumulative effect of additive processes, the lognormal distribution is often associated with multiplicative or branching processes \[[@pcbi.1004818.ref037]\], and the Maxwell distribution is a product of Brownian motion in three dimensions. We use likelihood measures to rank how well different distributions explain the observed data (Tables [1](#pcbi.1004818.t001){ref-type="table"} and [2](#pcbi.1004818.t002){ref-type="table"}). ![Distributions of T cell speed and step lengths with MLE fits.\ For (A) speed and (C) step length the lognormal function is the best fit (see Tables [1](#pcbi.1004818.t001){ref-type="table"} and [2](#pcbi.1004818.t002){ref-type="table"} for likelihood values and model parameters). Fits for normalized speed (B) and normalized step lengths (D) are divided by the mean speed or step length of the track from which they are drawn. (E) Histogram of all 149,592 observed turning angles. The green line is the maximum likelihood estimation of the gamma distribution used to model turning angles in the efficiency simulation. (F) Turning angle autocorrelation for 23,169 vectors from the 537 T cell tracks observed in one dataset. 
The correlation in movement direction decays until reaching zero at approximately 240 s.](pcbi.1004818.g002){#pcbi.1004818.g002} 10.1371/journal.pcbi.1004818.t001 ###### MLE fits to step lengths and normalized step lengths (N = 145,731 steps). Negative log-likelihood measures the relative ability of candidate models to explain the observed data (For additional fits tested, see [S1](#pcbi.1004818.s016){ref-type="supplementary-material"} and [S2](#pcbi.1004818.s017){ref-type="supplementary-material"} Tables). The corrected Akaike information criterion (AICc) and Bayesian information criterion (BIC) ([S2 Table](#pcbi.1004818.s017){ref-type="supplementary-material"}) confirm that order of fit quality is not due to the number of model parameters. The most negative log likelihood and AICc scores are the best fits; in this case that is the smallest positive score for the lognormal distribution. The last column lists the distribution parameters that were selected by MLE. See [S1](#pcbi.1004818.s016){ref-type="supplementary-material"} and [S2](#pcbi.1004818.s017){ref-type="supplementary-material"} Tables for other distribution fits and goodness of fit statistics. 
![](pcbi.1004818.t001){#pcbi.1004818.t001g}

**Step lengths**

| Distribution | Negative log likelihood ×10^5^ | AICc ×10^5^ | MLE parameters |
|---|---|---|---|
| **Lognormal** | **2.65** | **5.29** | **μ = 0.4818, σ = 0.9192** |
| Gaussian | 3.36 | 6.72 | μ = 2.3895, σ = 2.4229 |
| Maxwell | 4.02 | 8.04 | a = 3.8497 |
| Power Law (Lévy) | 4.58 | 9.16 | α = 1.1921 |

**Normalized step lengths (step length/track mean step length)**

| Distribution | Negative log likelihood ×10^5^ | AICc ×10^5^ | MLE parameters |
|---|---|---|---|
| **Lognormal** | **1.20** | **2.40** | **μ = -0.2217, σ = 0.6896** |
| Gaussian | 1.61 | 3.23 | μ = 1, σ = 0.7324 |
| Maxwell | 1.69 | 3.37 | a = 0.5117 |
| Power Law (Lévy) | 3.32 | 6.63 | α = 1.2245 |

10.1371/journal.pcbi.1004818.t002

###### MLE fits to speeds and normalized speeds (N = 159,746). The lognormal distribution has the most negative log-likelihoods and AICc score and therefore is the best fit. The parameters selected by MLE are shown for each distribution. See [S1](#pcbi.1004818.s016){ref-type="supplementary-material"} and [S2](#pcbi.1004818.s017){ref-type="supplementary-material"} Tables for other distribution fits and goodness of fit statistics. 
![](pcbi.1004818.t002){#pcbi.1004818.t002g}

**Speeds**

| Distribution | Negative log likelihood ×10^5^ | AICc ×10^5^ | MLE parameters |
|---|---|---|---|
| **Lognormal** | **-1.84** | **-3.68** | **μ = -2.5027, σ = 0.9329** |
| Gaussian | -1.61 | -3.23 | μ = 0.1161, σ = 0.0881 |
| Maxwell | -1.12 | -2.24 | a = 0.0071 |
| Power Law (Lévy) | 0.122 | 0.245 | μ = 1.2069 |

**Normalized speeds (speed/track mean speed)**

| Distribution | Negative log likelihood ×10^5^ | AICc ×10^5^ | MLE parameters |
|---|---|---|---|
| **Lognormal** | **1.22** | **2.45** | **μ = -0.1669, σ = 0.6154** |
| Gaussian | 1.37 | 2.74 | μ = 1, σ = 0.5706 |
| Maxwell | 1.32 | 2.65 | a = 0.4414 |
| Power Law (Lévy) | 3.58 | 7.16 | μ = 1.2446 |

The distribution of T cell step lengths and speeds is more consistent with a lognormal distribution than with Brownian motion (defined by a Gaussian or Maxwell distribution) or a Lévy walk (defined by a power law distribution of speeds \[[@pcbi.1004818.ref038]\]), as shown by the higher (worse) negative log-likelihood and AICc values for the power law fits in Tables [1](#pcbi.1004818.t001){ref-type="table"} and [2](#pcbi.1004818.t002){ref-type="table"}. The variance of observed T cell speeds and lengths is high, and the distributions have a heavier tail (greater right skew) than both Gaussian and Maxwell distributions. The power law probability distribution over-represents both very small steps and very large steps compared to observed T cells. The lognormal distribution shows the best statistical fit for both speed and step lengths. The gamma distribution also fits the observed speeds very well ([S1](#pcbi.1004818.s016){ref-type="supplementary-material"} and [S2](#pcbi.1004818.s017){ref-type="supplementary-material"} Tables, [S4 Fig](#pcbi.1004818.s005){ref-type="supplementary-material"}). However, since gamma and lognormal are often used to model the same phenomena, we present only lognormal here \[[@pcbi.1004818.ref039]\]. 
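This kind of model comparison can be reproduced in outline with `scipy.stats`: fit each candidate by MLE, score it by AICc, and rank. This is a sketch, not the paper's code; the `floc=0` constraints (anchoring lognormal and Maxwell at the origin, natural for non-negative speeds) and the candidate set are our choices:

```python
import numpy as np
from scipy import stats

# Candidate models as in the tables above: (distribution, fixed fit kwargs)
CANDIDATES = {
    "lognormal": (stats.lognorm, {"floc": 0}),
    "gaussian": (stats.norm, {}),
    "maxwell": (stats.maxwell, {"floc": 0}),
}

def rank_fits(data):
    """Rank candidate distributions by AICc of their MLE fits (best first)."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    scores = {}
    for name, (dist, fixed) in CANDIDATES.items():
        params = dist.fit(data, **fixed)            # MLE fit
        nll = -np.sum(dist.logpdf(data, *params))   # negative log-likelihood
        k = len(params) - len(fixed)                # count free parameters only
        scores[name] = 2.0 * nll + 2.0 * k + 2.0 * k * (k + 1) / (n - k - 1)
    return sorted(scores, key=scores.get)
```

The AICc penalty term makes the comparison fair across models with different numbers of parameters, which is the role the AICc and BIC columns play in the tables.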
It is possible that the right skew in the speed distribution arises from the variance between track mean speeds rather than from speed variance within tracks \[[@pcbi.1004818.ref022]\]. To test for this possibility, we divide each speed drawn from within a cell track by the cell mean speed (called "normalized") and ask whether the distribution becomes less heavy-tailed. We find that both normalized speed and step length distributions are still best fit by a lognormal distribution (compare [Fig 2A](#pcbi.1004818.g002){ref-type="fig"} with [Fig 2C, 2B and 2D](#pcbi.1004818.g002){ref-type="fig"} and the normalized vs. raw lengths and speeds in Tables [1](#pcbi.1004818.t001){ref-type="table"} and [2](#pcbi.1004818.t002){ref-type="table"}), but the right skew is decreased. Our observations indicate that the heavy-tailed lognormal distribution is not simply due to distinct populations moving at different mean speeds, though heterogeneity in speed within the population is a factor. Both Brownian motion and Lévy walks assume that the angle of each turn is drawn from a uniform random distribution. We analyze the turning angles of each T cell at each time step and find that T cell turning angles are not uniform, and that there is a bias toward turning angles of less than 90° ([Fig 2E](#pcbi.1004818.g002){ref-type="fig"}). The non-uniform distribution of turning angles suggests that T cells may move according to a CRW. We fit distributions to turning angles using MLE and find the gamma distribution to be the best fit, although it cannot capture all of the variation in the bi-modal distribution ([Fig 2E](#pcbi.1004818.g002){ref-type="fig"} green-dotted line). We then performed an autocorrelation analysis of directions over time to determine whether there is a dependency between the direction of T cells at one time step and the previous time steps ([Fig 2F](#pcbi.1004818.g002){ref-type="fig"}). 
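The directional autocorrelation just described has a compact form: the mean cosine between movement directions separated by a given lag. A minimal sketch (our naming, assuming the cell displaces every frame):

```python
import numpy as np

def direction_autocorrelation(track, max_lag):
    """Mean cosine of the angle between movement directions lag frames apart.

    Values near 1 indicate directional persistence (a CRW signature);
    the lag at which the curve decays to zero estimates the
    persistence time.
    """
    v = np.diff(np.asarray(track, dtype=float), axis=0)
    u = v / np.linalg.norm(v, axis=1, keepdims=True)   # unit direction vectors
    return np.array([np.mean(np.sum(u[lag:] * u[:-lag], axis=1))
                     for lag in range(1, max_lag + 1)])
```

For an uncorrelated (Brownian) walk this curve fluctuates around zero at all lags, whereas a persistent walker stays positive over the persistence time, as in the roughly 4-minute decay reported below.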
We find that T cells show turning angle autocorrelation consistent with a CRW (indicated by positive values in [Fig 2F](#pcbi.1004818.g002){ref-type="fig"}). The correlation persists for approximately 4 minutes. Our cross-correlation analysis shows no drift in the observation fields (Materials and Methods: Equation 2). T cells balance search for unique individual targets and interactions with multiple targets {#sec005} ------------------------------------------------------------------------------------------- A key function of naïve T cell search within LNs is to find and interact with antigen bearing DCs. To determine whether different types of search can affect T cell interaction with DCs, we use an agent-based model, using biologically informed parameters, to assess the degree to which different modes of random search predict the observed pattern of T cell search efficiency (i.e., the number of DCs encountered per unit time). We reproduce features of T cell movement by creating search tracks using the best distribution fit to speeds ([Table 2](#pcbi.1004818.t002){ref-type="table"}) and turning angles, limited by the total distance covered and time observed for empirical T cell tracks. We run simulations with DC targets placed with 3 different degrees of clustering: highly clustered (DC centers placed in 10μm radius spheres), moderately clustered (in 20μm radius spheres) to more evenly dispersed (in 40μm radius spheres) ([S5 Fig](#pcbi.1004818.s006){ref-type="supplementary-material"}). We confirm that these DC placements result in a range of clusteredness according to the Hopkins aggregation statistic that ranges from 0.44 for dispersed clusters (close to the 0.5 value expected for a uniform distribution) to 0.2 for compact clusters. We confirm that Brownian motion in our simulations results in diffusive movement ([S6 Fig](#pcbi.1004818.s007){ref-type="supplementary-material"}). 
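The Hopkins aggregation statistic used to characterize target clustering can be sketched as follows. We use the convention implied by the values above (≈0.5 for uniform placement, smaller for clustering); the probe count and unit-cube domain are simplifying assumptions:

```python
import numpy as np

def hopkins(points, m=50, seed=0):
    """Hopkins aggregation statistic for points in the unit cube.

    Convention: H = sum(W) / (sum(U) + sum(W)), where W are
    nearest-neighbour distances from m sampled data points and U are
    nearest-neighbour distances from m uniform random probes.
    H ~ 0.5 for spatially uniform data; H well below 0.5 indicates clustering.
    """
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    probes = rng.random((m, pts.shape[1]))
    sample = pts[rng.choice(len(pts), size=m, replace=False)]

    def nearest(a, b, exclude_self=False):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        if exclude_self:
            d[d == 0.0] = np.inf     # ignore a point's zero distance to itself
        return d.min(axis=1)

    u = nearest(probes, pts)                     # probe -> nearest data point
    w = nearest(sample, pts, exclude_self=True)  # data -> nearest other point
    return w.sum() / (u.sum() + w.sum())
```

Intuitively, clustered data have tiny within-cluster nearest-neighbour distances (small W) while random probes remain far from the clusters (large U), pushing H toward zero, consistent with 0.2 for the compact clusters versus 0.44 for the dispersed ones.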
We then compare efficiency of modelled search with observed T cell tracks from the experimental data across this range of DC cluster sizes. We calculate efficiency of T cell search in two ways. First, we determine how many unique "DC" targets were encountered by each T cell in a specific period of time. Previous studies suggest that naïve T cells have no *a priori* information about the location of DCs in LNs \[[@pcbi.1004818.ref011],[@pcbi.1004818.ref012]\]. Second, we determine how many total DC target encounters occur in the specified time. Total contacts count repeated contact with the same DC, while unique contacts count only one contact per DC. Total contacts are important for T cell activation and potentially survival, while unique contacts are a measure of how long it may take T cells to find rare DCs presenting cognate antigen. The simulation addresses two questions: do statistical descriptions of T cell movement produce search efficiencies similar to those of observed T cells; and how do the relative efficiencies of the idealized models compare to each other and to experimentally observed T cells? Not surprisingly, the efficiency of observed T cells shows a much wider range of variability compared with idealized models ([Fig 3A](#pcbi.1004818.g003){ref-type="fig"}), and we find clear differences in search efficiency between observed T cells and some idealized models. Brownian searchers are approximately 40% less efficient than observed T cells for unique DC contact ([Fig 3A and 3B](#pcbi.1004818.g003){ref-type="fig"} and [Table 3](#pcbi.1004818.t003){ref-type="table"}). In contrast, the power law (Lévy) fit was 30% *more* efficient than observed T cell tracks, and as expected, more efficient than any other model for unique contacts with DCs. We also modelled a correlated random walk (CRW) as well as a CRW with a lognormal distribution of step lengths (a lognormal modulated CRW, LogMCRW). 
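A LogMCRW searcher and the two contact metrics can be sketched as below. This is a simplified illustration: the parameter values, the Gaussian-perturbation model of directional persistence, and the contact-counting convention are our assumptions, not the paper's exact agent-based model:

```python
import numpy as np

def logmcrw_track(n_steps, mu=-0.2, sigma=0.7, kappa=4.0, seed=0):
    """Generate a 3-D lognormal-modulated correlated random walk (LogMCRW).

    Step lengths are lognormal(mu, sigma); each new direction is the
    previous one plus a small Gaussian perturbation, giving directional
    persistence (larger kappa = straighter paths).
    """
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_steps + 1, 3))
    d = np.array([1.0, 0.0, 0.0])
    for i in range(n_steps):
        d = d + rng.normal(scale=1.0 / kappa, size=3)  # correlated turning
        d /= np.linalg.norm(d)
        pos[i + 1] = pos[i] + rng.lognormal(mu, sigma) * d
    return pos

def contact_counts(track, targets, radius):
    """Unique targets touched at least once vs. total contact events
    (re-entering a target's contact radius counts as a new contact)."""
    track = np.asarray(track, dtype=float)
    targets = np.asarray(targets, dtype=float)
    d = np.linalg.norm(track[:, None, :] - targets[None, :, :], axis=2)
    hit = d < radius                           # (time, target) contact matrix
    unique = int(np.any(hit, axis=0).sum())
    entries = hit[1:] & ~hit[:-1]              # no-contact -> contact events
    total = int(hit[0].sum() + entries.sum())
    return unique, total
```

A searcher that leaves a target and later returns raises the total count without changing the unique count, which is exactly the thoroughness-versus-extent trade-off the two metrics are designed to separate.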
We show that the idealized search that most closely fits the observed efficiency of experimentally derived T cell search in LNs is the LogMCRW ([Fig 3B](#pcbi.1004818.g003){ref-type="fig"}), in keeping with CCDF fits ([Fig 2](#pcbi.1004818.g002){ref-type="fig"}). Efficiency is not dependent on placement of DC targets in the model: efficiency measures remain similar across multiple target distributions and degrees of clustering ([Table 3](#pcbi.1004818.t003){ref-type="table"}). Thus, LogMCRW is not only the best description of the step length distribution, but also the best efficiency match for unique contact T cell search in LNs. ![T cell search balances unique and total contacts with targets.\ Interquartile boxplots show search efficiency for DCs in 10 μm radius clusters. Panels (A) and (B) show *unique contact* efficiency; (C) and (D) show *total contact* efficiency. (A) and (C) show 1000 efficiency samples for each of the 41 fields. (B) and (D) compare the percent change in median search efficiency for each candidate search model relative to observed T cell search (indicated by the line at 0). See Tables [3](#pcbi.1004818.t003){ref-type="table"} and [4](#pcbi.1004818.t004){ref-type="table"} for other target distributions and significance values. Outliers are not shown for clarity.](pcbi.1004818.g003){#pcbi.1004818.g003} 10.1371/journal.pcbi.1004818.t003 ###### Percent change of each idealized search strategy for unique contacts compared to the empirical search strategy across 3 different target distributions. Table entries are percent change in median search efficiency from observed ± 95% confidence interval. Two p-values are shown: the first indicates the significance of the change in *median* efficiency between the observed and idealized runs (N = 10 runs, each run consists of 4,100 samples, [Fig 3B](#pcbi.1004818.g003){ref-type="fig"}). 
The second p-value tests whether *all* raw efficiency values differ between observed and idealized runs (N = 41,000, [Fig 3A](#pcbi.1004818.g003){ref-type="fig"}). All p-values are calculated using the Mann-Whitney U test. The values in parentheses are the Hopkins aggregation statistic. All search strategies are statistically different from observations except LogMCRW in the most diffuse 40 μm DC clusters (in bold).

![](pcbi.1004818.t003){#pcbi.1004818.t003g}

| DC cluster radius (Hopkins) | Brownian | CRW | Lognormal | Bootstrap | LogMCRW | Power law (Lévy) |
|---|---|---|---|---|---|---|
| **10 μm (0.2)** | -41.88 ± 0.82 (p \< 10^−4^, 10^−4^) | -28.11 ± 1.09 (p \< 10^−4^, 10^−4^) | -15.91 ± 1.69 (p \< 10^−4^, 10^−4^) | -12.98 ± 0.99 (p \< 10^−4^, 10^−4^) | -7.35 ± 1.92 (p \< 10^−4^, 10^−4^) | 27.63 ± 5.44 (p \< 10^−4^, 10^−4^) |
| **20 μm (0.32)** | -39.828 ± 0.64 (p \< 10^−4^, 10^−4^) | -25.87 ± 0.70 (p \< 10^−4^, 10^−4^) | -13.39 ± 1.62 (p \< 10^−4^, 10^−4^) | -9.926 ± 1.37 (p \< 10^−4^, 10^−4^) | -3.98 ± 1.94 (p \< 10^−3^, 10^−4^) | 34.17 ± 8.49 (p \< 10^−4^, 10^−4^) |
| **40 μm (0.44)** | -41.88 ± 0.81 (p \< 10^−4^, 10^−4^) | -22.75 ± 0.55 (p \< 10^−4^, 10^−4^) | -9.621 ± 1.97 (p \< 10^−4^, 10^−4^) | -4.798 ± 1.37 (p \< 10^−3^, 10^−3^) | -0.218 ± 2.17 (p = **0.85**, 10^−4^) | 36.02 ± 6.41 (p \< 10^−4^, 10^−4^) |

Our simulation of unique target search also gives a quantitative estimate of the contribution of different types of T cell movement to search efficiency ([Table 3](#pcbi.1004818.t003){ref-type="table"}). Correlation in angles of T cells increases the search efficiency by \~10% (from -42% for Brownian without correlation to -28% for CRW; -17% for lognormal to -7% for LogMCRW). The heavy-tailed step lengths contributed a 20% increase in efficiency (-42% Brownian to -17% lognormal). These results show that T cell motion is a complex mix of multiple motility parameters that contribute to overall T cell search efficiency. 
In addition to unique antigen search, multiple DC contacts by T cells contribute to T cell activation and may also be required for survival \[[@pcbi.1004818.ref041]--[@pcbi.1004818.ref043]\]. Interestingly, we find that the efficiency of total contacts is reversed from that seen for unique contacts (compare [Fig 3B and 3D](#pcbi.1004818.g003){ref-type="fig"}, [Table 4](#pcbi.1004818.t004){ref-type="table"}). Brownian searchers made the greatest number of total contacts, while power law (Lévy) searchers made the fewest total contacts ([Fig 3D](#pcbi.1004818.g003){ref-type="fig"}). Brownian searchers tend to resample the same locality and are therefore more thorough in their search at the cost of reduced search extent. In contrast, superdiffusive heavy-tailed searchers leave DC clusters more quickly and their total contact rate falls, increasing extent at the cost of thoroughness. Again, LogMCRW is closer to observed data than the other simulated patterns, and it successfully balances total contact rate with exploration of new DC clusters ([Fig 3D](#pcbi.1004818.g003){ref-type="fig"}). 10.1371/journal.pcbi.1004818.t004 ###### Percent change of each simulated search strategy for total contacts compared to the empirical search strategy across 3 different target distributions. Table entry format is identical to [Table 3](#pcbi.1004818.t003){ref-type="table"}. These values correspond to [Fig 3C and 3D](#pcbi.1004818.g003){ref-type="fig"}. Brownian motion, bootstrap and LogMCRW are not significantly different from the observed distribution of efficiencies when targets are more clustered (in bold), but power law search underestimates the efficiency of search for total contacts. 
![](pcbi.1004818.t004){#pcbi.1004818.t004g}

  Search Strategy   Brownian                CRW                   Lognormal             Bootstrap              LogMCRW                 Power Law
  ----------------- ----------------------- --------------------- --------------------- ---------------------- ----------------------- ---------------------
  10 μm (0.2)       8.7 ± 1.16              12.94 ± 1.34          7.24 ± 3.25           0.73 ± 2.59            8.4 ± 3.66              -28.66 ± 2.43
                    p \< 10^−4^, **0.29**   p \< 10^−4^, 10^−3^   p \< 0.01, 0.05       p = **0.63**, 10^−4^   p \< 10^−3^, **0.73**   p \< 10^−4^, 10^−4^
  20 μm (0.32)      12.71 ± 1.52            15.67 ± 1.54          9.22 ± 2.72           2.29 ± 2.72            12.18 ± 2.64            -26.27 ± 4.14
                    p \< 10^−4^, **0.87**   p \< 10^−4^, 10^−4^   p \< 0.05, 0.05       p = **0.19**, 10^−4^   p \< 10^−4^, **0.8**    p \< 10^−4^, 10^−4^
  40 μm (0.44)      17.71 ± 1.86            20.89 ± 1.58          13.07 ± 3.24          4.52 ± 2.51            16.31 ± 2.69            -24.08 ± 4.4
                    p \< 10^−4^, 10^−4^     p \< 10^−4^, 10^−4^   p \< 10^−4^, 10^−4^   p \< 0.05, 10^−4^      p \< 10^−4^, 10^−4^     p \< 10^−4^, 10^−4^

We also performed a statistical bootstrap analysis in which search tracks were generated by sampling uniformly from all observed track speeds and turning angles \[[@pcbi.1004818.ref041]\]. While the efficiency of total contacts for bootstrap tracks is statistically indistinguishable from observed T cells, bootstrap tracks are 12% less efficient than observed cells in unique contacts ([Fig 3B](#pcbi.1004818.g003){ref-type="fig"} and [Table 3](#pcbi.1004818.t003){ref-type="table"}). Thus, individual T cell tracks confer efficiency for unique DC target search that is lost when the steps within a track are randomized, suggesting that there is underlying heterogeneity in T cell tracks that increases T cell search efficiency.

Naïve T cells show heterogeneity in movement patterns {#sec006}
-----------------------------------------------------

To assess potential variation in T cell motility, we analyzed differences in speeds across individual T cell tracks. We find that the distribution of speeds is highly skewed for cells with lower mean speeds, but there is less skew for cells with high mean speeds ([Fig 4A](#pcbi.1004818.g004){ref-type="fig"}).
The fastest cells (mean speeds \>15 μm/min, [Fig 4D](#pcbi.1004818.g004){ref-type="fig"}) produce more symmetric speed distributions, as demonstrated by their low skew and kurtosis. Distribution fitting also shows that these speeds are best fit by Gaussian and Maxwell distributions ([Table 5](#pcbi.1004818.t005){ref-type="table"}). In contrast, slow cells (mean speed \<5 μm/min, [Fig 4C](#pcbi.1004818.g004){ref-type="fig"}) have a heavier-tailed distribution of speeds, as shown by their skew and kurtosis, with lognormal remaining the best fit ([Table 5](#pcbi.1004818.t005){ref-type="table"}). This is not due to the number of data points available at high speeds, as skew decreased even at the speeds with the highest number of data points ([Fig 4B](#pcbi.1004818.g004){ref-type="fig"}). However, "slow" and "fast" are not discrete populations: a mixed Gaussian cluster analysis shows no evidence of discrete populations defined by mean speed and variance ([S7 Fig](#pcbi.1004818.s008){ref-type="supplementary-material"}). These results suggest that T cells exhibit a continuum of movement patterns within LNs, leading to different types of searches: slow-moving cells show a heavy-tailed distribution of speeds while faster-moving cells are more Brownian. ![T cells moving at different speeds show different movement patterns.\
(A) Skew of step length distribution as a function of track mean speed and (C) the number of data points as a function of track mean speed. Tracks with mean track speeds (MTS) less than 5 μm/min (B) and greater than 15 μm/min (D) were selected to illustrate different MLE model fits for slow and fast tracks (for fits see [Table 5](#pcbi.1004818.t005){ref-type="table"}).](pcbi.1004818.g004){#pcbi.1004818.g004} 10.1371/journal.pcbi.1004818.t005 ###### Best fit likelihood and MLE estimated parameters for the fastest and slowest cells.
The Gaussian distribution better fits tracks with mean speed \> 15 μm/min, while lognormal better fits tracks with mean speed \< 5 μm/min. The step speed distribution for fast tracks has a shorter and lighter tail than the sample of tracks with slower mean speeds. Best negative loglikelihood scores are in bold. For tracks with mean speed \< 5 μm/min, skew is 2.37 and kurtosis 13.3; for tracks with mean speed \> 15 μm/min, skew is 0.52 and kurtosis 3.98.

![](pcbi.1004818.t005){#pcbi.1004818.t005g}

  Distribution    −log L (\<5 μm/min)   Param. 1    Param. 2    −log L (\>15 μm/min)   Param. 1    Param. 2
  --------------- --------------------- ----------- ----------- ---------------------- ----------- ------------
  **Lognormal**   **-1.09**             **-3.35**   **0.826**   -2.60                  -1.35       0.387
  **Gaussian**    -0.918                0.0482      0.0407      **-2.73**              **0.277**   **0.0958**
  Maxwell         -0.789                0.0013                  -2.66                  0.0286      
  Power Law       -0.492                1.25                    2.018                  1.35        

"Hotspots" in the LN environment show differing patterns of T cell motion {#sec007}
-------------------------------------------------------------------------

The variation in movement shown in [Fig 4](#pcbi.1004818.g004){ref-type="fig"} suggests that T cells may alter their search pattern in response to environmental cues. Our previous work shows that altering movement in response to environmental cues can enhance search efficiency \[[@pcbi.1004818.ref044],[@pcbi.1004818.ref045]\]. Extending our previous work in \[[@pcbi.1004818.ref046]\], we analyze T cells in LNs to identify whether T cell movement changes within local microenvironments of the LN. To do this, we identify locations in the LN that are visited by T cells more frequently than predicted by a null model. We analyzed each observation field separately; each field was discretized into cubes of 20 μm per side, approximately twice the diameter of a naïve T cell.
We used the LogMCRW simulation described earlier as a null model (for details of the null model see [Materials and Methods](#sec009){ref-type="sec"}, [S11 Fig](#pcbi.1004818.s012){ref-type="supplementary-material"}). We recorded the locations visited by simulated T cells under the null model and compared them with the locations visited by T cells in the experimental data. Experimental T cells visited certain locations at significantly higher frequency than the null model predicts (see [S11 Fig](#pcbi.1004818.s012){ref-type="supplementary-material"} \[[@pcbi.1004818.ref047]\]). Locations visited at a frequency 2σ higher than the null model were called "hotspots" (examples shown in [S12 Fig](#pcbi.1004818.s013){ref-type="supplementary-material"}). Hotspots were observed in 37 of the 41 observation fields. Under the null model, only 2.73% of visited locations are hotspots (as expected, given that we identify hotspots as those visited 2 standard deviations above the mean, [S11 Fig](#pcbi.1004818.s012){ref-type="supplementary-material"}); in contrast, 10.51% of visited locations in the empirical observations are hotspots. Similarly, the null simulation predicts that 32% of tracks will visit hotspots, whereas 51% of observed tracks do. These data all support the hypothesis that hotspots exist in empirical observations. We define *hot tracks* to be T cell tracks that intersect hotspots and *cold tracks* to be those that do not. Hot tracks have median speeds that are significantly higher than those of cold tracks, at 7.27 μm/min for hot tracks versus 4.25 μm/min for cold tracks (median speed is 37.4% greater for hot tracks than for cold, p-values \<\< 10^−3^, Mann-Whitney U test).
We also find that the step length distributions of hot tracks have significantly lower skew and kurtosis than those of cold tracks ([Table 6](#pcbi.1004818.t006){ref-type="table"}), indicative of more Gaussian distributions in hot tracks. Furthermore, though the step lengths of hot tracks and cold tracks are both best fit by lognormal PDFs, the Gaussian and Maxwell distributions are nearly as good for hot tracks ([Fig 5A and 5B](#pcbi.1004818.g005){ref-type="fig"} and [Table 6](#pcbi.1004818.t006){ref-type="table"}). These results show that T cells that visit hotspots exhibit different, more Brownian movement, suggesting that they search more thoroughly than T cells that do not visit hotspots. ![T cells visiting hotspots show a different distribution of speeds than T cells that do not visit hotspots.\
Cold tracks (A) have a speed distribution that is more peaked at low speeds, with a more skewed, heavy-tailed distribution compared to hot tracks (B). For fits, see [Table 6](#pcbi.1004818.t006){ref-type="table"}. (C) Visit frequency, the number of observations of hot tracks in hot vs. cold spots. Hot tracks were observed to visit hotspots more than cold spots. The graph shows an interquartile box plot of the distribution of the average number of visits by hot tracks to hotspots versus cold spots, with the red line indicating the median number of visits. Outliers are not shown. \*\*\*\* indicates p\<\<10^−3^ using the Mann-Whitney U test.](pcbi.1004818.g005){#pcbi.1004818.g005} 10.1371/journal.pcbi.1004818.t006 ###### Hot and cold track step lengths show different MLE distribution fits. Hot tracks tend to be faster than cold tracks and more Brownian in their movement pattern. The high kurtosis and skew is due to a long tail in the distribution of step lengths belonging to tracks that do not visit hotspots. For hot tracks, skew is 1.45 and kurtosis 6.74; for cold tracks, skew is 11.12 and kurtosis 136.
![](pcbi.1004818.t006){#pcbi.1004818.t006g}

  Distribution   −log L (hot)   Param. 1   Param. 2   −log L (cold)   Param. 1   Param. 2
  -------------- -------------- ---------- ---------- --------------- ---------- -----------
  Lognormal      1.29           0.671      0.752      1.85            0.819      **0.583**
  Gaussian       1.405          2.504      1.72       4.22            4.53       **22.3**
  Maxwell        1.48           3.077                 8.24            171        
  Power Law      2.96           2.65                  1.21                       

The presence of hotspots suggests that a microenvironment within the LN might modify T cell behavior. To show T cell adaptation within LNs, we ask whether hot tracks (T cells that have visited hotspots) behave differently in hotspots versus other locations within the LN (cold spots). We find that T cells from hot tracks spend more time in hotspots than in other locations (cold spots), with T cells spending a median of 5.36 time steps in hotspots compared to 4.5 in cold spots (p-values \<\< 10^−3^, Mann-Whitney U test, [Fig 5C](#pcbi.1004818.g005){ref-type="fig"}). T cells that visit hotspots are found in those hotspots between 13.3% and 23.2% (95% confidence interval) more often than they are in other LN locations, i.e. cold spots. Thus, hotspots are visited by more T cells than can be explained by chance, the T cells that visit those hotspots move differently than those that don't, and T cells spend more time in hotspots than in other locations, all suggesting that T cell movement changes in response to the LN environment.

Discussion {#sec008}
==========

T cell activation depends on interactions between T cells and antigen-bearing DCs in secondary lymphoid organs, including LNs \[[@pcbi.1004818.ref009],[@pcbi.1004818.ref017]\]. In this study, we quantify the movement of T cells within LNs and how efficiently they encounter DC targets (in terms of both unique and total contacts). We use quantitative analysis and computer simulations to show that a search strategy that employs both correlations in successive turning angles and a lognormal distribution of speeds, which we call a LogMCRW, is most representative of observed T cell motion.
However, T cell motion does not perfectly fit any simple parametric model, and different types of motility are observed depending on where the T cell is and how fast it moves. Accurate characterization of T cell movement is important because motility determines the timing of other immune processes downstream of T cell activation. Several groups have published models of how T cells interact with DCs in LNs; Mirsky et al. \[[@pcbi.1004818.ref048]\] provide a review. Recent data also suggest that motility can affect both T cell recirculation \[[@pcbi.1004818.ref049]\] and the T cell dwell time leading to activation, especially when detecting rare antigen \[[@pcbi.1004818.ref008],[@pcbi.1004818.ref041]\]. Different studies employ different models of T cell motion because of the lack of a precise understanding of how T cells move. For example, some models assumed Brownian movement, another assumed a CRW with a Gaussian distribution of steps and speeds, and yet another used tracks bootstrapped from empirical data \[[@pcbi.1004818.ref018],[@pcbi.1004818.ref026],[@pcbi.1004818.ref040],[@pcbi.1004818.ref041]\]. Our results show that the LogMCRW pattern of motion not only fits the experimental data, but also most faithfully reproduces the modelled search efficiency of observed T cell movement. We use an agent-based model to compare empirical T cell movement to idealized simulations. These simulations demonstrate that simulated Lévy walks overestimate real T cell search efficiency (for unique DC contacts) while the Brownian walk, CRW, and bootstrap tracks underestimate it. The reverse is true for total contacts. A lognormal distribution of steps combined with correlation among steps (LogMCRW) best represents empirical T cell search efficiency for both total and unique contacts.
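To make the LogMCRW concrete, the sketch below generates a 3D walk with lognormal step lengths and correlated headings. It is an illustration rather than the paper's simulation: the lognormal parameters are placeholders borrowed from the fast-track fit in Table 5, and the `persistence` blending parameter is a hypothetical stand-in for the fitted turning-angle correlation.

```python
import numpy as np

def logmcrw_track(n_steps, mu=-1.35, sigma=0.387, persistence=0.8, rng=None):
    """Generate a 3D lognormal-correlated random walk (LogMCRW) track.

    Step lengths are lognormal(mu, sigma). Each new heading blends the
    previous heading with an isotropic random unit vector, so successive
    turning angles are correlated; persistence in [0, 1) sets how strongly
    the walker keeps its heading.
    """
    rng = rng or np.random.default_rng(0)
    pos = np.zeros((n_steps + 1, 3))
    heading = np.array([1.0, 0.0, 0.0])
    for t in range(n_steps):
        noise = rng.normal(size=3)
        noise /= np.linalg.norm(noise)
        heading = persistence * heading + (1.0 - persistence) * noise
        heading /= np.linalg.norm(heading)          # keep unit heading
        pos[t + 1] = pos[t] + rng.lognormal(mu, sigma) * heading
    return pos
```

Because headings are blended rather than redrawn, consecutive displacement directions stay positively correlated while the heavy-tailed lognormal occasionally produces long steps.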
We identify and quantify three mechanisms that increase T cell search efficiency for unique targets: 1) heavy-tailed step lengths (comparing lognormal versus Brownian search accounts for 20%); 2) directional correlation (comparing lognormal vs. LogMCRW accounts for 10%); and 3) heterogeneity among T cells (comparing bootstrap to observed accounts for 10%) ([Fig 4](#pcbi.1004818.g004){ref-type="fig"} and [Table 3](#pcbi.1004818.t003){ref-type="table"}). Thus, computational models allow us to quantify the contribution of a variety of factors to T cell search efficiency. In our study, we thoroughly analyze the motility of naïve T cells in LNs in the absence of antigenic stimulation. Our results largely agree with a recent study by Banigan et al., which also shows persistent directional movement for 3--4 minutes by naïve T cells \[[@pcbi.1004818.ref027]\]. T cells have previously been shown to move in "streams", which may correspond to the persistence in movement. Persistence may also reflect cells following a path of least resistance or intrinsic regulation of cell movement, for example, the time required to form a leading edge. In contrast to Banigan et al., we find a lognormal distribution of T cell steps and show that the heavy-tailed distribution of step lengths is important for search efficiency. Banigan et al. also suggested that modeling T cell movement using 2 subpopulations may be a more faithful reproduction of T cell movement in LNs \[[@pcbi.1004818.ref027]\]. Our data does not support the existence of 2 subpopulations of T cells. Rather, we find that there may be subregions (hotspots) within the LN that lead to differences in T cell search behavior. T cell motion near hotspots is less directionally persistent and more Brownian ([Fig 5](#pcbi.1004818.g005){ref-type="fig"}). These results demonstrate that T cells react to their environment, and more specifically, they suggest that T cells that visit hotspots stay longer and thus search more thoroughly at those hotspots.
The identity of hotspots remains to be determined. It is possible that hotspots are locations of DCs or of the high endothelial venules from which T cells enter the LN. T cells that search areas with DCs more thoroughly may have more repeated contacts with the same DC as well as contacts with more DCs within the same area, enhancing the potential for productive T cell interaction with DCs presenting cognate antigen. One potential mechanism for hotspots is chemokine production by DCs, although there is no direct experimental evidence for this. Another possibility is that hotspots reflect an underlying structure such as the fibroblastic reticular cells, which may form a network that guides T cell movement \[[@pcbi.1004818.ref050]\]. However, the distribution of our hotspots does not obviously reflect any network structure. Others have tested the potential role of a network on T cell search efficiency \[[@pcbi.1004818.ref011],[@pcbi.1004818.ref051]\] and found that the presence of a network has little impact on it. Upon activation by cognate antigen, T cell motility within the LN changes: T cells slow down over a period of several hours and begin to form long-lived interactions with DCs, essentially ending the "search" phase \[[@pcbi.1004818.ref016],[@pcbi.1004818.ref017]\]. Effector T cells then exit the LN and enter peripheral sites of inflammation. Effector T cell motion in the brains of *Toxoplasma gondii*-infected animals was shown to be a generalized Lévy walk based on displacement analysis \[[@pcbi.1004818.ref024]\]. This differs from our finding that T cell movement in LNs does not fit a Lévy walk. The difference between our findings and those of Harris et al. may result from intrinsic differences between naïve and effector states.
Another possibility is that the tissues in which T cells reside, for example the LN for naïve T cells or the brain for effector T cells, differ structurally and chemically, leading to different motility. As expected, our simulation shows that Lévy searchers are efficient at finding rare targets, but Brownian motion is more efficient when measuring total contacts. These results show that biological context may be important for T cell search efficiency: in the search for rare and unique antigens, the heavy-tailed search is more efficient. However, in situations where high numbers of DC contacts may be important for T cell activation and potentially survival, Brownian motion has an advantage. The observed T cell motion appears to combine the best properties of each, utilizing multiple modes of motility to achieve efficiency in different contexts. Previous studies have used modeling to reproduce experimental results, and we use this approach to show that the LogMCRW statistical model captures immunologically important properties of T cell search. Similar to empirically observed T cell movement, combining multiple features of random search in the LogMCRW balances search over a wide spatial extent to find unique targets with thorough search that allows repeated contacts within a cluster. In addition, we extend our use of modeling to identify novel features of the biology underlying T cell movement in LNs. Because the LogMCRW is a good estimate of search efficiency, it also provides a useful null model with which observed T cell motion can be compared, revealing that T cells move differently in different locations in the LN. Thus the statistical model and search efficiency simulations not only characterize cell movement and provide estimates of search efficiency, they can also be used to reveal the complexity of T cell motility.
Indeed, comparison to our null model reveals non-random T cell movement, which may indicate change in response to some feature of the LN. We find that T cells respond differently to specific microenvironments within the lymph node, which we call hotspots. The presence of hotspots suggests that, like foraging animals, T cells may respond to features of their environment in order to guide their search \[[@pcbi.1004818.ref052],[@pcbi.1004818.ref053]\]. Prior work has characterized the movement of foraging animals using both CRWs and Lévy walks. Lévy walks in particular have been suggested as optimal for maximizing foraging rate \[[@pcbi.1004818.ref002],[@pcbi.1004818.ref016]\]. Our work suggests that in order to balance maximizing repeated (total) contacts with maximizing new (unique) contacts, the LogMCRW may be more effective. More generally, walks with heavy-tailed step length distributions and correlation among turning angles may be most effective at balancing the thoroughness and extent of search. In foraging animals as well as searching T cells, natural selection may opt for movement that is effective in a variety of circumstances, even if that movement is difficult to describe analytically. T cells provide a unique window into biological search strategies because so many searchers can be visualized rapidly in relatively intact natural conditions. Such movement patterns can be included in agent-based models, even if they are not easy to present in closed-form equations. Our data suggests that the LogMCRW strategy might be a better approach than either Brownian or Lévy walks in situations that need to balance repeated contacts with already-found targets against discovery of new items. Additionally, T cell search for patchily distributed DCs \[[@pcbi.1004818.ref016]\] in the LN may demonstrate response to cues, similar to other collective foragers such as ants collecting patchily distributed resources in natural habitats \[[@pcbi.1004818.ref054]\].
In contrast to previous assumptions about simple random motion, our analysis shows that T cell movement in lymph nodes is complex, involving correlation, variation in step lengths, and heterogeneity in response to local environments. The deviation from idealized models reflects the immunological need to balance the spatial extent and local thoroughness of search. The complex movements of T cells in LNs provide a window into biological search strategies and how natural selection may balance multiple objectives in a variety of biological contexts.

Materials and Methods {#sec009}
=====================

Ethics statement {#sec010}
----------------

The protocol was approved by the IACUC at the University of New Mexico (protocol \# 10--100487). The breeding and maintenance of mice used in this research conform to the principles outlined in the Animal Welfare Act and the National Institutes of Health guidelines. All efforts were made to minimize suffering, with use of ketamine and xylazine when appropriate. Euthanasia was performed by isofluorane overdose.

Mice {#sec011}
----

C57BL/6 mice were from Jackson Laboratories (Bar Harbor, ME). All mice were bred and/or maintained under specific pathogen-free conditions in barrier facilities (Albuquerque, NM), and their care conformed to the principles outlined in the Animal Welfare Act and the National Institutes of Health guidelines.

T cell observations using two-photon microscopy {#sec012}
-----------------------------------------------

Lymph nodes were prepared according to the protocol described previously \[[@pcbi.1004818.ref030],[@pcbi.1004818.ref055]--[@pcbi.1004818.ref057]\]. T cells were purified by nylon wool or by negative selection using the pan-T cell kit (Miltenyi Biotec) as previously described by Cannon et al. \[[@pcbi.1004818.ref028]\], and purified T cells were labeled with either 1 μM CFSE (Invitrogen) or 5 μM CMTMR (Invitrogen, Carlsbad, CA). 5 to 10×10^6^ labeled T cells were injected I.V.
into recipient mice, and inguinal lymph nodes were removed 15--18 hours later and imaged by two-photon microscopy. Imaging experiments were performed using either a workstation with a Bio-Rad Radiance 2000 scanner mounted on an Olympus upright microscope with a chamber at 37°C, or a 2-photon microscope in the Fluorescence Microscopy Facility in the UNM Cancer Center with a mode-locked Ti:Sapphire infrared laser (Coherent Ultra II; tunable from 680--1080 nm; avg. power 3.5 W) for multiphoton fluorescence excitation on a Zeiss Axiovert 200 stand. For the Bio-Rad 2P, explanted lymph nodes were placed on a glass coverslip in the chamber. The sample was perfused with a 37°C solution of DMEM (phenol red free, Gibco) bubbled with 95% O~2~ and 5% CO~2~. T cell motility within a lymph node was monitored in the T cell area at a minimum depth of 50--70 μm below the surface of the node. For the Zeiss 2P, the microscope stand is a Zeiss Axiovert 200 with a motorized XY stage and IR-corrected long working distance objectives (25X: multi-immersion and 40X: water immersion), with image acquisition via a Zeiss LSM510 scanhead. Ex vivo tissue and organs were maintained during microscopic observation in a stage microincubator system (LCI-Live Cell Imaging) equipped with heating, humidity, CO~2~ atmosphere and perfusion. As with the Bio-Rad system, explanted lymph nodes were placed on a glass coverslip in the chamber and perfused with a 37°C solution of DMEM (phenol red free, Gibco) bubbled with 95% O~2~ and 5% CO~2~. For 4D analysis of T cell motility, multiple stacks in the z-axis (z step = 3 μm) were acquired every 15--20 s (depending on the number of z stacks acquired) for 15--40 min, with an overall field thickness of 40--60 μm. Cell motility was analyzed with Imaris software (version 6; Bitplane AG, Zurich, Switzerland). Tracks that lasted fewer than 3 time steps (duration filter in Imaris) were not taken into account in the analysis.
A track length filter (threshold of 17 μm = 3 times the diameter of the cell) and a displacement^2^ filter (threshold of 300 μm^2^ = 17 μm × 17 μm) were also used to discard tracks of non-motile cells. Videos were made by projecting the 4D information along the z-axis in a single plane. The observation area covers approximately two thirds of the T cell zone of the lymph node. The point sequences generated by Imaris were used to create position vectors joining adjacent cell locations (sample tracks, [S1 Fig](#pcbi.1004818.s002){ref-type="supplementary-material"}). The Euclidean norm of each vector was calculated and divided by the time resolution to produce speeds.

Distribution fitting {#sec013}
--------------------

Following Fisher \[[@pcbi.1004818.ref058]\], we use maximum likelihood estimation (MLE) to parameterize candidate PDFs. We fit probability model parameters using cumulative distribution functions (CDFs), rather than by binning data, which has been shown to bias conclusions about random walk distributions \[[@pcbi.1004818.ref059]\]. We define a *step* as a vector of T cell motion that does not deviate beyond 15° from the original direction (see [S8 Fig](#pcbi.1004818.s009){ref-type="supplementary-material"} for analysis of threshold dependency). Five PDF models (lognormal, Maxwell, Gaussian, exponential, and power law) for step length and speed were selected for analysis based on a combination of their negative log-likelihood scores, their importance in other biological processes, and their previous use in modeling T cell movement. The relative goodness of fit (GoF) of each candidate PDF to the empirical data was evaluated using likelihood functions, the Anderson-Darling (AD) test, the Bayesian information criterion (BIC), the corrected Akaike information criterion (AICc), and the Kolmogorov-Smirnov (KS) test.
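The step definition above (a run of motion that stays within 15° of its starting direction) can be sketched as follows. This is a simplified illustration, assuming regularly sampled positions; the paper's exact bookkeeping may differ, and `segment_steps` is a hypothetical name.

```python
import numpy as np

def segment_steps(positions, max_dev_deg=15.0):
    """Segment a track into steps: a step ends when motion deviates more
    than max_dev_deg from the direction set at the start of the step.
    Returns the list of step lengths (start-to-end displacements)."""
    positions = np.asarray(positions, dtype=float)
    vecs = np.diff(positions, axis=0)
    steps, start, ref = [], 0, None
    for i, v in enumerate(vecs):
        u = v / np.linalg.norm(v)
        if ref is None:
            ref = u                      # direction at the start of the step
            continue
        angle = np.degrees(np.arccos(np.clip(np.dot(ref, u), -1.0, 1.0)))
        if angle > max_dev_deg:          # deviation: close the current step
            steps.append(np.linalg.norm(positions[i] - positions[start]))
            start, ref = i, u
    steps.append(np.linalg.norm(positions[-1] - positions[start]))
    return steps
```

A straight run of any length yields a single long step, while a sharp turn splits the track, which is how heavy tails in the step length distribution can emerge from sustained directional motion.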
Following Clauset, Shalizi, and Newman \[[@pcbi.1004818.ref036]\], we fit power laws using MLE with the power law PDF: $P\left( x \right) = \frac{\mu - 1}{x_{\text{min}}}\left( \frac{x}{x_{\text{min}}} \right)^{- \mu}$ where *x*~min~ is the lower bound of the power law tail, *P*(*x*) is the probability density at *x*, and *μ* is the estimated exponent. We used the *x*~min~ value with the best KS score among all possible choices as an estimator of the beginning of the power law tail. The percentage of positions in a track in the power law tail gives us a measure of the quality of the power law fit. Using this measure, we show that a power law fit to the population of observed steps excludes 94% of the data ([Fig 1F and 1H](#pcbi.1004818.g001){ref-type="fig"}). Autocorrelation and cross-correlations {#sec014} -------------------------------------- Velocity autocorrelations were calculated following \[[@pcbi.1004818.ref060]\] and \[[@pcbi.1004818.ref061]\]. The autocorrelation function is the ensemble mean over the *n-1* possible delay times given the *n* vectors defining a T cell track. The result is a measure of how much T cell direction depends on previous directions as a function of time delay. Our use of autocorrelation is distinct from the analysis of periodic velocity vector magnitudes by Beltman et al. \[[@pcbi.1004818.ref040]\], but is similar to that of Banigan et al. \[[@pcbi.1004818.ref027]\]. Letting *v*(*p*~*k*~(*t*)) be the unit velocity vector at time *t* belonging to the *k*^*th*^ path, we defined the cross-correlation function, C~cross~, to be: C~cross~(*p*) = ⟨*v*(*p*~*k*~(*t*)) ∙ *v*(*p*~*m*~(*t*))⟩, ∀*k*, *m* where *p*~*k*~ and *p*~*m*~ are T cell paths. This measures the step angle dependence between T cell paths at the same moment in time, that is, it measures drift due to global effects on the observation field.
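The Clauset-style power-law fit described earlier has a closed-form MLE, μ̂ = 1 + n [Σᵢ ln(xᵢ/x~min~)]^−1^, and x~min~ can be scanned for the best KS score. A minimal sketch of that procedure (not the paper's code; function names are illustrative):

```python
import numpy as np

def powerlaw_mle(x, xmin):
    """Continuous power-law MLE: mu = 1 + n / sum(ln(x / xmin)) over the tail."""
    tail = x[x >= xmin]
    return 1.0 + len(tail) / np.sum(np.log(tail / xmin))

def ks_distance(x, xmin, mu):
    """KS distance between the tail data and the fitted power-law CDF."""
    tail = np.sort(x[x >= xmin])
    model_cdf = 1.0 - (tail / xmin) ** (1.0 - mu)
    empirical = np.arange(1, len(tail) + 1) / len(tail)
    return np.max(np.abs(empirical - model_cdf))

def fit_tail(x):
    """Choose the candidate xmin (among observed values) with the best KS score."""
    candidates = np.unique(x)[:-1]
    scores = [(ks_distance(x, xm, powerlaw_mle(x, xm)), xm) for xm in candidates]
    best_xmin = min(scores)[1]
    return best_xmin, powerlaw_mle(x, best_xmin)
```

The fraction of data below the selected x~min~ is then the fraction excluded from the power-law tail, the quantity reported as 94% for the pooled steps.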
Mean squared displacement {#sec015} ------------------------- Mean squared displacement (MSD) exponents, commonly called the *α* exponent \[[@pcbi.1004818.ref001],[@pcbi.1004818.ref013],[@pcbi.1004818.ref021]\], were calculated using a least-squares polynomial fit by numerically solving the associated Vandermonde matrix \[[@pcbi.1004818.ref062]\], with fit quality assessed with the *r*^2^ measure. Parametric and linear fits were also made to mean displacement. In [Fig 1A](#pcbi.1004818.g001){ref-type="fig"} we present only the first 10 minutes of observation (as was done in \[[@pcbi.1004818.ref016],[@pcbi.1004818.ref063],[@pcbi.1004818.ref064]\]), at which point the curve reaches its first stationary inflection, which according to \[[@pcbi.1004818.ref061]\] is indicative of unconstrained motion and therefore appropriate for determining *α*. In addition, in this study few tracks persist beyond 10 minutes, so beyond that point the MSD signal becomes dominated by noise ([Fig 1A](#pcbi.1004818.g001){ref-type="fig"} top). Heterogeneity {#sec016} ------------- We tested for heterogeneity by comparing track speed skew ([Fig 4](#pcbi.1004818.g004){ref-type="fig"}) and AIC evidence ratios as a function of mean speed. The sample skew of the distribution of speeds was calculated using the method of moments applied to a mean-speed sliding window of width 0.125 μm/s progressing in 0.1 μm/s increments. Search efficiency simulation {#sec017} ---------------------------- The simulation to test T-DC interaction efficiency was implemented as a continuous (floating-point) 3D model written in C++. Boost libraries \[[@pcbi.1004818.ref065]\] were used to generate variates drawn from the model PDFs. Because the clustering and density of targets can influence which movement types are most efficient, we replicated the estimated density of DCs and varied the degree of clustering in our simulations.
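The degree of target clustering can be quantified with a 3D Hopkins statistic in the 0-to-0.5 convention used here (0.5 ≈ uniform, 0 ≈ highly clustered). A sketch, with data-to-data nearest-neighbor distances in the numerator so that the score falls as clustering increases; the function name and sampling details are illustrative, not the paper's implementation.

```python
import numpy as np

def hopkins_3d(points, box, m=50, rng=None):
    """3D Hopkins aggregation statistic: ~0.5 for uniformly placed targets,
    approaching 0 for strongly clustered targets inside a cube of side box."""
    rng = rng or np.random.default_rng(0)

    def nn_dist(a, b, exclude_zero=False):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        if exclude_zero:
            d[d < 1e-12] = np.inf          # drop self-distances
        return d.min(axis=1)

    sample = points[rng.choice(len(points), size=m, replace=False)]
    uniform = rng.uniform(0.0, box, size=(m, 3))
    w = nn_dist(sample, points, exclude_zero=True)  # data-to-data NN distances
    u = nn_dist(uniform, points)                    # random-to-data NN distances
    return w.sum() / (w.sum() + u.sum())
```

When targets are clustered, data-to-data distances shrink while random-to-data distances stay large, driving the ratio toward 0.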
We use an LN DC density of 2--5%, as determined in \[[@pcbi.1004818.ref043]\], to calculate a target DC density of 3.17×10^−5^ targets/μm^3^. Our observed fields have an average volume of 6.3×10^6^ μm^3^. We scale the number of targets as a function of field volume in order to maintain the same target density between simulation fields. DCs were clustered into groups of 10 and were uniformly distributed within spheres defining a cluster. By varying the sphere radius, we controlled the degree of clustering from uniform to highly clustered. A 3D version of the Hopkins statistic \[[@pcbi.1004818.ref066]\] was used to measure the resulting non-uniformity of target placement (Tables [3](#pcbi.1004818.t003){ref-type="table"} and [4](#pcbi.1004818.t004){ref-type="table"}). Hopkins statistic scores range from 0 to 0.5, where 0 indicates high clustering and 0.5 indicates no clustering ([S5 Fig](#pcbi.1004818.s006){ref-type="supplementary-material"}). T cell tracks were observed and recorded as 3D coordinate sequences within a bounding box defined by the visible section of the *ex vivo* lymph node. Idealized models of search (Brownian, CRW, power law, etc.) were parameterized by the speeds and turning angles estimated from observation (see [Distribution fitting](#sec013){ref-type="sec"}). Searchers in the idealized model start at the same initial positions as the observed T cells and exist in a volume equal to the observed field volume. Candidate search patterns were generated for each of the 41 observation fields. Our efficiency measure is the number of targets found divided by the sum of the time used by searchers. Since we modelled walks rather than flights (i.e., speeds are finite), the distance travelled D(*k*), summed over all simulated tracks *k*, was limited to the total distance travelled by observed T cells. The average velocity of the population of searchers is therefore kept within the observed range.
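The efficiency measure (targets found divided by summed search time) in its unique- and total-contact variants can be sketched as follows. This is an illustration using discrete time steps as the time unit, not the C++ implementation; it uses the 10 μm contact radius (5 μm T cell radius plus 5 μm DC radius) from the simulation.

```python
import numpy as np

def contact_efficiency(tracks, targets, radius=10.0):
    """Unique and total target contacts per unit search time.

    A contact occurs whenever a track point lies within radius of a target.
    unique counts each (searcher, target) pair at most once; total counts
    every time step spent within contact range."""
    unique, total = 0, 0
    time = sum(len(tr) for tr in tracks)
    for tr in tracks:
        d = np.linalg.norm(tr[:, None, :] - targets[None, :, :], axis=2)
        hits = d <= radius                    # (time steps, targets) boolean
        total += int(hits.sum())
        unique += int(hits.any(axis=0).sum()) # targets touched at least once
    return unique / time, total / time
```

A searcher that lingers near one target raises the total-contact rate without raising the unique-contact rate, which is the trade-off the two measures are designed to separate.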
Based on an assumed radius of 5 μm for both DCs and T cells, targets were marked as discovered if a searcher track passed within 10 μm of a target point. We define two versions of the efficiency measure: one that increments its output value only when a target was not previously detected by that searcher, and another that increments for all targets found. These two versions allow us to record unique contacts and total contacts ([Fig 3](#pcbi.1004818.g003){ref-type="fig"}). The simulation measures the target encounter rate and determines, using the Mann-Whitney test, whether the candidate search models' search efficiency is significantly different from that observed in T cells. We use the Mann-Whitney test because the observed and simulated distributions of efficiencies are non-Gaussian. Simulations were replicated 100 times per field, producing 4,100 efficiency data points for each search model. The entire process was repeated 10 times in order to generate confidence intervals for the simulation; in all this results in 41,000 efficiency samples.

Identifying hotspots and hot tracks {#sec018}
-----------------------------------

In order to test whether the environment within LNs influences T cell movement, we extend an analysis begun in \[[@pcbi.1004818.ref046]\]. Fields were discretized into 8000 μm^3^ cubes (the length of a cube is 20 μm, approximately twice the diameter of a T cell). We use the LogMCRW simulation as a null model and record the number of times each location is visited by unique T cells in simulation (repeated 10 times). We use a 2σ (two-standard-deviation) threshold to determine which locations are visited more frequently in the observed fields than expected, and call these hotspots. This is repeated for each of the 41 individual observational fields. All other visited locations are called cold spots. A comparison of the number of hotspots in simulation and in the observed data gives an indication of how much behavior is not captured by the simulation.
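The hotspot procedure just described (20 μm cubes, unique-visitor counts, and a mean + 2σ threshold derived from the null model) might be sketched as follows. This is a Python illustration with hypothetical function names, not the authors' implementation:

```python
import numpy as np
from collections import defaultdict

CUBE = 20.0  # cube edge length in μm (~2 T cell diameters)

def visit_counts(tracks):
    """Count unique tracks visiting each 20-μm cube.

    tracks: list of (N_i, 3) position arrays, one per T cell.
    Returns {cube_index: number of distinct tracks that entered it}.
    """
    counts = defaultdict(int)
    for track in tracks:
        # set() so each track contributes at most once per cube
        cubes = {tuple(c) for c in np.floor(np.asarray(track) / CUBE).astype(int)}
        for c in cubes:
            counts[c] += 1
    return counts

def hotspots(observed_tracks, null_counts):
    """Cubes visited more often than the mean + 2-sigma null threshold.

    null_counts: per-cube unique-visitor counts pooled from the
    null-model (e.g. LogMCRW) simulations.
    """
    null = np.array(list(null_counts.values()), float)
    threshold = null.mean() + 2 * null.std()
    obs = visit_counts(observed_tracks)
    return {c for c, n in obs.items() if n > threshold}
```

Cubes below the threshold would then be the cold spots, and tracks intersecting at least one hotspot the hot tracks.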
We define hot tracks to be T cell tracks that visit hotspots and cold tracks to be T cell tracks that do not. We also examine the number of visits by hot tracks to cold spots and hotspots, as well as the distribution of step lengths and speeds for hot and cold tracks. For additional information on methods, see the supplementary materials and methods ([S1 Text](#pcbi.1004818.s001){ref-type="supplementary-material"}).

Supporting Information {#sec019}
======================

###### Supplemental Methods. (DOCX)

###### Example of individual T cell tracks. (TIF)

###### Histogram of mean squared displacement exponents with varying *r*^2^ filters. As the linear regression slopes are filtered by the *r*^2^ statistic, the histogram narrows but maintains its mean value. (A) *r*^2^ \> 0, 3.5% of tracks filtered, (B) *r*^2^ \> 0.25, 21% filtered, (C) *r*^2^ \> 0.5, 33%, (D) *r*^2^ \> 0.75, 50%, and *r*^2^ \> 0.9, 69% of tracks filtered out. (E) *r*^2^ \> 0.8 with regions of interest marked. (TIF)

###### **Histogram of power law exponents fit to the CCDF of step length for tracks with varying percentages of their steps in the power law tail: (A) all tracks, (B) tracks with at least 50%, (C) 70%, and (D) 90% of steps in the power law tail.** An increasing fraction of steps in the tail makes *μ* values more likely to fall between 1 and 3, but the fraction of all tracks well fit by a power law falls rapidly: (A) 35%, (C) 31%, (D) 24%, and (E) 7% of total tracks are represented. (E) Fraction of tracks with Lévy characteristics. Power law exponents *μ* for step length and *α* for displacement. Tracks are grouped by fit quality (GoF). Retained percentage refers to the amount of data discarded in order to obtain a power law fit (see [methods](#sec009){ref-type="sec"} for *μ* fitting).
Displacement *α* values are filtered by *r*^2^. (TIF)

###### Weibull probability plot. The gamma probability distribution has comparable negative log-likelihood scores to the lognormal distribution (speeds shown here). The lognormal model overestimates the probability of high speeds at the tail of the distribution, while the gamma distribution overestimates the probability of very low speeds. (TIF)

###### Sample DC target cluster distributions in simulation. Panel A: 20 μm radius clusters with Hopkins index = 0.2. Panel B: 20 μm radius clusters with Hopkins index = 0.32. Panel C: 40 μm radius clusters with Hopkins index = 0.44. (TIF)

###### Mean squared displacement for simulated search models. Numbers in color indicate the slope of the mean-squared linear fit to the log-log transformed displacement curve. As expected, Brownian motion has a slope close to one, as does the lognormal step distribution model. All other models produce superdiffusive motion. (TIF)

###### We found no evidence of distinct subpopulations defined by variance and mean speed. An expectation maximization Gaussian mixture model finds that clustering tracks according to track speed and track variance results in a single grouping. The color bar and contour map indicate the height of the best-fit Gaussian model. Increasing the number of Gaussians to fit incrementally up to 16 does not reveal any natural clusters. This figure supports the skew plot [Fig 4C](#pcbi.1004818.g004){ref-type="fig"}. Example field (1 of 41). (TIF)

###### The dependency between the angle threshold used to calculate steps from T cell positions and the number of steps that result. For example, at a threshold of 180° all steps in each track are combined and the resulting number of steps in the population is small.
The influence of the angle threshold on the number of combined positions is smooth. No natural choice of threshold angle is apparent. (TIF)

###### As the number of data points in tracks lasting more than 10 minutes drops, MSD becomes dominated by noise. As a result we perform linear regression only on the first 10 minutes of each track (green line). (1 of 7 datasets). (TIF)

###### Visualization of search tracks. Dark green targets are undiscovered. Targets become cyan if they are within the search volume of a T cell track (detected). In this example targets are grouped into clusters of 10 with radius 10 μm. Each T cell track is assigned a random color to help distinguish tracks from one another. Example field (1 of 41). (TIF)

###### Distribution of hotspot visitor counts. Spot counts for (A) simulated locations over 10 repetitions, and (B) observed locations. Example plot of an observed field and the corresponding simulation (1 of 41). The red lines correspond to the hotspot threshold for this field (μ+2σ of the simulated location visitor counts). For this field the threshold is 4.047. Of the 498 locations in the simulated field, 17 (3.41%) are hotspots (mean of 10 simulations). The observed field had 621 locations, of which 78 (12.5%) are hotspots, an increase of 258% over simulation. (TIF)

###### Visualization of hotspots and hot tracks in 4 of 41 observed fields. Hotspots are indicated by black rectangles whose area is proportional to the number of unique visitors. Hot tracks are displayed in color, with each color corresponding to a track. Tracks that do not visit a hotspot are shown in grey, with the shades corresponding to individual tracks. Plots are a projection of a 3D space into the xy-plane. Overlapping hotspots indicate distinct z-coordinates.
(TIF)

###### A potential source of error is the dependence of the observed speed on the frame rate of observation. We test whether this confounding factor exists in our experiments by fitting a linear model of observed mean speed against each of our seven binned microscope video frame rates. Our frame delays range from 13 s to 20.7 s. The slope of the best MLE fit is 0.0013, the p-value is 0.66, and the *r*^2^ is 0.041. Together this suggests there is no relationship between frame rate and observed speed, and that the observed speeds are not artifacts of the measurement rate. (TIF)

###### Video of the simulation in progress. The video shows four instances of the efficiency simulation: 1) an observed field, 2) a Brownian motion simulation, 3) a Power Law simulation, and 4) LogMCRW. Individual T cell tracks are variously colored according to track. Target DCs are green initially and turn cyan when encountered by a T cell. (MP4)

###### Extended step fit statistics. The table shows the Akaike information criterion evidence ratio (AIC E), applied to the first 7 rows only; the corrected Akaike information criterion (AICc); negative log-likelihood (nlogl); Kolmogorov-Smirnov (KS); Anderson-Darling (AD); chi-squared (χ2); and Bayesian information criterion (BIC). Score ranking is in parentheses. Differences in BIC and AICc scores are less than 1:10^3^ of the AICc score. (DOCX)

###### Extended speed fit statistics. The table shows the Akaike information criterion evidence ratio (AIC E), applied to the first 7 rows only; the corrected Akaike information criterion (AICc); negative log-likelihood (nlogl); Kolmogorov-Smirnov (KS); Anderson-Darling (AD); chi-squared (χ2); and Bayesian information criterion (BIC). Score ranking is in parentheses.
Differences in BIC and AICc scores are less than 1:10^3^ of the AICc score. (DOCX)

###### Maximum likelihood estimated parameters and associated likelihood scores for steps calculated using a 30° threshold. The lognormal probability distribution is still the best fit when steps are calculated using a 30° rather than a 15° threshold. Compare to [Table 1](#pcbi.1004818.t001){ref-type="table"} in the main text. (DOCX)

Thanks to François Asperti-Boursin for data collection, T cell tracking, Imaris analysis, and discussion. Thanks to Christian Gunning, Helen Wearing, David Ackley, Vasudev Kenkre, Stephanie Forrest, Aaron Neumann, Deborah Gordon, Brianna Mulligan, and Grant Lythe for helpful discussion and reviews of this paper. Thanks to Genevieve Phillips and Becky Lee of the UNM Cancer Center Fluorescence Microscopy Facility, as well as John Connor and Denis Bragin of the BRAIN Imaging Center, for help with 2-photon microscopy.

[^1]: The authors have declared that no competing interests exist.

[^2]: Conceived and designed the experiments: GMF JLC MEM. Performed the experiments: GMF JLC. Analyzed the data: GMF KAL JLC MEM. Wrote the paper: GMF MEM JLC.
By now, you’ve heard the praise, the fervor, and the cries of sore losers everywhere. But the hype for this one really is deserved. Cuphead is a magnificent blend of gameplay taken from Contra, Gradius, and even some Mega Man for good measure. And while many games have borrowed from those classics, Cuphead is one of the ones that stands out from the crowd. If you haven’t already bought and downloaded this game for your Xbox One or computer, you really ought to. But if you need more details before doing so, read on. GENERATIONS: The animation on display will even amaze your great-grandparents. Cuphead is the result of a couple of big risk-takers. Studio MDHR started out with a vision: an action game that truly feels like playing a late-1930s cartoon. Early on they discovered that making that vision a reality was going to be far more time-consuming and expensive than originally thought. They ended up quitting their jobs and re-mortgaging their homes just to be able to bring this title to market. My hope is that the risk has paid off, because the finished product is nothing short of amazing. Cuphead very likely has the best animation of any video game made thus far. Studio MDHR painstakingly matte-painted every background in the game. Every frame of animation was hand-drawn on a cel before being scanned into a computer to be inked and colored. As a result the game delivers on the core promise of looking and feeling like a 1930s animated short. The character designs are breathtaking. All of the hallmarks of vintage cartoons are here: the angled pupils, the exaggerated movement, and pretty much everything else you can recall from old Popeye and Betty Boop serials. Studio MDHR even went as far as hiring an actual big band jazz ensemble to write and perform the score for Cuphead. So not only does it look like an 80-year-old cartoon, it also sounds like an 80-year-old cartoon. Just seeing the game in action alone would be worth the price of admission.
There is such a wealth of talent on display through the entire game that it honestly has to be experienced. In the realm of audio and visual experiences, Cuphead is nearly in a class all by itself. But what about the gameplay? Well, it’s a fairly solid and enjoyable experience. The game starts out with a very clever tutorial and a classic storybook introduction. Cuphead and his brother Mugman go against their guardian’s wishes when they visit a casino. Unfortunately, the casino is owned by the Devil, and he rigs the game at the craps table to claim the souls of our heroes. But they plead for their lives, so he tells them he’ll forgive their debt if they go get the soul contracts of the others in the town. That’s the setup for just why Cuphead and Mugman are off on their adventure. The game places you on an overhead view of a map, where you move the characters around and choose a stage, or talk to an NPC. There are three maps, and you’ll need to complete every stage to move on to the next one. Each map also has a shop where you can use coins to upgrade your abilities. There are three main types of stage on display here. First are the Run n’ Gun stages. These play like you’d expect, paying homage to games like Contra and Metal Slug. You’ll have to fire where you’re going; you can’t shoot backwards while moving forward. The gameplay is not twin-stick style, but a more traditional one. In these stages you’ll find the aforementioned coins, so you’ll certainly need to play them if you want any hope of buffing up your character. Some of the items in the shops will give you a new style of weapon, or extra hits on your health meter. But any item you choose will have a side effect to balance things out. For instance, buying extra health comes at the cost of weakening your attacks slightly. But there are a wide variety of things to check out here.
So you can swap out items for others after you’ve paid for them, and see what loadout works best for you. There are shmup levels too. These generally play like the third type of stage I’ll get to in a moment, the difference being that here you’ll be piloting a plane and fighting a multifaceted battle against a boss character. With the shmup mechanics, the game feels a lot more like the memorable moments in old horizontal shooters like Thunder Force III, R-Type, Gradius, or Life Force than like the more contemporary bullet hell shooter. Just because there aren’t zillions of things to avoid doesn’t mean there isn’t anything to avoid. These encounters throw plenty at you, and you’ll have to memorize attack patterns to survive. You can also shrink your plane, so if you get into a situation you don’t think is avoidable, it may just be your ace in the hole. Finally, there are the Boss stages. In these you’ll use the Run n’ Gun mechanics in a multifaceted battle against a boss character. These fights feel closer to the classic NES Mega Man boss fights than the ones in the old Run n’ Guns. One boss in particular will give you memories of storming Dr. Wily’s castle in Mega Man 2. All of these bosses will require you to learn patterns and use expert timing to get through them in one piece. Since most of the stages in the game are Boss stages, you can expect to lose many, many times when you first attempt them. There are also a couple of side challenges where you’ll free ghosts by parrying other ghosts. You should honestly do these, because the parry is a mechanic Cuphead uses to beef up your super meter. When you fill up the meter you can unleash a very devastating attack, which is especially handy in boss battles. Anything colored pink in the game can be parried, and these challenges are the perfect way to master this mechanic. Most of the stages in the game have an Easy mode in addition to a regular tough-as-nails mode.
You’ll need to beat the harder difficulty on bosses to get the contracts needed to finish the story. But playing the stages on Easy will let you progress and see what future stages have to offer. You can also go back to any stage you beat previously to replay it. Cuphead definitely has a high level of challenge. But the challenge is generally very fair. You’ll die hundreds of times over. But upon your expletive-laden loss you’ll understand that your last death was your own fault. You jumped when you meant to shoot. Or you didn’t plan for a moving platform properly. Or you weren’t patient enough. Or you panicked and walked into that projectile. Cuphead isn’t impossible though. Those who absolutely love old platformers, shmups, and classic action games from the days of Atari, Sega, Nintendo, and Commodore platforms will likely pick things up a bit faster. But that doesn’t mean someone newer to this type of experience cannot persevere. It’s the kind of game that requires patience and practice to excel at. It may take more time and patience for some players than others. But everything in the game is so captivating it’s worth checking out. There are a couple of very minor issues I have with the game though. The most alarming are a few rare bugs. Admittedly these are rare, and in time they’ll probably be fixed. But they’re still a nuisance when they happen. One of them will glitch a low-level enemy’s health to the point that it takes no damage. When this happens you can try to just skip past it. But that might mean you take damage in the process, impeding your ability to clear the level. Exiting the stage and re-entering it usually fixes it in the interim, but that is also a nuisance. The other bug I’ve run into is an inexplicable performance hit, where the game will suddenly drop frames and run ridiculously choppy for around 60 seconds before going back to normal. It’s especially annoying in boss fights.
Closing and restarting the application fixes it in the interim. But it can be pretty annoying. I also wish there could have been a few more action stages relative to boss rush stages to add to the variety. Nevertheless, I can wholeheartedly recommend Cuphead to just about anybody who is even remotely interested in it. The animation and soundtrack alone are worth the price of admission. Even for all of the complaints some may have with the level of challenge, the experience easily overshadows that. This is a game that is a wonder to behold. And while old-school arcade challenge may not be your Cuphead of tea (I know, that’s a terrible joke), Cuphead is still one of the most entertaining experiences you’ll likely have this year. If you relish a challenge and love classic cartoons, you should buy this for your computer or Xbox One if you haven’t already. You may want to look into this game even if you normally don’t care for this sort of fare. The amount of talent and dedication on display is nothing short of captivating. Here’s hoping Cuphead was a successful enough endeavor for a follow-up, or another game using the same wonderful artists and animators. I know I’ve repeated myself a lot in this review, and I probably sound a bit redundant. But win or lose, Cuphead is one experience you just may want to roll the dice on. (I think I did better on that one.) Anyway, as we all know, knockoffs are nothing new. We see them in everything: everyday household items, appliances, and of course creative media. Including, obviously, video games. Over the last thirty or more years we’ve seen Pac-Man clones, Space Invaders clones, Super Mario Brothers clones, Street Fighter clones, and Doom clones. Basically, one could spend a lifetime talking about the concept alone before even getting to the examples, some of which I’ve already reviewed. Many knockoffs aren’t worth a second thought. But as Mortal Kombat, Saints Row, and others have taught us, sometimes they are.
Taking a proven formula and putting their own spin on it. PROS: Nice graphics. Decent mechanics. Controls well. CONS: Saves can’t be brought to another system. Unbalanced. CAPTAIN PLANET: He’s our hero! Going to take pollution down to zero! Cartoon Network Punch Time Explosion XL does just that. This time the target is Super Smash Bros. The SSB series looks like it can easily be copied at face value. The core concept of keeping combatants off of your hill or out of your ring seems simple. You have a cast of characters who are unique, yet share a simplified movement set. Moving beyond that, Smash has also employed campaigns in past games, such as Melee’s Adventure Mode or Brawl’s Subspace Emissary mode. Smash has a ton of different items you could add in for random fun. Or assist trophies, which enable NPCs to help you win. Nintendo’s series even has a lot of individual mini game challenges throughout, from target smashing to sandbag beating. All with mechanics that hyper-competitive players find quite deep. Today, the series has hardcore fans, and countless tournaments where the best players win enough cash to live on. It’s one of the most watched series on Twitch. Its reputation has reached the heights of games like Street Fighter and Tekken. To say that CNPTEXL has some lofty goals is an understatement. Does it get anywhere near the pedigree of Nintendo’s mascot party fighter? No. But is it a bad game? Shockingly, the answer is also no. This game takes Nintendo’s approach to mascots and applies it to Time Warner’s Cartoon Network. The game was published a bit before the channel’s powerhouses Adventure Time and Regular Show. So you won’t be playing as Benson or throwing down with Finn. However, the game’s roster does go pretty far back to the channel’s early days. Dexter’s Lab, The Powerpuff Girls, Samurai Jack, and Johnny Bravo all make appearances with many of their characters.
Some of the later hits like Ben 10 and The Grim Adventures of Billy and Mandy are here. And even some of the lesser-known shows are represented. The game has a campaign mode in the vein of Super Smash Bros. Brawl’s Subspace Emissary mode. The story is told by a narrator (voiced by Space Ghost’s George Lowe), and follows the convergence of all of Cartoon Network’s shows. It follows a formula similar to Brawl’s. You go through side-scrolling platformer stages with brawler elements. Depending on the stage, you can use a certain number of characters. When you get to the end of the campaign it is revealed that the narrator’s TV remote has gone rogue, and is responsible for the merging of the realities. Of course, this remote is the final boss. Along the way you’ll also unlock characters to use in the other modes. Again, much like the Subspace Emissary. The difference is that you use currency to do it. CNPTEXL has a Store option where you will find not only the bonus characters, but stages, alternate costumes, and clips from the various Cartoon Network shows. Clearing the game or playing enough in the other modes will give you points that can be used to unlock them. Once unlocked, the characters and stages can be used in the Story mode or the Battle mode. There is a vault where the unlocked clips can be viewed, along with the character models. It works kind of like a cut-down version of Smash’s trophy room. You can get info on the characters, what shows they belong to, and their original appearances. It isn’t nearly as deep as what you will find in Nintendo’s games, but it still gives you something to look forward to if you are a fan of the CN shows. The clips are DVD quality, and most are from some of the better shows’ moments. The meat of the game is in its multiplayer. Battle mode is up to four players, and also allows you to use a variety of controllers. If you’re playing the Wii version you can use the Wiimote and Nunchuck.
Or you can opt for either a Classic Controller or a GameCube controller. As I’ve mentioned before, the core concept of CNPTEXL is the same as the Nintendo franchise it cribs from. Each of the game’s 26 stages will see players trying to keep each other off of the arena. You do this by attacking one another to build up damage. The more damage you take, the farther you are knocked back with each successful hit. Each stage has a knockout zone around it. Going beyond it, or being unable to otherwise make it back to the arena, results in a death. The object, of course, is to be the last one with any lives left. The game plays as one would expect. There is a primary attack button, a special move button, a shield button, and a button for your finishers. Each of the main three buttons can be combined with directions. So as in Smash, you can get different moves based upon what direction is used with each. It also has smash attacks of its own. So pressing a direction with the attack button at the same time will dish out more knockback. The shield also allows you to roll out of the way and perform parries as in Smash. Many of the tactics employed in Smash, like edge guarding, can also work here. Even holding the shield for too long will break it, leaving you open to punishment. The finisher button is novel too, in that you don’t have to chase down a smash ball. The one thing this game does to carve itself out a niche is the use of a gem system. Beating up on your opponent will cause them to drop gems. Collect enough of them, and you can use your finisher. Most of the finishers are pretty cool, and have anime-inspired animations leading up to the attack. In addition to the primary battle mode, there are a handful of variants. Choosing a custom match is similar to the way custom matches in Smash games work. You can turn assist trophies on or off, set the frequency of items, and set the time limit or number of lives. It does not let you go over each individual item however.
Beyond the custom mode, there is a mode called Drones, where the game will throw a bunch of NPC enemies into the match. Instead of scoring you on stock or knockouts, it scores you on who defeated the most computer-controlled combatants. There is also a variant called PTE mode, where you collect energy orbs. Think of it like the coin mode in the Smash series. Finally, there’s the arcade mode. This plays like the arcade mode in Smash. The game puts you in a ladder against other combatants, and you’ll get a different ending for each character you beat the mode with. As far as the look and sound of the game go, the visuals are pretty nice, while the sound isn’t. All of the character models look pretty good considering Papaya’s probable budget constraints. Backgrounds aren’t very detailed. Muddy textures cover most of the background objects, and small details are lost in the shuffle. Although one has to be impressed with some of the destruction and transitions that go on in certain stages. Again, the finishing moves are actually pretty impressive. Especially if you’re a fan of some of these old shows. Audio is lackluster however. Aside from the voice samples, and the quality during the unlockable clips, there isn’t much to recommend. The music isn’t all that memorable, and none of the effects will really wow you. Despite all of the similarities with Nintendo’s games, it still doesn’t hold a candle to Super Smash Bros. That’s the biggest trouble with Cartoon Network Punch Time Explosion XL. The roster isn’t as large, and as great as many of these shows were, it simply isn’t as fun to pick up Johnny Bravo as it is to pick up Donkey Kong. What’s worse is that the roster you do get isn’t really all that balanced. There are a handful of characters you’ll stick with if you do decide to play this with friends even remotely regularly. While every fighting game ends up with one or two characters that have more versatility, the best fighters still make everyone viable.
This game really doesn’t. It was clearly made to be a Smash clone for people on a budget. Or at least for Cartoon Network fans who couldn’t get enough Smash-like experiences. Unfortunately, while it does succeed on those merits, it won’t succeed in keeping you away from Nintendo’s franchise for very long. The fact you can’t unlock everything on your own and bring it to a friend’s is disheartening too. Especially since, at least on the Wii, you can back up your save file to an SD card. Still, if you do like some of these classic cartoons, you might want to check the game out anyway. It is by no means a terrible game, and it is a fun ride as far as licensed games go. But you aren’t going to drop Super Smash Bros. for this. Nor are you going to fool yourself into thinking you’re playing Super Smash Bros. if you pick it up on the Xbox 360 or PlayStation 3. It’s average. But sometimes that’s enough. This year was packed with a large number of guests, activities, and panels. So many, in fact, that it was impossible to see everything between the variety and the overlap. Still, I just like to recap my convention experiences. I always have a lot of fun getting to go to panels, talking with other fans, and taking in a really great meal. Some of the highlights for me over the weekend began almost immediately upon arrival. One of the first events I attended was an Epic Rap Battles Of History event. Some of the most notable episodes were played on a screen. After each one of them, the hosts of the event and the fans in attendance debated which characters won. Historical accuracy, the number of good insults, and rhythmic flow were all factors in picking a winner. A large number of attendees loved the He-Man costume I roamed about the convention center in. I probably stopped every 15 minutes or so, so that someone could take a snapshot. It went over even better than my Dr. Insano cosplay from last year, and that had gone very well. But there were far more impressive costumes than mine.
One of the best moments was when Alan Oppenheimer’s booth assistant saw me coming down the aisle, then proceeded to put her head face down on her arm on top of her table and laugh. But both Mr. Oppenheimer and his assistant were very kind, hospitable, and friendly. Of course Masters Of The Universe was a huge part of my childhood. So meeting the guy who provided the voices of many of its most iconic characters, like Man-At-Arms and Skeletor, was a really awesome moment for me. I also got to see Alan Oppenheimer and Noah Hathaway talk about their time working together on The NeverEnding Story, and other projects, in a panel together. Like many of the various panels I attended, it was pretty informative. Noah talked in-depth about how the scene where Artax dies in a swamp was done, taking several shoots on a giant sound stage. The stage had a lowering platform for the horse to simulate sinking, and was covered in mud. The set designers also brought in the trees and other props for the scene. He also talked about leaving, and later returning to, acting, as well as the fun of nitpicking movies. Alan talked a lot about voice acting, and the importance of being able to visualize a voice for a character. He also talked about using traditional acting techniques in voice acting. Notably, how much of acting is actually listening to the other performers in any given scene. Like last year’s ConnectiCon, Doug Walker was in three panels. Doug is best known for his long-running Nostalgia Critic web show. The first panel was on Friday, and it focused on how to better debate movies with other people by listening. The setup was that each of the attendees in line would bring up a movie they loved that the internet at large seemed to hate, or vice versa, and explain why. The point of the exercise was to show how much you could learn about someone just by hearing why they did or didn’t like a movie.
It also made the argument that you can have a strong opinion about a film and still remember that that’s ultimately what it is: an opinion. Oftentimes we forget that when we talk about pop culture. We may have all of the evidence in the world that a movie is bad and justify our opinion, but someone else is going to like it anyway, and that doesn’t make them terrible for doing so. In fact, really listening to someone’s opposing point of view may bring out some interesting things you may not have considered. Doug was also part of a web series roundtable panel with Marble Hornets, internet comedian Uncle Yo, and Signal Crash. This Q&A session was geared more toward the production of content. Advice was given to the creative people in attendance: what kinds of techniques to use in any given craft, and what avenues to take in furthering a goal. But there was also the rather frank theme, not only from Doug Walker but from all of the members of the panel, of doing what one loves because one loves to do it, above all else. It was an encouraging panel that acknowledged challenges, and acknowledged that there will be rejections and failures, but it also left a theme of persistence and a sense of pride in whatever our passions are, whether we ever get to do them professionally or not. Of course, there was also the That Guy With The Glasses panel, in which Doug and Rob Walker fielded questions of all kinds. As in the roundtable, some of the questions were about production, promotion, and professionally furthering one’s creative output into a business. Others were about the content of the TGWTG flagship series. Then there were other moments that came out of left field. One fan brought in a script and wanted the Walkers to produce it. They couldn’t do that, but they did recite the first page in the voices of Chester A. Bum and Jeff Goldblum. At one point during the panel the Nostalgia Chick herself, Lindsay Ellis, showed up with the rest of Chez Apocalypse.
Posing as a con goer, Lindsay asked Doug when Nella (of Chez Apocalypse) would be getting top billing in lieu of the Nostalgia Chick. Fans cheered as Lindsay, Nella, and Elisa celebrated the run-in during their exit. Chez Apocalypse were also part of another panel with other internet media creators, including members of Steam Funk Studios and OverClocked ReMix. Similar to some of the other panels, it was a Q&A session filled with insight into the guests’ creative processes, how they keep things fresh, and how they handle criticism. There was also a lot of advice given to the audience at hand, the biggest piece being perseverance: being able to see where one began, and the level of improvement over time, as a driver to keep going. Actor Walter Jones was also at ConnectiCon. Most know him as the Black Ranger from the original Mighty Morphin Power Rangers show. He talked about his life growing up in Detroit, Michigan, his early days working as an entertainer on cruise ships, and of course his time on Power Rangers. He joked about how difficult the helmets were to see out of at times, and described some of the impressive stunts he did during shoots only to have parts lost during edits. Someone in attendance asked if he had seen himself as a role model for African-American children. He told the audience that he saw himself as a role model for all of the children watching the show, and that nobody in the cast risked doing anything to jeopardize that. When asked if he would ever return to Power Rangers, he said it would be an option provided it would be backed by the Writers Guild of America. The original show wasn’t, and that was the main reason he left after the Mighty Morphin era of the series ended. Another person asked if he still talks with the rest of the cast, and he replied that he does from time to time, when schedules line up. He added that he actually knows some of the cast members from other iterations of the series.
It was an intriguing panel even if you weren’t a Power Rangers fan. I also found my way into a Cosplay Court event during the convention. Hosted by Steam Funk, it played a lot like a small claims court show such as The People’s Court. The spin on it was that everyone in the room had to play in their cosplay character. Audience members were chosen for the character on trial, the prosecution, the defense, and even the witnesses. In one case I was called to the stand as He-Man and was cross-examined by a cosplayer playing The Mad Hatter from American McGee’s Alice. In another case a Mario cosplayer was on trial for the extermination of the Koopa race, as well as the Mushroom Kingdom’s citizens. Another case was against Frozen’s Elsa, and of course there were many Disney-themed cosplayers involved, including a pretty good Ursula of The Little Mermaid fame. Voice actors Maurice LaMarche and Rob Paulsen also had two events, and I managed to get into the second one. It was a Pinky & The Brain Q&A, and it was certainly one of the highlights of the convention for me. Nearly the entire session was done in character. Both actors talked about many of the shows they’ve done over the years, in addition to a lot of the cartoons that inspired them. There were some zany moments too. One member of the audience wanted Maurice to determine if a photo of his daughter looked more like him or his ex-wife. At another point, someone asked P&B which fan was the worst they had experienced. Maurice pointed into the front of the crowd, saying “That guy right there,” to which the crowd erupted in laughter as it was revealed to be Doug Walker. Doug pretended to fail at being inconspicuous while walking to an exit that turned out to be a hall filled with chairs, then sheepishly walked back to his chair. Later in the panel, the two actors actually included Doug in a list of some of the most pleasant entertainers they’ve known over the years.
A list that included names like Jon Lovitz and Steven Spielberg. I was also lucky enough to catch a voice actor roundtable near the end of the final day of the convention. Lauren Landa (Dead or Alive 5, Attack On Titan), Danielle McRae (League Of Legends, Skullgirls), Chris Cason (Dragon Ball Z), and Brittany Lauda (Prince Adventures) were on hand to make for a nice sendoff. All of the guests were laid back, very friendly, and funny. As with all of the previous panels, fans asked the panelists what some of their favorite works were, what some of their dream roles would be, and what voice acting entails. Speaking of interesting people, I do want to give a shout out to Jenisaur, a blogger who introduced herself to me at the convention. She writes over at http://www.sub-cultured.com/ about all kinds of things. Comics. Conventions. Novels. You name it. If it sounds interesting to you, check it out. There were a lot of other panels and events I missed that I would have loved to have seen, but you can only get out to so many over the three days. I would have loved to have made it out to the Jennifer Hale panel. She has done so many interesting video game and animated television roles over the years. I also missed seeing Ellen McLain, the voice of GLaDOS from the Portal series. Her husband, John Patrick Lowrie, was there with her, and he’s done voice work for Half-Life 2. Hearing a bit about voice work for Valve would have really been a blast for me, and sadly I had to miss them. TV’s Diedrich Bader was there too, and I also had to miss his panel. I did get to see him for a split second roaming the dealer’s room, and we shared a very brief “Hello.” I loved seeing him on The Drew Carey Show back in the day, and his role in Office Space was pretty great. Apparently he has done a myriad of cameos and voice work that I never knew about. Alas, another interesting panel I missed out on. Others I missed? TeamFourStar was there. There was a Cards Against Humanity panel.
There’s just so much to do, and so little time. But I suppose that’s a testament to just how much there is to do every year. Cosplay death matches, creative workshops, heavily discounted movies at the theatre across the street. Video game tournaments. Tabletop game tournaments. Japanese import rhythm arcade machines. Swag. And, obviously, the panels. It really is a great time, and I love it every time I attend. I can’t wait to see what next year brings. Plus there’s always City Steam Innocence IPA waiting for me a mere two blocks away.
Visualizing Decision Trees with Python (Scikit-learn, Graphviz, Matplotlib)

Learn how to visualize decision trees using Matplotlib and Graphviz. Image from my Understanding Decision Trees for Classification (Python) tutorial.

Decision trees are a popular supervised learning method for a variety of reasons. Benefits of decision trees include that they can be used for both regression and classification, they don’t require feature scaling, and they are relatively easy to interpret since you can visualize them. This is not only a powerful way to understand your model, but also to communicate how your model works. Consequently, it helps to know how to make a visualization based on your model. This tutorial covers:

How to Fit a Decision Tree Model using Scikit-Learn
How to Visualize Decision Trees using Matplotlib
How to Visualize Decision Trees using Graphviz (what Graphviz is, how to install it on Mac and Windows, and how to use it to visualize decision trees)
How to Visualize Individual Decision Trees from Bagged Trees or Random Forests

As always, the code used in this tutorial is available on my GitHub. With that, let’s get started!

How to Fit a Decision Tree Model using Scikit-Learn

In order to visualize decision trees, we first need to fit a decision tree model using scikit-learn. If this section is not clear, I encourage you to read my Understanding Decision Trees for Classification (Python) tutorial, as I go into a lot of detail there on how decision trees work and how to use them.

Import Libraries

The following import statements are what we will use for this section of the tutorial.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import pandas as pd
import numpy as np
from sklearn import tree

Load the Dataset

The Iris dataset is one of the datasets scikit-learn comes with that does not require downloading any file from an external website. The code below loads the iris dataset.

import pandas as pd
from sklearn.datasets import load_iris

data = load_iris()
df = pd.DataFrame(data.data, columns=data.feature_names)
df['target'] = data.target

Original Pandas df (features + target)

Splitting Data into Training and Test Sets

The code below puts 75% of the data into a training set and 25% of the data into a test set.

X_train, X_test, Y_train, Y_test = train_test_split(df[data.feature_names], df['target'], random_state=0)

The colors in the image indicate which variable (X_train, X_test, Y_train, Y_test) the data from the dataframe df went to for a particular train test split. Image by Michael Galarnyk.

Scikit-learn 4-Step Modeling Pattern

# Step 1: Import the model you want to use
# This was already imported earlier in the notebook so commenting out
#from sklearn.tree import DecisionTreeClassifier

# Step 2: Make an instance of the Model
clf = DecisionTreeClassifier(max_depth = 2, random_state = 0)

# Step 3: Train the model on the data
clf.fit(X_train, Y_train)

# Step 4: Predict labels of unseen (test) data
# Not doing this step in the tutorial
# clf.predict(X_test)

How to Visualize Decision Trees using Matplotlib

As of scikit-learn version 0.21 (released May 2019), decision trees can be plotted with Matplotlib using scikit-learn’s tree.plot_tree, without relying on the dot library, a hard-to-install dependency we will cover later in the blog post. The code below plots a decision tree using scikit-learn.
tree.plot_tree(clf);

This is not the most interpretable tree yet. In addition to adding the code that allows you to save your image, the code below tries to make the decision tree more interpretable by adding in feature and class names (as well as setting filled = True).

fn = ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
cn = ['setosa', 'versicolor', 'virginica']
fig, axes = plt.subplots(nrows = 1, ncols = 1, figsize = (4,4), dpi = 300)
tree.plot_tree(clf,
               feature_names = fn,
               class_names = cn,
               filled = True);
fig.savefig('imagename.png')

How to Visualize Decision Trees using Graphviz

Decision Tree produced through Graphviz. Note that I edited the file with a text editor to have text colors correspond to whether the nodes are leaf/terminal nodes or decision nodes.

Graphviz is open source graph visualization software. Graph visualization is a way of representing structural information as diagrams of abstract graphs and networks. In data science, one use of Graphviz is to visualize decision trees. I should note that the reason I am going over Graphviz after covering Matplotlib is that getting it to work can be difficult. The first part of the process involves creating a dot file, which is a Graphviz representation of a decision tree. The problem is that using Graphviz to convert the dot file into an image file (png, jpg, etc.) can be difficult. There are a couple of ways to do this, including: installing python-graphviz through Anaconda, installing Graphviz through Homebrew (Mac), installing Graphviz executables from the official site (Windows), and using an online converter on the contents of your dot file to convert it into an image. Creating the dot file is usually not a problem; converting the dot file to a png file can be.

Export your model to a dot file

The code below will work on any operating system, as Python generates the dot file and exports it as a file named tree.dot.
tree.export_graphviz(clf,
                     out_file = "tree.dot",
                     feature_names = fn,
                     class_names = cn,
                     filled = True)

Installing and Using Graphviz

Converting the dot file into an image file (png, jpg, etc.) typically requires the installation of Graphviz, which depends on your operating system and a host of other things. The goal of this section is to help people solve the common issue of getting the following error:

dot: command not found

How to Install and Use on Mac through Anaconda

To be able to install Graphviz on your Mac through this method, you first need to have Anaconda installed (if you don’t have Anaconda installed, you can learn how to install it here). Open a terminal. You can do this by clicking on the Spotlight magnifying glass at the top right of the screen, typing "terminal", and then clicking on the Terminal icon. Type the command below to install Graphviz.

conda install python-graphviz

After that, you should be able to use the dot command below to convert the dot file into a png file.

dot -Tpng tree.dot -o tree.png

How to Install and Use on Mac through Homebrew

If you don’t have Anaconda or just want another way of installing Graphviz on your Mac, you can use Homebrew. I previously wrote an article on how to install Homebrew and use it to convert a dot file into an image file here (see the Homebrew to Help Visualize Decision Trees section of that tutorial).

How to Install and Use on Windows through Anaconda

This is the method I prefer on Windows. To be able to install Graphviz on Windows through this method, you first need to have Anaconda installed (if you don’t have Anaconda installed, you can learn how to install it here). Open a terminal/command prompt and enter the command below to install Graphviz.

conda install python-graphviz

After that, you should be able to use the dot command below to convert the dot file into a png file.

dot -Tpng tree.dot -o tree.png

Windows installation of Graphviz through conda.
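If you would rather avoid the dot command line entirely, note that export_graphviz can also return the dot source as a Python string instead of writing a file. Below is a minimal, self-contained sketch of that approach; the optional rendering step at the end assumes the python-graphviz package is installed, and the variable names are my own.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_graphviz

# Refit the small tree from earlier in the tutorial so this snippet
# runs on its own.
data = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(data.data, data.target)

# With out_file=None, export_graphviz returns the dot source as a
# string rather than writing tree.dot to disk.
dot_source = export_graphviz(clf,
                             out_file=None,
                             feature_names=data.feature_names,
                             class_names=data.target_names,
                             filled=True)
print(dot_source)

# If the python-graphviz bindings (and the Graphviz binaries) are
# installed, the string can be rendered without invoking dot by hand:
# import graphviz
# graphviz.Source(dot_source).render("tree", format="png")
```

This keeps the whole pipeline in Python, which can be handy in notebooks where shelling out to dot is awkward.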
The conda installation should fix the ‘dot’ is not recognized as an internal or external command, operable program or batch file issue.

How to Install and Use on Windows through the Graphviz Executable

If you don’t have Anaconda or just want another way of installing Graphviz on Windows, you can use the following link to download and install it. If you aren’t familiar with altering the PATH variable and want to use dot on the command line, I encourage other approaches. There are many Stack Overflow questions based on this particular issue.

How to Use an Online Converter to Visualize your Decision Trees

If all else fails, or you simply don’t want to install anything, you can use an online converter. In the image below, I opened the file with Sublime Text (though there are many different programs that can open/read a dot file) and copied the contents of the file.

Copying the contents of a dot file

In the image below, I pasted the contents of the dot file into the left side of the online converter. You can then choose what format you want and save the image from the right side of the screen.

Save visualization to computer

Keep in mind that there are other online converters that can help accomplish the same task.

How to Visualize Individual Decision Trees from Bagged Trees or Random Forests

A weakness of decision trees is that they don’t tend to have the best predictive accuracy. This is partially because of high variance, meaning that different splits in the training data can lead to very different trees. The image above could be a diagram for Bagged Trees or Random Forests, which are ensemble methods. An ensemble method uses multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. In this case, many trees protect each other from their individual errors.
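To make the variance point concrete, the sketch below compares the test accuracy of a single unconstrained decision tree against a 100-tree random forest on the same breast cancer dataset used later in this tutorial. This is an illustrative comparison I added, not part of the original post; the variable names are mine, and exact scores may vary slightly between scikit-learn versions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A single deep tree: high variance, fits its training sample closely.
tree_clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
tree_acc = tree_clf.score(X_test, y_test)

# 100 trees grown on bootstrapped samples, with predictions averaged.
rf_clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
rf_acc = rf_clf.score(X_test, y_test)

print(f"single tree test accuracy:   {tree_acc:.3f}")
print(f"random forest test accuracy: {rf_acc:.3f}")
```

On this split the forest typically scores noticeably higher than the lone tree, which is the "trees protecting each other from their individual errors" effect in action.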
How exactly Bagged Trees and Random Forests models work is a subject for another blog, but what is important to note is that for both models we grow N trees, where N is the number of decision trees a user specifies. Consequently, after you fit a model, it is nice to be able to look at the individual decision trees that make it up.

Fit a Random Forest Model using Scikit-Learn

In order to visualize individual decision trees, we first need to fit a Bagged Trees or Random Forest model using scikit-learn (the code below fits a Random Forest model).

# Load the Breast Cancer (Diagnostic) Dataset
data = load_breast_cancer()
df = pd.DataFrame(data.data, columns=data.feature_names)
df['target'] = data.target

# Arrange Data into Features Matrix and Target Vector
X = df.loc[:, df.columns != 'target']
y = df.loc[:, 'target'].values

# Split the data into training and testing sets
X_train, X_test, Y_train, Y_test = train_test_split(X, y, random_state=0)

# Random Forests in `scikit-learn` (with N = 100)
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, Y_train)

Visualizing your Estimators

You can now view all the individual trees from the fitted model. In this section, I will visualize the decision trees using Matplotlib.

rf.estimators_

In this example, notice how we have 100 estimators. You can now visualize individual trees. The code below visualizes the first decision tree.

fn = data.feature_names
cn = data.target_names
fig, axes = plt.subplots(nrows = 1, ncols = 1, figsize = (4,4), dpi = 800)
tree.plot_tree(rf.estimators_[0],
               feature_names = fn,
               class_names = cn,
               filled = True);
fig.savefig('rf_individualtree.png')

Note that the individual trees in Random Forests and Bagged Trees are grown deep.

You can try to use Matplotlib subplots to visualize as many of the trees as you like. The code below visualizes the first 5 decision trees. I personally don’t prefer this method, as it is even harder to read.
# This may not be the best way to view each estimator as it is small
fn = data.feature_names
cn = data.target_names
fig, axes = plt.subplots(nrows = 1, ncols = 5, figsize = (10,2), dpi = 3000)
for index in range(0, 5):
    tree.plot_tree(rf.estimators_[index],
                   feature_names = fn,
                   class_names = cn,
                   filled = True,
                   ax = axes[index]);
    axes[index].set_title('Estimator: ' + str(index), fontsize = 11)
fig.savefig('rf_5trees.png')

Create Images for each of the Decision Trees (estimators)

Keep in mind that if for some reason you want images for all your estimators (decision trees), you can do so using the code on my GitHub. If you just want to see each of the 100 estimators for the Random Forest model fit in this tutorial without running the code, you can look at the video below.

Concluding Remarks

This tutorial covered how to visualize decision trees using Graphviz and Matplotlib. Note that the way to visualize decision trees using Matplotlib is a newer method, so it might change or be improved upon in the future. Graphviz is currently more flexible, as you can always modify your dot files to make them more visually appealing, like I did using the dot language, or even just alter the orientation of your decision tree. One thing we didn’t cover was how to use dtreeviz, another library that can visualize decision trees. There is an excellent post on it here. Image produced by the dtreeviz library. If you have any questions or thoughts on the tutorial, feel free to reach out in the comments below or through Twitter. If you want to learn more about how to use the Pandas, Matplotlib, or Seaborn libraries, please consider taking my Python for Data Visualization LinkedIn Learning course.
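One more option worth mentioning as a postscript: since the same scikit-learn release that added plot_tree (0.21), the tree module also includes export_text, which prints a plain-text rendering of a fitted tree and needs no plotting libraries or Graphviz at all. A minimal, self-contained sketch (refitting the small iris tree from earlier so it runs on its own):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit the same depth-2 tree used earlier in the tutorial.
data = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(data.data, data.target)

# export_text returns a string: each line is either a split condition
# or a leaf with its predicted class.
rules = export_text(clf, feature_names=list(data.feature_names))
print(rules)
```

This is handy for logging or quick inspection in a terminal where rendering an image is inconvenient.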
2009 ND 148 Sarmed A. Abdullah, M.D., Plaintiff and Appellant v. State of North Dakota, d/b/a University of North Dakota, and Dr. David J. Theige, individually, Defendants and Appellees No. 20080254. Supreme Court of North Dakota. Filed July 29, 2009. Paul Henry Myerchin (argued) and Clark Jay Bormann (appeared), P.O. Box 995, Bismarck, N.D. 58502-0995, for plaintiff and appellant. Tag Christian Anderson (argued), Special Assistant Attorney General, Risk Management Division, 1600 East Century Avenue, Suite 4, Bismarck, N.D. 58503, and Kirsten Renata Franzen (on brief), Assistant Attorney General, North Dakota Office of Attorney General, 500 North 9th Street, Bismarck, N.D. 58501, for defendants and appellees. Opinion of the Court by Maring, Justice. Maring, Justice. [¶ 1] Sarmed Abdullah, M.D., appeals from a summary judgment dismissing his action against the State of North Dakota, doing business as the University of North Dakota, and against Dr. David Theige, the director of the residency program at the University's School of Medicine and Health Sciences, stemming from Abdullah's dismissal from the internal medicine residency program at the University's School of Medicine for "incompetence in the area of [p]rofessionalism." Abdullah argues his dismissal from the residency program was arbitrary and capricious, and he asserts the district court erred in granting summary judgment because there are genuine issues of material fact on each of his claims. We affirm. I [¶ 2] Abdullah graduated from the Damascus University School of Medicine in Syria in 1999. In July 2001, he began a residency training program with the Medical College of Wisconsin, which included three rotations. As a result of evaluations in those rotations, the school offered him three options: (1) resign from the residency program; (2) accept probation and a remediation plan; or (3) take a leave of absence from the program and find another residency program. 
Abdullah decided to take a leave of absence and enrolled in a Post Graduate Year 1 internal medicine residency program at East Tennessee State University from August 2003 through September 2004. [¶ 3] On October 1, 2004, Abdullah began an internal medicine residency program at the University's School of Medicine for his Post Graduate Year 2. Abdullah's application to the University's residency program listed the Medical College of Wisconsin under "CONTINUING MEDICAL EDUCATION (CME) Courses in Internal Medicine," rather than under a "Residency" section. In April 2005, Abdullah executed a "resident contract" with the University for a training program in internal medicine at the Post Graduate Year 3 level, which ran from October 1, 2005, through September 30, 2006. The "residence contract" provided that "appropriate certification [would] be provided upon satisfactory completion of the education and training program," and "[u]nsatisfactory or persistently less than satisfactory resident evaluation can result in required remedial activities, temporary suspension from duties, or termination of employment and residency education." The contract also said the "resident [could] be terminated for unsatisfactory or persistently less than satisfactory performance of duties as determined by supervising faculty or for failure to progress in medical knowledge and skills." [¶ 4] In a June 28, 2006, letter to Abdullah, Theige, the director of the residency program, informed Abdullah that his "recent behavior and correspondence ha[d] made [Theige] very concerned about [Abdullah's] personal well-being and mental health," and Theige informed Abdullah that he had been placed on an "emergency leave of absence from the residency program, pending a psychiatric evaluation." Abdullah subsequently returned to the program on August 1, 2006, but his scheduled completion date for his residency training was extended to October 20, 2006. 
[¶ 5] In an October 12, 2006, letter to Abdullah, Theige informed Abdullah that he was temporarily suspended from the residency program, pending a psychiatric examination, for concerns about his "professional behavior." In an October 23, 2006, letter to Abdullah, Theige summarized Abdullah's status with the residency program, including professionalism concerns about Abdullah's failure to disclose his residency at the Medical College of Wisconsin and the circumstances of his departure from that program, Abdullah's conduct regarding authorship of a research manuscript with Dr. William Newman, and Abdullah's conduct regarding a home visit with a patient: On June 22, 2006, I received an email from you telling me that you did not intend to finish our program "if after July, 2006 my GI Fellowship contract in Mayo Clinic is not on the desk." I sent you a reply indicating my bewilderment, and asked to meet with you the next day. I later found out that you had just learned of your failure to match with a GI fellowship program. I am also aware that you were just finishing a rotation as the night float resident. The next morning, I came to work and discovered a handwritten note from you on my desk requesting my "testimony about [Abdullah] to the Chief Justice of the United States John Roberts in the US Supreme Court in Washington D.C. for the attached application. Your cooperation—not obstruction—of Justice will be appreciated." The note was attached to a typewritten 2-page "Personal Statement to the Supreme Court of the United States" which I found to be almost incoherent. At that point, I learned that you had left town that morning to begin a vacation. I met with you in my office on June 28, 2006. At that time, you appeared to be calm, coherent, and reasonable, but I placed you on an emergency medical leave from the program pending a psychiatric evaluation. 
In addition to the question of your mental health, I was also concerned about part of the content of your "Personal Statement to the Supreme Court." In that document, you mentioned that you had been a resident at the Medical College of Wisconsin in the summer of 2001. I was not previously aware of this. In your curriculum vitae included with your application to our program, you did list your preliminary residency in internal medicine at East Tennessee State University. An experience at the Medical College of Wisconsin was listed only in small print under the heading "Continuing Medical Education (CME) courses in Internal Medicine." We verified, with your cooperation, that you had been a resident at MCW, and that you resigned. You underwent an extensive psychiatric evaluation at the University of Pittsburgh in July 2006. I received a letter from your psychiatrist on July 18, 2006. The psychiatrist wrote that your psychiatric symptoms were contextual and related to sleep deprivation. He indicated that with treatment of the sleep disturbance, you could return to work after July 21, 2006. Our Resident Evaluation and Advancement Committee reviewed this matter on July 25, 2006. The committee recommended that you should be reinstated to the program after completing a meeting with the program director, but that concerns about your professionalism should be noted and reported in the future when requests for verification of training are received. I met with you on July 27, 2006. You were reinstated to the program as of August 1, 2006. Your anticipated completion date for the program was postponed to October 20, 2006 because of your recent medical leave. On September 28, 2006, Dr. William Newman sent me a letter expressing his concern about your professional behavior related to your joint research effort. Over the next several days, I received additional correspondence from other faculty and staff expressing concerns about your behavior. 
One of the concerns was that you initiated a home visit with a patient without appropriate attending physician supervision and that you contacted a physician at the Mayo Clinic on this patient's behalf, but that you did not appropriately identify yourself as a resident physician. Finally, on October 12, 2006, a staff member . . . reported that she was very frightened by your behavior and that she felt unsafe. I met with you later that morning and suspended you from the program pending a psychiatric evaluation. After I receive a report from your psychiatrist, this matter will be referred to the Resident Evaluation and Advancement Committee. One of the serious issues to be considered is the matter of my evaluation of your performance in each core competency, but especially in professionalism. In order to receive credit for the final year of training and successfully complete our program, a third-year resident must be given satisfactory ratings in each competency area. My rating of your performance in professionalism will be determined after appropriate review of the matters outlined above. [¶ 6] In a November 6, 2006, letter to Abdullah, Theige informed Abdullah that the University's Resident Evaluation and Advancement Committee had reviewed Abdullah's status and recommended dismissing Abdullah from the residency program. In that letter, Theige informed Abdullah of his dismissal from the residency program. [¶ 7] Abdullah appealed the dismissal to a Resident Fair Process and Grievance Hearing Panel, which resulted in an evidentiary hearing before a panel of five doctors. The Hearing Panel affirmed the decision to dismiss Abdullah from the residency program for "incompetence in the area of [p]rofessionalism," finding: l) The reference to three months in the Medical College of Wisconsin residency on the CV Dr. Abdullah submitted with his UND . . . 
application appears following a section on Continuing Medical Education credit and not in the section with his East Tennessee State University residency year. He denies this was an attempt to conceal this residency affiliation. On his Application for Residents for the VA, signed 4-26-04, he listed it clearly under previous residencies, however, he checked a "No" response to a question "Within the last five years have you resigned or retired from a position after being notified you would be disciplined or discharged, or after questions about your clinical competence were raised?" The letter from Dr. Olds [at the Medical College of Wisconsin] to Dr. Abdullah . . . clearly indicates that he was already on probation and that he was offered resignation as an alternative to accepting continuing probation and remediation. 2) The research manuscript in question describes both phase I and phase II projects. Dr. Newman was identified by all evidence and testimony as the mentor for the phase I, or initial, project, for which he was listed as the principle investigator in the submission to the Institutional Review Board. In testimony, Dr. Abdullah described the addition of the phase II component with Dr. Santoro as mentor. He submitted the manuscript to the Mayo Clinic Proceedings without either mentor listed as co-author, instead providing an acknowledgement for each. Dr. Stephanie Borchardt, research coordinator at the VA, had suggested the acknowledgement for Dr. Newman, as a minimum, after judging it to be unprofessional not to include Dr. Newman as an author. Dr. Newman was not presented a draft or any other copy of the manuscript before its submission and finally obtained an earlier copy, not the version submitted, by petitioning for it under the Freedom of Information Act to Dr. Borchardt. Dr. Abdullah did not present his findings or final write-up at a residency Research Committee meeting as required by the Program's Resident Research Requirement. 
3) The visit to a former patient's home was conducted for reasons that are inconsistently explained by [Abdullah], both from his documentation at the time of the event and from his testimony at this hearing. He described it as having occurred both as a medical or community outreach activity and as an effort to obtain consent from the patient to use his records as the basis for a case report. Such a home visit, for any reason, by a resident in the program has never been done and is not a part of the training experience, nor was this visit approved or supervised by anyone in the program. The special license granted to residents by the State of North Dakota does not allow for any professional activities outside the scope of resident duties or supervision. [Abdullah] also moved to take over the care of this patient by scheduling him for an office visit without conferring first with either the patient's current primary care physician or discussing it with a clinic supervisor or administrator. He also failed to properly identify himself as a resident when he contacted a physician at Mayo Clinic seeking information pertaining to this patient. . . . . 1) The Hearing Panel concluded that Dr. Abdullah deliberately misrepresented his academic and employment history to avoid revealing the circumstances of his having left the Medical College of Wisconsin Internal Medicine Residency under disciplinary proceedings, and that this constitutes a substantial act of unethical and unprofessional conduct. Although the location of the reference to the Medical College of Wisconsin residency on the CV Dr. Abdullah submitted with his UND . . . application, combined with his testimony, leave it unclear whether this was a knowing and deliberate misrepresentation, the response on his VA application to the question regarding resignation under disciplinary conditions or questions of competence provides clear evidence of his intent to hide this fact. 2) The Panel concluded that Dr. 
Abdullah was unethical and unprofessional in his attempt to bar or remove a principal investigator, Dr. Newman, from work and publication over which the investigator rightfully had jurisdiction, and in deliberately submitting for publication a manuscript in violation of the program's requirements concerning resident research. 3) The Panel concluded that Dr. Abdullah's conduct concerning a home visit to a former patient and conduct concerning the patient involved in that visit was unethical and unprofessional. The visit itself was unprecedented by a resident in this program, unapproved, unsupervised and outside the scope of his resident duties and resident licensure. Seeking a patient's consent for publication by visiting him in his home is a highly irregular and troublesome approach, showing disregard for the ethical implications of the means by which to obtain consent from patients. During and following that visit he attempted to take over care of the patient, inappropriately intervened by trying to gather medical information on a patient for whom someone else had primary clinical responsibility, and failed to identify himself properly to a Mayo Clinic physician. Abdullah appealed the Hearing Panel's decision to the Dean of the University's School of Medicine, who upheld the Hearing Panel's decision to dismiss him from the residency program. [¶ 8] Abdullah then sued the University and Theige individually in district court, alleging: (1) the University breached its residency contract with Abdullah; (2) Theige intentionally interfered with Abdullah's prospective business opportunity; (3) the University and Theige arbitrarily, capriciously, and wrongfully dismissed Abdullah from the residency program, which violated his substantive due process rights under 42 U.S.C. § 1983; and (4) the University arbitrarily and capriciously dismissed Abdullah from the residency program in violation of Title I of the Americans with Disabilities Act of 1990, ["ADA"] 42 U.S.C. 
§ 12102 et seq., and the North Dakota Human Rights Act, N.D.C.C. ch. 14-02.4. Abdullah sought damages and certification of successful completion of his third year of the residency program. The district court granted summary judgment for the University and Theige. II [¶ 9] The district court decided this case in the posture of summary judgment, which is a procedure for promptly resolving a controversy on the merits without a trial if either party is entitled to judgment as a matter of law, and if no dispute exists as to either the material facts or the inferences to be drawn from undisputed facts, or if resolving disputed facts would not alter the result. ACUITY v. Burd & Smith Constr., Inc., 2006 ND 187, ¶ 6, 721 N.W.2d 33. A district court's decision on a motion for summary judgment is a question of law that we review de novo on the record. Riemers v. Grand Forks Herald, 2004 ND 192, ¶ 4, 688 N.W.2d 167. The party moving for summary judgment must show there are no genuine issues of material fact and the case is appropriate for judgment as a matter of law. Green v. Mid Dakota Clinic, 2004 ND 12, ¶ 5, 673 N.W.2d 257. A party resisting a motion for summary judgment cannot merely rely on the pleadings or other unsupported conclusory allegations, but must present competent admissible evidence by affidavit or other comparable means which raises an issue of material fact. Beckler v. Bismarck Pub. Sch. Dist., 2006 ND 58, ¶ 7, 711 N.W.2d 172. "In summary judgment proceedings, neither the trial court nor the appellate court has any obligation, duty, or responsibility to search the record for evidence opposing the motion for summary judgment." Fish v. Dockter, 2003 ND 185, ¶ 15, 671 N.W.2d 819 (quoting Anderson v. Meyer Broadcasting Co., 2001 ND 125, ¶ 14, 630 N.W.2d 46). 
"The opposing party must also explain the connection between the factual assertions and the legal theories in the case, and cannot leave to the court the chore of divining what facts are relevant or why facts are relevant, let alone material, to the claim for relief." Fish, at ¶ 15 (quoting Anderson, at ¶ 14). III [¶ 10] Abdullah argues his dismissal from the residency program was arbitrary and capricious and the district court erred in granting summary judgment because there are genuine issues of material fact on each of his claims. He argues the professionalism issue was a pretext for mental health problems and his dismissal was arbitrary and capricious. He claims he did not misrepresent his reasons for leaving the Medical College of Wisconsin, he did not violate the University's standards for authorship for a research article, and he did not provide medical services to a patient in a home visit. A [¶ 11] Abdullah argues there are disputed issues of material fact about whether the University breached its contractual obligation to certify his graduation from the residency program. He claims he was dismissed for a disciplinary matter and not for academic reasons and, even if his dismissal was for academic reasons, the University did not act in good faith and its reasons for dismissal were arbitrary and capricious. [¶ 12] In rejecting Abdullah's breach of contract claim, the district court said there was no provision in the residency contract that required the University to graduate a resident and the contract gave the University vast discretion for academic decisions. The court said a reasonable person could find Abdullah engaged in unprofessional conduct during the residency program, which meant he failed to satisfactorily perform in the core competency area of professionalism. [¶ 13] Our analysis of Abdullah's breach of contract claim requires consideration of the scope of judicial analysis of the decision to dismiss Abdullah from the residency program. 
We have said the prima facie elements of a breach of contract action are the existence of a contract, a breach of the contract, and damages flowing from the breach of contract. Van Sickle v. Hallmark & Associates, 2008 ND 12, ¶ 11, 744 N.W.2d 532. A breach of contract occurs when there is nonperformance of a contractual duty. Id. Whether a contract has been substantially performed and whether a party has breached a contract generally are questions of fact. Wachter v. Gratech Co. Ltd., 2000 ND 62, ¶ 17, 608 N.W.2d 279. [¶ 14] In Thompson v. Associated Potato Growers, Inc., 2000 ND 95, ¶ 20, 610 N.W.2d 53 (quoting Cotran v. Rollins Hudig Hall Int'l, Inc., 948 P.2d 412, 422 (Cal. 1998)), we held a private employer's decision to terminate an employee for cause was subject to judicial analysis under an objective good-faith standard, in which: an employer is justified in terminating an employee for good cause for "fair and honest reasons, regulated by good faith on the part of the employer, that are not trivial, arbitrary or capricious, unrelated to business needs or goals, or pretextual. A reasoned conclusion, in short, supported by substantial evidence gathered through an adequate investigation that includes notice of the claimed misconduct and a chance for the employee to respond." [¶ 15] In Peterson v. North Dakota Univ. Sys., 2004 ND 82, ¶¶ 11-18, 678 N.W.2d 163, we considered the standard for judicial analysis of a tenured university instructor's breach of contract action against the State. We concluded that in a breach of contract action involving the Board of Higher Education's dismissal of a contract employee, judicial analysis of the substantive decision to terminate the employee was limited to deciding whether a reasoning mind could have reached the same conclusion on the evidence presented. Id. at ¶ 18. See also Ellis v. North Dakota State Univ., 2009 ND 59, ¶ 42, 764 N.W.2d 192 (applying Peterson to termination action brought under Human Rights Act). 
In Peterson, at ¶ 24, we affirmed a summary judgment dismissal of the tenured university instructor's breach of contract action: Viewing the facts and reasonable inferences in a light most favorable to Peterson, we conclude she has not raised a genuine or material issue of fact showing a reasoning mind could not have concluded there was adequate cause to dismiss her. Rather, the record reflects that different committees, boards, or persons placed different weight on the evidence presented. Peterson contracted for the procedures afforded to her. A breach of Peterson's employment contract does not occur merely because she disagrees with the substantive result of those procedures. The mere fact that different opinions could be reached based on the facts is not sufficient to establish the Board breached her employment contract. There was sufficient evidence in the record for a reasoning mind to conclude clear and convincing evidence existed to dismiss Peterson for cause. Accordingly, we affirm the summary judgment dismissing Peterson's breach of contract claim. [¶ 16] A common thread in Thompson and Peterson is that, in a breach of contract action, we afford a high degree of deference to an employer's decision to terminate an employee's employment for cause. Here, Abdullah was dismissed from the residency training program at a public educational institution for proffered reasons involving professionalism and academic performance. "Courts are particularly ill-equipped to evaluate academic performance." Board of Curators of Univ. of Missouri v. Horowitz, 435 U.S. 78, 92 (1978). "Academic evaluations of a student, in contrast to disciplinary determinations, bear little resemblance to the judicial and administrative factfinding proceedings . . . which . . . traditionally attached a full-hearing requirement." Id. at 89. 
"[T]he determination whether to dismiss a student for academic reasons requires an expert evaluation of cumulative information and is not readily adapted to the procedural tools of judicial or administrative decisionmaking." Id. at 90. [¶ 17] In Regents of University of Michigan v. Ewing, 474 U.S. 214, 225-27 (1985) (citations and footnotes omitted), the United States Supreme Court discussed a court's "narrow avenue for judicial review" of an academic decision to dismiss a student from a medical school program in the context of a substantive due process claim: When judges are asked to review the substance of a genuinely academic decision, such as this one, they should show great respect for the faculty's professional judgment. Plainly, they may not override it unless it is such a substantial departure from accepted academic norms as to demonstrate that the person or committee responsible did not actually exercise professional judgment. . . . . Considerations of profound importance counsel restrained judicial review of the substance of academic decisions. As JUSTICE WHITE has explained: "Although the Court regularly proceeds on the assumption that the Due Process Clause has more than a procedural dimension, we must always bear in mind that the substantive content of the Clause is suggested neither by its language nor by preconstitutional history; that content is nothing more than the accumulated product of judicial interpretation of the Fifth and Fourteenth Amendments. This is . . . only to underline Mr. Justice Black's constant reminder to his colleagues that the Court has no license to invalidate legislation which it thinks merely arbitrary or unreasonable." Added to our concern for lack of standards is a reluctance to trench on the prerogatives of state and local educational institutions and our responsibility to safeguard their academic freedom, "a special concern of the First Amendment." 
If a "federal court is not the appropriate forum in which to review the multitude of personnel decisions that are made daily by public agencies," far less is it suited to evaluate the substance of the multitude of academic decisions that are made daily by faculty members of public educational institutions-decisions that require "an expert evaluation of cumulative information and [are] not readily adapted to the procedural tools of judicial or administrative decisionmaking." [¶ 18] Other courts have held that an academic decision to dismiss a resident from a residency program is entitled to deference. See Bell v. Ohio State University, 351 F.3d 240, 249-52 (6th Cir. 2003) (stating no basis for finding medical student's interest in continuing medical education was protected by substantive due process and court's review of academic decision must show great respect for faculty's professional judgment); Gupta v. New Britain Gen. Hosp., 687 A.2d 111, 117-22 (Conn. 1996) (residency agreement between physician and hospital created educational, rather than employment, relationship and decision to dismiss resident from program for poor clinical performance was academic decision entitled to deference). [¶ 19] We conclude the deferential standard from Peterson is applicable to the decision to dismiss Abdullah from the residency program. The decision to dismiss Abdullah was made after he was afforded procedural safeguards, and the record of the proceedings before the Resident Fair Process and Grievance Hearing Panel includes evidence that the substantive decision to dismiss him from the residency program was not a substantial departure from accepted academic norms. Although Abdullah claims the dismissal was arbitrary and capricious and not in good faith, there is sufficient evidence in the record for a reasoning mind to conclude Abdullah was dismissed for incompetence in the area of professionalism. 
A determination of qualifications and educational experience to practice medicine involves expert evaluation of cumulative information. See Horowitz, 435 U.S. at 90. See also Singha v. North Dakota State Bd. of Med. Exam'rs, 1998 ND 42, ¶ 32, 574 N.W.2d 838. The district court applied deference to the substantive decision to dismiss Abdullah, concluding the dismissal was an academic decision based on professionalism, and decided a reasonable person could find Abdullah engaged in unprofessional conduct during the residency program. See Peterson, 2004 ND 82, ¶ 24, 678 N.W.2d 163. Although Abdullah claims he was dismissed from the residency program for disciplinary reasons and not academic reasons, in the context of the deference accorded the educational institution's decision and the evidence presented at the proceedings before the Hearing Panel, we conclude the district court did not err in granting summary judgment on Abdullah's breach of contract claim. B [¶ 20] Abdullah claims he had a prospective employment contract with another hospital after graduation, and the district court erred in granting Theige summary judgment in his individual capacity on Abdullah's claim for intentional interference with a business opportunity. Abdullah argues Theige acted recklessly and willfully, which coupled with the slanderous per se nature of Theige's allegations, precludes summary judgment. [¶ 21] In rejecting Abdullah's claim for tortious interference with a business relationship, the district court decided Abdullah failed to establish a predicate independent tort of slander necessary for that claim. [¶ 22] In Trade'N Post, L.L.C. v. World Duty Free Americas, Inc., 2001 ND 116, ¶ 35, 628 N.W.2d 707, we recognized a common law action for unlawful interference with a business relationship. 
We held a plaintiff must prove the following elements to prevail in a claim for unlawful interference with a business relationship: (1) the existence of a valid business relationship or expectancy; (2) knowledge by the interferer of the relationship or expectancy; (3) an independently tortious or otherwise unlawful act of interference by the interferer; (4) proof that the interference caused the harm sustained; and (5) actual damages to the party whose relationship or expectancy was disrupted. Id. at ¶ 36. [¶ 23] Although Abdullah's complaint does not specifically identify an independent tort to support his claim for unlawful interference with a business opportunity, he asserts statements by Theige were slanderous per se. However, he has not specified which statements by Theige were slanderous per se. Abdullah's amended complaint alleges "Theige acted recklessly or grossly negligently, with malfeasance, willfully and wantonly" in interfering with Abdullah's employment opportunity with another hospital after his scheduled graduation and Theige's actions included "a wrongful suspension from the program only eight (8) days prior to completion, issuance of an informal dismissal from the program in a letter dated November 6, 2006, asserted [Abdullah] failed to disclose authorship, which was erroneous, and failure to disclose in a timely fashion the evidence relied upon for the administrative hearing." In the district court, Abdullah argued Theige told the Hearing Panel that Abdullah was dishonest. However, Abdullah has not marshaled any other specific facts or legal authority to support the existence of an independent tort. [¶ 24] Under N.D.C.C. § 32-12.2-02(3)(b) and (d), a state employee may not be held liable for claims based upon a discretionary function, regardless of whether the discretion is abused, and a state employee may not be held liable for a decision resulting from a quasi-judicial act. 
A state employee may not be held liable in the employee's individual capacity for acts occurring within the scope of the employee's employment. N.D.C.C. § 32-12.2-03(3). See Nelson v. Gillette, 1997 ND 205, ¶¶ 13-20, 571 N.W.2d 332 (discussing scope of employment in context of action against political subdivision and social worker). Here, Abdullah does not dispute that Theige was acting within the scope of his employment as the director of the residency program at the UND School of Medicine, and we conclude Theige is immune from liability in his individual capacity under N.D.C.C. §§ 32-12.2-02(3)(b) and (d) and 32-12.2-03(3). See Lawrence v. Roberdeau, 2003 ND 124, ¶¶ 13-14, 665 N.W.2d 719 (testimony at judicial hearing governed by witness immunity). We conclude the district court did not err in granting summary judgment on Abdullah's claim for interference with a business opportunity. C [¶ 25] Abdullah argues the district court erred in granting summary judgment on his substantive due process claim under 42 U.S.C. § 1983. He asserts there are genuine issues of material fact about whether the actions by the University and Theige were unreasonable or arbitrary. The University and Theige argue Abdullah failed to demonstrate a violation of a clearly established law, because the right to attend a public school is not a fundamental right for purposes of substantive due process. [¶ 26] In rejecting Abdullah's substantive due process claim, the district court said Abdullah had no substantive right to continued education. The court explained that even if Abdullah had a substantive due process right to continuing education in the residency program, he failed to establish he was arbitrarily and capriciously dismissed from the program and a reasonable person could conclude he was properly dismissed from the program because of professionalism concerns. [¶ 27] In Washington v. Glucksberg, 521 U.S. 
702, 719-21 (1997) (citations omitted), the United States Supreme Court explained parameters for the analysis of a substantive due process claim in the context of rejecting a substantive due process right to assisted suicide: The Due Process Clause guarantees more than fair process, and the "liberty" it protects includes more than the absence of physical restraint. The Clause also provides heightened protection against government interference with certain fundamental rights and liberty interests. In a long line of cases, we have held that, in addition to the specific freedoms protected by the Bill of Rights, the "liberty" specially protected by the Due Process Clause includes the rights to marry, to have children, to direct the education and upbringing of one's children, to marital privacy, to use contraception, to bodily integrity, and to abortion. We have also assumed, and strongly suggested, that the Due Process Clause protects the traditional right to refuse unwanted lifesaving medical treatment. But we "ha[ve] always been reluctant to expand the concept of substantive due process because guideposts for responsible decisionmaking in this unchartered area are scarce and open-ended." By extending constitutional protection to an asserted right or liberty interest, we, to a great extent, place the matter outside the arena of public debate and legislative action. We must therefore "exercise the utmost care whenever we are asked to break new ground in this field," lest the liberty protected by the Due Process Clause be subtly transformed into the policy preferences of the Members of this Court. 
Our established method of substantive-due-process analysis has two primary features: First, we have regularly observed that the Due Process Clause specially protects those fundamental rights and liberties which are, objectively, "deeply rooted in this Nation's history and tradition," and "implicit in the concept of ordered liberty," such that "neither liberty nor justice would exist if they were sacrificed." Second, we have required in substantive-due-process cases a "careful description" of the asserted fundamental liberty interest. Our Nation's history, legal traditions, and practices thus provide the crucial "guideposts for responsible decisionmaking," that direct and restrain our exposition of the Due Process Clause. As we stated recently . . . the Fourteenth Amendment "forbids the government to infringe . . . `fundamental' liberty interests at all, no matter what process is provided, unless the infringement is narrowly tailored to serve a compelling state interest." [¶ 28] In two cases, the United States Supreme Court has assumed the existence of a substantive due process right in the context of academic dismissals from a state educational institution, but the Court has held that even if students' assumed property interest gave rise to a substantive due process right, the dismissals were not arbitrary and capricious. See Ewing, 474 U.S. at 222-23; Horowitz, 435 U.S. at 91-92. In Bell, however, the Sixth Circuit Court of Appeals rejected a claim that substantive due process protects a medical student's interest in continuing education. 351 F.3d at 251. After stating that the interests protected by substantive due process are much narrower than those protected by procedural due process, the court explained: Where . . . 
there is no equal protection violation, we can see no basis for finding that a medical student's interest in continuing her medical school education is protected by substantive due process (stressing, in the public university context, the similarity of equal protection and substantive due process). Certainly the contention that the medical college's actions were arbitrary or capricious cannot be sufficient; otherwise judicial review for compliance with substantive due process would become the equivalent of a typical state or federal Administrative Procedure Act. 351 F.3d at 251 (footnote and citations omitted). [¶ 29] We agree with the rationale of Bell and conclude Abdullah has not demonstrated that he has a substantive due process right to graduate from a public medical school. See C.B. v. Driscoll, 82 F.3d 383, 387-88 (11th Cir. 1996) (student's suspension for fighting did not violate substantive due process; right to attend public schools is a state created right rather than a fundamental right for purposes of substantive due process). We conclude the district court did not err in granting summary judgment on Abdullah's substantive due process claim. D [¶ 30] Abdullah argues the district court erred in granting summary judgment on his claim for a violation of the ADA. The University and Theige respond that sovereign immunity bars Abdullah's claim for money damages under Title I of the ADA and that Abdullah failed to address the legal elements for a disability claim. [¶ 31] In rejecting Abdullah's claim for violation of the ADA, the district court said Abdullah had failed to present any facts to show that the University regarded his bouts with sleep deprivation as a disability and that the University dismissed Abdullah from the residency program because of that perceived disability. 
Rather, the district court concluded the evidence was clear the University dismissed Abdullah for professionalism concerns, which the court said a reasoning mind could find reasonable. [¶ 32] In the district court and this Court, Abdullah essentially concedes Board of Trustees of Univ. of Alabama v. Garrett, 531 U.S. 356 (2001), supports the University's position, but, without citing any other authority to support his ADA claim, Abdullah asserts he could amend his complaint to correct the "technicality." Abdullah has not moved to amend his complaint. Moreover, "[i]n summary judgment proceedings, neither the trial court nor the appellate court has any obligation, duty, or responsibility to search the record for evidence opposing the motion for summary judgment." Fish, 2003 ND 185, ¶ 15, 671 N.W.2d 819 (quoting Anderson, 2001 ND 125, ¶ 14, 630 N.W.2d 46). "The opposing party must also explain the connection between the factual assertions and the legal theories in the case, and cannot leave to the court the chore of divining what facts are relevant or why facts are relevant, let alone material, to the claim for relief." Fish, at ¶ 15 (quoting Anderson, at ¶ 14). [¶ 33] Abdullah has not identified facts to support an ADA claim or to identify a disability under the relevant statutes, or how those factual assertions may be relevant to his legal theory. We agree with the district court that Abdullah has failed to provide any facts to show that the University regarded his bout with sleep deprivation as a disability and that the University dismissed Abdullah from the residency program because of that perceived disability. The evidence before the Hearing Panel establishes that the decision to dismiss Abdullah was not based on his perceived mental health, but was based on his lack of professionalism. On this record we conclude the district court did not err in granting summary judgment on Abdullah's ADA claim. IV [¶ 34] We affirm the summary judgment. 
[¶ 35] Mary Muehlen Maring
Carol Ronning Kapsner
Ronald E. Goodman, S.J.
Kirk Smith, S.J.
Gerald W. VandeWalle, C.J.

[¶ 36] The Honorable Ronald E. Goodman, S.J., and the Honorable Kirk Smith, S.J., sitting in place of Sandstrom, J., and Crothers, J., disqualified.
/*
   Unix SMB/CIFS implementation.
   Inter-process communication and named pipe handling
   Copyright (C) Andrew Tridgell 1992-1998

   SMB Version handling
   Copyright (C) John H Terpstra 1995-1998

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 2 of the License, or
   (at your option) any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/*
   This file handles the named pipe and mailslot calls
   in the SMBtrans protocol.
*/

#include "includes.h"

extern int max_send;

#define NERR_notsupported 50

extern int smb_read_error;

/*******************************************************************
 Copies parameters and data, as needed, into the smb buffer.

 *Both* the data and params sections should be aligned. This is
 fudged in the rpc pipes by at present; only the data section is.
 This may be a possible cause of some of the ipc problems being
 experienced. lkcl26dec97
*******************************************************************/

static void copy_trans_params_and_data(char *outbuf, int align,
				char *rparam, int param_offset, int param_len,
				char *rdata, int data_offset, int data_len)
{
	char *copy_into = smb_buf(outbuf)+1;

	if (param_len < 0)
		param_len = 0;

	if (data_len < 0)
		data_len = 0;

	DEBUG(5,("copy_trans_params_and_data: params[%d..%d] data[%d..%d]\n",
			param_offset, param_offset + param_len,
			data_offset, data_offset + data_len));

	if (param_len)
		memcpy(copy_into, &rparam[param_offset], param_len);

	copy_into += param_len + align;

	if (data_len)
		memcpy(copy_into, &rdata[data_offset], data_len);
}

/****************************************************************************
 Send a trans reply.
****************************************************************************/

void send_trans_reply(char *outbuf,
				char *rparam, int rparam_len,
				char *rdata, int rdata_len,
				BOOL buffer_too_large)
{
	int this_ldata, this_lparam;
	int tot_data_sent = 0;
	int tot_param_sent = 0;
	int align;

	int ldata = rdata ? rdata_len : 0;
	int lparam = rparam ? rparam_len : 0;

	if (buffer_too_large)
		DEBUG(5,("send_trans_reply: buffer %d too large\n", ldata ));

	this_lparam = MIN(lparam,max_send - 500); /* hack */
	this_ldata  = MIN(ldata,max_send - (500+this_lparam));

	align = ((this_lparam)%4);

	if (buffer_too_large) {
		ERROR_BOTH(STATUS_BUFFER_OVERFLOW,ERRDOS,ERRmoredata);
	}

	set_message(outbuf,10,1+align+this_ldata+this_lparam,True);

	copy_trans_params_and_data(outbuf, align,
				rparam, tot_param_sent, this_lparam,
				rdata, tot_data_sent, this_ldata);

	SSVAL(outbuf,smb_vwv0,lparam);
	SSVAL(outbuf,smb_vwv1,ldata);
	SSVAL(outbuf,smb_vwv3,this_lparam);
	SSVAL(outbuf,smb_vwv4,smb_offset(smb_buf(outbuf)+1,outbuf));
	SSVAL(outbuf,smb_vwv5,0);
	SSVAL(outbuf,smb_vwv6,this_ldata);
	SSVAL(outbuf,smb_vwv7,smb_offset(smb_buf(outbuf)+1+this_lparam+align,outbuf));
	SSVAL(outbuf,smb_vwv8,0);
	SSVAL(outbuf,smb_vwv9,0);

	show_msg(outbuf);
	if (!send_smb(smbd_server_fd(),outbuf))
		exit_server_cleanly("send_trans_reply: send_smb failed.");

	tot_data_sent = this_ldata;
	tot_param_sent = this_lparam;

	while (tot_data_sent < ldata || tot_param_sent < lparam)
	{
		this_lparam = MIN(lparam-tot_param_sent, max_send - 500); /* hack */
		this_ldata  = MIN(ldata -tot_data_sent, max_send - (500+this_lparam));

		if (this_lparam < 0)
			this_lparam = 0;

		if (this_ldata < 0)
			this_ldata = 0;

		align = (this_lparam%4);

		set_message(outbuf,10,1+this_ldata+this_lparam+align,False);

		copy_trans_params_and_data(outbuf, align,
					rparam, tot_param_sent, this_lparam,
					rdata, tot_data_sent, this_ldata);

		SSVAL(outbuf,smb_vwv3,this_lparam);
		SSVAL(outbuf,smb_vwv4,smb_offset(smb_buf(outbuf)+1,outbuf));
		SSVAL(outbuf,smb_vwv5,tot_param_sent);
		SSVAL(outbuf,smb_vwv6,this_ldata);
		SSVAL(outbuf,smb_vwv7,smb_offset(smb_buf(outbuf)+1+this_lparam+align,outbuf));
		SSVAL(outbuf,smb_vwv8,tot_data_sent);
		SSVAL(outbuf,smb_vwv9,0);

		show_msg(outbuf);
		if (!send_smb(smbd_server_fd(),outbuf))
			exit_server_cleanly("send_trans_reply: send_smb failed.");

		tot_data_sent += this_ldata;
		tot_param_sent += this_lparam;
	}
}
/****************************************************************************
 Start the first part of an RPC reply which began with an SMBtrans request.
****************************************************************************/

static BOOL api_rpc_trans_reply(char *outbuf, smb_np_struct *p)
{
	BOOL is_data_outstanding;
	char *rdata = (char *)SMB_MALLOC(p->max_trans_reply);
	int data_len;

	if(rdata == NULL) {
		DEBUG(0,("api_rpc_trans_reply: malloc fail.\n"));
		return False;
	}

	if((data_len = read_from_pipe( p, rdata, p->max_trans_reply,
					&is_data_outstanding)) < 0) {
		SAFE_FREE(rdata);
		return False;
	}

	send_trans_reply(outbuf, NULL, 0, rdata, data_len, is_data_outstanding);

	SAFE_FREE(rdata);
	return True;
}

/****************************************************************************
 WaitNamedPipeHandleState
****************************************************************************/

static BOOL api_WNPHS(char *outbuf, smb_np_struct *p, char *param, int param_len)
{
	uint16 priority;

	if (!param || param_len < 2)
		return False;

	priority = SVAL(param,0);
	DEBUG(4,("WaitNamedPipeHandleState priority %x\n", priority));

	if (wait_rpc_pipe_hnd_state(p, priority)) {
		/* now send the reply */
		send_trans_reply(outbuf, NULL, 0, NULL, 0, False);
		return True;
	}
	return False;
}

/****************************************************************************
 SetNamedPipeHandleState
****************************************************************************/

static BOOL api_SNPHS(char *outbuf, smb_np_struct *p, char *param, int param_len)
{
	uint16 id;

	if (!param || param_len < 2)
		return False;

	id = SVAL(param,0);
	DEBUG(4,("SetNamedPipeHandleState to code %x\n", id));

	if (set_rpc_pipe_hnd_state(p, id)) {
		/* now send the reply */
		send_trans_reply(outbuf, NULL, 0, NULL, 0, False);
		return True;
	}
	return False;
}

/****************************************************************************
 When no reply is generated, indicate unsupported.
****************************************************************************/

static BOOL api_no_reply(char *outbuf, int max_rdata_len)
{
	char rparam[4];

	/* unsupported */
	SSVAL(rparam,0,NERR_notsupported);
	SSVAL(rparam,2,0); /* converter word */

	DEBUG(3,("Unsupported API fd command\n"));

	/* now send the reply */
	send_trans_reply(outbuf, rparam, 4, NULL, 0, False);

	/* -1 tells the caller the reply has already been sent */
	return -1;
}

/****************************************************************************
 Handle remote api calls delivered to a named pipe already opened.
****************************************************************************/

static int api_fd_reply(connection_struct *conn,uint16 vuid,char *outbuf,
			uint16 *setup,char *data,char *params,
			int suwcnt,int tdscnt,int tpscnt,int mdrcnt,int mprcnt)
{
	BOOL reply = False;
	smb_np_struct *p = NULL;
	int pnum;
	int subcommand;

	DEBUG(5,("api_fd_reply\n"));

	/* First find out the name of this file. */
	if (suwcnt != 2) {
		DEBUG(0,("Unexpected named pipe transaction.\n"));
		return ERROR_NT(NT_STATUS_INVALID_PARAMETER);
	}

	/* Get the file handle and hence the file name. */
	/*
	 * NB. The setup array has already been transformed
	 * via SVAL and so is in host byte order.
	 */
	pnum = ((int)setup[1]) & 0xFFFF;
	subcommand = ((int)setup[0]) & 0xFFFF;

	if(!(p = get_rpc_pipe(pnum))) {
		if (subcommand == TRANSACT_WAITNAMEDPIPEHANDLESTATE) {
			/* Win9x does this call with a unicode pipe name, not a pnum. */
			/* Just return success for now... */
			DEBUG(3,("Got TRANSACT_WAITNAMEDPIPEHANDLESTATE on text pipe name\n"));
			send_trans_reply(outbuf, NULL, 0, NULL, 0, False);
			return -1;
		}

		DEBUG(1,("api_fd_reply: INVALID PIPE HANDLE: %x\n", pnum));
		return ERROR_NT(NT_STATUS_INVALID_HANDLE);
	}

	if (vuid != p->vuid) {
		DEBUG(1, ("Got pipe request (pnum %x) using invalid VUID %d, "
			  "expected %d\n", pnum, vuid, p->vuid));
		return ERROR_NT(NT_STATUS_INVALID_HANDLE);
	}

	DEBUG(3,("Got API command 0x%x on pipe \"%s\" (pnum %x)\n",
		 subcommand, p->name, pnum));

	/* record maximum data length that can be transmitted in an SMBtrans */
	p->max_trans_reply = mdrcnt;

	DEBUG(10,("api_fd_reply: p:%p max_trans_reply: %d\n", p,
		  p->max_trans_reply));

	switch (subcommand) {
	case TRANSACT_DCERPCCMD:
		/* dce/rpc command */
		reply = write_to_pipe(p, data, tdscnt);
		if (reply)
			reply = api_rpc_trans_reply(outbuf, p);
		break;
	case TRANSACT_WAITNAMEDPIPEHANDLESTATE:
		/* Wait Named Pipe Handle state */
		reply = api_WNPHS(outbuf, p, params, tpscnt);
		break;
	case TRANSACT_SETNAMEDPIPEHANDLESTATE:
		/* Set Named Pipe Handle state */
		reply = api_SNPHS(outbuf, p, params, tpscnt);
		break;
	default:
		return ERROR_NT(NT_STATUS_INVALID_PARAMETER);
	}

	if (!reply)
		return api_no_reply(outbuf, mdrcnt);

	return -1;
}

/****************************************************************************
 Handle named pipe commands.
****************************************************************************/

static int named_pipe(connection_struct *conn,uint16 vuid, char *outbuf,char *name,
		      uint16 *setup,char *data,char *params,
		      int suwcnt,int tdscnt,int tpscnt,
		      int msrcnt,int mdrcnt,int mprcnt)
{
	DEBUG(3,("named pipe command on <%s> name\n", name));

	if (strequal(name,"LANMAN"))
		return api_reply(conn,vuid,outbuf,data,params,tdscnt,tpscnt,mdrcnt,mprcnt);

	if (strequal(name,"WKSSVC") ||
	    strequal(name,"SRVSVC") ||
	    strequal(name,"WINREG") ||
	    strequal(name,"SAMR") ||
	    strequal(name,"LSARPC")) {
		DEBUG(4,("named pipe command from Win95 (wow!)\n"));
		return api_fd_reply(conn,vuid,outbuf,setup,data,params,suwcnt,tdscnt,tpscnt,mdrcnt,mprcnt);
	}

	if (strlen(name) < 1)
		return api_fd_reply(conn,vuid,outbuf,setup,data,params,suwcnt,tdscnt,tpscnt,mdrcnt,mprcnt);

	if (setup)
		DEBUG(3,("unknown named pipe: setup 0x%X setup1=%d\n",
			 (int)setup[0],(int)setup[1]));

	return 0;
}

static NTSTATUS handle_trans(connection_struct *conn,
			     struct trans_state *state,
			     char *outbuf, int *outsize)
{
	char *local_machine_name;
	int name_offset = 0;

	DEBUG(3,("trans <%s> data=%u params=%u setup=%u\n",
		 state->name,(unsigned int)state->total_data,
		 (unsigned int)state->total_param,
		 (unsigned int)state->setup_count));

	/*
	 * WinCE weirdness....
	 */

	local_machine_name = talloc_asprintf(state, "\\%s\\",
					     get_local_machine_name());

	if (local_machine_name == NULL) {
		return NT_STATUS_NO_MEMORY;
	}

	if (strnequal(state->name, local_machine_name,
		      strlen(local_machine_name))) {
		name_offset = strlen(local_machine_name)-1;
	}

	if (!strnequal(&state->name[name_offset], "\\PIPE",
		       strlen("\\PIPE"))) {
		return NT_STATUS_NOT_SUPPORTED;
	}

	name_offset += strlen("\\PIPE");

	/* Win9x weirdness. When talking to a unicode server Win9x
	   only sends \PIPE instead of \PIPE\ */

	if (state->name[name_offset] == '\\')
		name_offset++;

	DEBUG(5,("calling named_pipe\n"));
	*outsize = named_pipe(conn, state->vuid, outbuf,
			      state->name+name_offset,
			      state->setup, state->data, state->param,
			      state->setup_count, state->total_data,
			      state->total_param,
			      state->max_setup_return,
			      state->max_data_return,
			      state->max_param_return);

	if (*outsize == 0) {
		return NT_STATUS_NOT_SUPPORTED;
	}

	if (state->close_on_completion)
		close_cnum(conn,state->vuid);

	return NT_STATUS_OK;
}

/****************************************************************************
 Reply to a SMBtrans.
****************************************************************************/

int reply_trans(connection_struct *conn, char *inbuf,char *outbuf,
		int size, int bufsize)
{
	int outsize = 0;
	unsigned int dsoff = SVAL(inbuf, smb_dsoff);
	unsigned int dscnt = SVAL(inbuf, smb_dscnt);
	unsigned int psoff = SVAL(inbuf, smb_psoff);
	unsigned int pscnt = SVAL(inbuf, smb_pscnt);
	unsigned int av_size = size-4;
	struct trans_state *state;
	NTSTATUS result;

	START_PROFILE(SMBtrans);

	result = allow_new_trans(conn->pending_trans, SVAL(inbuf, smb_mid));
	if (!NT_STATUS_IS_OK(result)) {
		DEBUG(2, ("Got invalid trans request: %s\n",
			  nt_errstr(result)));
		END_PROFILE(SMBtrans);
		return ERROR_NT(result);
	}

	if ((state = TALLOC_P(conn->mem_ctx, struct trans_state)) == NULL) {
		DEBUG(0, ("talloc failed\n"));
		END_PROFILE(SMBtrans);
		return ERROR_NT(NT_STATUS_NO_MEMORY);
	}

	state->cmd = SMBtrans;

	state->mid = SVAL(inbuf, smb_mid);
	state->vuid = SVAL(inbuf, smb_uid);
	state->setup_count = CVAL(inbuf, smb_suwcnt);
	state->setup = NULL;
	state->total_param = SVAL(inbuf, smb_tpscnt);
	state->param = NULL;
	state->total_data = SVAL(inbuf, smb_tdscnt);
	state->data = NULL;
	state->max_param_return = SVAL(inbuf, smb_mprcnt);
	state->max_data_return = SVAL(inbuf, smb_mdrcnt);
	state->max_setup_return = CVAL(inbuf, smb_msrcnt);
	state->close_on_completion = BITSETW(inbuf+smb_vwv5,0);
	state->one_way = BITSETW(inbuf+smb_vwv5,1);

	memset(state->name, '\0',sizeof(state->name));
	srvstr_pull_buf(inbuf, state->name, smb_buf(inbuf),
			sizeof(state->name), STR_TERMINATE);

	if ((dscnt > state->total_data) || (pscnt > state->total_param))
		goto bad_param;

	if (state->total_data) {
		/* Can't use talloc here, the core routines do realloc on the
		 * params and data. Out of paranoia, 100 bytes too many. */
		state->data = (char *)SMB_MALLOC(state->total_data+100);
		if (state->data == NULL) {
			DEBUG(0,("reply_trans: data malloc fail for %u "
				 "bytes !\n", (unsigned int)state->total_data));
			TALLOC_FREE(state);
			END_PROFILE(SMBtrans);
			return(ERROR_DOS(ERRDOS,ERRnomem));
		}
		/* null-terminate the slack space */
		memset(&state->data[state->total_data], 0, 100);

		if (dscnt > state->total_data ||
		    dsoff+dscnt < dsoff) {
			goto bad_param;
		}

		if (dsoff > av_size ||
		    dscnt > av_size ||
		    dsoff+dscnt > av_size) {
			goto bad_param;
		}

		memcpy(state->data,smb_base(inbuf)+dsoff,dscnt);
	}

	if (state->total_param) {
		/* Can't use talloc here, the core routines do realloc on the
		 * params and data. Out of paranoia, 100 bytes too many */
		state->param = (char *)SMB_MALLOC(state->total_param+100);
		if (state->param == NULL) {
			DEBUG(0,("reply_trans: param malloc fail for %u "
				 "bytes !\n", (unsigned int)state->total_param));
			SAFE_FREE(state->data);
			TALLOC_FREE(state);
			END_PROFILE(SMBtrans);
			return(ERROR_DOS(ERRDOS,ERRnomem));
		}
		/* null-terminate the slack space */
		memset(&state->param[state->total_param], 0, 100);

		if (pscnt > state->total_param ||
		    psoff+pscnt < psoff) {
			goto bad_param;
		}

		if (psoff > av_size ||
		    pscnt > av_size ||
		    psoff+pscnt > av_size) {
			goto bad_param;
		}

		memcpy(state->param,smb_base(inbuf)+psoff,pscnt);
	}

	state->received_data = dscnt;
	state->received_param = pscnt;

	if (state->setup_count) {
		unsigned int i;
		if((state->setup = TALLOC_ARRAY(
			    state, uint16, state->setup_count)) == NULL) {
			DEBUG(0,("reply_trans: setup malloc fail for %u "
				 "bytes !\n", (unsigned int)
				 (state->setup_count * sizeof(uint16))));
			TALLOC_FREE(state);
			END_PROFILE(SMBtrans);
			return(ERROR_DOS(ERRDOS,ERRnomem));
		}
		if (inbuf+smb_vwv14+(state->setup_count*SIZEOFWORD) >
		    inbuf + size)
			goto bad_param;
		if ((smb_vwv14+(state->setup_count*SIZEOFWORD) < smb_vwv14) ||
		    (smb_vwv14+(state->setup_count*SIZEOFWORD) <
		     (state->setup_count*SIZEOFWORD)))
			goto bad_param;

		for (i=0;i<state->setup_count;i++)
			state->setup[i] = SVAL(inbuf,smb_vwv14+i*SIZEOFWORD);
	}

	state->received_param = pscnt;

	if ((state->received_param == state->total_param) &&
	    (state->received_data == state->total_data)) {

		result = handle_trans(conn, state, outbuf, &outsize);

		SAFE_FREE(state->data);
		SAFE_FREE(state->param);
		TALLOC_FREE(state);

		if (!NT_STATUS_IS_OK(result)) {
			END_PROFILE(SMBtrans);
			return ERROR_NT(result);
		}

		if (outsize == 0) {
			END_PROFILE(SMBtrans);
			return ERROR_NT(NT_STATUS_INTERNAL_ERROR);
		}

		END_PROFILE(SMBtrans);
		return outsize;
	}

	DLIST_ADD(conn->pending_trans, state);

	/* We need to send an interim response then receive the rest
	   of the parameter/data bytes */
	outsize = set_message(outbuf,0,0,True);
	show_msg(outbuf);
	END_PROFILE(SMBtrans);
	return outsize;

  bad_param:

	DEBUG(0,("reply_trans: invalid trans parameters\n"));
	SAFE_FREE(state->data);
	SAFE_FREE(state->param);
	TALLOC_FREE(state);
	END_PROFILE(SMBtrans);
	return ERROR_NT(NT_STATUS_INVALID_PARAMETER);
}

/****************************************************************************
 Reply to a secondary SMBtrans.
****************************************************************************/

int reply_transs(connection_struct *conn, char *inbuf,char *outbuf,
		 int size, int bufsize)
{
	int outsize = 0;
	unsigned int pcnt,poff,dcnt,doff,pdisp,ddisp;
	unsigned int av_size = size-4;
	struct trans_state *state;
	NTSTATUS result;

	START_PROFILE(SMBtranss);

	show_msg(inbuf);

	for (state = conn->pending_trans; state != NULL;
	     state = state->next) {
		if (state->mid == SVAL(inbuf,smb_mid)) {
			break;
		}
	}

	if ((state == NULL) || (state->cmd != SMBtrans)) {
		END_PROFILE(SMBtranss);
		return ERROR_NT(NT_STATUS_INVALID_PARAMETER);
	}

	/* Revise total_params and total_data in case they have changed
	 * downwards */

	if (SVAL(inbuf, smb_vwv0) < state->total_param)
		state->total_param = SVAL(inbuf,smb_vwv0);
	if (SVAL(inbuf, smb_vwv1) < state->total_data)
		state->total_data = SVAL(inbuf,smb_vwv1);

	pcnt = SVAL(inbuf, smb_spscnt);
	poff = SVAL(inbuf, smb_spsoff);
	pdisp = SVAL(inbuf, smb_spsdisp);

	dcnt = SVAL(inbuf, smb_sdscnt);
	doff = SVAL(inbuf, smb_sdsoff);
	ddisp = SVAL(inbuf, smb_sdsdisp);

	state->received_param += pcnt;
	state->received_data += dcnt;

	if ((state->received_data > state->total_data) ||
	    (state->received_param > state->total_param))
		goto bad_param;

	if (pcnt) {
		if (pdisp > state->total_param ||
		    pcnt > state->total_param ||
		    pdisp+pcnt > state->total_param ||
		    pdisp+pcnt < pdisp) {
			goto bad_param;
		}

		if (poff > av_size ||
		    pcnt > av_size ||
		    poff+pcnt > av_size ||
		    poff+pcnt < poff) {
			goto bad_param;
		}

		memcpy(state->param+pdisp,smb_base(inbuf)+poff,pcnt);
	}

	if (dcnt) {
		if (ddisp > state->total_data ||
		    dcnt > state->total_data ||
		    ddisp+dcnt > state->total_data ||
		    ddisp+dcnt < ddisp) {
			goto bad_param;
		}

		if (doff > av_size ||
		    dcnt > av_size ||
		    doff+dcnt > av_size ||
		    doff+dcnt < doff) {
			goto bad_param;
		}

		memcpy(state->data+ddisp, smb_base(inbuf)+doff, dcnt);
	}

	if ((state->received_param < state->total_param) ||
	    (state->received_data < state->total_data)) {
		END_PROFILE(SMBtranss);
		return -1;
	}

	/* construct_reply_common has done us the favor to pre-fill the
	 * command field with SMBtranss which is wrong :-) */
	SCVAL(outbuf,smb_com,SMBtrans);

	result = handle_trans(conn, state, outbuf, &outsize);

	DLIST_REMOVE(conn->pending_trans, state);
	SAFE_FREE(state->data);
	SAFE_FREE(state->param);
	TALLOC_FREE(state);

	if ((outsize == 0) || !NT_STATUS_IS_OK(result)) {
		END_PROFILE(SMBtranss);
		return(ERROR_DOS(ERRSRV,ERRnosupport));
	}

	END_PROFILE(SMBtranss);
	return(outsize);

  bad_param:

	DEBUG(0,("reply_transs: invalid trans parameters\n"));
	DLIST_REMOVE(conn->pending_trans, state);
	SAFE_FREE(state->data);
	SAFE_FREE(state->param);
	TALLOC_FREE(state);
	END_PROFILE(SMBtranss);
	return ERROR_NT(NT_STATUS_INVALID_PARAMETER);
}
During DNA biosynthesis, ribonucleoside diphosphates are converted into their deoxyribonucleoside equivalents via the enzymatic activity of ribonucleotide reductase (class I--III)^[@R8]^. Crucially, a (3′,2′)-spin center shift occurs, resulting in β-C--O scission and elimination of water ([Fig. 1a](#F1){ref-type="fig"}). Considering the efficiency with which this mild enzymatic process cleaves C--O bonds to generate transient radicals, we wondered whether an analogous chemical process could occur with simple alcohols, such as methanol, to access radical intermediates for use in challenging bond constructions ([Fig. 1b](#F1){ref-type="fig"}). In the medicinal chemistry community, there is growing demand for the direct introduction of alkyl groups -- especially methyl groups -- to heteroarenes, given their influence on drug metabolism and pharmacokinetic profiles^[@R9]^. The open-shell addition of alkyl radical intermediates to heteroarenes, known as the Minisci reaction^[@R10]^, has become a mainstay transformation with broad application within modern drug discovery^[@R11]^. Unfortunately, many current methods are poorly suited to late-stage functionalization of complex molecules because they depend on strong stoichiometric oxidants or elevated temperatures to generate the requisite alkyl radicals^[@R3]--[@R6]^. DiRocco and coworkers recently demonstrated a photoredox-catalyzed alkylation protocol using peroxides as the alkyl radical precursors^[@R7]^. Given the state of the art, we questioned whether a general alkylation protocol could be devised in which a broad range of substituents could be installed from simple commercial alcohols under mild conditions. Visible light-mediated photoredox catalysis has emerged in recent years as a powerful technique in organic synthesis that facilitates single-electron transfer (SET) events with organic substrates^[@R12]--[@R14]^. 
This general strategy allows for the development of bond constructions that are often elusive or currently impossible via classical two-electron pathways. Recently, our laboratory introduced a novel dual photoredox-organocatalytic platform to enable the functionalization of unactivated *sp*^3^-C--H bonds^[@R15]--[@R17]^. This catalytic manifold provides access to radical intermediates via C--H abstraction, resulting in the construction of challenging C--C bonds via a radical--radical coupling mechanism. With the insight gained from this dual catalytic system and our recent work on the development of a photoredox-catalyzed Minisci reaction^[@R18]^, we questioned whether it would be possible to generate alkyl radicals from alcohols and employ them as alkylating agents in a heteroaromatic C--H functionalization reaction ([Fig. 1c](#F1){ref-type="fig"}). While there are a few early reports of alcohols as alkyl radical precursors formed via high-energy irradiation (UV light and gamma rays)^[@R19]--[@R21]^, a general and robust strategy for using alcohols as latent alkylating agents has been elusive. This transformation would represent a direct C--H alkylation of heteroaromatics with alcohols via a spin-center shift pathway, eliminating H~2~O as the only byproduct. We recognized that this mild alkylating procedure would serve as a powerful and general method in late-stage functionalization, using commercially available and abundant alcohols as latent alkylating agents. A detailed description of our proposed dual catalytic mechanism for the alkylation of heteroarenes with alcohols is outlined in [Fig. 2](#F2){ref-type="fig"}. Irradiation of Ir(ppy)~2~(dtbbpy)^+^ (**1**) (ppy = 2-phenylpyridine, dtbbpy = 4,4′-di-*tert*-butyl-2,2′-bipyridine) will generate the long-lived ^\*^Ir(ppy)~2~(dtbbpy)^+^ (**2**) excited state (τ = 557 ns)^[@R22]^. 
As ^\*^Ir(ppy)~2~(dtbbpy)^+^ (**2**) can function as a reductant or an oxidant, we postulated that **2** would undergo a single-electron transfer event with a sacrificial quantity of protonated heteroarene **3** to initiate the first catalytic cycle and provide the oxidizing Ir(ppy)~2~(dtbbpy)^2+^ (**4**). Given the established oxidation potential of Ir(ppy)~2~(dtbbpy)^2+^ (**4**) \[*E*~1/2~^red^ = +1.21 V vs. saturated calomel electrode (SCE) in CH~3~CN\]^[@R22]^, we anticipated that single-electron transfer (SET) from the thiol catalyst **5** (*E*~1/2~^red^ = +0.85 V vs. SCE for cysteine)^[@R23]^ to Ir(ppy)~2~(dtbbpy)^2+^ (**4**) would occur and, after deprotonation, furnish the thiyl radical **6** while returning Ir(ppy)~2~(dtbbpy)^+^ (**1**) to the catalytic cycle. At this stage, we presumed that the thiyl radical **6** would undergo hydrogen atom transfer with the alcohol **7** (a comparable thiol, methyl 2-mercaptoacetate S--H BDE = 87 kcal/mol^[@R24]^, MeOH α-C--H BDE = 96 kcal/mol^[@R25]^) to provide the α-oxy radical **8** and regenerate the thiol catalyst **5**, driven by the polar effect in the transition state^[@R26]^. The polar effect is a remarkable property that enables significantly endergonic C--H abstractions that would not be possible otherwise^[@R27]^. The nucleophilic α-oxy radical **8** would then add to the protonated electron-deficient heteroarene **3** in a Minisci-type pathway to afford the aminyl radical cation **9**. The resulting α-C--H bond of **9** is sufficiently acidic to undergo deprotonation to form the α-amino radical **10**^[@R28]^. At this juncture, intermediate **10** is primed to undergo a spin-center shift to eliminate H~2~O and generate benzylic radical **11**. The resulting open-shell species would then undergo protonation followed by a second SET event with the excited photocatalyst to regenerate the active oxidant Ir(ppy)~2~(dtbbpy)^2+^ (**4**) while providing the desired alkylation product **12**. 
We first examined this new alkylation protocol using isoquinoline and methanol as the coupling partners and evaluated a range of photocatalysts and thiol catalysts. We were pleased to discover that using Ir(ppy)~2~(dtbbpy)PF~6~ (**1**) and ethyl 2-mercaptopropionate (**5**), along with *p*-toluenesulfonic acid and blue LEDs as the light source, we were able to achieve the desired C--C coupling to provide 1-methylisoquinoline (**15**) in 92% yield (see [Supplementary Information](#SD1){ref-type="supplementary-material"}). Importantly, we observed none of the desired product in the absence of photocatalyst, thiol catalyst, acid, or light, demonstrating the requirement of all components in this dual catalytic protocol. Notably, this method requires only weak visible light and ambient temperature to install methyl substituents using methanol as the alkylating agent. With the optimal conditions in hand, we sought to evaluate the generality of this dual catalytic alkylation transformation. As highlighted in [Fig. 3a](#F3){ref-type="fig"}, a wide range of heteroaromatics are methylated under the reaction conditions. Isoquinolines with electron-donating or -withdrawing substituents (such as methyl substituents, esters, and halides) are functionalized in excellent efficiencies (**15**--**18**, 85--98% yield). Quinolines perform effectively, including those that contain non-participating functionality (**19**--**23**, 65--95% yield), in addition to phthalazine and phenanthridine coupling partners (**24** and **25**, 70% and 93% yield). Moreover, a wide range of pyridine derivatives containing diverse functionality (such as esters, amides, arenes, nitriles, and trifluoromethyl groups) can be converted into the desired methylation products in high yield (**26**--**32**, 65--91% yield). Next, we sought to investigate the nature of the alcohol coupling partner, as demonstrated in [Fig. 3b](#F3){ref-type="fig"}. 
A broad array of primary alcohols can effectively serve as alkylating agents in this new alkylation reaction. In contrast to the methylation conditions highlighted above, the alkylations in [Fig. 3b](#F3){ref-type="fig"} typically employ methyl thioglycolate **13** as the C--H abstraction catalyst. Importantly, simple aliphatic alcohols such as ethanol and propanol deliver the alkylated isoquinoline product in high yields (**33** and **34**, 95% and 96% yield). Steric bulk proximal to the alcohol functionality is tolerated, as exemplified by the presence of isopropyl, β-tetrahydropyran, β-aryl, and β-adamantyl substituents (**35--38**, 87--92% yield). The presence of an electron-withdrawing trifluoromethyl (CF~3~) group distal to the alcohol decreases the rate of the reaction; however, employing the more electrophilic thiol catalyst 2,2,2-trifluoroethanethiol (**14**) promotes the transformation more efficiently, possibly due to the polar effect on the HAT transition state (**39**, 93% yield)^[@R26]^. To our delight, diols also participate readily in this alkylation protocol (**40** and **41**, 88% and 81% yield). It should be noted that 1,3-butanediol demonstrates exceptional chemoselectivity and undergoes alkylation exclusively at the primary alcohol site. We speculate that the corresponding α-oxy radical at the secondary alcohol position does not attack the protonated heteroarene due to its increased steric hindrance. For these alkylating agents with multiple reactive sites (**41**, **43**, and **44**), thiol catalyst **5** is the most effective HAT catalyst -- mechanistic studies are ongoing to elucidate the origin of these differences in catalyst reactivity. Ethers, in the form of differentially substituted tetrahydrofurans, are also competent alkylating agents in this dual catalytic platform (**42**--**44**, 72--90% yield). In the elimination step, the tetrahydrofuran ring opens to reveal a pendant hydroxyl group. 
Interestingly, 3-hydroxytetrahydrofuran and tetrahydrofurfuryl alcohol react regioselectively at the ether α-oxy site distal to the alcohol to afford alkylation products with terminal pinacol motifs. We attribute this exclusive regioselectivity to a subtle modulation of the C--H BDE by the inductive effect of the oxygen atoms. The application of these substrates represents an effective method to install vicinal diol motifs that would be inaccessible using traditional oxidative alkylation methods. Finally, the utility of this mild alkylation protocol has been demonstrated by the late-stage functionalization of several pharmaceutical compounds. Using methanol as a simple methylating agent, fasudil, a potent Rho-associated protein kinase inhibitor and vasodilator, can be methylated in 82% yield (product **45**). Additionally, milrinone, a phosphodiesterase 3 inhibitor and vasodilator, can be alkylated with 3-phenylpropanol in 43% yield (product **46**). Mechanistic studies have been conducted to support the proposed pathway outlined in [Fig. 2](#F2){ref-type="fig"}. Stern--Volmer fluorescence quenching experiments have demonstrated that the \*Ir^III^ excited state **2** is quenched in the presence of protonated heteroarene **3**, but not in the presence of the unprotonated heteroarene or thiol catalyst **5**, implying an oxidative quenching pathway (see [Supplementary Information](#SD1){ref-type="supplementary-material"}). Additionally, a series of experiments were conducted to investigate the proposed spin-center shift elimination. Upon exposing hydroxylated intermediate **47** to the reaction conditions, only a modest amount of the methylated isoquinoline **15** is observed (8% yield, entry 1, [Fig. 4a](#F4){ref-type="fig"}). In the absence of an acid additive, only trace amounts of the desired product are formed (2% yield, entry 2, [Fig. 4a](#F4){ref-type="fig"}). 
However, in the presence of a stoichiometric reductant and *p*-toluenesulfonic acid, the elimination of oxygen can be achieved with good efficiency (60% yield, entry 3, [Fig. 4a](#F4){ref-type="fig"}). Crucially, this elimination pathway is shut down in the absence of either light or photocatalyst (entries 4 and 5, [Fig. 4a](#F4){ref-type="fig"}). Therefore, this net reductive process supports the proposed generation of α-amino radical **48**, which could readily form deoxygenated product **15** via a spin-center shift pathway to β-amino radical **49** ([Fig. 4b](#F4){ref-type="fig"}). This elimination pathway is further corroborated by a series of radical trapping experiments ([Fig. 4c](#F4){ref-type="fig"} and [SI](#SD1){ref-type="supplementary-material"}). In the presence of styrene, hydroxymethyl arene **47** is transformed to adduct **50** ([Fig. 4c](#F4){ref-type="fig"}, 65% yield), presumably via the intermediacy of β-amino radical **49**. Finally, while we support the mechanism outlined in [Figure 2](#F2){ref-type="fig"}, we cannot rule out the possibility of a radical chain pathway in which radical **11** abstracts an H-atom from alcohol **7** or thiol catalyst **5**. In summary, this alkylation strategy represents the first general use of alcohols as simple alkylating agents and enables rapid late-stage derivatization of medicinally relevant molecules. Given the influence of alkyl groups on drug pharmacokinetics and ADME properties, this method of installing them will likely find wide application in the medicinal chemistry community. We have developed a mild and operationally simple alkylation reaction via the synergistic merger of photoredox and thiol HAT organocatalysis to forge challenging heteroaryl C--C bonds using alcohols as latent nucleophiles. This bio-inspired strategy mimics the key step in enzyme-catalyzed DNA biosynthesis via a novel spin-center shift elimination of H~2~O to generate radical intermediates from simple alcohols. 
Supplementary Material {#S1}
======================

Financial support was provided by NIGMS (R01 GM103558-03) and kind gifts from Merck and Amgen. J.J. thanks Jack A. Terrett for assistance in preparing this manuscript. [Supplementary Information](#SD1){ref-type="supplementary-material"} is linked to the online version of the paper at [www.nature.com/nature](www.nature.com/nature). **Author Contributions** J.J. performed and analyzed experiments. J.J. and D.W.C.M. designed experiments to develop this reaction and probe its utility, and also prepared this manuscript. The authors declare no competing financial interests. Readers are welcome to comment on the online version of this article at [www.nature.com/nature](www.nature.com/nature).

![Bio-inspired alkylation process using alcohols as spin-center shift equivalents via a dual catalytic platform\
(**a**) DNA biosynthesis occurs via a spin-center shift (SCS) process, catalyzed by class I ribonucleotide reductase (RNR), to generate a carbon-centered radical. (**b**) Alcohols can serve as precursors to radical intermediates when an analogous SCS pathway is accessible. (**c**) Proposed direct installation of alkyl groups using alcohols under mild photoredox organocatalytic conditions.](nihms703713f1){#F1}

![Proposed mechanism for the direct alkylation of heteroaromatic C--H bonds via photoredox organocatalysis\
The catalytic cycle is initiated via excitation of photocatalyst **1** to give the excited state **2**. A sacrificial amount of heteroarene **3** oxidizes ^\*^Ir^III^ **2** to Ir^IV^ **4**, which then oxidizes thiol catalyst **5** to generate thiyl radical **6** and regenerate catalyst **1**. Thiyl radical **6** then abstracts a hydrogen atom from alcohol **7** to form α-oxy radical **8**. Radical **8** adds to heteroarene **3**, producing radical cation **9**, which after deprotonation forms α-amino radical **10**. Spin-center shift elimination of H~2~O forms radical intermediate **11**. 
Protonation and reduction by ^\*^Ir^III^ **2** delivers alkylated product **12**.](nihms703713f2){#F2}

![Substrate scope for the alkylation of heteroaromatic C--H bonds with alcohols via the dual photoredox organocatalytic platform\
A broad range of heteroaromatics and alcohols are efficiently coupled to produce alkylated heterocycles under the standard reaction conditions (top, generalized reaction). (**a**) A variety of isoquinolines, quinolines, phthalazines, phenanthridines, and pyridines are efficiently methylated using methanol as the alkylating reagent. (**b**) A diverse selection of alcohols serve as effective alkylating agents in this dual catalytic protocol. (**c**) Ethers are also amenable to the transformation -- the products are the corresponding ring-opened alcohols. (**d**) Two pharmaceuticals, fasudil and milrinone, can be alkylated using this protocol, demonstrating its utility in late-stage functionalization. Isolated yields are indicated below each entry. See [Supplementary Information](#SD1){ref-type="supplementary-material"} for experimental details.](nihms703713f3){#F3}

![Mechanistic studies support the spin-center shift elimination pathway\
(**a**) Hydroxymethyl intermediate **47** can be converted to methylated **15** under net reductive conditions upon addition of formic acid-tributylamine and *p*-toluenesulfonic acid. (**b**) Deoxygenation of **47** likely proceeds via a spin-center shift pathway to cleave the alcohol C--O bond. (**c**) In the presence of styrene, **47** is converted to **50**, presumably by trapping of radical **49**.](nihms703713f4){#F4}
Related literature {#sec1}
==================

For transition metal complexes with imino­phospho­ranyl derivatives, see: Avis *et al.* (1996[@bb1], 1997[@bb2]). For the catalytic activity of bis­(imino­phospho­ran­yl)methane and its derivatives, see: Hill & Hitchcock (2002[@bb4]); Ma *et al.* (2011[@bb5]). For the crystal structure of an analogous compound, see: Hill & Hitchcock (2002[@bb4]).

Experimental {#sec2}
============

{#sec2.1}

### Crystal data {#sec2.1.1}

C~35~H~30~N~4~P~2~, *M*~r~ = 568.57, monoclinic, *a* = 22.505 (7) Å, *b* = 9.142 (3) Å, *c* = 29.606 (9) Å, β = 102.877 (5)°, *V* = 5938 (3) Å^3^, *Z* = 8, Mo *K*α radiation, μ = 0.18 mm^−1^, *T* = 293 K, crystal size 0.40 × 0.30 × 0.20 mm.

### Data collection {#sec2.1.2}

Bruker APEXII CCD area-detector diffractometer; absorption correction: multi-scan (*SADABS*; Bruker, 2007[@bb3]), *T*~min~ = 0.753, *T*~max~ = 1.000; 16500 measured reflections, 6035 independent reflections, 4510 reflections with *I* \> 2σ(*I*), *R*~int~ = 0.041.

### Refinement {#sec2.1.3}

*R*\[*F*^2^ \> 2σ(*F*^2^)\] = 0.060, *wR*(*F*^2^) = 0.126, *S* = 1.15, 6035 reflections, 370 parameters, H-atom parameters constrained, Δρ~max~ = 0.39 e Å^−3^, Δρ~min~ = −0.28 e Å^−3^.

{#d5e473}

Data collection: *APEX2* (Bruker, 2007[@bb3]); cell refinement: *APEX2* and *SAINT* (Bruker, 2007[@bb3]); data reduction: *SAINT*; program(s) used to solve structure: *SHELXS97* (Sheldrick, 2008[@bb6]); program(s) used to refine structure: *SHELXL97* (Sheldrick, 2008[@bb6]); molecular graphics: *SHELXTL* (Sheldrick, 2008[@bb6]); software used to prepare material for publication: *SHELXTL* and *PLATON* (Spek, 2009[@bb7]).

Supplementary Material
======================

Crystal structure: contains datablock(s) I, global. DOI: [10.1107/S1600536811037007/ez2257sup1.cif](http://dx.doi.org/10.1107/S1600536811037007/ez2257sup1.cif)

Structure factors: contains datablock(s) I. DOI: [10.1107/S1600536811037007/ez2257Isup2.hkl](http://dx.doi.org/10.1107/S1600536811037007/ez2257Isup2.hkl)

Supplementary material file. 
DOI: [10.1107/S1600536811037007/ez2257Isup3.cml](http://dx.doi.org/10.1107/S1600536811037007/ez2257Isup3.cml)

Additional supplementary materials: [crystallographic information](http://scripts.iucr.org/cgi-bin/sendsupfiles?ez2257&file=ez2257sup0.html&mime=text/html); [3D view](http://scripts.iucr.org/cgi-bin/sendcif?ez2257sup1&Qmime=cif); [checkCIF report](http://scripts.iucr.org/cgi-bin/paper?ez2257&checkcif=yes)

Supplementary data and figures for this paper are available from the IUCr electronic archives (Reference: [EZ2257](http://scripts.iucr.org/cgi-bin/sendsup?ez2257)).

The authors are grateful for financial support from the Applied and Basic Research Foundation of Yunnan Province (No. 2009CD154) and the Open Foundation of the Key Laboratory of Ethnic Medicine Resource Chemistry, State Ethnic Affairs Commission & Ministry of Education, Yunnan University of Nationalities (No. MZY100101).

Comment
=======

Bis(iminophosphoranyl)methane and its derivatives are attracting much attention because of their flexible coordination behavior (Avis *et al.*, 1996, 1997) and the catalytic activity of their transition metal complexes (Hill & Hitchcock, 2002; Ma *et al.*, 2011). Herein, we report the crystal structure of a mono-phosphinimine, namely ((*N*-2-pyridylimino)diphenylphosphoranyl)(*N*-2-pyridyl-*N*-diphenylphosphinoamino)methane. In the crystal structure of the title compound, the (*N*-2-pyridylimino)diphenylphosphoranyl and *N*-2-pyridyl-*N*-diphenylphosphinoamino groups are attached to the central methylene carbon, with a P2---C6---N1 angle of 114.09 (2)°. The P2=N3 bond length of 1.593 (2) Å is comparable to the P=N distances of 1.555 (3) and 1.573 (3) Å in bis(iminophosphoranyl)methane (Hill & Hitchcock, 2002). The molecules stack along the *b* axis and interconnect through C32---H32(pyridyl)···N2^i^(pyridyl) interactions (D···A = 3.577 (3) Å, Table 1), forming an infinite chain.
These parallel chains are further interconnected via C21---H21(benzene)···N3^ii^(amino) and C28---H28(benzene)···Cg^iii^ interactions to form a three-dimensional framework (Cg represents the C7 to C12 benzene ring, Table 1). Symmetry codes: i: x, y+1, z; ii: x, -y-1, z-0.5; iii: -x, y-1, -z+0.5.

Experimental {#experimental}
============

To a solution of 0.4 g (0.1 mmol) of N-((pyridin-2-ylamino)methyl)pyridin-2-amine in 40 ml CH~2~Cl~2~, a solution of 0.45 g (0.2 mmol) of chlorodiphenylphosphine and Et~3~N in 10 ml toluene was added dropwise at room temperature, during which N~2~ gas evolved. After stirring for 2 h, the resultant yellow solution was evaporated, giving a white powder. The powder was separated and purified by column chromatography on silica gel (column of 2 cm diameter; eluent: dichloromethane/acetate = 95:5, v/v), and the title compound was obtained in 60% yield. Orange crystals of the title compound with average dimensions of 0.40 × 0.30 × 0.20 mm^3^ were obtained by slow evaporation from a dichloromethane/*N,N*-dimethylformamide solution (1/1, v/v).

Refinement {#refinement}
==========

The hydrogen atoms were placed in idealized positions and allowed to ride on the relevant carbon atoms, with C---H = 0.93 Å and 0.97 Å for aryl and methylene hydrogens, respectively, and *U*~iso~(H) = 1.2*U*~eq~(C).

Figures
=======

![The atom-numbering scheme of the title compound. Displacement ellipsoids are drawn at the 30% probability level and H atoms are omitted for clarity.](e-67-o2683-fig1){#Fap1} ![A view of the packing of the title compound. The red dashed lines represent C32---H32(pyridyl)···N2^i^(pyridyl) interactions that connect the molecules along the *b* axis (symmetry code: i: -x+1, -y, -z). The other interactions are omitted for clarity.
Color codes: green P, blue N, gray C.](e-67-o2683-fig2){#Fap2}

Crystal data {#tablewrapcrystaldatalong}
============

----------------------  --------------------------------------
C~35~H~30~N~4~P~2~      *F*(000) = 2384
*M~r~* = 568.57         *D*~x~ = 1.272 Mg m^−3^
Monoclinic, *C*2/*c*    Mo *K*α radiation, λ = 0.71073 Å
*a* = 22.505 (7) Å      Cell parameters from 241 reflections
*b* = 9.142 (3) Å       θ = 2.1--26.3°
*c* = 29.606 (9) Å      µ = 0.18 mm^−1^
β = 102.877 (5)°        *T* = 293 K
*V* = 5938 (3) Å^3^     Block, orange
*Z* = 8                 0.40 × 0.30 × 0.20 mm
----------------------  --------------------------------------

Data collection {#tablewrapdatacollectionlong}
===============

----------------------------------------------------------  --------------------------------------
Bruker APEXII CCD area-detector diffractometer              6035 independent reflections
Radiation source: fine-focus sealed tube                    4510 reflections with *I* \> 2σ(*I*)
graphite                                                    *R*~int~ = 0.041
ω scans                                                     θ~max~ = 26.3°, θ~min~ = 2.1°
Absorption correction: multi-scan (*SADABS*; Bruker, 2007)  *h* = −27→28
*T*~min~ = 0.753, *T*~max~ = 1.000                          *k* = −11→9
16500 measured reflections                                  *l* = −36→33
----------------------------------------------------------  --------------------------------------

Refinement {#tablewraprefinementdatalong}
==========

-------------------------------------  -------------------------------------------------------------------------------------------------
Refinement on *F*^2^                   Primary atom site location: structure-invariant direct methods
Least-squares matrix: full             Secondary atom site location: difference Fourier map
*R*\[*F*^2^ \> 2σ(*F*^2^)\] = 0.060    Hydrogen site location: inferred from neighbouring sites
*wR*(*F*^2^) = 0.126                   H-atom parameters constrained
*S* = 1.15                             *w* = 1/\[σ^2^(*F*~o~^2^) + (0.0394*P*)^2^ + 6.1976*P*\] where *P* = (*F*~o~^2^ + 2*F*~c~^2^)/3
6035 reflections                       (Δ/σ)~max~ \< 0.001
370 parameters                         Δρ~max~ = 0.39 e Å^−3^
0 restraints                           Δρ~min~ = −0.28 e Å^−3^
-------------------------------------  -------------------------------------------------------------------------------------------------
Special details {#specialdetails}
===============

Geometry. All esds (except the esd in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes.

Refinement. Refinement of *F*^2^ against ALL reflections. The weighted *R*-factor *wR* and goodness of fit *S* are based on *F*^2^; conventional *R*-factors *R* are based on *F*, with *F* set to zero for negative *F*^2^. The threshold expression *F*^2^ \> 2σ(*F*^2^) is used only for calculating *R*-factors(gt) etc. and is not relevant to the choice of reflections for refinement. *R*-factors based on *F*^2^ are statistically about twice as large as those based on *F*, and *R*-factors based on ALL data will be even larger.
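Several of the tabulated quantities above are simple functions of one another, so a quick internal consistency check is possible. The sketch below uses only standard formulas; Avogadro's number and the atomic numbers of C, H, N, and P are general constants, not values taken from the paper:

```python
import math

# Unit-cell volume for a monoclinic cell: V = a * b * c * sin(beta)
a, b, c = 22.505, 9.142, 29.606       # cell lengths, angstroms
beta = math.radians(102.877)          # monoclinic angle
V = a * b * c * math.sin(beta)        # angstrom^3

# Calculated density: D_x = Z * M_r / (N_A * V)
N_A = 6.02214e23                      # Avogadro's number, mol^-1
Z, M_r = 8, 568.57                    # formula units per cell; g/mol
D_x = Z * M_r / (N_A * V * 1e-24)     # 1 A^3 = 1e-24 cm^3

# F(000): total electron count per unit cell for C35H30N4P2
atoms = {"C": (6, 35), "H": (1, 30), "N": (7, 4), "P": (15, 2)}  # (Z_atom, count)
F000 = Z * sum(z_at * n for z_at, n in atoms.values())

# Resolution limit from Bragg's law: d_min = lambda / (2 sin theta_max)
wavelength = 0.71073                  # Mo K-alpha, angstroms
d_min = wavelength / (2 * math.sin(math.radians(26.3)))

print(f"V = {V:.0f} A^3, D_x = {D_x:.3f} g/cm^3, F(000) = {F000}, d_min = {d_min:.2f} A")
```

All four values agree with the Crystal data and Data collection tables: *V* ≈ 5938 Å^3^, *D*~x~ = 1.272 Mg m^−3^, *F*(000) = 2384, and a resolution limit of about 0.80 Å at θ~max~ = 26.3°.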
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å^2^) {#tablewrapcoords} ================================================================================================== ----- --------------- ------------- -------------- -------------------- -- *x* *y* *z* *U*~iso~\*/*U*~eq~ P1 −0.01563 (3) 0.63607 (8) 0.14930 (2) 0.03712 (18) P2 0.16356 (3) 0.73616 (7) 0.14753 (2) 0.02853 (16) N1 0.05842 (8) 0.5832 (2) 0.15430 (7) 0.0323 (5) N2 0.11485 (10) 0.3691 (3) 0.17411 (7) 0.0408 (5) N3 0.15628 (10) 0.9082 (2) 0.13974 (7) 0.0357 (5) N4 0.14031 (12) 0.8766 (3) 0.05916 (7) 0.0501 (6) C1 0.08083 (10) 0.4717 (3) 0.18710 (8) 0.0320 (6) C2 0.06822 (12) 0.4749 (3) 0.23102 (9) 0.0411 (6) H2 0.0448 0.5498 0.2393 0.049\* C3 0.09084 (13) 0.3661 (4) 0.26186 (10) 0.0487 (7) H3 0.0828 0.3661 0.2914 0.058\* C4 0.12536 (14) 0.2572 (4) 0.24880 (10) 0.0550 (8) H4 0.1410 0.1817 0.2690 0.066\* C5 0.13608 (14) 0.2634 (4) 0.20499 (10) 0.0548 (8) H5 0.1595 0.1896 0.1961 0.066\* C6 0.09627 (10) 0.6230 (3) 0.12172 (8) 0.0320 (6) H6A 0.0713 0.6761 0.0960 0.038\* H6B 0.1101 0.5341 0.1094 0.038\* C7 −0.01288 (12) 0.8317 (3) 0.13863 (11) 0.0451 (7) C8 −0.01064 (15) 0.9233 (4) 0.17666 (13) 0.0719 (11) H8 −0.0121 0.8834 0.2053 0.086\* C9 −0.0062 (2) 1.0736 (5) 0.1719 (2) 0.110 (2) H9 −0.0043 1.1339 0.1975 0.132\* C10 −0.00475 (19) 1.1337 (5) 0.1302 (2) 0.113 (2) H10 −0.0017 1.2345 0.1273 0.135\* C11 −0.00772 (16) 1.0462 (5) 0.09285 (19) 0.0899 (14) H11 −0.0070 
1.0877 0.0643 0.108\* C12 −0.01182 (13) 0.8961 (4) 0.09660 (13) 0.0601 (9) H12 −0.0139 0.8377 0.0706 0.072\* C13 −0.05681 (12) 0.5681 (3) 0.09276 (9) 0.0415 (6) C14 −0.03331 (15) 0.4781 (4) 0.06367 (11) 0.0565 (8) H14 0.0080 0.4557 0.0708 0.068\* C15 −0.07070 (19) 0.4202 (4) 0.02363 (12) 0.0731 (11) H15 −0.0541 0.3603 0.0042 0.088\* C16 −0.1314 (2) 0.4510 (5) 0.01285 (13) 0.0787 (12) H16 −0.1563 0.4120 −0.0138 0.094\* C17 −0.15543 (17) 0.5391 (5) 0.04118 (15) 0.0806 (12) H17 −0.1969 0.5602 0.0338 0.097\* C18 −0.11885 (14) 0.5978 (4) 0.08082 (12) 0.0607 (9) H18 −0.1360 0.6581 0.0998 0.073\* C19 0.17852 (10) 0.7135 (3) 0.20950 (8) 0.0297 (5) C20 0.21447 (11) 0.6004 (3) 0.23177 (8) 0.0373 (6) H20 0.2324 0.5353 0.2147 0.045\* C21 0.22379 (12) 0.5835 (4) 0.27910 (9) 0.0472 (7) H21 0.2477 0.5070 0.2939 0.057\* C22 0.19752 (14) 0.6807 (4) 0.30441 (9) 0.0545 (8) H22 0.2040 0.6698 0.3364 0.065\* C23 0.16185 (14) 0.7935 (4) 0.28284 (10) 0.0534 (8) H23 0.1442 0.8584 0.3002 0.064\* C24 0.15210 (12) 0.8109 (3) 0.23526 (9) 0.0403 (6) H24 0.1280 0.8874 0.2207 0.048\* C25 0.22768 (11) 0.6569 (3) 0.12830 (8) 0.0323 (6) C26 0.23091 (12) 0.5109 (3) 0.11729 (9) 0.0423 (7) H26 0.1993 0.4478 0.1195 0.051\* C27 0.28149 (14) 0.4588 (4) 0.10292 (10) 0.0559 (8) H27 0.2831 0.3611 0.0945 0.067\* C28 0.32934 (14) 0.5506 (4) 0.10102 (11) 0.0604 (9) H28 0.3637 0.5143 0.0923 0.072\* C29 0.32629 (14) 0.6950 (5) 0.11197 (12) 0.0644 (10) H29 0.3586 0.7570 0.1105 0.077\* C30 0.27536 (12) 0.7498 (4) 0.12529 (10) 0.0498 (7) H30 0.2732 0.8487 0.1322 0.060\* C31 0.14809 (12) 0.9659 (3) 0.09597 (8) 0.0365 (6) C32 0.14870 (16) 1.1179 (3) 0.09068 (10) 0.0561 (8) H32 0.1532 1.1783 0.1165 0.067\* C33 0.14268 (19) 1.1772 (4) 0.04781 (11) 0.0737 (11) H33 0.1427 1.2782 0.0440 0.088\* C34 0.1366 (2) 1.0861 (4) 0.01011 (11) 0.0878 (14) H34 0.1333 1.1235 −0.0196 0.105\* C35 0.1355 (2) 0.9390 (4) 0.01768 (10) 0.0770 (12) H35 0.1310 0.8776 −0.0079 0.092\* ----- --------------- ------------- -------------- 
-------------------- -- Atomic displacement parameters (Å^2^) {#tablewrapadps} ===================================== ----- ------------- ------------- ------------- -------------- -------------- -------------- *U*^11^ *U*^22^ *U*^33^ *U*^12^ *U*^13^ *U*^23^ P1 0.0288 (3) 0.0405 (4) 0.0426 (4) 0.0037 (3) 0.0091 (3) 0.0017 (3) P2 0.0308 (3) 0.0285 (4) 0.0267 (3) 0.0002 (3) 0.0074 (2) 0.0001 (3) N1 0.0263 (10) 0.0357 (13) 0.0360 (11) 0.0026 (9) 0.0094 (8) 0.0051 (9) N2 0.0424 (12) 0.0394 (14) 0.0400 (12) 0.0099 (11) 0.0078 (10) −0.0005 (10) N3 0.0459 (12) 0.0303 (12) 0.0310 (11) 0.0001 (10) 0.0089 (9) −0.0006 (9) N4 0.0818 (18) 0.0345 (14) 0.0315 (12) −0.0078 (13) 0.0071 (11) −0.0018 (10) C1 0.0263 (12) 0.0317 (14) 0.0379 (14) −0.0039 (11) 0.0066 (10) −0.0003 (11) C2 0.0362 (14) 0.0461 (18) 0.0442 (15) 0.0032 (13) 0.0159 (12) 0.0044 (13) C3 0.0500 (17) 0.056 (2) 0.0418 (16) −0.0007 (15) 0.0136 (13) 0.0111 (14) C4 0.0609 (19) 0.0467 (19) 0.0528 (18) 0.0083 (16) 0.0030 (15) 0.0159 (15) C5 0.0578 (18) 0.048 (2) 0.0556 (18) 0.0198 (16) 0.0067 (14) 0.0010 (15) C6 0.0301 (12) 0.0380 (15) 0.0281 (12) −0.0013 (11) 0.0070 (10) 0.0008 (11) C7 0.0306 (14) 0.0343 (16) 0.0661 (19) 0.0075 (12) 0.0013 (13) −0.0053 (14) C8 0.058 (2) 0.062 (2) 0.082 (2) 0.0150 (18) −0.0128 (18) −0.029 (2) C9 0.079 (3) 0.064 (3) 0.156 (5) 0.021 (2) −0.042 (3) −0.051 (3) C10 0.066 (3) 0.038 (2) 0.201 (6) −0.006 (2) −0.038 (3) 0.002 (3) C11 0.048 (2) 0.055 (3) 0.157 (4) 0.0017 (19) 0.003 (2) 0.035 (3) C12 0.0427 (17) 0.046 (2) 0.090 (3) 0.0077 (14) 0.0112 (16) 0.0131 (18) C13 0.0417 (15) 0.0305 (15) 0.0494 (16) −0.0043 (12) 0.0041 (12) 0.0082 (13) C14 0.0552 (18) 0.050 (2) 0.063 (2) −0.0137 (16) 0.0117 (15) −0.0091 (16) C15 0.099 (3) 0.059 (2) 0.060 (2) −0.030 (2) 0.014 (2) −0.0107 (18) C16 0.095 (3) 0.061 (3) 0.062 (2) −0.032 (2) −0.020 (2) 0.013 (2) C17 0.059 (2) 0.070 (3) 0.093 (3) −0.007 (2) −0.026 (2) 0.014 (2) C18 0.0442 (17) 0.054 (2) 0.076 (2) 0.0047 (15) −0.0049 (15) 0.0022 (17) C19 0.0283 (12) 
0.0321 (14) 0.0286 (12) −0.0041 (10) 0.0059 (9) −0.0005 (10) C20 0.0334 (13) 0.0443 (17) 0.0342 (14) 0.0005 (12) 0.0073 (11) 0.0013 (12) C21 0.0410 (15) 0.057 (2) 0.0401 (15) −0.0001 (14) 0.0013 (12) 0.0097 (14) C22 0.0610 (19) 0.073 (2) 0.0270 (14) −0.0051 (18) 0.0056 (13) 0.0026 (15) C23 0.0640 (19) 0.061 (2) 0.0374 (16) 0.0020 (17) 0.0171 (14) −0.0099 (15) C24 0.0469 (15) 0.0382 (16) 0.0357 (14) 0.0028 (13) 0.0088 (12) −0.0018 (12) C25 0.0311 (12) 0.0385 (16) 0.0284 (12) 0.0005 (11) 0.0085 (10) 0.0032 (11) C26 0.0397 (14) 0.0426 (18) 0.0480 (16) 0.0052 (13) 0.0167 (12) 0.0013 (13) C27 0.0609 (19) 0.053 (2) 0.0583 (19) 0.0199 (17) 0.0230 (15) 0.0017 (15) C28 0.0475 (18) 0.085 (3) 0.0563 (19) 0.0178 (18) 0.0281 (15) 0.0078 (18) C29 0.0432 (17) 0.084 (3) 0.073 (2) −0.0059 (17) 0.0275 (16) 0.005 (2) C30 0.0432 (15) 0.0495 (19) 0.0613 (19) −0.0042 (14) 0.0211 (14) −0.0022 (15) C31 0.0419 (14) 0.0353 (16) 0.0309 (13) −0.0026 (12) 0.0050 (11) −0.0006 (11) C32 0.097 (2) 0.0338 (17) 0.0373 (16) −0.0079 (17) 0.0133 (16) −0.0033 (13) C33 0.133 (3) 0.0322 (18) 0.051 (2) −0.006 (2) 0.010 (2) 0.0100 (15) C34 0.167 (4) 0.055 (2) 0.0338 (18) −0.021 (3) 0.006 (2) 0.0096 (16) C35 0.146 (4) 0.049 (2) 0.0304 (16) −0.021 (2) 0.0057 (18) −0.0011 (15) ----- ------------- ------------- ------------- -------------- -------------- -------------- Geometric parameters (Å, °) {#tablewrapgeomlong} =========================== ----------------------- -------------- ----------------------- ------------- P1---N1 1.710 (2) C14---H14 0.9300 P1---C7 1.820 (3) C15---C16 1.362 (6) P1---C13 1.832 (3) C15---H15 0.9300 P2---N3 1.593 (2) C16---C17 1.359 (6) P2---C19 1.802 (2) C16---H16 0.9300 P2---C25 1.816 (2) C17---C18 1.383 (5) P2---C6 1.853 (2) C17---H17 0.9300 N1---C1 1.421 (3) C18---H18 0.9300 N1---C6 1.468 (3) C19---C20 1.386 (3) N2---C1 1.321 (3) C19---C24 1.389 (3) N2---C5 1.343 (4) C20---C21 1.379 (4) N3---C31 1.373 (3) C20---H20 0.9300 N4---C35 1.336 (4) C21---C22 1.378 (4) N4---C31 1.342 (3) 
C21---H21 0.9300 C1---C2 1.391 (3) C22---C23 1.374 (4) C2---C3 1.369 (4) C22---H22 0.9300 C2---H2 0.9300 C23---C24 1.386 (4) C3---C4 1.370 (4) C23---H23 0.9300 C3---H3 0.9300 C24---H24 0.9300 C4---C5 1.372 (4) C25---C26 1.379 (4) C4---H4 0.9300 C25---C30 1.387 (4) C5---H5 0.9300 C26---C27 1.386 (4) C6---H6A 0.9700 C26---H26 0.9300 C6---H6B 0.9700 C27---C28 1.376 (5) C7---C12 1.382 (4) C27---H27 0.9300 C7---C8 1.395 (4) C28---C29 1.365 (5) C8---C9 1.388 (6) C28---H28 0.9300 C8---H8 0.9300 C29---C30 1.386 (4) C9---C10 1.359 (7) C29---H29 0.9300 C9---H9 0.9300 C30---H30 0.9300 C10---C11 1.354 (7) C31---C32 1.399 (4) C10---H10 0.9300 C32---C33 1.359 (4) C11---C12 1.382 (5) C32---H32 0.9300 C11---H11 0.9300 C33---C34 1.375 (5) C12---H12 0.9300 C33---H33 0.9300 C13---C14 1.379 (4) C34---C35 1.364 (5) C13---C18 1.389 (4) C34---H34 0.9300 C14---C15 1.396 (4) C35---H35 0.9300 N1---P1---C7 102.86 (11) C14---C15---H15 119.9 N1---P1---C13 105.55 (12) C17---C16---C15 119.6 (3) C7---P1---C13 101.82 (13) C17---C16---H16 120.2 N3---P2---C19 104.52 (11) C15---C16---H16 120.2 N3---P2---C25 114.34 (12) C16---C17---C18 120.7 (4) C19---P2---C25 106.98 (11) C16---C17---H17 119.6 N3---P2---C6 116.32 (12) C18---C17---H17 119.6 C19---P2---C6 107.81 (11) C17---C18---C13 121.0 (4) C25---P2---C6 106.35 (12) C17---C18---H18 119.5 C1---N1---C6 117.21 (19) C13---C18---H18 119.5 C1---N1---P1 116.91 (15) C20---C19---C24 119.5 (2) C6---N1---P1 124.78 (16) C20---C19---P2 121.79 (19) C1---N2---C5 117.0 (2) C24---C19---P2 118.66 (19) C31---N3---P2 120.27 (18) C21---C20---C19 120.5 (3) C35---N4---C31 117.1 (3) C21---C20---H20 119.8 N2---C1---C2 122.6 (2) C19---C20---H20 119.8 N2---C1---N1 116.8 (2) C22---C21---C20 119.6 (3) C2---C1---N1 120.6 (2) C22---C21---H21 120.2 C3---C2---C1 119.0 (3) C20---C21---H21 120.2 C3---C2---H2 120.5 C23---C22---C21 120.6 (3) C1---C2---H2 120.5 C23---C22---H22 119.7 C2---C3---C4 119.3 (3) C21---C22---H22 119.7 C2---C3---H3 120.4 C22---C23---C24 120.2 (3) C4---C3---H3 
120.4 C22---C23---H23 119.9 C3---C4---C5 117.9 (3) C24---C23---H23 119.9 C3---C4---H4 121.1 C23---C24---C19 119.6 (3) C5---C4---H4 121.1 C23---C24---H24 120.2 N2---C5---C4 124.2 (3) C19---C24---H24 120.2 N2---C5---H5 117.9 C26---C25---C30 119.7 (2) C4---C5---H5 117.9 C26---C25---P2 123.20 (19) N1---C6---P2 114.09 (16) C30---C25---P2 117.1 (2) N1---C6---H6A 108.7 C25---C26---C27 119.7 (3) P2---C6---H6A 108.7 C25---C26---H26 120.2 N1---C6---H6B 108.7 C27---C26---H26 120.2 P2---C6---H6B 108.7 C28---C27---C26 120.5 (3) H6A---C6---H6B 107.6 C28---C27---H27 119.8 C12---C7---C8 117.8 (3) C26---C27---H27 119.8 C12---C7---P1 125.7 (2) C29---C28---C27 119.9 (3) C8---C7---P1 116.5 (3) C29---C28---H28 120.0 C9---C8---C7 120.2 (4) C27---C28---H28 120.0 C9---C8---H8 119.9 C28---C29---C30 120.4 (3) C7---C8---H8 119.9 C28---C29---H29 119.8 C10---C9---C8 120.6 (5) C30---C29---H29 119.8 C10---C9---H9 119.7 C29---C30---C25 119.8 (3) C8---C9---H9 119.7 C29---C30---H30 120.1 C11---C10---C9 119.8 (4) C25---C30---H30 120.1 C11---C10---H10 120.1 N4---C31---N3 119.9 (2) C9---C10---H10 120.1 N4---C31---C32 121.1 (2) C10---C11---C12 120.9 (5) N3---C31---C32 119.0 (2) C10---C11---H11 119.6 C33---C32---C31 120.0 (3) C12---C11---H11 119.6 C33---C32---H32 120.0 C11---C12---C7 120.7 (4) C31---C32---H32 120.0 C11---C12---H12 119.7 C32---C33---C34 119.2 (3) C7---C12---H12 119.7 C32---C33---H33 120.4 C14---C13---C18 117.5 (3) C34---C33---H33 120.4 C14---C13---P1 125.9 (2) C35---C34---C33 117.8 (3) C18---C13---P1 116.2 (2) C35---C34---H34 121.1 C13---C14---C15 120.9 (3) C33---C34---H34 121.1 C13---C14---H14 119.5 N4---C35---C34 124.8 (3) C15---C14---H14 119.5 N4---C35---H35 117.6 C16---C15---C14 120.3 (4) C34---C35---H35 117.6 C16---C15---H15 119.9 C7---P1---N1---C1 143.92 (19) C14---C15---C16---C17 0.3 (6) C13---P1---N1---C1 −109.72 (19) C15---C16---C17---C18 0.0 (6) C7---P1---N1---C6 −48.4 (2) C16---C17---C18---C13 −0.2 (6) C13---P1---N1---C6 58.0 (2) C14---C13---C18---C17 0.0 (5) 
C19---P2---N3---C31 175.99 (19) P1---C13---C18---C17 −173.4 (3) C25---P2---N3---C31 59.4 (2) N3---P2---C19---C20 −148.8 (2) C6---P2---N3---C31 −65.3 (2) C25---P2---C19---C20 −27.2 (2) C5---N2---C1---C2 1.7 (4) C6---P2---C19---C20 86.8 (2) C5---N2---C1---N1 −179.9 (2) N3---P2---C19---C24 32.5 (2) C6---N1---C1---N2 −31.0 (3) C25---P2---C19---C24 154.1 (2) P1---N1---C1---N2 137.68 (19) C6---P2---C19---C24 −91.9 (2) C6---N1---C1---C2 147.4 (2) C24---C19---C20---C21 0.4 (4) P1---N1---C1---C2 −43.9 (3) P2---C19---C20---C21 −178.3 (2) N2---C1---C2---C3 −1.3 (4) C19---C20---C21---C22 −0.5 (4) N1---C1---C2---C3 −179.6 (2) C20---C21---C22---C23 0.4 (5) C1---C2---C3---C4 0.2 (4) C21---C22---C23---C24 −0.2 (5) C2---C3---C4---C5 0.5 (5) C22---C23---C24---C19 0.1 (4) C1---N2---C5---C4 −1.0 (5) C20---C19---C24---C23 −0.2 (4) C3---C4---C5---N2 −0.1 (5) P2---C19---C24---C23 178.5 (2) C1---N1---C6---P2 −75.7 (3) N3---P2---C25---C26 −157.5 (2) P1---N1---C6---P2 116.67 (18) C19---P2---C25---C26 87.3 (2) N3---P2---C6---N1 −97.21 (19) C6---P2---C25---C26 −27.7 (2) C19---P2---C6---N1 19.7 (2) N3---P2---C25---C30 23.1 (2) C25---P2---C6---N1 134.16 (18) C19---P2---C25---C30 −92.1 (2) N1---P1---C7---C12 79.6 (3) C6---P2---C25---C30 152.9 (2) C13---P1---C7---C12 −29.6 (3) C30---C25---C26---C27 −0.6 (4) N1---P1---C7---C8 −99.3 (2) P2---C25---C26---C27 −179.9 (2) C13---P1---C7---C8 151.5 (2) C25---C26---C27---C28 2.1 (4) C12---C7---C8---C9 −1.3 (5) C26---C27---C28---C29 −2.0 (5) P1---C7---C8---C9 177.7 (3) C27---C28---C29---C30 0.4 (5) C7---C8---C9---C10 0.8 (6) C28---C29---C30---C25 1.2 (5) C8---C9---C10---C11 0.1 (7) C26---C25---C30---C29 −1.0 (4) C9---C10---C11---C12 −0.5 (6) P2---C25---C30---C29 178.3 (2) C10---C11---C12---C7 −0.1 (5) C35---N4---C31---N3 −176.9 (3) C8---C7---C12---C11 1.0 (4) C35---N4---C31---C32 2.4 (5) P1---C7---C12---C11 −177.9 (2) P2---N3---C31---N4 6.4 (3) N1---P1---C13---C14 7.1 (3) P2---N3---C31---C32 −172.9 (2) C7---P1---C13---C14 114.2 (3) N4---C31---C32---C33 
−1.4 (5) N1---P1---C13---C18 179.8 (2) N3---C31---C32---C33 177.8 (3) C7---P1---C13---C18 −73.1 (3) C31---C32---C33---C34 −0.5 (6) C18---C13---C14---C15 0.4 (5) C32---C33---C34---C35 1.4 (7) P1---C13---C14---C15 173.0 (2) C31---N4---C35---C34 −1.5 (6) C13---C14---C15---C16 −0.5 (5) C33---C34---C35---N4 −0.4 (7) ----------------------- -------------- ----------------------- -------------

Hydrogen-bond geometry (Å, °) {#tablewraphbondslong}
=============================

*Cg* is the centroid of the C7--C12 ring.

--------------------- --------- --------- ----------- ---------------
*D*---H···*A*         *D*---H   H···*A*   *D*···*A*   *D*---H···*A*
C32---H32···N2^i^     0.93      2.71      3.577 (3)   155
C21---H21···N3^ii^    0.93      2.73      3.569 (4)   150
C28---H28···*Cg*^iii^ 0.93      2.87      3.603 (3)   136
--------------------- --------- --------- ----------- ---------------

Symmetry codes: (i) *x*, *y*+1, *z*; (ii) −*x*+1/2, *y*−1/2, −*z*+1/2; (iii) −*x*, *y*−1, −*z*+1/2.
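The tabulated *D*···*A* separations follow from the *D*---H and H···*A* distances and the *D*---H···*A* angle by the law of cosines at the H atom. A quick check (note the angles are rounded to whole degrees in the table, so agreement is only to ~0.01 Å):

```python
import math

def d_dot_a(dh, ha, angle_deg):
    """D...A distance from D-H, H...A and the D-H...A angle (law of cosines at H)."""
    return math.sqrt(dh**2 + ha**2 - 2 * dh * ha * math.cos(math.radians(angle_deg)))

# Rows of the hydrogen-bond table: (contact, D-H, H...A, angle, reported D...A)
contacts = [
    ("C32-H32...N2", 0.93, 2.71, 155, 3.577),
    ("C21-H21...N3", 0.93, 2.73, 150, 3.569),
    ("C28-H28...Cg", 0.93, 2.87, 136, 3.603),
]
for name, dh, ha, ang, reported in contacts:
    calc = d_dot_a(dh, ha, ang)
    print(f"{name}: calculated {calc:.3f} A vs reported {reported} A")
```

Each calculated value lands within about 0.006 Å of the tabulated *D*···*A*, which is consistent with the whole-degree rounding of the angles.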
They Call Her… Cleopatra Wong There are certain films that become associated with one indelible image. For example, it’s hard to think of North by Northwest without conjuring a mental picture of Cary Grant being chased by that crop-duster, or of Singin’ in the Rain without immediately seeing Gene Kelly hanging off of that lamppost. In the case of the Filipino action film They Call Her… Cleopatra Wong, the image that invariably comes to mind – for those familiar with the film, at least – is that of comely star Marrie Lee brandishing an imposing looking, quadruple-barreled, sawed-off shotgun while dressed in a nun’s habit and wimple (thanks, El Santo). Marrie’s character is wearing that get-up for the purpose of infiltrating a gang of criminals who are also disguised as nuns. Though, of course, knowing the context of the image doesn’t do anything to reduce its fetishistic sexual charge. In fact, to my mind, the whole scenario is a perfect example of a filmmaker trying to have it both ways. In a country as deeply hit by the Catholic whammy as the Philippines, a nun’s habiliments carry a not inconsiderable amount of symbolic freight, and producer/writer/director Bobby Suarez here uses the criminals’ sacrilegious employment of that garb as a “how bad are they” demonstration of the depths of those criminals’ villainy, but then also employs it in much the same manner himself in order to titillate and scandalize his audience. This is a classic exploitation movie gambit, of course, but I think that, in this case, it’s also representative of an ambivalence that’s characteristic of both modern Catholicism (how many “ex-Catholics” do you know who aren’t still as deeply affected by the religion as they were when they were practicing it?) and of Filipino culture. 
After all, only the Philippines could have produced a movie like Elwood Perez’ Silip, a film that, for all intents and purposes, seems to be a screed against religious-based sexual repression and its resultant perversion of desire, but which couches its message in so much harrowing imagery of blood sacrifice and martyrdom that it’s difficult to fully enjoy the abundant full-frontal nudity and near-hardcore sex that it puts in service of expressing it. True, it’s still possible for the dedicated viewer to appreciate the naked form of the film’s gorgeous star Maria Isabel Lopez, but not without paying a certain amount of penance. It’s as if we’re seeing played out in the film the battle between the desire to cast off the punitive, bloody-minded version of Catholicism inherited from the country’s Spanish colonizers and the deeply ingrained practice of that religion forged from hundreds of years of observance. Thankfully, Cleopatra Wong’s version of this ambivalence is not so jarring as to beg inquiry into its cultural roots, with the result that we can simply enjoy it as a film about a hot chick who dresses up like a church lady and blows shit up. This is clearly how both God and Bobby Suarez intended it. They Call Her… Cleopatra Wong was the second film to be produced by Suarez’s BAS Film Productions, and the first to be directed by him — although he did so under the pseudonym George Richardson, presumably to enhance the film’s export-friendly “international” feel. Like the previous BAS production, The Bionic Boy, the film relied on partial financial backing from Singapore, and drew from the local Singaporean talent pool for its star. In the case of The Bionic Boy, that star was a nine-year-old karate champion by the name of Johnson Yap, and, in the case of Cleopatra Wong, it was a seventeen-year-old typist with precious little film experience by the name of Doris Young.
Young was chosen by Suarez from over three hundred applicants drawn from casting calls held in Singapore, Hong Kong, Malaysia and the Philippines, and was soon re-christened by the director with the name Marrie Lee — an attempt by Suarez to encourage associations with Bruce Lee in the minds of his intended audience. I couldn’t find any information about what kind of martial arts background Young might have had at the time of making Cleopatra Wong, aside from whatever training she was given in preparation for the film, but it really doesn’t matter. The goal with the film was to combine elements of Hong Kong action movies and the Bond films, while at the same time — and most obviously — creating an Asian counterpart to blaxploitation heroines like Tamara Dobson’s Cleopatra Jones. And in terms of the authenticity of its kung fu action, the finished product bears a far stronger family resemblance to American blaxploitation cinema than to any of its other inspirations. This is the type of martial arts film where the emphasis is placed firmly on striking bad-ass looking poses as opposed to actually executing any convincing looking moves. In fact, one actor in particular – playing a track-suited crime boss who, in the English dub, welcomes Cleo to his “viller” – exhibits a style very similar to that of the type of over-enthusiastic Kung Fu Theater fan you’d see practicing his moves in a 7-11 parking lot back in the day. In any case, Young deserves to be commended for the fact that she reportedly performed the majority of her own stunts in the movie, and ended up with her fair share of scrapes, bruises and powder burns to prove it. Young’s character, the titular Ms. Wong, is a Singapore-based Interpol agent who’s called upon by her superiors to investigate a mysterious counterfeiting ring. Said counterfeiters, it seems, are seeking to undermine the ASEAN nations by flooding their economies with fake currency, though from where and through what channels is unknown.
Leaving her latest boy-toy in her hotel room in Manila, Cleo cuts her vacation short and heads back to Singapore to start the hunt. Her first move is to try and draw the attention of the criminals by going into a department store and buying an expensive watch with a wad of fake cash. Fortunately, it’s obviously a slow news day in Singapore, and once she’s apprehended by the store’s security guards — who easily identify the money as counterfeit despite us just being told how completely indistinguishable it was from the real thing — it ends up getting splashed across the next day’s front page headlines. And they don’t even try to sex it up. The headline just reads “WOMAN NABBED IN DEPT. STORE”, which suggests to me that the paper’s “B” section is probably filled with breathless accounts of people short tipping in restaurants and staring threateningly at small dogs. This is obviously what passes for lurid criminal exploits in a country where you can get arrested for chewing gum. Anyway, Cleo’s newfound notoriety as “Woman Nabbed in Dept. Store” leads to her being abducted and taken to the “viller” of Argo, the aforementioned track-suited boss of the counterfeiting ring’s Singapore operation. Given her reputation as a woman who allegedly tries to obtain watches through illegal means, Argo naturally wants to see a demonstration of her kung fu skills, and so a pair of fights are staged on the spot. The first fight, between Cleo and a trio of middle-aged wrestlers, is settled when Cleo whips off her skirt at a key moment to reveal the bright yellow hot pants jumpsuit she’s wearing underneath, with the result that her opponents become too preoccupied with making boner eyes at her to evade her lightning fists. The next involves a couple dozen karate guys, and ends when Cleo does one of those reverse-motion assisted leaps over the viller wall. 
Her encounter with Argo having provided nothing more than the opportunity for a couple of pointless but entertaining action set pieces, Cleo next follows a lead provided by her superiors to the film’s next exciting international location, Hong Kong. Despite the film’s Asian pedigree, said locale is introduced with the kind of “ching chong chopsticks” musical cue you’d expect to hear in a Mr. Wong movie from the thirties — whereas elsewhere Cleopatra Wong’s score is of a jaunty variety situated squarely in the no man’s land between blaxploitation funk and seventies shopping mall music. Cleo’s HK jaunt leads to the discovery that the phony bills are being smuggled inside jars of strawberry jam that are being shipped in from the Philippines. After a return to the P.I., several changes of outfits, and a couple more shambolic kung fu battles, Cleo’s diligent detective work leads to her uncovering the counterfeiters’ hideout: a Catholic monastery located on a remote hillside. The bad guys, we will see, have imprisoned the nuns who are the rightful dwellers of the place, and are using its grounds to both print the fake bills and produce the jars of delicious breakfast spread in which they’re being smuggled. Cleo’s suspicions are confirmed when she observes, in the course of doing some helicopter surveillance, that the nuns who patrol the grounds are dudes, and that they are concealing automatic weapons under their habits. In those moments in They Call Her… Cleopatra Wong when Doris Young isn’t engaging in faux kung fu battles, or gamely performing motorcycle stunts, it can’t be said that she exactly burns up the screen with her charisma and sex appeal. This is not to say, however, that she lacks presence entirely. It’s just that hers is a fairly low key presence.
Overall I’d say that she comes across as being pleasant and likable, though that impression on my part might just as easily have come from viewing her current website, where she devotes more space to her two dogs than to her entire film career. In any case, her low intensity performance suits the film well, because, compared to more bloody, revenge-minded action fare like Suarez’s later One Armed Executioner, it’s a fairly lighthearted affair, obviously intended not to be taken too seriously by anyone. To this end, Suarez does a passable job of keeping things breezing along, though he makes a mistake all too common in low budget action films: that of not distinguishing between action and mere movement. In the Hong Kong sequence, we’re treated to a scene of Cleo tailing a very slow moving truckload of strawberry jam in what seems to be real time — and every time we think that the scene has ended, we find that we’re only cutting to another leg of the journey. In addition, the lengthy sequence in which Cleo escapes from Argo and his goons includes a “chase” between two aerial cable cars that depends entirely for its suspense on its audience being ignorant of how aerial cable cars actually function. Still, these are just isolated instances, as the movie’s thrills are for the most part adequately thrilling — if only by virtue of their silliness, or of Doris Young’s dogged commitment to selling them. They Call Her… Cleopatra Wong’s crowning action set piece, the one that we’ve all tuned in for, gratifyingly takes up almost the entirety of the film’s final act. Cleo returns to her superior and tells him of her discovery of the gang’s hideout, hoping to secure a warrant so that she can make a search of the monastery. Unfortunately, her boss refuses, saying that there’s not enough evidence. Concerns over separation of church and state are also raised.
Given such very understandable sensitivities, and the corresponding need to proceed with tact and caution, it is determined that the only alternative is to stage an armed, guerrilla-style raid on the monastery, shooting all of the bad guys inside and then blowing it up with plastic explosives once done. To this end, Cleo recruits four generously mustached cohorts — including the One-Armed Executioner himself, Franco “Chito” Guerrero — to assist her. Soon the drop is made, and the five, after making quick work of some guards on the monastery’s periphery, have all kitted themselves out as gun-wielding brides of Christ, ready to rain hell on the godless gang of funny-money makers.

At one point during the closing moments of They Call Her… Cleopatra Wong, I paused to reflect upon the fact that I had been watching mustached men dressed as nuns shooting each other in slow motion for what seemed like twenty minutes, and that, while I had been moderately entertained by the spectacle, it certainly hadn’t inspired anything close to the stunned incredulity that such a scenario would seem to warrant. It is at times like these, I reckon, that I need to watch something made in Japan during the seventies — preferably directed by Norifumi Suzuki — in order to stir my jaded sensibilities back into a state appropriate to a sensate human being with a fully developed moral core. So preoccupied did I become with this troubling state of affairs that I almost failed to register the film’s climax, in which Cleopatra Wong chases the lead villain on a tricked-out, MegaForce-worthy motorcycle equipped with rear-mounted machine guns and then, in the film’s lone instance of conspicuous production value, uses her archery skills to blow up his helicopter with a rocket-tipped arrow.
Bobby Suarez would bring Doris Young and Cleopatra Wong back to the big screen, shortly after the debut of They Call Her… Cleopatra Wong, in Dynamite Johnson, a film that was essentially an all-purpose sequel to both Cleopatra Wong and The Bionic Boy, in which Young costarred with Johnson Yap. After that, production on a third Cleopatra Wong film, Code Name: The Destroyers, was begun in Malaysia, but was hastily aborted after things went sour with the Malaysian backers and Suarez and crew had to flee the country. To recoup the loss from that debacle, Suarez then churned out an even-cheaper-than-usual final entry in the series, Pay or Die, which he hastily sold off at a bargain price. Soon after that, in 1981, Doris Young, aka Marrie Lee, hung up her wimple and shotgun for good and retired from the entertainment business. Today she runs a business selling healthcare products, but obviously — judging from her apparent willingness to cheerfully hold forth on the subject — looks back on her days as Cleopatra Wong with fondness. And that fact adds yet another dimension to that oh-so-famous image of Marrie Lee. Because we can now gaze upon it, happy in the knowledge that the woman behind it got in, made her contribution, and then got out before the price of fame became too much. After all, any story concerning a shotgun-wielding nun deserves a happy ending, and we can all thank Doris Young for giving us one.

hi, basically i dont have comment on the review, maybe i need to watch the whole movie first. i just want to say that i am really interested on knowing more things regarding Mr. Suarez. for me i think he is genius, anyway that is my opinion. i have not heard so much of him on the industry where he worked with. how come that this guy who have the ability to do create such movies that has a guts of biting an international markets doesnt have much recognition from our film industry? (or maybe i just didnt know, maybe?)
anyway i really thinks his life and works are interesting.

Alvin, of coz I remember you. I wonder if you will get to read this as it is over a year since your comment but if u do, please leave me a message at the enquiry page on my website cleopatrawong.com… I do hope we can get in touch again. I was in Bulacan this year Feb to attend Bobby’s funeral. Got to meet some of the cast and crew but it was like 30 years since I last met them. Anyway I hope to hear from you again…Cheers Cleo
805 F.Supp. 126 (1992) Benjamin J. ANDREWS, Jr., Frances C. Andrews, Plaintiffs, v. UNITED STATES of America, Defendant. No. 90-CV-724A. United States District Court, W.D. New York. September 17, 1992. *127 Kenneth Bersani, Gough, Skipworth, Summers, Eves & Trevett, PC, Rochester, N.Y., for plaintiffs. Steven E. Cole, U.S. Dept. of Justice, Tax Div., Washington, D.C., for defendant. ORDER ARCARA, District Judge. This Court, having carefully reviewed Magistrate Judge Carol E. Heckman's Report and Recommendation of August 13, 1992, as well as the pleadings and materials submitted by both parties; and no objections having been timely filed to the *128 Magistrate Judge's Report in the above-captioned matter, it is hereby ORDERED, that pursuant to 28 U.S.C. § 636(b)(1), the Magistrate Judge's Report and Recommendation is accepted in its entirety. IT IS FURTHER ORDERED that the Government's motion for summary judgment is granted. Further, that the Clerk of the Court is directed to enter final judgment in favor of the Government and against the plaintiff. It is so ordered. REPORT AND RECOMMENDATION HECKMAN, United States Magistrate Judge. In this case, Plaintiffs seek to estop the Government from collecting taxes which are admittedly due, arguing that they relied on the erroneous advice of an IRS employee. For the reasons set forth below, it is recommended that summary judgment be granted to the Government. This matter was referred to the undersigned by the Hon. Richard J. Arcara to hear and report on Defendant's motion for summary judgment, pursuant to 28 U.S.C. § 636(b)(1)(B). The following constitutes the undersigned's proposed findings and recommendations for the disposition of the motion. FACTS In accordance with Local Rule 25, the Government filed a Statement of Undisputed Facts as part of its motion for summary judgment. Plaintiffs, however, failed to comply with Local Rule 25. 
Since Plaintiffs have not controverted Defendant's statement, the material facts set forth in that statement are deemed admitted for purposes of this motion. Rule 25, Local Rules of the Western District of New York. This suit was filed under § 7422 of the Internal Revenue Code to recover income taxes and interest alleged to be erroneously collected by the IRS for the tax years 1978 through 1982. Plaintiff Benjamin J. Andrews, Jr. is an attorney, and Plaintiff Frances C. Andrews is his wife. In their original Form 1040 for tax year 1980, Plaintiffs claimed a loss of $27,321 for an investment in a partnership known as "Lighthouse Hill Associates" ("LHA"). The Plaintiffs claimed a deduction of $30,137 in 1979 for this partnership, and of $52,500 in 1978 for another partnership loss. The Internal Revenue Service subsequently audited these returns, as well as the returns for 1981 and 1982. On April 6, 1984, the Andrews' received a statutory Notice of Deficiency for the year 1980, stating that Plaintiffs owed an increase of tax for that year in the amount of $10,953. This Notice of Deficiency was based primarily upon the full disallowance of the loss Plaintiffs had claimed for LHA (Gov't Ex.A, attached to Defendant's Memorandum in Support of Motion for Summary Judgment, Item 9). Plaintiffs then petitioned the United States Tax Court for a redetermination of this deficiency, claiming that the IRS disallowance of their $27,321 deduction relating to LHA was erroneous. On October 31, 1984, this petition was dismissed as untimely (id., Gov't Exs.B & C). On May 10, 1985, the IRS assessed the $10,953 tax deficiency (plus interest) against the Plaintiffs for 1980. Plaintiffs then filed a Form 1040X, Amended U.S. 
Individual Income Tax Return for 1980, wherein they claimed that their tax assessment of $10,953 should be reduced to $8,347, representing allowance by the IRS of a deduction for Plaintiffs' out-of-pocket expenditures for the partnership, as well as a deduction for $672 for income which was turned over to Mr. Andrews' law firm (then Saperston & Day) (id., Gov't Ex.D). According to this Form 1040X, Plaintiffs still owed an additional $5,264. This return was accompanied by a cover letter from Plaintiff Benjamin Andrews, which stated: After you have had an opportunity to examine [the amended return], I trust that you will forward a revised bill, including the breakdown of interest owed. Id. In the meantime, the IRS began to examine Plaintiffs' 1978, 1979, 1981 and 1982 *129 returns as well. Eventually, all five years were considered together. On September 26, 1985, Plaintiffs accepted an offer by the IRS to settle their 1980 case for allowance of a deduction in the amount of their out-of-pocket expenses of $7,500 relating to LHA (id., Gov't Ex.E). Plaintiffs also accepted the IRS offer to settle their 1979 tax case for an allowance of a $10,000 deduction for out-of-pocket expenses. On October 18, 1985, the IRS sent the Plaintiffs a letter informing them that their claim for partial abatement of their 1980 income tax liability had been allowed (id., Gov't Ex.F). Enclosed with this October 18, 1985 letter was a Form 1902-B, Report of Individual Income Tax Examination Changes, reflecting allowance of a deduction for Plaintiffs' out-of-pocket investment of $7,500 to LHA and $673 for non-employment compensation (id., Ex.F, p. 2). The October 18, 1985 letter clearly stated: A partial abatement is shown on the report enclosed. The remaining balance is due and payable immediately. (id., Ex.F, p. 1). 
On the next page, the computation sheet stated that the assessment for 1980 would be reduced by $4,544 as a result of the adjustment, but on another line it erroneously stated that there had been a $4,544 "overpayment." It is this erroneous statement, made three weeks after the settlement was agreed upon, that Plaintiffs rely upon for their claim that all of the liability arising from the settlement should be abated. On November 1, 1985, Plaintiffs signed Form 906, "Closing Agreement on Final Determination Covering Specific Matters," which was subsequently signed by the IRS on December 6, 1985 (id., Gov't Ex.G). In that agreement, the Plaintiffs were allowed a deduction for out-of-pocket expenses in the amount of $7,500 for investments in LHA for 1980 and $10,000 for 1979. Plaintiffs also executed a closing agreement for 1978 wherein they were allowed out-of-pocket expenses of $15,000 relating to the partnership in that year, as well as closing agreements for 1981 and 1982. Plaintiffs also executed and submitted the examination reports for 1978 through 1982 (id., Gov't Exs.I-M). Plaintiffs contend, and the Government does not dispute, that they were verbally advised by the IRS agent assigned to their case that the total cost of the settlement, including interest, would be $27,000.[1] In fact, at oral argument, the Government stipulated for purposes of this motion that the IRS employee at some point advised the taxpayers orally that the total amount owed would be $27,000 for all five years. Plaintiffs paid the $27,000, believing that this would resolve the matter entirely. In July of 1986, the IRS sent Plaintiffs a bill for an additional $12,465 for 1980. This amount was based on the original assessment of $10,953, plus interest, less the abatement of $4,544 and corresponding abated interest. On September 25, 1986, Plaintiffs filed a second Form 1040X, Amended U.S. 
Individual Income Tax Return, for the years 1978 through 1982, which stated in pertinent part as follows (id., Gov't Ex.H): At the time the Andrews settled their case, they requested from the Service a calculation of the tax and interest due for all five years, which was approximately $27,000, and paid this amount prior to the end of 1985. In 1986, the Andrews were contacted by this Service concerning additional amounts allegedly owed for 1980, 1981 and 1982. They had paid the additional amounts for 1981 and 1982. The amount allegedly owed for 1980 is in suspense. The amounts allegedly owed for 1980, 1981 and 1982 are in excess of those amounts which the Andrews were told would be owed when they settled the case. The apparent discrepancy is one which was created by the Service. The document which the Andrews signed for 1980 shows that an overpayment is due them in the amount of $4,544. The Service now alleges that an amount in excess of $12,000 is in fact owed for that year. *130 This was not the statement made by the Service to the Andrews at the time that they settled. Instead, they were advised by the Service that the net adjustments would result in an overpayment in 1980 of $4,544, as is reflected in the attached report dated 10/17/85. DISCUSSION Plaintiffs now claim that they were induced to enter into closing agreements with the Internal Revenue Service for the years 1978 through 1982 based on misrepresentations by the IRS as to the total amount due under the settlement agreement with respect to the years in question. The United States has moved for summary judgment on the basis that Plaintiffs cannot, as a matter of law, set aside the closing agreements in question. Furthermore, the United States argues that even if the closing agreements were to be set aside, the Plaintiffs have not claimed, nor may they now attempt to claim, that they are entitled to allowance of deductions in excess of the amounts they were allowed under the settlement agreement. 
Finally, since Plaintiffs' complaint sounds in estoppel, the Government argues that estoppel will not lie against the United States in matters involving the United States Treasury. Pursuant to Rule 56(c) of the Federal Rules of Civil Procedure, summary judgment "shall be rendered forthwith if the pleadings, depositions, answers to interrogatories, and admissions on file, together with affidavits, if any, show that there is no genuine issue as to any material fact and that the moving party is entitled to summary judgment as a matter of law." Fed.Rule Civ.P. 56(c). [T]he plain language of Rule 56(c) mandates the entry of summary judgment, after adequate time for discovery and upon motion, against a party who fails to make a showing essential to that party's case, and on which that party will bear the burden of proof at trial. Celotex Corp. v. Catrett, 477 U.S. 317, 322, 106 S.Ct. 2548, 2552, 91 L.Ed.2d 265 (1986). As already discussed, the record in this case leaves all essential facts undisputed, making this case strictly a question of law. The Internal Revenue Code states that the Commissioner "may enter into a closing agreement which, if approved by the designated official, is final and conclusive in the absence of fraud, malfeasance, or misrepresentation." 26 U.S.C. § 7121. For a closing agreement to be set aside for misrepresentation of a material fact, there must be a mutual mistake of fact which is more than a misstatement. "[I]nnocent mistakes should be buried in a closing agreement." Commissioner v. Ingraham, 87 F.2d 915, 916 (3rd Cir.1937). A mere mistake of fact or law, whether unilateral or mutual, no matter how material, is not a misrepresentation. Id.; see also, Cramp Shipbuilding Company v. Commissioner, 14 Tax Ct. 33 (1950). It is apparent from a review of Form 1902-B (Item 9, Govt. Ex.F) that it contains a mistake. 
On the cover letter, the document indicates that "A partial abatement is shown on the report enclosed. The remaining balance is due and payable immediately." On the next page, the report contains the mechanical computations showing an adjustment in income in favor of the taxpayer in the amount of $8,173 and an over-assessment in tax in the amount of $4,544. Then, the over-assessment is again entered under the column that is described as "overpayment." It is readily apparent that the IRS employee merely entered this figure on the wrong line and that additional tax was due for the year 1980. Furthermore, the record in this case shows that Plaintiffs were well aware that they had a net tax due for the year 1980. Plaintiffs agreed to settle their 1978 to 1982 tax cases with the IRS on September 26, 1985, some three weeks prior to the date of Form 1902-B. Form 1902-B merely constituted the mechanical computation based upon the prior settlement agreement. Thus, the alleged misrepresentation made by the IRS could have had no effect upon the Plaintiffs' decision to agree to settle that issue, and cannot provide a basis for a *131 refund of amounts paid pursuant to the settlement. Further support for this view is found in the first Form 1040X (Item 9, Gov't Ex.D), filed by Plaintiffs in June of 1985. In their cover letter enclosing the form, Plaintiffs recognized that, even if the IRS allowed their proposed deductions in full, additional tax and interest would remain due and owing for 1980. Thus, even if this error amounted to a "misrepresentation," it is apparent that it was not material because the Plaintiffs had previously agreed to the adjustments in the closing agreements and previously acknowledged that additional tax was due.
In response, Plaintiffs have attempted to obfuscate the facts by intimating, but not specifically stating, that the alleged misrepresentations may have occurred prior to September 26, 1985, the date Plaintiffs accepted IRS's offer to settle their 1980 case (Item 9, Gov't Ex.E). However, when specifically questioned on this point at oral argument, Plaintiffs' counsel could only cite the affidavit of Benjamin J. Andrews, dated October 3, 1991, in support of this position. Nothing in the Andrews affidavit provides any time frame when the alleged misrepresentation was made. Moreover, the attachment to Form 1040X (id., Gov't Ex.H), filed by Plaintiffs and quoted above at length, states that the Plaintiffs were told the total cost would be $27,000 at the time that they settled their case. Plaintiffs also contended at oral argument that the document by which Plaintiffs accepted the IRS's offer of settlement (Item 9, Gov't Ex.E) was executed with the implicit understanding that it would not be binding. However, there is nothing in the Andrews affidavit or in any other part of the record which would support this position. To the contrary, as already noted, in June of 1985, Plaintiffs clearly contemplated additional tax due and owing for 1980. But, even if Plaintiffs had successfully disputed the timing of the alleged misrepresentations, such that they could establish a basis for contending that the closing agreements should be set aside, it would nevertheless be insufficient to defeat the motion for summary judgment. Plaintiffs do not assert that their deductions for losses relating to LHA should have been allowed by the IRS in full. See Jones v. Liberty Glass Co., 332 U.S. 524, 531, 68 S.Ct. 229, 232, 92 L.Ed. 142 (1947). As a matter of law, Plaintiffs cannot establish an overpayment which should be refunded to them. In establishing an overpayment, Plaintiffs are limited to the grounds set forth in their claims for a refund. See, e.g., United States v. 
Felt & Tarrant Co., 283 U.S. 269, 51 S.Ct. 376, 75 L.Ed. 1025 (1931); Ronald Press v. Shea, 114 F.2d 453 (2d Cir.1940). In the second Form 1040X, filed on September 25, 1986, Plaintiffs' sole contention is an estoppel claim — i.e., "[t]he amounts allegedly owed for 1980, 1981 and 1982 are in excess of those amounts which the Andrews were told would be owed when they settled the case" (Item 9, Gov't Ex.H). Thus, Plaintiffs contend that they are entitled to a refund of taxes due for 1980 based on an alleged miscalculation of tax due with respect to the settlement agreement for 1978 to 1982, such that they were induced to settle with the IRS for those years based on incorrect information provided by the IRS. This claim sounds only in estoppel. This claim must fail for a number of reasons. First, the essential elements of estoppel are missing. Heckler v. Community Health Services, 467 U.S. 51, 59, 104 S.Ct. 2218, 2223, 81 L.Ed.2d 42 (1984). The settlement was agreed upon before the computation was made and therefore there could be no reasonable reliance. In addition, any misrepresentation was one of law — i.e., the legal computation of the agreed upon deduction — not one of fact. Finally, Plaintiffs cannot show that they changed their position for the worse in reliance on advice from the IRS. To the contrary, all indications are that Plaintiffs did know of this IRS mistake because they acknowledged that taxes were due as early as June of 1985 (see Ex.D). *132 But even if the required elements of estoppel were present in this case, estoppel will not lie against the United States under the circumstances presented. Office of Personnel Management v. Richmond, 496 U.S. 414, 110 S.Ct. 2465, 110 L.Ed.2d 387 (1990). There, the Supreme Court held that erroneous advice given by a government employee concerning a claimant's eligibility for disability benefits did not estop the Government from denying benefits not otherwise permitted by law. 
The Court noted that it was undisputed that the award sought by the claimant was in direct contravention of the federal statute. Relying on the Appropriations Clause of the Constitution, the Supreme Court refused to apply the equitable doctrine of estoppel because the payment of public funds had not been authorized by Congress. Id., 496 U.S. at 426, 110 S.Ct. at 2472. The Court stated: Extended to its logical conclusion, operation of estoppel against the Government in the context of payment of money from the Treasury could in fact render the Appropriations Clause a nullity. If agents of the Executive were able, by their unauthorized oral or written statements to citizens, to obligate the Treasury for the payment of funds, the control over public funds that the Clause reposes in Congress in effect could be transferred to the Executive. . . . . . As for monetary claims, it is enough to say that this Court has never upheld an assertion of estoppel against the Government by a claimant seeking public funds. In this context there can be no estoppel, for courts cannot estop the Constitution. Id., 496 U.S. at 428, 434, 110 S.Ct. at 2473, 2476; accord, Heckler v. Community Health Services, 467 U.S. 51, 104 S.Ct. 2218, 81 L.Ed.2d 42 (1984) (government cannot be estopped from recovering medicare overpayments even though recipient relied on express authorization of government agent in making expenditures). This case is on all fours with Richmond and Heckler. Here, Plaintiffs cannot show an entitlement under the Internal Revenue Code to the refund they seek. Since Plaintiffs' claim has not been authorized by an Act of Congress, it is prohibited by the Appropriations Clause. The final argument made by Plaintiffs relates to the statute of limitations. Plaintiffs consented to waive the statute of limitations for assessment under 26 U.S.C. 
§ 6501(a) by filing Forms 872-a (Special Consent to Extend the Time to Assess Tax), which consents were terminated by assessments of tax deficiencies for 1978 to 1982. Plaintiffs argue that those assessments were illegal or erroneous, and that because the consents were terminated, it is now too late to make a correct assessment. This argument is without merit. Plaintiffs must demonstrate that they have overpaid a certain amount of tax in order to be entitled to a refund of that overpayment. Plaintiffs are not entitled to have the entire assessment invalidated. United States v. Schroeder, 900 F.2d 1144, 1148 (7th Cir.1990). The case cited by Plaintiffs, Roszkos v. Commissioner, 850 F.2d 514 (9th Cir.1988), does not support their theory. Roszkos merely held that an invalid notice of deficiency issued by the IRS did not terminate the taxpayer's consent to extend the statute of limitations on assessment. It has no applicability to the facts before this Court. Moreover, Section 6501 does not forbid the Government from collecting and retaining taxes voluntarily paid without assessment and which do not constitute an overpayment. Ewing v. U.S., 914 F.2d 499, 503-504 (4th Cir.1990). Thus, even if the assessments were invalid, because the Plaintiffs voluntarily paid the tax and have not established any overpayment, the Government is entitled to retain those amounts as a matter of law. For the foregoing reasons, it is recommended that summary judgment be granted to the Government. Pursuant to 28 U.S.C. § 636(b)(1), it is hereby ORDERED, that this Report and Recommendation be filed with the Clerk of the Court. *133 ANY OBJECTIONS to this Report and Recommendation must be filed with the Clerk of this Court within ten (10) days after being served with this Report and Recommendation in accordance with the above statute, Fed.R.Civ.P. 72(b) and Local Rule 30(a)(3). 
Failure to file objections within the specified time or to request an extension of such time waives the right to appeal the District Court's Order. Thomas v. Arn, 474 U.S. 140, 106 S.Ct. 466, 88 L.Ed.2d 435 (1985); Wesolek, et al. v. Canadair Ltd., et al., 838 F.2d 55 (2d Cir.1988). The parties are reminded that, pursuant to Rule 30(a)(3) of the Local Rules for the Western District of New York, "written objections shall specifically identify the portions of the proposed findings and recommendations to which objection is made and the basis for such objection and shall be supported by legal authority." Failure to comply with the provisions of Rule 30(a)(3), or with the similar provisions of Rule 30(a)(2) (concerning objections to a Magistrate Judge's Decision and Order), may result in the District Court's refusal to consider the objection. Let the Clerk send a copy of this Order and a copy of the Report and Recommendation to the attorneys for the Plaintiff and the Defendants. SO ORDERED. NOTES [1] The closing documents themselves do not specify the total tax bill.
<?xml version="1.0" encoding="UTF-8"?> <interface> <!-- interface-requires gtk+ 2.12 --> <!-- interface-requires kiwiwidgets 0.0 --> <!-- interface-naming-policy toplevel-contextual --> <object class="GtkWindow" id="DeviceSettingsEditor"> <property name="can_focus">True</property> <property name="default_width">440</property> <property name="default_height">250</property> <child> <object class="GtkTable" id="table1"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="border_width">6</property> <property name="n_rows">5</property> <property name="n_columns">4</property> <property name="column_spacing">6</property> <property name="row_spacing">6</property> <child> <object class="GtkLabel" id="label5"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="xalign">1</property> <property name="label" translatable="yes">Device Type:</property> </object> <packing> <property name="x_options">GTK_FILL</property> <property name="y_options">GTK_FILL</property> </packing> </child> <child> <object class="GtkAlignment" id="alignment1"> <property name="visible">True</property> <property name="can_focus">True</property> <child> <object class="GtkHSeparator" id="hseparator1"> <property name="visible">True</property> <property name="can_focus">True</property> </object> </child> </object> <packing> <property name="right_attach">4</property> <property name="top_attach">1</property> <property name="bottom_attach">2</property> <property name="y_options">GTK_FILL</property> </packing> </child> <child> <object class="GtkLabel" id="label6"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="xalign">1</property> <property name="label" translatable="yes">Brand:</property> </object> <packing> <property name="top_attach">2</property> <property name="bottom_attach">3</property> <property name="x_options">GTK_FILL</property> <property 
name="y_options">GTK_FILL</property> </packing> </child> <child> <object class="GtkLabel" id="label7"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="xalign">1</property> <property name="label" translatable="yes">Port:</property> </object> <packing> <property name="top_attach">3</property> <property name="bottom_attach">4</property> <property name="x_options">GTK_FILL</property> <property name="y_options">GTK_FILL</property> </packing> </child> <child> <object class="ProxyComboBox" id="type_combo"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="data_type">int</property> <property name="model_attribute">type</property> </object> <packing> <property name="left_attach">1</property> <property name="right_attach">2</property> <property name="y_options">GTK_FILL</property> </packing> </child> <child> <object class="ProxyComboBox" id="brand_combo"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="data_type">str</property> <property name="model_attribute">brand</property> </object> <packing> <property name="left_attach">1</property> <property name="right_attach">2</property> <property name="top_attach">2</property> <property name="bottom_attach">3</property> <property name="y_options">GTK_FILL</property> </packing> </child> <child> <object class="ProxyComboBox" id="device_combo"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="data_type">str</property> <property name="model_attribute">device_name</property> </object> <packing> <property name="left_attach">1</property> <property name="right_attach">2</property> <property name="top_attach">3</property> <property name="bottom_attach">4</property> <property name="y_options">GTK_FILL</property> </packing> </child> <child> <object class="GtkLabel" id="label10"> <property name="visible">True</property> <property 
name="can_focus">True</property> </object> <packing> <property name="left_attach">2</property> <property name="right_attach">4</property> <property name="y_options">GTK_FILL</property> </packing> </child> <child> <object class="GtkLabel" id="label8"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="xalign">1</property> <property name="label" translatable="yes">Model:</property> </object> <packing> <property name="left_attach">2</property> <property name="right_attach">3</property> <property name="top_attach">2</property> <property name="bottom_attach">3</property> <property name="x_options">GTK_FILL</property> <property name="y_options">GTK_FILL</property> </packing> </child> <child> <object class="GtkLabel" id="label9"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="xalign">1</property> <property name="label" translatable="yes">Host:</property> </object> <packing> <property name="left_attach">2</property> <property name="right_attach">3</property> <property name="top_attach">3</property> <property name="bottom_attach">4</property> <property name="x_options">GTK_FILL</property> <property name="y_options">GTK_FILL</property> </packing> </child> <child> <object class="ProxyComboBox" id="model_combo"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="data_type">str</property> <property name="model_attribute">model</property> </object> <packing> <property name="left_attach">3</property> <property name="right_attach">4</property> <property name="top_attach">2</property> <property name="bottom_attach">3</property> <property name="x_options">GTK_FILL</property> <property name="y_options">GTK_FILL</property> </packing> </child> <child> <object class="ProxyComboEntry" id="station"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="data_type">object</property> <property 
name="mandatory">True</property> <property name="model_attribute">station</property> </object> <packing> <property name="left_attach">3</property> <property name="right_attach">4</property> <property name="top_attach">3</property> <property name="bottom_attach">4</property> <property name="x_options">GTK_FILL</property> <property name="y_options">GTK_FILL</property> </packing> </child> <child> <object class="ProxyCheckButton" id="is_active_button"> <property name="label" translatable="yes">Active</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">False</property> <property name="use_action_appearance">False</property> <property name="draw_indicator">True</property> <property name="data_type">bool</property> <property name="model_attribute">is_active</property> </object> <packing> <property name="left_attach">2</property> <property name="right_attach">4</property> <property name="top_attach">4</property> <property name="bottom_attach">5</property> <property name="y_options">GTK_FILL</property> </packing> </child> <child> <object class="GtkLabel" id="label1"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="xalign">1</property> <property name="label" translatable="yes">Baudrate:</property> </object> <packing> <property name="top_attach">4</property> <property name="bottom_attach">5</property> </packing> </child> <child> <object class="ProxyComboBox" id="baudrate"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="data_type">int</property> </object> <packing> <property name="left_attach">1</property> <property name="right_attach">2</property> <property name="top_attach">4</property> <property name="bottom_attach">5</property> </packing> </child> </object> </child> </object> </interface>
--- abstract: | We study entire functions whose zeros and one-points lie on distinct finite systems of rays. General restrictions on these rays are obtained. In particular, we show that the zeros and one-points can lie on two different lines only for quadratic polynomials and exponential functions. Non-trivial examples of entire functions with zeros and one-points on different rays are constructed, using the Stokes phenomenon for second order linear differential equations. MSC 2010: 30D20, 30D35, 34M40, 30D05. Keywords: entire function, radially distributed value, linearly distributed value, value distribution, linear differential equation, Stokes phenomenon, spectral determinants. author: - 'Walter Bergweiler, Alexandre Eremenko[^1]$\;$ and Aimo Hinkkanen' title: Entire functions with two radially distributed values --- Introduction {#sec1} ============ The zeros of an entire function can be arbitrarily assigned, but in general one cannot assign the preimages of two values [@Nev]. Since this work of Nevanlinna, various necessary conditions which the sets of zeros and $1$-points of an entire function must satisfy were found; see, e.g., [@Ozawa; @RubelYang; @Winkler]. Besides an intrinsic interest, these conditions are relevant to control theory [@Bl; @BE; @E]. In this paper we study the simplest setting when the zeros and $1$-points lie on finitely many rays, or are close to finitely many rays. We begin by recalling some classical results. The word “ray” in this paper will always mean a ray from the origin. For an entire function $f$, we say that a value $a$ is [*radially distributed*]{} if the set $f^{-1}(a)$ is contained in the union of finitely many rays. [(A. Edrei [@Edr])]{} Suppose that all zeros and $1$-points of an entire function $f$ are distributed on a finite set of rays, and let $\omega$ be the smallest angle between these rays. Then the order of $f$ is at most $\pi/\omega$. [(I. N. Baker [@Ba], T. 
Kobayashi [@Kob])]{} Suppose that all zeros of a transcendental entire function $f$ lie on a line $L_1$ and all $1$-points lie on a different line $L_2$ parallel to $L_1$. Then $f(z)=P(e^{az})$ with some $a\in\C$ and a polynomial $P$. We complement the theorem of Baker and Kobayashi with the following result. \[thm1\] Suppose that all zeros of an entire function $f$ lie on a line $L_1$ and all $1$-points lie on a different line $L_2$ intersecting $L_1$. Then $f$ is either of the form $f(z)=e^{az+b}$ or $f(z)=1-e^{az+b}$, or a polynomial of degree at most $2$. As a corollary we obtain that if the zeros and $1$-points of an entire function $f$ lie on two distinct rays, then $f$ is a polynomial of degree at most $2$. It is remarkable that there are non-trivial examples of entire functions whose zeros lie on the positive ray and all $1$-points lie on two rays that are not contained in the real line. \[thm2\] For every integer $m\geq 3$, there exists an entire function $f$ of order $1/2+1/m$ whose zeros are positive and whose $1$-points lie on the two rays $\left\{ z \colon \arg z=\pm 2\pi/(m+2)\right\}$. Theorem \[thm1\] implies that such functions do not exist for $m=2$. Taking $f(z^n)$ we obtain an entire function whose zeros lie on $n$ rays and whose $1$-points lie on $2n$ rays distinct from those rays where the zeros lie. Now we relax the condition that the zeros and $1$-points are radially distributed. We use the standard notation of the theory of entire and meromorphic functions [@GO]. Let $$A=\bigcup_{j=1}^n A_j,\quad A_j=\{ te^{i\alpha_j} \colon t\geq 0\}, \quad 0\leq \alpha_1<\ldots<\alpha_n<2\pi,$$ be a finite union of rays.
For $\varepsilon>0$ let $A_\varepsilon$ be the union of the sectors $$A_\varepsilon=\bigcup_{j=1}^n\{ z \colon |\arg z-\alpha_j|<\varepsilon\}.$$ We say that the $a$-points of an entire function $f$ are [*close*]{} to the set $A$ if, for every $\varepsilon>0$, we have $$n(r,\C\backslash A_\varepsilon,a,f)=o(\log M(r)),\quad r\to\infty,$$ where the left hand side is the number of $a$-points in $\{ z\in \C\backslash A_\varepsilon \colon |z|\leq r\}$ and $M(r)=\max \{ |f(z)| \colon |z|\leq r \} $. Our next result concerns the situation when the zeros are close to a finite union of rays $A$ and the $1$-points are close to a finite union of rays $B$, where the sets $A$ and $B$ are disjoint, apart from the origin. We will assume that the system of rays $A\cup B$ is [*minimal*]{} in the sense that for every ray $\{ t e^{i\alpha}\colon t\geq0\}\subset A\cup B$ there is a sequence $(z_k)$ tending to $\infty$ such that $f(z_k)\in\{0,1\}$ for all $k$ and $\arg z_k\to \alpha$ as $k\to\infty$. \[thm3\] Let $f$ be a transcendental entire function of order $\rho<\infty$ whose zeros are close to $A$ and whose $1$-points are close to $B$, with $A\cap B=\{ 0 \} $. Suppose that the system $A\cup B$ is minimal. Then $$\label{piomega} \rho=\frac{\pi}{\omega}>\frac12,$$ where $\omega$ is the [*largest*]{} angle between adjacent rays in $A\cup B$, and there exists a system of rays $C=\bigcup_{j=1}^{2m}C_j \subset A\cup B$, with $m\geq 1$, partitioning the plane into $2m$ sectors $S_j$ such that $\partial S_j=C_j\cup C_{j+1}$ for $1\leq j \leq 2m-1$ and $\partial S_{2m} = C_{2m} \cup C_1$, with the following properties: $(i)$ The angle of $S_j$ at $0$ is $\pi/\rho$ when $j$ is even, and at most $\pi/\rho$ when $j$ is odd. $(ii)$ Both boundary rays of an odd sector belong to the same set, $A$ or $B$. $(iii)$ There are no rays of $A\cup B$ inside the even sectors. $(iv)$ If there are rays of $A$ inside an odd sector, then the boundary rays of this sector belong to $B$.
If there are rays of $B$ inside an odd sector, then the boundary rays of this sector belong to $A$. $(v)$ If there are no rays of $A\cup B$ in an odd sector, then its opening angle is $\pi/\rho$. The next result – whose proof we will only sketch – shows that Theorem \[thm3\] is best possible. \[thm4\] Let $A$ and $B$ be systems of rays, satisfying conditions $(i)$–$(v)$ of Theorem $\ref{thm3}$ for some $\rho\in(0,\infty)$. Then there exists an entire function of order $\rho$ whose zeros are close to $A$ and whose $1$-points are close to $B$. Moreover, for all finite systems of rays $A$ and $B$ there exists an entire function of infinite order whose zeros are close to $A$ and whose $1$-points are close to $B$. We illustrate our results by considering the case of three distinct rays. First we note the following consequence of Theorems \[thm1\] and \[thm3\]. Let $f$ be a transcendental entire function whose zeros lie on a ray $L_0$ and whose $1$-points lie on two rays $L_1$ and $L_{-1}$. Suppose that the numbers of zeros and $1$-points are infinite. Then $\angle(L_0,L_1)=\angle(L_0,L_{-1})<\pi/2$. Now we consider three rays $$L_j=\{ te^{ij\alpha}:t\geq 0\},\quad j\in\{-1,0,1\},$$ with $\alpha\in (0,\pi)$. Theorems \[thm1\]–\[thm3\] imply the following: Theorem \[thm2\] shows that for certain $\alpha\in(0,\pi/2)$ there exists a transcendental entire function of order $\pi/(2\pi-2\alpha)$ whose zeros lie on $L_0$ while its $1$-points lie on $L_1\cup L_{-1}$. It remains open whether this holds for all $\alpha\in(0,\pi/2)$; see the discussion at the end of the paper on possible generalizations of this theorem. If $\alpha=\pi/2$, then, according to Theorem \[thm1\], there is no transcendental entire function with infinitely many zeros on $L_0$ and infinitely many $1$-points on $L_1\cup L_{-1}$. However, the entire function $f(z)=1/\Gamma(-z)$ has zeros on $L_0$ and $1$-points close to the imaginary axis. This follows from Stirling’s formula. 
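The Stirling-formula claim about $1/\Gamma(-z)$ can be probed numerically. The sketch below is not part of the paper: since Python's math.gamma accepts only real arguments, it implements a complex Gamma function via the standard Lanczos approximation (the helper names cgamma and f are ours), and checks that $f(z)=1/\Gamma(-z)$ vanishes at the non-negative integers, that is, on $L_0$, while staying well away from zero off that ray.

```python
import cmath
import math

# Lanczos coefficients (g = 7, n = 9): a standard choice giving roughly
# double-precision accuracy for Gamma throughout the complex plane.
_LANCZOS = (0.99999999999980993, 676.5203681218851, -1259.1392167224028,
            771.32342877765313, -176.61502916214059, 12.507343278686905,
            -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7)

def cgamma(z):
    """Gamma(z) for complex z, via the Lanczos approximation."""
    z = complex(z)
    if z.real < 0.5:
        # reflection formula: Gamma(z) Gamma(1 - z) = pi / sin(pi z)
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    a = _LANCZOS[0]
    for i in range(1, len(_LANCZOS)):
        a += _LANCZOS[i] / (z + i)
    t = z + 7.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * a

def f(z):
    """The entire function f(z) = 1/Gamma(-z)."""
    # 1/Gamma(w) = sin(pi w) Gamma(1 - w) / pi is entire, so no poles arise.
    w = -complex(z)
    return cmath.sin(math.pi * w) * cgamma(1 - w) / math.pi

# zeros of f are the poles of Gamma(-z): the points 0, 1, 2, ... on L_0
zeros_on_ray = [abs(f(complex(n, 0))) for n in range(5)]   # all essentially zero
off_ray = abs(f(2.5 + 1.0j))                               # bounded away from zero
```

Locating the actual $1$-points (the solutions of $\Gamma(-z)=1$, which approach the imaginary axis in argument) would additionally require a complex root search and is not attempted here.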
Finally, Theorem \[thm3\] implies that if $\alpha\in (\pi/2,\pi)$, then there is no transcendental entire function whose zeros are close to $L_0$ and whose $1$-points are close to $L_1\cup L_{-1}$. Theorems \[thm1\] and \[thm2\] answer questions 3.1 and 3.2 asked by Gary Gundersen in [@Gunder]. We thank him for drawing our attention to these questions and for interesting discussions which stimulated this work. The plan of the paper is the following. In section \[sec2\] we prove Theorem \[thm3\] and the corollary. The proof of Theorem \[thm4\] showing the sharpness of Theorem \[thm3\] is then sketched in section \[proofthm4\]. In section \[sec3\] we will use Theorem \[thm3\] to prove Theorem \[thm1\]. The proof of Theorem \[thm2\] is independent of the rest and will be given in section \[sec4\]. Proof of Theorem \[thm3\] and the corollary {#sec2} =========================================== *Proof of Theorem* [\[thm3\]]{}. If the $a$-points of $f$ are close to a finite system of rays, then evidently $f(z+c)$ has the same property for every $c\in\C$, with the same rays. Therefore we may assume without loss of generality that $$\label{B} f(0)\not\in\{0,1\}.$$ Let $(r_k)$ be a sequence tending to $\infty$ with the property that $$\label{A} \log M(tr_k)=O(\log M(r_k)),\quad k\to\infty,$$ for every $t>0$. Such sequences always exist for functions of finite order. A sequence $(r_k)$ is called a sequence of [*Pólya peaks of order $\lambda\in [0,\infty)$*]{} for $\log M(r)$, if for every $\varepsilon>0$ we have $$\label{pp} \log M(tr_k)\leq (1+\varepsilon)t^{\lambda}\log M(r_k), \quad \varepsilon\leq t\leq \varepsilon^{-1},$$ when $k$ is large enough. It is clear that every sequence of Pólya peaks satisfies (\[A\]). 
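As a simple illustration (not taken from the paper): in the model case when $\log M(r)=r^\rho$ exactly, for instance $f(z)=e^z$ with $\rho=1$, every sequence $r_k\to\infty$ is a sequence of Pólya peaks of order $\rho$, since $$\log M(tr_k)=(tr_k)^\rho=t^\rho\log M(r_k)\leq(1+\varepsilon)t^\rho\log M(r_k)$$ for all $t>0$, so that (\[pp\]) holds for every $k$; property (\[A\]) follows in the same way, with constant $t^\rho$ for each fixed $t$.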
According to a result of Drasin and Shea [@DS], Pólya peaks of order $\lambda$ exist for all finite $\lambda\in[\rho_*,\rho^*]$, where $$\label{rho1} \rho^*=\sup\left\{ p>0 \colon \limsup_{r,A\to\infty} \frac{\log M(Ar)}{A^p\log M(r)}=\infty\right\}$$ and $$\rho_*=\inf\left\{ p>0 \colon \liminf_{r,A\to\infty} \frac{\log M(Ar)}{A^p\log M(r)}=0\right\}.$$ We always have $$0\leq\rho_*\leq\rho\leq\rho^*\leq\infty,$$ so when $\rho<\infty$, then there exist Pólya peaks of some (finite) order $\lambda$. We refer to [@Hor Ch. III], [@Hor2 Ch. III] and [@Ran] for the basic results on subharmonic functions used below. Fixing a sequence $(r_k)$ with the property (\[A\]), we consider the two sequences $(u_k)$ and $(v_k)$ of subharmonic functions given by $$u_k(z) = \frac{\log|f(r_kz)|}{\log M(r_k)}\quad\mbox{and}\quad v_k(z) = \frac{\log|f(r_kz)-1|}{\log M(r_k)}.$$ In view of (\[A\]), these sequences are bounded from above on every compact subset of $\C$. It follows from (\[B\]) that the sequences $u_k(0)$ and $v_k(0)$ tend to $0$. According to a well known compactness principle (see, for example, [@Hor Theorems 4.1.8, 4.1.9] or [@Hor2 Theorems 3.2.12, 3.2.13]), one can choose a subsequence of $(r_k)$, which we do without changing notation, such that the limit functions $$\label{lim} u(z)=\lim_{k\to\infty}\frac{\log|f(r_kz)|}{\log M(r_k)}\quad\mbox{and}\quad v(z)=\lim_{k\to\infty}\frac{\log|f(r_kz)-1|}{\log M(r_k)}$$ exist and are subharmonic. Here the convergence is in the Schwartz space $\mathscr{D}'$. It implies the convergence of the Riesz measures, as the Laplacian is continuous in $\mathscr{D}'$. The functions $u$ and $v$ are non-zero subharmonic functions, and they have the following properties: $(a)$ $u^+=v^+$. $(b)$ $\{ z \colon u(z)<0\}\cap\{ z \colon v(z)<0\}=\emptyset$. $(c)$ $u$ is harmonic in $\C\backslash A$ and $v$ is harmonic in $\C\backslash B$. 
If $(r_k)$ is a sequence of Pólya peaks of order $\lambda>0$, then we have the additional property $(d)$ $u(0)=v(0)=0$, and $\max\{u(z),v(z)\}\leq |z|^\lambda$ for all $z\in {\mathbb C}$. Properties $(a)$ and $(b)$ are evident. Property $(c)$ holds because the Laplacian is continuous in $\mathscr{D}^\prime$. Property $(d)$ is a consequence of (\[B\]) and (\[pp\]). Indeed, (\[B\]) and $$u(0)\geq\limsup_{k\to\infty} u_k(0),$$ (see [@Hor (4.1.8)]) imply that $u(0)\geq 0$, while (\[pp\]) yields $u(z)\leq|z|^\lambda$ and thus, in particular, $u(0)=0$. The same argument applies to $v$. The components of the complement $\C\backslash(A\cup B)$ will be called [*sectors of the system*]{} $A\cup B$. \[le1\] Let $u$ and $v$ be two non-zero subharmonic functions in the plane which satisfy $(a)$, $(b)$ and $(c)$. Then either $u(z)\equiv v(z)\equiv c$ for some $c>0$, or there exist an even number of rays $C_1,\ldots,C_{2m}$, with $m\geq 1$, that belong to $A\cup B$ and partition the plane into sectors $S_j$, so that $\partial S_j=C_j\cup C_{j+1}$ for $1\leq j\leq 2m-1$ and $\partial S_{2m}= C_{2m}\cup C_1$, such that $u(z)=v(z)>0$ for $z$ in the even sectors while $u(z)\leq 0$ and $v(z)\leq 0$ for $z$ in the odd sectors. If $u$ and $v$ are given by , where $f$ is the function from Theorem [[\[thm3\]]{}]{} and $(r_k)$ is a sequence satisfying , then in each odd sector, one of the functions $u$ and $v$ is negative while the other one is equal to zero. Moreover, properties $(ii)$ and $(iv)$ of Theorem [[\[thm3\]]{}]{} hold. [*Proof of Lemma*]{} \[le1\]. If $D$ is a sector of the system $A\cup B$, and if at some point $z_0$ in $D$, we have $\max\{ u(z_0),v(z_0)\}>0$, then $u(z)=v(z)>0$ for all points $z\in D$. 
Indeed, both $u$ and $v$ are harmonic in $D$ by $(c)$, and $(a)$ gives $$\label{aa} u(z_0)=v(z_0)>0.$$ If $\min\{ u(z_1),v(z_1)\}<0$ for some $z_1\in D$, then this also holds in a neighborhood of $z_1$, and one of the functions $u$ and $v$ must be zero in this neighborhood by $(b)$. Then it is identically equal to zero in $D$, which contradicts (\[aa\]). Thus $u$ and $v$ are non-negative in $D$, and the minimum principle implies that they are positive. Then they are equal in $D$ by $(a)$. Such sectors $D$ will be called [*positive*]{} sectors. If one of the functions $u$ and $v$ is constant, then both functions are equal to the same positive constant. This follows from $(a)$ and $(b)$. For the rest of the proof we assume that they are non-constant. Suppose that some ray $L\subset A\cup B$ has the property that positive sectors $D_1$ and $D_2$ are adjacent to $L$ on both sides, that is, $L=\partial D_1\cap\partial D_2$. (We will see in a moment that $D_1\neq D_2$). Then we have $u(z)=v(z)$ for $z\in D=D_1\cup D_2\cup L$, in view of $(a)$, and $u$ and $v$ must be positive and harmonic in $D$ in view of $(c)$. If there are no non-positive sectors, then $u$ and $v$ are equal, positive and harmonic in $\C\backslash\{0\}$, which is impossible under the current assumption that they are non-constant. So there is at least one non-positive sector. In particular, $D_1\neq D_2$ in the previous paragraph. Let $D$ be a positive sector. Let $z_0$ be a point on $\partial D\cap\partial D'$, where $D'$ is a non-positive sector. This means that $u(z)\leq 0$ and $v(z)\leq 0$ in $D'$. Then $u(z_0)=v(z_0)=0$. Indeed, $u(z_0)=v(z_0)\geq 0$ by the upper semi-continuity of subharmonic functions. As $D'$ is not thin at $z_0$ (in the sense of potential theory, see [@Ran]) we obtain that $u(z_0)=v(z_0)=0$. Let $C$ be the union of those rays in $A\cup B$ which separate a positive and a non-positive sector.
It follows from the above considerations that $C$ can be written in the form $C=\bigcup_{j=1}^{2m}C_j$, with $m\geq 1$, with rays $C_j\subset A\cup B$ so that in the sector $S_j$ between $C_j$ and $C_{j+1}$ the functions $u$ and $v$ are positive for even $j$ and non-positive for odd $j$. Moreover, we have $u(z)=v(z)=0$ for $z\in C$. Note that the sectors with respect to the system $C$ may be unions of several sectors and rays of the system $A\cup B$. Suppose now that $u$ and $v$ are given by (\[lim\]), where the sequence $(r_k)$ satisfies (\[A\]). Let $S_{2j-1}$ be an odd sector. Then $u(z)\leq 0$ in $S_{2j-1}$. If $u(z)=0$ in $S_{2j-1}$, then $u$ is not harmonic on either of the two rays in $\partial S_{2j-1}$, so $f$ has infinitely many zeros close to these two rays. Therefore $f$ cannot have infinitely many $1$-points close to either one of these two rays, and thus $v$ is harmonic in a neighborhood of $\partial S_{2j-1}\backslash \{0\}$. As $v(z)=0$ on $\partial S_{2j-1}$, we conclude that $v(z)<0$ in $S_{2j-1}$. The same argument applies with the roles of $u$ and $v$ interchanged. Thus in each odd sector $S_{2j-1}$ one of the two functions $u$ and $v$ is strictly negative, and the other function is equal to zero. If $u(z)<0$ in $S_{2j-1}$ then both rays of $\partial S_{2j-1}$ belong to $B$ and all rays of $A\cup B$, if any, inside $S_{2j-1}$ belong to $A$, and analogously if $v(z)<0$ in $S_{2j-1}$. This proves Lemma \[le1\]. We return to the proof of Theorem \[thm3\]. Lemma \[le1\] does not exclude the possibility that the set of rays $C_j$ is empty, and thus the whole plane coincides with one positive sector. In this case $u$ and $v$ are identically equal to the same positive constant. The following argument shows that this is impossible. We will in fact show that there are no rays of $A\cup B$ inside the even sectors $S_{2j}$, that is, the even sectors of the system $C$ coincide with the positive sectors of the system $A\cup B$. 
Consider an even sector $S_{2j}$. As $u$ is positive and harmonic in $S_{2j}$ and zero on the boundary, it must have the form $$\label{ord} u(re^{it})=c_jr^{\gamma_j}\cos(\gamma_j t-t_j),$$ where $\pi/\gamma_j$ is the angle of this sector at the origin. This can be seen by transforming the sector $S_{2j}$ to a half-plane, for which the result is standard [@Boas Theorem I]. For a given system $A\cup B$, there are only finitely many possibilities for these numbers $\gamma_j$. Thus if $(r_k)$ is a sequence of Pólya peaks of order $\lambda>0$, so that $(d)$ holds, we obtain by comparing $(d)$ with (\[ord\]) that $\gamma_j=\lambda$ for all $j$. As the possible values of $\lambda$ always fill a closed set $[\rho_*,\rho^*]$, where $\rho^*\leq+\infty$, we conclude that this closed set degenerates to a point, that is, $\rho^*=\rho_*=\rho$. In particular, $\rho^*<\infty$, and (\[rho1\]) implies that [*every*]{} sequence $(r_k)$ tending to $\infty$ satisfies (\[pp\]). This also shows that the angle of every even sector at the origin is equal to $\pi/\rho$, proving the first statement of $(i)$. For $r_0>0$ such that $M(r_0)>1$ we consider the curve mapping $[r_0,\infty)$ to $\mathscr{D}'\times\mathscr{D}'$ given by $$r\mapsto\left(\frac{\log|f(rz)|}{\log M(r)},\frac{\log|f(rz)-1|}{\log M(r)} \right).$$ Let $F$ be the limit set of this curve when $r\to\infty$. It consists of pairs $(u,v)$ satisfying $(a)$, $(b)$ and $(c)$ and thus satisfying the conclusions of Lemma \[le1\]. As a limit set of a curve, $F$ is connected. In each sector of the system $A\cup B$ either both of the functions $u$ and $v$ are positive, or one is negative. We conclude that the sectors $S_j$ can be chosen independently of the sequence $(r_k)$. Now suppose that a ray $L=\{ te^{i\beta} \colon t\geq 0\}$ of the set $A$ lies inside an even sector $S_{2j}$. By assumption, there is an infinite sequence of zeros $(z_k)$ of the form $z_k=r_ke^{i\beta_k}$ with $r_k\to\infty$ and $\beta_k\to \beta$.
Passing to a subsequence we may assume that the limits in (\[lim\]) exist. Let $$u_k(z)=\frac{\log|f(r_kz)|}{\log M(r_k)}.$$ We have $u_k\to u$ in $\mathscr{D}'$. According to Azarin [@A], this convergence also holds in the following sense: for every $\varepsilon>0$ the set $$\{ z \colon |u(z)-u_k(z)|>\varepsilon\}$$ can be covered by discs the sum of whose radii is at most $\varepsilon$, when $k$ is large enough. Let $D$ be the closed disk with center $e^{i\beta}$ and radius $\delta$ so small that $D\subset S_{2j}$. Then $\mu:=\min\{ u(z) \colon z\in D\}>0$. Choosing $\varepsilon<\min\{\delta/2,\mu/2\}$ we see that, for each large $k$, there is a circle $T_k$ around $e^{i\beta}$ such that $z_k/r_k=e^{i\beta_k}\in B_k\subset D$, where $B_k$ is the disk bounded by $T_k$, and $$u_k(z)\geq \mu/2,\quad z\in T_k.$$ This means that $\log|f(z)|\geq(\mu/2)\log M(r_k)$ for $z$ on the circles $$\label{circ} \{ z \colon z/r_k\in T_k\}.$$ Each of these circles encloses a zero of $f$, namely $z_k$. Thus, by Rouché's theorem, each of them also contains a $1$-point. This is a contradiction, because the circles (\[circ\]) remain inside closed subsectors of $S_{2j}$ that do not contain rays from $B$. A similar argument shows that there are no rays from $B$ inside any even sector $S_{2j}$. Thus no rays of the system $A\cup B$ are contained in the even sectors. As $f$ is transcendental, Picard's theorem yields that the system $A\cup B$ contains at least one ray. We conclude that $u$ and $v$ are not constant, no matter what sequence $(r_k)$ was used to define them. This also implies (\[piomega\]), and proves $(iii)$ and the fact that the even sectors of the system $C$ coincide with the positive sectors of the system $A\cup B$. It remains to prove $(v)$ and the second statement of $(i)$. Let $S_{2j-1}$ be an odd sector with angle $\pi/\gamma$ at the origin. Let us consider again the limit functions $u$ and $v$ obtained from the Pólya peaks $r_k$.
Then we have $(d)$ with $\lambda=\rho$. One of the two subharmonic functions, say $u$, is negative in $S_{2j-1}$, and zero on the boundary of $S_{2j-1}$. Let $h$ be the least harmonic majorant of $u$ in $G=S_{2j-1}\cap\{ z\colon |z|<1\}$. Then $h$ is a negative harmonic function in $G$, equal to zero on the straight segments of $\partial G$. Similarly as in  it follows that $$\int_{re^{it}\in G} h(re^{it})\, dt\leq -cr^\gamma, \quad r<1,$$ where $c>0$. Then $u$ satisfies the same inequality. Combining this with property $(d)$ and using $$0=u(0)\leq\frac{1}{2\pi}\int_0^{2\pi}u(re^{it})\, dt,$$ we conclude that $\gamma\geq\rho$, so that $\pi/\gamma\leq\pi/\rho$. This proves the second part of $(i)$. Finally, if there are no rays of $A$ inside $S_{2j-1}$, then $u$ is harmonic in $S_{2j-1}$, so it is of the form (\[ord\]), and it is a harmonic continuation from an adjacent even sector, so we must have $\gamma=\rho$. A similar argument applies if $v$ is negative and there are no rays of $B$ inside $S_{2j-1}$. This proves $(v)$ and completes the proof of Theorem \[thm3\]. *Proof of the Corollary*. We assume without loss of generality that $L_0$ is the positive ray. The order of $f$ must be finite by Theorem A, so Theorem \[thm3\] is applicable. As there are only three rays, the number $m$ in Theorem \[thm3\] must be $1$. So we have one even sector of opening $\pi/\rho$ and one odd sector of opening at most $\pi/\rho$. In view of $(ii)$, the common boundary of the odd and even sector is $L_1\cup L_{-1}$. So $L_0$ lies inside the odd sector. Thus $\rho\leq 1$. The possibility that $\rho=1$ is excluded by (\[piomega\]) and Theorem \[thm1\]. As $\rho<1$, the function $f$ is of genus zero and thus of the form $$f(z)=cg(z),\quad g(z)=z^n\prod_{k=1}^\infty\left(1-\frac{z}{z_k}\right),$$ where $n$ is a non-negative integer and $(z_k)$ is a sequence of positive numbers tending to $\infty$.
If $c$ is real, we conclude that the rays $L_1$ and $L_{-1}$ are symmetric with respect to $L_0$ which proves the corollary in this case. Suppose now that $L_{1}$ and $L_{-1}$ are not symmetric with respect to $L_0$ so that $c$ is not real. Let us set $a=1/c$. Then $f(z)=1$ is equivalent to $g(z)=a$. We consider the function $h(z)=(g(z)-a)/(\overline{a}-a)$. In view of the symmetry of $g$, the zeros of $h$ lie on the rays $L_1$ and $L_{-1}$, while the $1$-points lie on the reflected rays $\overline{L_1}$ and $\overline{L_{-1}}$. Since $L_0$ lies in the odd sector, which has angle $<\pi$ at the origin, it follows that the two rays $L_1$, $L_{-1}$ are interlaced with the two rays $\overline{L_1}$, $\overline{L_{-1}}$. This contradicts $(ii)$ and $(iii)$ of Theorem \[thm3\], and completes the proof of the corollary. Sketch of the proof of Theorem \[thm4\] {#proofthm4} ======================================= We only indicate the construction of examples showing that Theorem \[thm3\] is best possible, as this construction is well-known, see for example [@Drasin], where a similar construction was used for the first time. We fix $\rho\in (1/2,\infty)$ and construct a $\rho$-trigonometrically convex function $h$ such that the union of the even sectors coincides with the set $$\{ re^{it}\colon r> 0,\; h(t)>0\},$$ and such that $h$ is trigonometric except at the arguments of some rays inside the odd sectors. If there are no rays of $A\cup B$ in the odd sectors at all, then $\rho$ must be an integer, and we just take $h$ to be of the form (\[ord\]) with $\gamma_j=\rho$. 
Then we discretize the Riesz mass of the subharmonic function $$w(re^{it})=r^\rho h(t),$$ as it is done in [@A], and obtain an entire function $g$ with zeros on some rays of $A\cup B$ which lie in the odd sectors, and such that $$\lim_{r \to\infty}r^{-\rho}\log|g(rz)|=w(z).$$ If there are odd sectors with opening $\pi/\rho$, then $h$ must be trigonometric on the intervals corresponding to these sectors, so we multiply $g$ by a canonical product of order smaller than $\rho$ to achieve that $g$ has infinitely many zeros on all those rays of the system $A\cup B$ which belong to the odd sectors. Then we label the odd sectors with labels $0$ and $1$: if the boundary of an odd sector belongs to $A$, we label it with $1$, and if the boundary belongs to $B$ we label it with $0$. Let $S_j$ be an odd sector labeled with $1$. Consider the component $D_j$ of the set $\{ z \colon |g(z)|<2\}$ which is asymptotic to $S_j$. Let $p$ be a quasiconformal map of the disk $\{ z \colon |z|<2\}$ onto itself, equal to the identity mapping on the boundary, and such that $p(0)=1$, whose complex dilatation is supported by the set $\{ z \colon 3/2\leq |z|\leq 2\}$. We define $$G(z)=\left\{\begin{array}{ll} p(g(z)),& z\in\bigcup_j D_j,\\ g(z),&\mbox{otherwise}.\end{array}\right.$$ Here the union is over all odd sectors labeled with $1$. This $G$ is a quasiregular map of the plane, whose dilatation is supported by a small set $E$ in the sense that $$\int_E\frac{dxdy}{x^2+y^2}<\infty.$$ Then the theorem of Teichmüller–Wittich–Belinski [@LV §V.6] guarantees the existence of a quasiconformal map $\phi$ such that $f=G\circ\phi$ is an entire function, and $\phi(z)\sim z$ as $z\to\infty$. It is easy to verify that $f$ has all the required properties. For the construction of infinite order functions, let $A=\bigcup_{j=1}^m \{te^{i\alpha_j}\colon t\geq 0\}$ and $B=\bigcup_{k=1}^n \{te^{i\beta_k}\colon t\geq 0\}$ be two finite systems of rays with $A\cap B=\{0\}$. Again we only sketch the argument.
First we note that by [@PolyaSzego Part III, Problems 158–160] there exists an entire function $E$ such that $z^2(E(z)+1/z)$ is bounded outside the half-strip $S=\{z\colon \re z> 0,|\im z|< \pi\}$. In particular, $E$ is bounded outside $S$. Considering $F(z)=\delta(E(z)-c)/((z-a)(z-b))$, where $\delta>0$ is small, $c\in\C$, and $a$ and $b$ are $c$-points of $E$, we obtain an entire function $F$ such that $$|F(z)|\leq \frac{1}{|z|^2} \leq \frac{1}{\dist(z,S)^2}, \quad z\notin S,$$ where $\dist(z,S)$ denotes the distance from $z$ to $S$. For some large $R>0$ we now consider the functions $$a_j(z)=1+z\exp F(e^{-i\alpha_j}z-R) \quad\text{and}\quad b_k(z)=z\exp F(e^{-i\beta_k}z-R).$$ With $S_j=\{e^{i\alpha_j}(z+R)\colon z\in S\}$ we find, noting that $|e^w-1|\leq 2|w|$ for $|w|\leq 1$, that $$\begin{aligned} |a_j(z)-z-1|&\leq \left|z\left(\exp F(e^{-i\alpha_j}z-R)-1\right)\right| \leq 2|z F(e^{-i\alpha_j}z-R)| \\ & \leq \frac{2|z|}{\dist(e^{-i\alpha_j}z-R,S)^2} = \frac{2|z|}{\dist(z,S_j)^2}, \quad z\notin S_j.\end{aligned}$$ Similarly, with $T_k=\{e^{i\beta_k}(z+R)\colon z\in S\}$ we have $$|b_k(z)-z| \leq \frac{2|z|}{\dist(z,T_k)^2}, \quad z\notin T_k.$$ We choose $\varepsilon>0$ so small that the sectors $U_j=\{z\colon |\arg(z- e^{i \alpha_j} R/2)-\alpha_j|\leq \varepsilon\}$ and $V_k=\{z\colon |\arg(z- e^{i \beta_k} R/2)-\beta_k|\leq \varepsilon\}$ are disjoint and put $$G(z)= \begin{cases} a_j(z), \quad z\in U_j,\\ b_k(z), \quad z\in V_k. \end{cases}$$ Then $$G(z)=z+O(1), \quad z\in \bigcup_{j=1}^m\partial U_j \cup \bigcup_{k=1}^n\partial V_k .$$ This allows us to extend $G$ to a quasiregular map of the plane which satisfies $$G(z)=z+O(1), \quad z\in \C\backslash \left(\bigcup_{j=1}^m U_j \cup \bigcup_{k=1}^n V_k \right)$$ and whose dilatation $K_G$ satisfies $K_G(z)=1+O(1/|z|)$ as $z\to\infty$. Again the theorem of Teichmüller–Wittich–Belinski yields the existence of a quasiconformal map $\phi$ such that $f=G\circ\phi$ is entire and $\phi(z)\sim z$ as $z\to\infty$.
It is not difficult to show that the zeros of $f$ are close to $A$ and the $1$-points of $f$ are close to $B$. We note that the method does not actually require that the rays that form $A$ are distinct from those that form $B$. Indeed, if we want that both zeros and $1$-points accumulate at $\{te^{i\alpha_j}\colon t\geq 0\}$, we only have to choose $a_j(z)=c+z\exp F(e^{-i\alpha_j}z-R)$ with a constant $c$ different from $0$ and $1$. Proof of Theorem \[thm1\] {#sec3} ========================= According to Theorem A, the order $\rho$ of $f$ is finite. First we deal with the case when $f$ is a polynomial, following Baker [@Ba0]. Without loss of generality, we may assume that $L_1$ is the real line. Then $f=cg$, where $g$ is a real polynomial with all zeros real. Then all zeros of $f'$ are real. Similarly we conclude that all zeros of $f'$ lie on $L_2$, and hence the point $z_0$ of intersection of $L_1$ and $L_2$ is the only possible zero of $f'$. Hence $f(z)=c_1(z-z_0)^n+c_2$ for some $n\geq 1$ and some $c_1,c_2\in {\mathbb C}$ with $c_1\not= 0$. Such a function $f$ can satisfy the assumptions of Theorem \[thm1\] only if $f$ is a polynomial of degree at most $2$. Notice that this argument can be extended to functions of order less than $2$, but we do not use this. Suppose now that $f$ is transcendental. Then we use Theorem \[thm3\]. This theorem implies that there exists at least one even sector. If there is only one even sector, and its angle is greater than $\pi$, then the odd sector does not contain rays of $A\cup B$, so by $(v)$ its opening must be the same as the opening of the even sector, which is a contradiction. If there are two even sectors, then the odd sectors contain no rays of $A\cup B$. It follows from $(i)$ and $(v)$ that all sectors must have opening $\pi/2$. Then by $(ii)$ the zeros are close to the boundary of a quadrant, and the $1$-points are close to the boundary of the opposite quadrant.
But by hypothesis the zeros lie on a line and the $1$-points lie on a line. We conclude that the zeros are actually close to one ray and the $1$-points are close to another ray. But then there is only one even sector, contradicting our assumption at the beginning of this paragraph. The only remaining possibility is that there is one even sector with opening $\pi$. Then $\rho=1$, and we assume without loss of generality that this sector is the upper half plane and the $1$-points are real. This means that $B$ consists of two rays whose union is the real line. Using the notation of the proof of Theorem \[thm3\], and choosing a sequence $(r_k)$ of Pólya peaks of order $1$, we obtain $u(z)=\Ima z$ and $v(z)= \Ima^+ z$. This implies that $$\label{N} N(r_k,1,f)\sim \frac{1}{\pi}\log M(r_k),\quad k\to\infty.$$ Now let $g(z)=f(z)\overline{f(\overline{z})}$. As all $1$-points of $f$ are real, $f(z)=1$ implies that $g(z)=1$, so if $g\not\equiv 1$, we will have from (\[N\]) that $$\label{Ng} N(r_k,1,g)\geq (1-o(1)) \frac{1}{\pi} \log M(r_k),\quad k\to\infty.$$ Now define the subharmonic function $$w(z)=\lim_{k\to\infty}\frac{\log|g(r_kz)|}{\log M(r_k)}.$$ It is evident that $w(z)=u(z)+u(\overline{z})=0$. Together with (\[Ng\]) this implies that $g(z)\equiv 1$. We conclude that with this normalization, $f$ has the form $f(z)=\exp(icz+id)$, where $c$ and $d$ are real. This completes the proof of Theorem \[thm1\]. Proof of Theorem \[thm2\] {#sec4} ========================= We consider differential equations $$\label{1} -y^{\prime\prime}+ \left((-1)^\ell z^m+E\right)y=0, \quad\ell\in\{0,1\}, \quad m\geq 3, \quad E\in {\mathbb C}.$$ Here $m$ is an integer, so all solutions are entire functions. The equation has the following symmetry property.
Set $$\varepsilon=e^{\pi i/(m+2)},\quad \omega=\varepsilon^2.$$ If $y_0(z,E)$ is a solution of (\[1\]) then $$\label{df} y_k(z,E)=y_0(\omega^{-k}z,\omega^{2k}E)$$ satisfies the same equation, while $$y_0(\varepsilon^{-k}z,\varepsilon^{2k}E)$$ with an odd $k$ satisfies (\[1\]) with the sign of $z^m$ switched. The Stokes sectors are defined as follows. When $\ell=0$, they are $S_0=\{ z \colon |\arg z|<\pi/(m+2)\}$, and $S_k=\omega^kS_0$ for $k\in\Z$. When $\ell=1$, the Stokes sectors are $S_0=\{ z \colon 0<|\arg z|<2\pi/(m+2)\}$ and $S_k=\omega^kS_0$ for $k\in\Z$. To obtain a discrete sequence of eigenvalues, one imposes boundary conditions of the form $$\label{bc} y(z)\to 0,\quad z\to\infty,\quad z\in S_n\cup S_k,$$ for some $n$ and $k$. The exact meaning of (\[bc\]) is that $y(z)\to 0$ when $z\to\infty$ along any interior ray from the origin contained in the union of the two sectors. We will denote such a boundary condition by $(n,k)$. It is known [@S] that when $n\neq k\pm1$ (modulo $m+2$), then the boundary value problem $(n,k)$ has a discrete spectrum with a sequence of eigenvalues tending to infinity. (For completeness, we include the argument below.) Moreover, K. Shin [@Shin2] proved that these eigenvalues always lie on a ray from the origin. In particular, when $S_n$ and $S_k$ are symmetric with respect to the positive ray, these eigenvalues are positive. All other cases can be reduced to this case using the symmetry of the differential equation stated above: if $\omega_1$ and $\omega_2$ are bisectors of $S_n$ and $S_k$, then the eigenvalues lie on the ray $\{ t/(\omega_1\omega_2)\colon t\geq 0\}$. From now on we assume that $\ell=0$ in (\[1\]). For each $E$ the equation (\[1\]) has a solution tending to zero as $z\to\infty$ in $S_0$.
More precisely, there is a unique solution $y_0(z,E)$ satisfying $$\label{as} y_0(z,E)=(1+o(1))z^{-m/4}\exp\left(-\frac{2}{m+2}z^{(m+2)/2}\right)$$ as $z\to\infty$ in any closed subsector of $S_0\cup S_1\cup S_{-1}$; see [@S Thm 6.1]. Notice the simple but important fact that this principal part of the asymptotics does not depend on $E$. The function $y_0(z,E)$ is actually an entire function of the two variables $z$ and $E$, and its asymptotics when $E\to\infty$ while $z$ is fixed are also known [@S Thm 19.1]; this implies that the entire function $E\mapsto y_0(z_0,E)$ has order $$\rho=\frac{1}{2}+\frac{1}{m}.$$ Now we define $y_k$ by (\[df\]). Then $y_k\to 0$ as $z\to\infty$ in $S_k$. The boundary value problem $(n,k)$ thus has a solution when $y_n$ and $y_k$ are linearly dependent as functions of $z$. This means that their Wronskian vanishes. But the Wronskian, evaluated at $z=0$, is an entire function of $E$, and its order is less than $1$. Thus its zeros, which are the eigenvalues of the problem, form a sequence tending to infinity, as mentioned above. As $y_0,y_1,y_{-1}$ satisfy the same differential equation, we have $$y_{-1}=C(E)y_0+\tilde{C}(E)y_{1}.$$ The asymptotics of $y_{1}$ and $y_{-1}$ in $S_0$ (which follow from (\[as\])) show that $\tilde{C}=-\omega$, so $$\label{2} y_{-1}=C(E)y_0-\omega y_{1}.$$ By differentiating this with respect to $z$ we obtain $$\label{2'} y_{-1}^\prime=C(E)y_0^\prime-\omega y_{1}^\prime.$$ Solving (\[2\]) and (\[2'\]) by Cramer’s rule, we obtain $$C(E)=W_{-1,1}/W_{0,1},$$ where $W_{i,j}$ is the Wronskian of $y_i$ and $y_j$. This shows that $C$ is an entire function (because $W_{0,1}$ is never $0$). It has the same order $\rho$ that $y_0$ has as a function of $E$. In view of (\[2\]), the zeros of $C$ are exactly the eigenvalues $\lambda_j$ of the problem (\[bc\]) with $(n,k)=(-1,1)$. So all zeros of $C$ are positive by Shin’s result.
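As a sanity check (ours, not spelled out in the source), the symmetry (\[df\]) that underlies these substitutions follows from a one-line computation for $\ell=0$, where $y_0''=(z^m+E)\,y_0$:

```latex
% Verification of the symmetry (\ref{df}) for \ell = 0:
\[
  y_k''(z,E) = \omega^{-2k}\, y_0''(\omega^{-k}z,\omega^{2k}E)
             = \omega^{-2k}\bigl((\omega^{-k}z)^m + \omega^{2k}E\bigr)\,
               y_0(\omega^{-k}z,\omega^{2k}E)
             = \bigl(\omega^{-k(m+2)} z^m + E\bigr)\, y_k(z,E),
\]
% and since \omega = e^{2\pi i/(m+2)} satisfies \omega^{m+2} = 1,
% this gives y_k'' = (z^m + E) y_k, i.e. y_k solves the same equation.
```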
Substituting $(z,E)\mapsto (\omega^{-1}z,\omega^2E)$ into (\[2\]), we obtain $$y_0=C(\omega^2E)y_1-\omega y_2.$$ Using this to eliminate $y_0$ from (\[2\]), we obtain $$y_{-1}=\left(C(E)C(\omega^2E)-\omega\right)y_1-C(E)\omega y_2.$$ We conclude that the zeros of the entire function $$\label{sib} g(E):=C(E)C(\omega^2E)-\omega$$ are the eigenvalues of the problem $(-1,2)$. Therefore, these zeros lie on the ray $\{ z=t\omega^{-1}\colon t\geq 0\}$. So if we define $f(E)=-\omega^{-1} g(\omega^{-1}E)$ and $h(E)=C(E)/\sqrt{\omega}$, then $$f(E)=1-h(\omega^{-1}E)h(\omega E),$$ the zeros of $f$ lie on the positive ray, and the $1$-points lie on two other rays. This completes the proof of Theorem \[thm2\]. [*Remarks.*]{} Once it is known that two entire functions $C$ and $g$ satisfy (\[sib\]) and that the zeros of each function lie on a ray, the order of both functions and the angles between the rays can be determined from Theorem \[thm3\]. Equations of the type (\[sib\]) occur for the first time in the work of Sibuya and his students [@S; @S2; @S3] for the simplest case, $m=3$. It was later discovered that these equations also arise in the context of exactly solvable models of statistical mechanics on two-dimensional lattices and in quantum field theory [@DDT1; @DDT2]. The interesting question is to which angles Theorem \[thm2\] generalizes. If $m>2$ is not an integer, equation (\[1\]) and its solutions are defined on the Riemann surface of the logarithm, but Sibuya’s solution $y_0$ is still entire as a function of $E$. We found no source where this fact is proved, but it is stated and used in [@DDT1 p. 576], [@DDT2 p. R231] and [@Tabara]. Shin’s result, which we used above, seems to generalize to non-integer $m\geq 4$; see [@Shin2 Theorem 11], which we use with $\ell=1$ and $\ell=2$. On the other hand, numerical evidence in [@Bender] (see Figs.
14, 15, 20) shows that for $m<4$ our function $g(E)$ in (\[sib\]) does not have radially distributed zeros on one ray, even if finitely many of the zeros are discarded. And of course, it would be interesting to know whether there are any other entire functions like those in Theorem \[thm2\], not related to the differential equations (\[1\]). [11]{} V. S. Azarin, Asymptotic behavior of subharmonic functions of finite order, (Russian) Mat. Sb. (N.S.) 108(150) (1979), no. 2, 147–167, 303. English transl.: Math. USSR, Sb. 36, 135–154 (1980). I. N. Baker, Entire functions with linearly distributed values, Math. Z. 86 (1964) 263–267. I. N. Baker, Entire functions with two linearly distributed values, Ann. Acad. Sci. Fenn. Ser. A I Math. 5 (1980), no. 2, 381–386. C. M. Bender, S. Boettcher and P. N. Meisinger, $PT$-symmetric quantum mechanics, J. Math. Phys. 40 (1999), no. 5, 2201–2229. W. Bergweiler and A. Eremenko, Goldberg’s constants, J. Anal. Math. 119, no. 1, (2013) 365–402. V. Blondel, Simultaneous stabilization of linear systems, Springer, Berlin, 1994. H. P. Boas and R. P. Boas, Short proofs of three theorems on harmonic functions, Proc. Amer. Math. Soc. 102 (1988), no. 4, 906–908. P. Dorey, C. Dunning and R. Tateo, On the relation between Stokes multipliers and the T-Q systems of conformal field theory, Nuclear Physics B 563 (1999) 573–602. P. Dorey, C. Dunning and R. Tateo, The ODE/IM correspondence, J. Phys. A 40 (2007), no. 32, R205–R283. D. Drasin, Value distributions of entire functions in regions of small growth, Ark. Mat. 12 (1974), 281–296. D. Drasin and D. F. Shea, Pólya peaks and the oscillation of positive functions, Proc. Amer. Math. Soc. 34 (1972), 403–411. A. Edrei, Meromorphic functions with three radially distributed values, Trans. Amer. Math. Soc. 78 (1955), 276–293. A. Eremenko, Value distribution and potential theory, Proceedings of the ICM, Vol. II (Beijing, 2002), 681–690, Higher Ed. Press, Beijing, 2002. A.
Eremenko, Simultaneous stabilization, avoidance and Goldberg’s constants, arXiv:1208.0778. A. A. Goldberg and I. V. Ostrovskii, Distribution of values of meromorphic functions, Amer. Math. Soc., Providence, RI, 2008. G. Gundersen, Questions on meromorphic functions and complex differential equations, preprint, arXiv: 1509.02225. L. Hörmander, The analysis of linear partial differential operators I, 2nd ed., Springer, Berlin, 1990. L. Hörmander, Notions of convexity, Birkhäuser, Boston, 1994. T. Kobayashi, An entire function with linearly distributed values, Kodai Math. J. 2 (1979), no. 1, 54–81. O. Lehto and K. I. Virtanen, Quasiconformal mappings in the plane, Springer, New York – Heidelberg, 1973. B. Ya. Levin, Distribution of zeros of entire functions, Amer. Math. Soc., Providence, RI, 1970. R. Nevanlinna, Über die Konstruktion von meromorphen Funktionen mit gegebenen Wertzuordnungen, Festschrift zur Gedächtnisfeier für Karl Weierstra[ß]{}, Westdeutscher Verlag, Köln – Opladen, 1966, pp. 579–582. M. Ozawa, On the zero-one set of an entire function, Kodai Math. Sem. Rep. 28 (1977), no. 4, 311–316. G. Pólya and G. Szegő, Problems and theorems in analysis. Vol. I: Series, integral calculus, theory of functions, Springer, New York, 1972. T. Ransford, Potential theory in the complex plane, Cambridge Univ. Press, 1995. L. A. Rubel and C.-C. Yang, Interpolation and unavoidable families of meromorphic functions, Michigan Math. J. 20 (1974), no. 4, 289–296. K. Shin, The potential $(iz)^m$ generates real eigenvalues only, under symmetric rapid decay boundary conditions, J. Math. Phys. 46 (2005), no. 8, 082110, 17pp. Y. Sibuya, Global theory of a second order linear ordinary differential equation with a polynomial coefficient, North-Holland, Amsterdam, 1975. Y. Sibuya, Non-trivial entire solutions of the functional equation $f(\lambda)+f(\omega\lambda)f(\omega^{-1}\lambda)=1$, Analysis 8 (1998), 271–295. Y. Sibuya and R. 
Cameron, An entire solution of the functional equation $f(\lambda)+f(\omega\lambda)f(\omega^{-1}\lambda)=1$, Lecture Notes Math. 312, Springer, Berlin, 1973, pp. 194–202. T. Tabara, Asymptotic behavior of Stokes multipliers for $y''-(x^\sigma+\lambda)y=0,\; (\sigma\geq 2)$ as $\lambda\to\infty$, Dynamics of Continuous, Discrete and Impulsive Systems 5 (1999), 93–105. J. Winkler, Zur Existenz ganzer Funktionen bei vorgegebener Menge der Nullstellen und Einsstellen, Math. Z. 168 (1979), 77–86. *W. B.: Mathematisches Seminar* Christian-Albrechts-Universität zu Kiel Ludewig-Meyn-Str. 4 24098 Kiel Germany A. E.: Department of Mathematics Purdue University West Lafayette, IN 47907 USA A. H.: Department of Mathematics University of Illinois at Urbana–Champaign 1409 W. Green St. Urbana, IL 61801 USA [^1]: Supported by NSF grant DMS-1361836.
Background
==========

The expectation of neutrality under mutation-drift equilibrium for microsatellite variation is not always valid, owing to demographic changes, including genetic bottlenecks and admixture (e.g. \[[@B1],[@B2]\]), and to selection at linked sites (e.g. \[[@B3],[@B4]\]). In contrast to demographic processes, which affect the entire genome, selection operates at specific sites associated with phenotypic traits, such as important quantitative trait loci (QTLs) and candidate genes. Selection leaves its signature in the chromosomal regions surrounding these sites, where significantly reduced or elevated levels of genetic variation can be maintained at linked neutral loci. Thus, selection affects not only the selected sites but also linked neutral loci, and the footprints of selection acting on specific functional loci can be detected by genotyping polymorphic microsatellites in the adjacent non-coding regions \[[@B5]\]. Different statistical methods have been developed to identify outlier loci under the influence of selection \[[@B6]-[@B13]\], and adaptations have been attempted to improve the original method of Lewontin and Krakauer \[[@B14]\], which has been criticized for its sensitivity to population structure and history (e.g. \[[@B15]\]). Nevertheless, recent studies have shown somewhat inconsistent results when the above statistical tests are applied to the same data (e.g. \[[@B7],[@B12],[@B16],[@B17]\]). The Lewontin-Krakauer test \[[@B14]\] is the oldest of these multilocus-comparison methods. Broadly speaking, these methods are derived by using one of the two general approaches detailed below.
The first approach is to develop methods based on Lewontin and Krakauer\'s original idea and to use the distribution of per-locus estimates of the genetic differentiation coefficient *F*~ST~ and of diversity parameters to detect the effects of selection, hereafter termed the *F*~ST~-based approach; examples include the FDIST program-based method \[[@B9]\], Bayesian regression \[[@B12]\], and population-specific \[[@B7]\] methods. Schlötterer and colleagues have proposed alternative multilocus simulation-based tests that use summary statistics other than *F*~ST~, such as the ln RV \[[@B10]\], ln RH \[[@B6]\], and ln Rθ\' \[[@B13]\] tests. These tests build on the idea of a \'selective sweep\' arising from natural or artificial selection: recent genetic changes driven by a selective sweep leave a record, or \"genetic signature\", in the genomic region covering the selected sites and their linked neutral loci. Given that microsatellite loci associated with a recent selective sweep differ from the remainder of the genome, they are expected to fall outside the distribution of neutral estimates of ln RV, ln RH or ln Rθ\' values. As reviewed in \[[@B18]-[@B20]\], all these methods have potential advantages and drawbacks, which can be due to the different underlying assumptions regarding the demographic and mutational models on which they are based, as well as to uncertainty about the robustness of the approaches. The recent increased availability of large genomic data sets and the identification of a few genes or loci as targets of domestication or of subsequent genetic improvement in cattle have renewed interest in the genomic effects of selection. Candidate genes and QTL have been described on both BTA1 \[[@B21]-[@B25]\] and BTA20 \[[@B26]\]. On BTA1, the *POLL* gene, characterized by two alleles, *P* (polled) dominant over *H* (horn), is responsible for the polled (i.e.
hornless) and horned phenotypes in cattle and has been subjected to both natural and artificial selection. Georges et al. \[[@B21]\] demonstrated genetic linkage between the *POLL* gene and two microsatellites, *GMPOLL-1* and *GMPOLL-2*. These loci are syntenic to the highly conserved gene for superoxide dismutase 1 (*SOD1*). In addition, in various breeds the *POLL* gene has been found to be linked to the microsatellites *TGLA49*, *AGLA17*, *INRA212* and *KAP8*, located in the centromeric region of BTA1 close to the *SOD1* locus \[[@B22],[@B23],[@B25]\]. To date, several QTL and candidate genes have been reported on BTA20, e.g. the growth hormone and prolactin receptor genes \[[@B27]\], affecting conformation and milk production traits, such as body depth (e.g. \[[@B28]\]), udder (e.g. \[[@B29]\]), udder attachment (e.g. \[[@B30]\]), milk yield (e.g. \[[@B31]\]), fat percentage (e.g. \[[@B28]\]), and especially protein content (e.g. \[[@B28]-[@B30]\]). In this study on *Bos taurus*, we present microsatellite data for a larger number of loci than in previous reports, which mainly included the 30 microsatellite markers recommended by the International Society for Animal Genetics (ISAG)/Food and Agriculture Organization of the United Nations (FAO) working group (e.g. \[[@B2],[@B24]\]; but see also \[[@B32]\]). Among the 51 microsatellites genotyped in 10 representative cattle populations of different origins (native and modern commercial) and horn statuses (polled and horned) in the northern territory of the Eurasian subcontinent, seven were on BTA1 and 16 on BTA20. We applied four tests to detect molecular signatures of selection, ranging from tests for loci across populations to the recently proposed pairwise population tests using a dynamically adjusted number of linked microsatellites \[[@B13]\].
We compared the consistency of the different neutrality tests available to identify loci under selection in the north Eurasian cattle populations investigated here.

Materials and methods
=====================

Population samples and genetic markers
--------------------------------------

Microsatellite data from 10 different cattle (*Bos taurus*) populations, comprising 366 individuals, were analyzed. Finnish populations were represented by Finnish Ayrshire (modern commercial, horned, *n*= 40), Finnish Holstein-Friesian (modern commercial, horned, *n*= 40), Eastern Finncattle (native, mostly polled, *n*= 31), Western Finncattle (native, mostly polled, *n*= 37), and Northern Finncattle (native, mostly polled, *n*= 26). We were able to infer the heterozygous status at the *POLL* locus in 19 phenotypically polled cattle of the three Finnish native populations on the basis of their offspring/parent phenotypes. In addition, there were 19 horned (homozygous recessive) animals in the Finnish native populations. Istoben (native, horned, *n*= 40), Yakutian (native, horned, *n*= 51), and Kholmogory (native, horned, *n*= 32) cattle were sampled in Russia. Ukrainian Grey (native, horned, *n*= 30) and Danish Jersey (modern commercial, horned, *n*= 39) were sampled in Ukraine and Denmark, respectively. During sample collection, pedigree information and the herdsman\'s knowledge were used to ensure the animals were unrelated. Additional information on these populations has been reported in previous publications \[[@B2],[@B33]\]. Genotypes of the 51 microsatellites were used (for details on the microsatellites, see \[[@B33]-[@B35]\]); among these, data for the 30 markers from the panel of loci recommended for genetic diversity studies in cattle <http://www.projects.roslin.ac.uk/cdiv/markers.html> were taken from the literature \[[@B2]\].
The 23 microsatellites (21 new ones and two from the recommended panel) on BTA1 and BTA20 were chosen on the basis of their vicinity to genes and QTL, which could be considered as candidate loci for selection because of their assumed involvement in the polled/horned phenotype \[[@B22]\] and in milk yield and body composition \[[@B35]\]. Details of the primers and microsatellite analysis protocols can be found in CaDBase <http://www.projects.roslin.ac.uk/cdiv/markers.html> and \[[@B34]\]. In this study, GHRJA.UP, 5\'-GGTTCGTTATGGAGGCAATG-3\', and GHRJA.DN, 5\'-GTCACCGCTGGCAGTAGAT-3\' primers were designed based on the sequence of the promoter region of the growth hormone receptor gene \[[@B35]\] containing microsatellite GHRJA. Danish Jersey animals were analyzed only at 41 loci (see Table [1](#T1){ref-type="table"}). A full list of the loci studied and their chromosomal and genomic locations, as well as population and basic statistics, are available in Table [1](#T1){ref-type="table"}. ###### Summary of the microsatellites and basic population genetic estimates for the microsatellites Locus BTA Genomic position (bp) ***A***~**R**~ ***H***~**E**~ ***F***~**IS**~ FDIST2 test Ewens-Watterson test ----------- ----- ----------------------- ---------------- ---------------- ----------------- ------------- ---------------------- ----------- ------- ------- ----------- ----------- AGLA17 1 641402 641615 1.37 0.08 -0.049 0.017 0.010\*\* 0.907 0.754 0.978\* 0.976\* DIK4591 1 1704734 1705228 2.60 0.32 0.064 0.128 0.660 0.467 0.442 0.844 0.622 DIK1044 1 2829429 2829737 4.86 0.70 0.015 0.118 0.631 0.324 0.329 0.136 0.243 SOD1 1 2914373 2915349 4.78 0.65 0.083 0.173 0.968\* 0.331 0.379 0.037\* 0.047\* DIK5019 1 3900549 3900808 5.42 0.59 0.190 0.164 0.954\* 0.381 0.380 0.005\*\* 0.008\*\* BMS2321 1 10949260 10949302 3.58 0.45 0.154 0.094 0.410 0.429 0.486 0.424 0.052 BM1824 1 122531990 122532171 3.95 0.72 -0.083 0.122 0.655 0.450 0.487 0.030\* 0.231 TGLA304 20 11460907 11460992 
3.30 0.49 0.113 0.114 0.573 0.497 0.531 0.237 0.238 BMS1754 20 18439757 18439877 3.47 0.58 0.014 0.094 0.384 0.503 0.536 0.153 0.126 NRDIKM033 20 15598470 15598176 5.20 0.75 -0.004 0.098 0.372 0.234 0.213 0.415 0.466 ILSTS068 20 21675187 21675451 2.07 0.25 0.095 0.146 0.760 0.734 0.751 0.383 0.223 TGLA126 20 21808628 21808745 6.27 0.71 -0.009 0.079 0.170 0.493 0.443 0.085 0.057 BMS2461 20 25278607 25278662 4.83 0.62 0.028 0.180 0.985\* 0.227 0.246 0.453 0.760 BMS1128 20 26364064 26364112 3.54 0.52 0.032 0.109 0.534 0.472 0.446 0.503 0.203 BM713 20 26977228 26977280 3.36 0.62 -0.074 0.162 0.907 0.439 0.486 0.197 0.674 DIK2695 20 30452613 30452786 3.60 0.58 -0.027 0.075 0.186 0.432 0.411 0.565 0.274 TGLA153 20 31240022 31240154 4.64 0.71 0.025 0.109 0.521 0.345 0.353 0.101 0.269 GHRpromS 20 31023202 31023306 3.12 0.43 0.006 0.114 0.581 0.426 0.446 0.726 0.268 BMS2361 20 34597279 34597368 5.10 0.72 0.019 0.125 0.698 0.329 0.351 0.045\*\* 0.017\*\* DIK4835 20 35915540 35916040 4.96 0.65 0.022 0.136 0.788 0.293 0.329 0.252 0.046 AGLA29 20 3842995 38843142 5.49 0.78 -0.006 0.087 0.202 0.363 0.412 0.000\*\* 0.000\*\* BMS117 20 40015465 40015564 3.88 0.67 -0.018 0.078 0.197 0.377 0.376 0.398 0.272 UMBTL78 20 40177064 40177157 4.22 0.58 -0.033 0.102 0.462 0.298 0.256 0.884 0.229 BM2113 2 88476 88616 5.44 0.79 -0.052 0.119 0.673 0.353 0.379 0.003\*\* 0.005\*\* INRA023 3 35576043 35576259 4.85 0.70 0.009 0.113 0.564 0.309 0.306 0.238 0.107 ETH10 5 55333999 55334220 4.57 0.67 0.002 0.134 0.789 0.432 0.446 0.049\* 0.031\* ETH152 5 NA NA 4.56 0.71 0.012 0.081 0.171 0.425 0.486 0.008\*\* 0.020 ILSTS006 7 86555402 86555693 5.14 0.77 -0.007 0.076 0.110 0.331 0.351 0.032\* 0.057 HEL9 8 NA NA 5.04 0.70 0.020 0.134 0.792 0.262 0.289 0.240 0.245 ETH225 9 8089454 8089601 5.02 0.71 0.013 0.113 0.560 0.410 0.478 0.009\*\* 0.009\*\* MM12 9 NA NA 7.76 0.67 0.017 0.123 0.671 0.312 0.347 0.244 0.112 ILSTS005 10 93304132 93304315 2.17 0.43 -0.026 0.083 0.356 0.686 0.664 0.358 0.390 CSRM60 10 
70549981 70550081 7.03 0.72 0.011 0.073 0.094 0.405 0.418 0.046\* 0.038\* HEL13 11 NA NA 3.14 0.51 0.081 0.125 0.678 0.402 0.407 0.529 0.564 INRA032 11 49569411 49569592 3.81 0.62 -0.010 0.142 0.812 0.511 0.537 0.063 0.016 INRA037 11 70730695 70730819 4.54 0.58 0.030 0.129 0.717 0.266 0.243 0.830 0.462 INRA005 12 71751518 71751656 3.18 0.56 0.032 0.088 0.321 0.594 0.596 0.114 0.096 CSSM66 14 6128576 6128773 5.91 0.74 0.002 0.137 0.873 0.312 0.352 0.000\*\* 0.003\*\* HEL1 15 NA NA 3.99 0.67 0.020 0.072 0.138 0.468 0.445 0.119 0.155 SPS115 15 NA NA 5.40 0.58 0.039 0.096 0.416 0.478 0.482 0.228 0.146 INRA035 16 62926476 62926577 2.72 0.23 0.391 0.072 0.266 0.521 0.488 0.746 0.421 TGLA53 16 22214785 22214925 12.25 0.74 0.071 0.099 0.354 0.195 0.213 0.063 0.037 ETH185 17 36598852 36599086 8.31 0.68 0.039 0.146 0.877 0.336 0.303 0.186 0.196 INRA063 18 37562469 37562645 3.31 0.57 0.031 0.110 0.546 0.537 0.487 0.270 0.135 TGLA227 18 60360145 60360234 10.71 0.82 0.005 0.076 0.075 0.282 0.315 0.005\*\* 0.012\* ETH3 19 NA NA 4.44 0.65 0.009 0.135 0.787 0.407 0.406 0.073 0.139 HEL5 21 11850292 11850455 4.64 0.66 0.038 0.151 0.903 0.424 0.410 0.023\* 0.104 TGLA122 21 50825795 50825936 11.36 0.74 0.007 0.069 0.065 0.210 0.213 0.538 0.152 HAUT24 22 45733839 45733962 7.09 0.70 0.025 0.143 0.861 0.406 0.424 0.004\*\* 0.027\* BM1818 23 35634770 35635033 4.03 0.63 0.019 0.102 0.458 0.538 0.486 0.144 0.013\* HAUT27 26 26396836 26396987 8.85 0.61 0.126 0.103 0.453 0.376 0.396 0.083 0.003\*\* BTA, *Bos taurus*autosome; *A*~R~, allelic richness; *H*~E~, expected heterozygosity, *F*~IS~, inbreeding coefficient, observed homozygosity, *F*~OBS~, and expected homozygosity, *F*~EXP~, NA, not available; the probabilities for the Ewens-Watterson test were calculated based on homozygosity (*P*~H~) or Fishers\'s exact test (*P*~E~); \*, the significance level of *P*\< 0.05, \*\*, the significance level of *P*\< 0.01; the genomic positions for the loci are BLASTed against STS or primer sequence in 
ENSEMBL cow genome Btau4.0 <http://www.ensembl.org/Bos_taurus/Info/Index> updated until 11/02/2010

Microsatellite variability measures and test for linkage disequilibrium
-----------------------------------------------------------------------

Microsatellite variability, expected heterozygosity (*H*~EXP~), allelic richness (*A*~R~), and Weir and Cockerham\'s *F*~ST~ \[[@B36]\] were estimated with the FSTAT program, version 2.9.3.2 \[[@B37]\]. The *D*\' metric used to estimate LD was calculated using the Multiallelic Interallelic Disequilibrium Analysis Software (MIDAS; \[[@B38]\]). Values of *D*\' were calculated for all syntenic marker pairs on BTA1 and BTA20 across the populations. A more detailed description of the estimation of *D*\' can be found in \[[@B39]\]. The statistical significance of the observed association between pairs of alleles under the null hypothesis of random allelic assortment was tested using a Monte-Carlo approximation of Fisher\'s exact test as implemented in the software ARLEQUIN \[[@B40]\], using a Markov chain extension to Fisher\'s exact test for *R* × *C* contingency tables \[[@B41]\]. A total of 100 000 alternative tables were explored with the Markov chain, and probabilities were typically estimated with a standard error of \< 0.001. Estimation of the *D*\' metric for LD and the tests for its significance were conducted only in the three Finnish native breeds, i.e. Northern Finncattle, Eastern Finncattle and Western Finncattle. A graphic summary of the significance of the LD tests was produced with the HaploView program, version 4.0 \[[@B42]\]. Fisher\'s exact tests in GENEPOP v 4.0 \[[@B43]\] were applied to assess LD between all locus pairs across the sample.
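To illustrate the *D*\' metric described above, here is a minimal Python sketch of Lewontin\'s multiallelic *D*\' (the allele-frequency-weighted average of per-allele-pair |*D*\'~ij~| that multiallelic LD software reports); the function name and toy data are our own, and this is only the summary statistic, not the MIDAS implementation with its significance tests:

```python
from collections import Counter

def multiallelic_d_prime(haplotypes):
    """Lewontin's multiallelic D': frequency-weighted average of |D'_ij|
    over all allele pairs at two loci.  `haplotypes` is a list of phased
    (allele_at_locus_1, allele_at_locus_2) pairs."""
    n = len(haplotypes)
    hap = Counter(haplotypes)                  # two-locus haplotype counts
    pa = Counter(a for a, _ in haplotypes)     # allele counts, locus 1
    pb = Counter(b for _, b in haplotypes)     # allele counts, locus 2
    dprime = 0.0
    for a, ca in pa.items():
        p = ca / n
        for b, cb in pb.items():
            q = cb / n
            d = hap[(a, b)] / n - p * q        # D_ij = p_ij - p_i * q_j
            # normalise by the maximum |D| attainable at these frequencies
            dmax = min(p * (1 - q), (1 - p) * q) if d >= 0 else min(p * q, (1 - p) * (1 - q))
            if dmax > 0:
                dprime += p * q * abs(d) / dmax
    return dprime

# Toy data: complete association gives D' = 1, independence gives D' = 0.
complete = [("A", "x")] * 5 + [("B", "y")] * 5
independent = [("A", "x"), ("A", "y"), ("B", "x"), ("B", "y")]
```

In practice the per-pair significance would still be assessed by an exact or Monte-Carlo test, as done in the study with ARLEQUIN.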
Tests to detect loci under selection across populations
-------------------------------------------------------

Possible departures from the standard neutral model of molecular evolution, potentially revealing demographic events or the existence of selective effects at certain loci, were examined for each locus using the Ewens-Watterson test \[[@B44],[@B45]\] and the modified frequentist method of Beaumont and Nichols \[[@B9]\], as well as a more robust Bayesian test \[[@B12]\]. The Ewens-Watterson test of neutrality was performed with the ARLEQUIN program \[[@B40]\] assuming an infinite allele mutation model. To obtain sufficient precision with this test, the probability was recorded as the mean of 20 independent repeats of 1,000 simulations. The frequentist method used was that proposed by \[[@B9]\], further developed by \[[@B12]\], and implemented in the FDIST2 program <http://www.rubic.rdg.ac.uk/~mab/software.html>, a currently distributed version of the original FDIST program as described by \[[@B12]\]. FDIST2 calculates *θ*, Weir & Cockerham\'s \[[@B36]\] estimator of *F*~ST~, for each locus in the sample. Coalescent simulations are then performed to generate data sets with a distribution of *θ* centered on the empirical estimates. Then, the quantiles of the simulated *F*~ST~ within which the observed *F*~ST~\'s fell, and hence the *P*-values for each locus, were determined. Initially, an island model of population differentiation was used, and the procedure was repeated 50,000 times to generate 95% confidence intervals for neutral differentiation and to estimate *P*-values for departure of the loci from these expectations. Simulations were run under an infinite allele mutation model with 100 demes, 10 sampled populations, sample sizes of 100, and a weighted *F*~ST~ similar to the trimmed mean *F*~ST~ calculated from the empirical distribution.
Computed by removing the 30% highest and lowest *F*~ST~ values observed in the empirical data set, the trimmed mean *F*~ST~ is an estimate of the average \"neutral\" *F*~ST~ value uninfluenced by outlier loci (see \[[@B46]\]). This method provides evidence for selection by looking for outliers with higher or lower observed *F*~ST~ values, controlling for *P*-values \[[@B12]\]. The approach is fairly robust to variation in mutation rate between loci, to sample size, and to whether or not populations are at equilibrium \[[@B9]\]. Beaumont & Balding\'s \[[@B12]\] hierarchical-Bayesian method was performed using the BAYESFST package <http://www.reading.ac.uk/Statistics/genetics/software.html>, which generates 2,000 Markov chain Monte Carlo (MCMC) simulated loci on the basis of the distribution of *F*~ST~ given the data. The method combines information over loci and populations in order to simultaneously estimate *F*~ST~ at the *i*^th^ locus and the *j*^th^ population, *F*~ST~(*i*, *j*), for all loci *i* and populations *j*. A hierarchical model is implemented for *F*~ST~(*i*, *j*) as $$F_{ST}(i,j) = \frac{\exp(\alpha_{i} + \beta_{j} + \gamma_{ij})}{1 + \exp(\alpha_{i} + \beta_{j} + \gamma_{ij})}$$ where α~i~, β~j~ and γ~ij~ are locus, population and locus-by-population parameters, respectively \[[@B12]\]. In this study, the interpretations of the potential outliers are based on the locus effect (*α*~i~). Outliers in our data set were identified on the basis of this distribution, following \[[@B12]\]. Rather than assuming a fixed *F*~ST~ as in the frequentist method of \[[@B9]\], this BAYESFST test uses more information from the raw data and does not assume the same *F*~ST~ for each population \[[@B5],[@B12]\].
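The trimmed mean *F*~ST~ described above can be sketched in a few lines of Python; this is our own illustration (function name and trimming details are ours, and FDIST2\'s exact trimming may differ):

```python
def trimmed_mean_fst(fst_values, trim=0.30):
    """Mean per-locus F_ST after discarding the `trim` fraction of loci
    from each tail of the empirical distribution, so that outlier loci
    do not pull the 'neutral' average up or down."""
    vals = sorted(fst_values)
    k = int(len(vals) * trim)              # loci removed from each end
    kept = vals[k:len(vals) - k] if k > 0 else vals
    return sum(kept) / len(kept)
```

With ten loci and `trim=0.30`, the three smallest and three largest estimates are dropped and the remaining four are averaged.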
Tests to detect loci under selection for pairwise populations
-------------------------------------------------------------

To test for additional evidence of selection, we used combinations of the statistics lnRH, lnRV and lnRθ\' in pairwise population comparisons. The principle behind these tests is that variability at a neutral microsatellite locus is given by θ = 4 *N*~e~*μ*, where *N*~e~ is the effective population size and *μ* is the mutation rate. A locus linked to a beneficial mutation will have a smaller effective population size and consequently a reduction in variability below neutral expectations. The relative variability between populations, ln Rθ, can instead be assessed by estimating the ratio of the variances in repeat number, lnRV, or of the heterozygosities, lnRH, for each locus. lnRV was calculated using the equation lnRV = ln (*V*~pop1~/*V*~pop2~), where *V*~pop1~ and *V*~pop2~ are the variances in repeat number for population 1 and population 2, respectively \[[@B10]\]. The lnRH test is based on the calculation of the logarithm of the ratio of *H* for each locus for a pair of populations, as follows $$\ln\text{RH} = \ln\frac{\left( \frac{1}{1 - H_{\text{pop1}}} \right)^{2} - 1}{\left( \frac{1}{1 - H_{\text{pop2}}} \right)^{2} - 1}$$ where *H* denotes expected heterozygosity (see equation 2 in \[[@B6]\]). In addition, we attempted to calculate ln Rθ by estimating θ directly, using a coalescence-based Bayesian Markov chain Monte Carlo simulation approach employing the MSVAR program \[[@B47]\]. These tests have been shown to be relatively insensitive to mutation rate, deviation from the stepwise mutation model, demographic history of the population, and sample size \[[@B16]\].
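The two ratio statistics can be computed directly from the definitions above; a minimal sketch (function names and toy inputs are ours), with lnRV from the sample variances of repeat numbers and lnRH from the expected heterozygosities:

```python
import math

def ln_rv(repeats_pop1, repeats_pop2):
    """lnRV: log ratio of the sample variances in microsatellite repeat
    number between two populations (ln(V_pop1 / V_pop2))."""
    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return math.log(sample_var(repeats_pop1) / sample_var(repeats_pop2))

def ln_rh(h_pop1, h_pop2):
    """lnRH from the expected heterozygosities of the two populations,
    following the equation above."""
    t1 = (1.0 / (1.0 - h_pop1)) ** 2 - 1.0
    t2 = (1.0 / (1.0 - h_pop2)) ** 2 - 1.0
    return math.log(t1 / t2)
```

Per the outlier criterion quoted below, locus values standardized across all loci that fall more than 1.96 (95%) or 2.58 (99%) standard deviations from the mean would be flagged as candidate targets of selection.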
As suggested by \[[@B48]\], for detecting the most recent and strong selective sweeps the combination of the lnRH and lnRV statistics is as powerful as lnRV alone, but using both statistics together lowers the rate of false positives by a factor of 3, because the variance in repeat number and the heterozygosity of a population measure different aspects of the variation at a locus. Thus, combinations of any two of the three tests were implemented here, and the significance of lnRH, lnRV and lnRθ\' for each comparison was calculated according to standard methods \[[@B6],[@B10],[@B48]\]. These statistics are generally normally distributed, and simulations have confirmed that outliers (e.g. more than 1.96/2.58 standard deviations from the mean for 95%/99% confidence intervals, respectively) are likely to be caused by selection \[[@B48]\]. The tests were implemented for every pairwise comparison involving native populations from different trait categories (Eastern Finncattle, Western Finncattle and Northern Finncattle vs. Yakutian, Istoben, Kholmogory and Ukrainian Grey), i.e. 12 population pairs for the horn (polled/horned) trait.

Tests to detect loci under selection within a population
--------------------------------------------------------

The coalescent simulation approach implemented in the DetSel 1.0 program \[[@B49]\] was used to detect outlier loci within the Finnish native populations (Eastern Finncattle, Western Finncattle and Northern Finncattle). It has the advantage of taking a wide range of potential parameters into account simultaneously and of giving results that are robust to the starting assumptions. For each pair of populations (*i*, *j*), and for all loci, we calculated *F*~i~ and *F*~j~ (the population-specific divergences; for details see \[[@B7],[@B49]\]) and generated the expected joint distribution of *F*~i~ and *F*~j~ by performing 10,000 coalescent simulations.
Thus, every locus falling outside the resulting confidence envelope can be seen as potentially under selection. The following nuisance parameters were used to generate null distributions with numbers of allelic states similar to those in the observed data set: mutation rates (infinite allele model) *μ*= 1 × 10^-2^, 1 × 10^-3^, and 1 × 10^-4^; ancestral population size *N*~e~= 500, 5,000, and 50,000; times since an assumed bottleneck event *T*~0~= 50, 500, and 5,000 generations; time since divergence *t*= 50 and 500; and population size before the split *N*~0~= 50 and 500. In order to detect outlier loci potentially selected for the polled trait within the three Finnish native cattle populations, the DetSel program was run to compare the two subpopulations representing the definitely polled (*n*= 19) and horned (*n*= 19) animals, respectively.

Results
=======

Genetic diversity and differentiation
-------------------------------------

A complete list of the loci and their variability in the 10 cattle populations is shown in Table [1](#T1){ref-type="table"}. The overall genetic differentiation across loci was 0.117 (*F*~ST~= 0.117, 95% CI 0.108 - 0.125). *F*~ST~ values for individual loci varied from 0.017 (SD = 0.011) at *AGLA17* on BTA1 to 0.180 (SD = 0.057) at *BMS2461* on BTA20. Mean population differentiation for loci on BTA1 and BTA20 was 0.126 (*F*~ST~= 0.126, 95% CI 0.103 - 0.143) and 0.118 (*F*~ST~= 0.118, 95% CI 0.100 - 0.139), respectively. Neither value differed significantly from the average for loci on the other chromosomes (*F*~ST~= 0.114, 95% CI 0.104 - 0.124). Levels of variation across populations, including allelic richness (*A*~R~) and expected heterozygosity (*H*~E~), were in similar ranges for microsatellites on BTA1, BTA20 and the other autosomes, with the smallest variation observed at *AGLA17* (*A*~R~= 1.37, *H*~E~= 0.08).
The highest *H*~E~ of 0.79 was observed at *BM2113* (BTA2) and the highest *A*~R~ of 11.36 at *TGLA122* (BTA21). Most *F*~IS~ values were positive, and for some loci significantly so. Of the 13 negative *F*~IS~ values, seven occurred at loci on BTA20 and two at loci on BTA1. Loci on BTA1 and BTA20 did not show a significant reduction or increase in mean *F*~IS~ compared with the loci on the other autosomes (other bovine autosomes, mean *F*~IS~ = 0.038; BTA1, mean *F*~IS~ = 0.053, Mann-Whitney test *U* = 118, *P* = 0.409; BTA20, mean *F*~IS~ = 0.011, Mann-Whitney test *U* = 273.5, *P* = 0.227). Given the range of single-locus *F*~IS~ observations, there was no marked difference among the three classes of loci (BTA1, -0.083 - 0.190; BTA20, -0.074 - 0.113; other BTAs, -0.052 - 0.391).

Linkage disequilibrium
----------------------

The strength of pairwise linkage disequilibrium (LD) between markers was estimated; the average *D*\' value for pairwise syntenic markers was 0.32 across BTA1 and 0.28 across BTA20, both significantly (*P* \< 0.05) higher than for non-syntenic markers (0.15; only *D*\' \> 0.3 are shown in Figure [1](#F1){ref-type="fig"}). Figure [1](#F1){ref-type="fig"} also shows matrices of LD significance levels for all possible combinations of the loci on BTA1 or BTA20 in their chromosomal order. Of the 120 pairwise comparisons of the 16 loci on BTA20, 22 (22/120, 18.3%) tests showed *P* values below 0.05. Likewise, LD between markers on BTA1 yielded seven (7/21, 33.3%) significant observations. In contrast, a substantially smaller proportion (34/1124, 3.0%) of significant (*P* \< 0.05) pairs was found among non-syntenic markers. In general, significantly higher levels of LD were observed for syntenic markers on BTA1 and BTA20 than for non-syntenic markers. There was no evidence of LD blocks on either chromosome.
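The *D*\' statistic used above can be sketched compactly. Because microsatellites are multiallelic, LD software such as Haploview reports a multiallelic *D*\' that averages the pairwise |*D*\'| values weighted by allele frequencies; the following Python sketch is an illustration of that construction, not the study's implementation:

```python
def d_prime(p_ab, p_a, p_b):
    """Lewontin's D' for a pair of alleles at two loci, from the AB
    haplotype frequency and the two marginal allele frequencies."""
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return d / d_max if d_max > 0 else 0.0

def multiallelic_d_prime(hap_freqs):
    """Multiallelic D': average of |D'| over all allele pairs, weighted
    by the product of marginal allele frequencies.
    hap_freqs[i][j] = frequency of the haplotype with allele i at
    locus 1 and allele j at locus 2."""
    p = [sum(row) for row in hap_freqs]        # locus 1 marginals
    q = [sum(col) for col in zip(*hap_freqs)]  # locus 2 marginals
    return sum(p[i] * q[j] * abs(d_prime(hap_freqs[i][j], p[i], q[j]))
               for i in range(len(p)) for j in range(len(q)))
```

Complete association, e.g. `multiallelic_d_prime([[0.5, 0.0], [0.0, 0.5]])`, gives 1.0; haplotype frequencies equal to the products of allele frequencies (linkage equilibrium) give 0.0.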
![**Detailed view of the extent and significance of LD in the cattle populations using the Haploview 4.0 program**. Numbers in the blocks indicate the percentage of the LD metric *D*\' values \> 0.3; shadings indicate Fisher\'s exact test significance levels: white, *P* \> 0.05; light shading, *P* \< 0.05.](1297-9686-42-32-1){#F1}

Evidence for selection across the populations
---------------------------------------------

The Ewens-Watterson test enables detection of deviations from a neutral-equilibrium model as either a deficit or an excess of genetic diversity relative to the number of alleles at a locus (see \[[@B50]\]). When applying the test to all the microsatellites, we detected 13 loci (*AGLA17*, *DIK5019*, *SOD1*, *AGLA29*, *BMS2361*, *BM2113*, *ETH10*, *ETH225*, *CSSM66*, *ETH152*, *TGLA227*, *HAUT24*, and *CSRM60*) on 10 different chromosomes exhibiting significant probabilities for the Ewens-Watterson test based on both homozygosity (*P*~H~) and Fisher\'s exact test (*P*~E~) (see Table [1](#T1){ref-type="table"}). Of the 13 loci, one (*AGLA17*) exhibited a significant (*P* \< 0.05) deficit of heterozygosity and the other 12 exhibited a significant (*P* \< 0.05) excess of genetic diversity relative to the expected values; these patterns are consistent with directional and balancing selection, respectively. The 12 loci gave average *P* values significantly (Student\'s *t* test: $\bar{P}_{H}$ = 0.020, *t* = -5.65, *P* \< 0.0001; $\bar{P}_{E}$ = 0.014, *t* = -5.69, *P* \< 0.0001) below the expected median value of 0.5. In contrast, average *P* values of 0.313 for *P*~H~ (*t* = -4.63, *P* \> 0.1) and 0.232 for *P*~E~ (*t* = -8.69, *P* \> 0.1) were observed for the remaining 38 loci, which showed no evidence of selection. This observation provided further evidence that selection has affected genetic diversity at these microsatellites.
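The Ewens-Watterson test above compares the observed homozygosity *F* = Σ*p*~i~^2^ with its neutral distribution given the sample size and the observed number of alleles. A compact Monte Carlo sketch under the infinite-alleles model (an illustration, not the routine used in the study): because the allele number *k* is sufficient for θ under the Ewens sampling formula, null configurations can be drawn by simulating a Chinese restaurant process with any positive θ and rejecting samples with the wrong *k*:

```python
import random

def watterson_test(counts, reps=2000, seed=1):
    """Monte Carlo Ewens-Watterson test for one locus.

    counts: observed allele counts. Returns (F_obs, P_H), where F_obs is
    the observed homozygosity sum(p_i**2) and P_H is the fraction of
    neutral samples whose homozygosity is <= F_obs (small P_H suggests
    an excess of diversity). Null configurations follow the Ewens
    sampling formula conditional on sample size n and allele number k;
    conditional on k, the configuration law does not depend on theta.
    """
    n, k = sum(counts), len(counts)
    f_obs = sum((c / n) ** 2 for c in counts)
    rng = random.Random(seed)
    theta = float(k)  # arbitrary positive value; keeps acceptance reasonable
    hits = done = 0
    while done < reps:
        tables = []
        for m in range(n):  # seat the m-th sampled gene copy
            if rng.random() < theta / (theta + m):
                tables.append(1)               # new allele
            else:                              # existing allele, size-biased
                r, acc = rng.random() * m, 0
                for i, t in enumerate(tables):
                    acc += t
                    if r < acc:
                        tables[i] += 1
                        break
        if len(tables) != k:
            continue                           # reject: wrong allele number
        done += 1
        if sum((t / n) ** 2 for t in tables) <= f_obs:
            hits += 1
    return f_obs, hits / reps
```

A near-fixed configuration such as `[9, 1]` yields a high *F*~obs~ (0.82), whereas the evenly balanced `[5, 5]` yields the minimum possible *F*~obs~ (0.5) for *k* = 2 and *n* = 10.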
The results of the analyses with the FDIST2 program are presented in Table [1](#T1){ref-type="table"} and Figure [2a](#F2){ref-type="fig"}. This summary-statistic method, based on simulated and observed *F*~ST~ values, identified four loci (*SOD1*, *BMS2461*, *DIK5019* and *AGLA17*) as outliers showing footprints of selection in the analyses including all 10 populations, at the 5% significance level. Of the four significant loci, the three (*SOD1*, *BMS2461* and *DIK5019*) with higher *F*~ST~ values indicated a sign of directional selection, while one locus (*AGLA17*), appearing in the lower tail of the *F*~ST~ distribution, suggested a signature potentially shaped by balancing selection (Figure [2a](#F2){ref-type="fig"}). In the Bayesian *F*~ST~ test (Figure [2b](#F2){ref-type="fig"}), which is based on a hierarchical regression model, three loci (*HEL5*, *DIK4591* and *SOD1*) were detected as directionally selected and two (*AGLA17* and *TGLA227*) as under balancing selection. Overall, across all the populations, two loci, *AGLA17* and *SOD1*, exhibited the strongest evidence of selection with all three statistical approaches, which provides good support for their status as outliers due to selection. Two loci (*DIK5019* and *TGLA227*) exhibited significant departures from neutral expectations in two of the three selection tests. Furthermore, 12 loci (*AGLA29*, *BMS2361*, *BM2113*, *ETH10*, *ETH225*, *CSSM66*, *ETH152*, *HAUT24*, *CSRM60*, *BMS2461*, *HEL5* and *DIK4591*) can be regarded as candidates affected by selection, but were revealed in only one of the three tests. Interestingly, according to the Ensembl cow genome (<http://www.ensembl.org/Bos_taurus/Info/Index>), the significant locus *AGLA17* under balancing selection lies about 1.78 cM upstream of the candidate locus for *POLL*, whereas the locus *SOD1* under directional selection lies about 3.87 cM downstream of it.
It should be noted that *F*~ST~-based tests of selection are prone to false positives because of their sensitivity to demographic history \[[@B51]\], heterogeneity in mutation rate among loci \[[@B52]\] and locus-specific phenomena not related to selection \[[@B48]\]. Nevertheless, we expect the set of loci identified by *F*~ST~-based tests to be enriched for true positives in further tests.

![**Results of (A) the FDIST2 and (B) BAYESFST tests**. The solid lines indicate the critical cutoff for the *P*-value at the 0.05 level.](1297-9686-42-32-2){#F2}

Tests for selection for pairwise populations
--------------------------------------------

Since each of the five tests used above relies on somewhat different assumptions, loci that are repeatedly found to be outside the range expected under neutrality are extremely good candidates for markers under selection. Moreover, LD is known to be extremely high for the six BTA1 microsatellites near the candidate gene affecting the presence or absence of horns in *Bos taurus*, so the region under selection is likely to be quite wide. Despite the possible presence of a few false positives, the full set of seven loci (*SOD1*, *BMS2461*, *DIK5019*, *HEL5*, *DIK4591*, *TGLA227* and *AGLA17*) was used for further analyses. The lnRθ methods (lnRH, lnRV and lnRθ\') use differences in heterozygosity or in variance, rather than population divergence, to test for selection. Significant results of the lnRθ tests for selective sweeps involve the two loci (*AGLA17* and *SOD1*) detected by the Ewens-Watterson test and the *F*~ST~-based tests, for the pairwise combinations (*n* = 12) of the three native Finnish cattle populations and the four old native populations from Russia and Ukraine (Table [2](#T2){ref-type="table"}).

###### Estimates of lnRV, lnRH and lnRθ\' for the pairwise comparisons (for each statistic, the first column refers to *AGLA17* and the second to *SOD1*)

| Pairwise comparison | lnRV | lnRV | lnRH | lnRH | lnRθ\' | lnRθ\' |
|---|---|---|---|---|---|---|
| Eastern Finncattle - Istoben | \* | \* | n.s. | n.s. | \* | n.s. |
| Eastern Finncattle - Yakutian | \* | \*\* | \* | \*\* | \*\* | \* |
| Eastern Finncattle - Ukrainian Grey | \*\* | \*\* | \* | \* | \*\* | \* |
| Eastern Finncattle - Kholmogory | \* | \*\* | \* | \* | \* | \* |
| Western Finncattle - Istoben | \*\* | \* | \*\* | \*\* | \* | \* |
| Western Finncattle - Yakutian | \*\* | \*\* | \* | \* | \* | \*\* |
| Western Finncattle - Ukrainian Grey | \* | \* | \*\* | \* | \* | \* |
| Western Finncattle - Kholmogory | \* | \* | \* | \* | \* | \*\* |
| Northern Finncattle - Istoben | \* | n.s. | \* | n.s. | n.s. | \* |
| Northern Finncattle - Yakutian | \* | n.s. | n.s. | \* | n.s. | n.s. |
| Northern Finncattle - Ukrainian Grey | \*\* | \* | n.s. | n.s. | n.s. | n.s. |
| Northern Finncattle - Kholmogory | \* | n.s. | n.s. | \* | n.s. | n.s. |

\* Significance *P* \< 0.05; \*\* *P* \< 0.01; n.s., not significant

Significant results for selective sweeps at loci *AGLA17* and *SOD1* were obtained in the 12 pairwise population comparisons for each of the three measures of lnRθ (Table [2](#T2){ref-type="table"}). Across the pairwise comparisons, a total of 28 and 26 significant (*P* \< 0.05) or very significant (*P* \< 0.01) results were observed at *AGLA17* and *SOD1*, respectively, in the three tests. Both loci appeared in all three measures of lnRθ for eight or more comparisons (Table [2](#T2){ref-type="table"}), that is, lnRθ (lnRH, lnRV and lnRθ\') values deviating by more than 1.96 standard deviations from the mean. Accordingly, the pairwise comparisons between either Eastern or Western Finncattle and the Yakutian, Kholmogory and Ukrainian Grey populations were significant for all three estimators. All the population comparisons yielded at least two significant results across the three estimators. In total, 54 (75%, 54/72) significant comparisons involved *AGLA17* or *SOD1* in the comparisons between the Finnish native populations (Northern Finncattle, Eastern Finncattle and Western Finncattle) vs.
the native populations from Russia and Ukraine (Istoben, Ukrainian Grey, Kholmogory and Yakutian Cattle), which suggests that selective sweeps have taken place in the Finnish native populations.

Tests for selection within the Finnish native populations
---------------------------------------------------------

The coalescent simulation, which is based on a population split model \[[@B49]\], was performed with the DetSel program within the Finnish native populations, which have very similar demographic backgrounds (Eastern Finncattle, Northern Finncattle and Western Finncattle). All six BTA1 microsatellites around the candidate loci are polymorphic in the three populations involved in the pairwise-subpopulation comparison. In the pairwise comparison between definitely polled (*n* = 19) and horned (*n* = 19) cattle, loci *AGLA17* and *SOD1* fell significantly outside the 99% confidence interval (Figure [3](#F3){ref-type="fig"}), while locus *DIK4591* fell only slightly outside the 95% confidence envelope in the three comparisons and is thus considered a false positive, i.e., it was detected as an outlier because of the 5% type I error. The outlier behavior of loci *AGLA17* and *SOD1* was deemed to be the result of strong local effects of hitchhiking selection.

![**Pairwise comparison of Finnish native cattle populations performed with DetSel**. The test was at the 95% confidence envelope: plot of *F*~2~ against *F*~1~ estimates for the subpopulation pair polled vs. horned.](1297-9686-42-32-3){#F3}

Discussion
==========

In this study, in addition to 28 microsatellites on other cattle autosomes used as a reference set of markers, seven microsatellites on BTA1 and 16 on BTA20 around candidate loci were screened for footprints of selection among 10 cattle populations with divergent horn or production traits.
Across the different statistical analyses, a highly divergent pattern of genetic differentiation and large differences in levels of variability were revealed at the loci *SOD1* and *AGLA17* among populations, which is inconsistent with neutral expectations. The results indicate divergent \'selective sweeps\' at *AGLA17* and *SOD1*, probably caused by selection at the closely-linked candidate loci for the horned/polled trait, e.g. the *POLL* gene.

Evidence of selection on microsatellites surrounding the *POLL* gene
--------------------------------------------------------------------

Because revealing outlier loci in genome scans currently depends on statistical tests, one of the main concerns is to highlight truly significant loci while minimizing the detection of false positives \[[@B44]\]. Using a multilocus scan of differentiation based on microsatellite data, we compared three different methods aimed at detecting outliers from simulated neutral expectations: 1) the Ewens-Watterson method \[[@B44],[@B45]\], 2) the FDIST2 method \[[@B9]\], and 3) a BAYESFST method \[[@B12]\]. Outliers were identified for 15 loci using a 5% threshold, and this identification was robust across methods for two loci (*SOD1* and *AGLA17*). The locus *SOD1* presented higher differentiation (*F*~ST~ value) than expected, suggesting that it could have been affected by the action of diversifying selection among homogeneous gene pools and populations. In contrast, the locus *AGLA17* presented lower genetic differentiation than expected, which could represent a signature of homogenizing selection among populations and/or balancing selection within populations. All three methods identified loci *SOD1* and *AGLA17* as good candidates for selection on the polled trait. However, several loci were significant in only one or two of the tests and thus could not be confirmed as reliable outliers by the remaining tests.
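The cross-method consensus described above is simple to make explicit: keep only loci flagged by a minimum number of the tests. A small sketch, using the outlier sets reported in the Results (the helper name is illustrative):

```python
from collections import Counter

def robust_outliers(calls, min_support=2):
    """Given one set of outlier loci per method, keep the loci flagged
    by at least `min_support` of the methods."""
    votes = Counter(locus for method in calls for locus in method)
    return {locus for locus, v in votes.items() if v >= min_support}

# Outlier sets as reported in the Results section:
ew = {"AGLA17", "DIK5019", "SOD1", "AGLA29", "BMS2361", "BM2113", "ETH10",
      "ETH225", "CSSM66", "ETH152", "TGLA227", "HAUT24", "CSRM60"}
fdist2 = {"SOD1", "BMS2461", "DIK5019", "AGLA17"}
bayesfst = {"HEL5", "DIK4591", "SOD1", "AGLA17", "TGLA227"}

core = robust_outliers([ew, fdist2, bayesfst], min_support=3)
# core == {"AGLA17", "SOD1"}: the two loci robust across all methods
```

Lowering `min_support` to 2 additionally recovers *DIK5019* and *TGLA227*, matching the two loci reported as significant in two of the three tests.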
The results obtained by the three methods are not totally consistent, probably because of differences in statistical power when using multiple measures of variability, each of which measures different parameters and relies on different assumptions, e.g. heterozygosity and variance in allele size \[[@B48]\], as detailed in e.g. \[[@B53]-[@B55]\]. Besides the global analyses, detection of outlier loci was also carried out using pairwise analyses. This helped to reveal loci with a major overall effect as well as loci responding with different strengths to artificial selection in the individual populations. Among the populations chosen for the pairwise analyses, the lnRθ (lnRV, lnRH and lnRθ\') tests yielded a high number of significant (*P* \< 0.05) results at *SOD1* and *AGLA17* according to the three estimators of lnRθ (Table [2](#T2){ref-type="table"}). This finding conforms well with previous results on selective sweeps associated with hitchhiking selection involving one or more genes with locally beneficial mutations. Although the three estimators of lnRθ differ in their statistical power to detect selection, as discussed in \[[@B6],[@B48],[@B56]\], they provide an additional robust evaluation of potential selective sweeps in the pairwise population comparisons. Neutrality tests for microsatellites focus mainly on unlinked loci and are based on either population differentiation (*F*~ST~) or reduced variability (lnRθ). Our proposed tests consider lnRθ at several linked loci for the inference of selection. While the single-locus lnRθ test is largely independent of the demographic past, the additional power gained from linked loci comes at the cost of an increasing dependence on the demographic past, because LD is extremely sensitive to demographic history. Thus, pairwise analyses between sub-populations may reduce the demographic effects when accounting for selection.
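The lnRθ statistics discussed above can be sketched compactly. A minimal Python illustration assuming the standard definitions (lnRV as the log ratio of the variances in repeat number of two populations, lnRH from expected heterozygosities via the stepwise-mutation-model estimator of θ) and the 1.96-standard-deviation outlier rule used in the paper; the input values would be per-locus statistics for one population pair:

```python
import math

def ln_rv(var1, var2):
    """lnRV: log ratio of variances in microsatellite repeat number
    between two populations."""
    return math.log(var1 / var2)

def ln_rh(h1, h2):
    """lnRH: log ratio of theta estimates derived from expected
    heterozygosity under the stepwise mutation model,
    theta = 0.5 * ((1 / (1 - H))**2 - 1); the 0.5 cancels in the ratio."""
    return math.log(((1.0 / (1.0 - h1)) ** 2 - 1.0) /
                    ((1.0 / (1.0 - h2)) ** 2 - 1.0))

def sweep_outliers(values, z=1.96):
    """Standardize per-locus statistics and flag loci deviating by more
    than z standard deviations from the mean (1.96 -> 95%, 2.58 -> 99%)."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return [abs(v - mean) / sd > z for v in values]
```

A locus swept in one population shows sharply reduced variance and heterozygosity there, so its lnRV and lnRH values fall far from the genome-wide mean of the pairwise comparison.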
As indicated in Figure [3](#F3){ref-type="fig"}, the great majority of loci fall within the confidence region of the conditional pairwise-subpopulation distributions of branch-length estimates, while some loci do not. Overall, we identified two loci (*SOD1* and *AGLA17*) that were probably subject to selection in the three Finnish native populations. Thus, we conclude that the distribution of variability at these loci has been shaped by forces other than demographic effects such as genetic drift. Although the locus *DIK4591* lay on the edge of, or fell just outside, the high-probability region of the expected conditional distribution in the Finnish native populations, we must be cautious about this locus because the estimation of the *F*~i~ parameters is discontinuous as a result of the discrete nature of the data, i.e. the allele counts (e.g. \[[@B7]\]). It is also worth noting that not all significant loci detected by the other methods could be evaluated with DetSel, owing to a technical constraint: if a locus is monomorphic in one population of a pair, analyses with DetSel are not possible. Tests to detect outlier loci that deviate from neutral expectations cannot distinguish false positives (type I errors) from true signals. Thus, we conducted the three different neutrality tests (the Ewens-Watterson test, the FDIST test and the BAYESFST test) with a 95% *P*-level criterion to identify loci under selection pressure, at which the expected number of false-positive loci is 51 × 0.05 = 2.55. We still found 13, four and five outlier loci, respectively, indicating that at least some of the outlier loci are unlikely to be false positives.
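The false-positive arithmetic above (51 loci × 0.05 ≈ 2.55 expected outliers per test by chance) can be checked directly. Treating the per-locus tests as independent, which is an approximation since linked loci are correlated, the probability of observing at least the 13 Ewens-Watterson outliers by type I error alone is a binomial tail:

```python
from math import comb

def binom_tail(n, k, p):
    """Upper tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 51 loci tested at alpha = 0.05: ~2.55 false positives expected by chance.
expected = 51 * 0.05
# Chance of 13 or more outliers (the Ewens-Watterson count) arising from
# type I error alone, under the independence approximation:
p_13_or_more = binom_tail(51, 13, 0.05)
```

Even this rough approximation puts the probability well below 10^-4^, supporting the conclusion that the observed outlier counts cannot all be type I errors.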
As suggested by \[[@B5]\], a practical approach to strengthening the candidate status of identified outlier loci is to apply two or more neutrality tests, based on different assumptions and parameter estimations, simultaneously, and to consider only outlier loci that are supported by several methods for subsequent validation steps. Thus, the fact that some loci are identified by one neutrality test but not by others suggests that their status as candidate loci under selection must be regarded with considerable caution. However, significant deviations from neutral expectations in multiple tests do not necessarily mean that a particular locus has been affected by hitchhiking selection. In this case, we applied three different pairwise population neutrality tests in 12 separate comparisons using two loci (across the populations: 3 × 12 × 2 = 72 separate tests). This is expected to result in approximately four false positives at the 95% *P* level. The fact that we observed as many as 54 deviations (Table [2](#T2){ref-type="table"}) at the 95% *P* level indicates that it is unlikely that all the outliers identified by the pairwise analyses are due to type I errors. Moreover, no locus showed only a single significant deviation in a single pairwise population comparison (see Table [2](#T2){ref-type="table"}). Therefore, the approach can be considered quite robust and conservative in detecting the effects of hitchhiking selection, particularly when additional pairwise analyses are applied.

Interpretation of the outlier loci and caveats
----------------------------------------------

In fact, the microsatellites themselves are unlikely to be the targets of selection; rather, they are tightly linked to the candidate genes. Since the microsatellites used are located close to some functional candidate genes (or QTLs) on the same chromosome, there is a high probability that one or several good candidate genes (or QTLs) are tightly linked to some of the microsatellites.
In many of the cases examined to date, selective sweeps have affected only a very small region, potentially containing only one or a few genes, except under extremely strong selection (see \[[@B57]\]). Empirical studies indicate that the distance over which LD between a hitchhiking locus and the candidate gene under selection remains detectable varies from tens of bp (e.g. \[[@B55]\]) to tens or even hundreds of kb (see \[[@B58],[@B59]\]), depending on a variety of factors such as the genomic region (e.g. sex chromosome vs. autosome) and populations (e.g. domesticated vs. wild) investigated, and the type of markers used (e.g. EST- or MHC-microsatellites vs. microsatellites). It has also been suggested that the LD between loci and candidate genes affected by selection is determined mainly by the strength of selection, the local recombination rate, population history, and the age of the beneficial allele \[[@B60]\]. Whatever the reason, significant LD was detected at inter-marker genomic distances between *ca*. 1100 kb and *ca*. 10300 kb in this study (see Figure [1](#F1){ref-type="fig"}), a considerably wider interval than reported previously. We detected two microsatellite loci (*AGLA17* and *SOD1*) probably linked to the candidate gene for the polled trait in the populations investigated. The polled trait is an autosomal dominant trait in cattle, and to date the genes controlling this trait have not been specifically identified. However, the gene causing the absence of horns is known to lie at the centromeric end of BTA1. Several factors have potentially driven the evolution of this functionally important candidate locus, including artificial selection and the mating system. In Finnish native cattle populations, polled animals were particularly favored during selective breeding. However, we did not detect any locus under selection on BTA20, despite the fact that several microsatellites, including GHRJA, surround the growth hormone receptor gene.
The growth hormone receptor belongs to the large superfamily of class 1 cytokine receptors. It has various roles in growth, lactation and reproduction in cattle and has been identified as a candidate gene affecting a few key quantitative traits. It is therefore not specific to dairy traits but relates to traits connected with growth, lactation and reproduction. Among the cattle populations investigated here, no contrasting differences in growth, lactation or reproduction were observed. In addition, a recent study on the evolution of the cytoplasmic domains of the growth hormone receptor gene in Artiodactyla (see \[[@B61]\]) has suggested that the possible effects of selective sweeps on the bovine growth hormone receptor gene occurred before domestication and not among the domestic breeds. Unfortunately, owing to the lack of information on the mutation and recombination rates, as well as on the effective population size for these data, estimation of the selection coefficient is not possible here (see \[[@B59]\]). Given that the genomic interval of significant LD is comparable with the findings of hitchhiking around two anti-malarial resistance genes in humans \[[@B58]\] and with microsatellite hitchhiking mapping in the three-spined stickleback \[[@B59]\], the hitchhiking selection in this genomic region might be fairly strong. Moreover, the availability of genomic resources in cattle (e.g. NCBI Bovine Genome Resources; <http://www.ncbi.nlm.nih.gov/projects/genome/guide/cow/>) makes it possible to develop more precise approaches with much denser markers such as SNPs. Genotyping an additional set of high-density SNPs between the *AGLA17* and *SOD1* markers in the populations investigated will definitely give more precise information on selection and LD in this region. Because the populations studied here are not experimental, they differ in many characteristics other than the polled and horned traits.
Thus, some of the genetic differentiation could have been due to other selective forces, e.g. pathogens. In addition, since our data violate, at least partly, the model assumptions of equal population sizes and migration rates between populations for the FDIST2 test, the outliers from that test alone should be considered with caution, although multiple neutrality tests based on different assumptions and parameter estimations can minimize the possibility of false positives. Moreover, selection is not the only possible explanation for changes in the distribution of variation at particular loci; reduced variation or increased differentiation can also result from chance alone, e.g. genetic drift, bottlenecks or founder events \[[@B57]\]. To obtain clear evidence for selection at these markers, nucleotide variation between polled and horned populations must be analyzed.

Conclusions
===========

Our microsatellite data from northern Eurasian cattle populations empirically demonstrate a practical approach for identifying the best candidate loci under hitchhiking selection by simultaneously applying multiple neutrality tests based on different assumptions and parameter estimations. By analyzing microsatellite markers adjacent to functional genes, we identified two loci (*SOD1* and *AGLA17*) that are \"selection candidate\" targets associated with the horned/polled trait in cattle. This result could be further confirmed by using a more densely spaced set of markers. It would also be of great interest to see whether similar patterns of selection around the *POLL* gene are observed in commercial beef breeds such as the Australian Brangus, Angus and Hereford, where dehorning and breeding for polled cattle have been an accepted part of cattle management for generations.
Another future challenge is to verify the signal of artificial selection on the *POLL* gene, possibly using next-generation sequencing technology to detect nucleotide variation in the gene between polled and horned cattle. In addition, the approach we have taken in this paper can easily be extended to other cases and marker types. For example, diversity among cattle has been directed by man towards different goals (e.g. draft, milk, meat, fatness, size, color, horn characteristics, behavior, and other characteristics) during many generations of selection. Each of these selection events has potentially left a signature of selection on the genes and their neighboring loci that could be detected by using tests such as those we have applied here. As a marker technology, SNPs would offer the advantage of higher throughput when scanning the genome for evidence of hitchhiking selection.

Competing interests
===================

The authors declare that they have no competing interests.

Authors\' contributions
=======================

MHL designed the study, performed the data analysis and wrote the manuscript. TI-T did the laboratory work and contributed to the manuscript writing and data analysis. HL did the laboratory work and contributed to the manuscript writing and data analysis. JK planned and coordinated the whole study and contributed to the manuscript writing. All the authors read and approved the final manuscript.

Acknowledgements
================

The study includes parts of the data sets from the SUNARE (Sustainable Use of NAtural REsources; <http://www.aka.fi/sunare>), Russia In Flux, and N-EURO-CAD (North European Cattle Diversity) projects. The projects were funded by the Academy of Finland, the Ministry of Agriculture and Forestry in Finland, the Nordic Gene Bank for Farm Animals (NGH), and the Nordic Council of Ministers. We also thank Tatyana Kiselyova, Zoya Ivanova, Ruslan Popov, Innokentyi Ammosov, Elena V. Krysova, Nikolai G. Bukarov, Aleksandr D.
Galkin, Boris E. Podoba, Ljudmila A. Popova, and Valerij S. Matjukov for their help in collecting the samples.
Henrik Lynggaard's blog Tuesday, 7 February 2012 I have previously written about the challenges of integrating an Ivy-based subproject (like the Play framework) into a build that is otherwise Maven based. After some work it is now working, although the support feels a bit rudimentary. Here are the highlights of how to make it work. Saturday, 14 January 2012 It might seem very counter-intuitive to start using Hudson's XML format to manage Hudson jobs when it has such a great web interface, especially since it is the web interface which has made Hudson so approachable and easy to use. Don't get me wrong: I still love the web interface and think it is the right way to get people to use the tool. However, I also think there comes a time when you outgrow the web interface, and this is why I built the jobcreator tool. These are the requirements and issues that caused me to outgrow the web interface: Manual changes don't scale Using the web interface to make changes to individual jobs is very easy, but it doesn't really scale if you need to change a lot of jobs at the same time. One example could be changing the Git branch from "master" to a release branch for all the jobs related to an environment; such a change could involve upwards of 30 jobs. Another example could be that there are changes to the content and/or structure of the jobs, and those must be propagated up through the environments in sync with project code. This would mean even more manual changes. Managing this with manual changes in the web interface introduces a big risk of human error and inconsistencies. Hudson jobs are code If you want to be able to reproduce a build or deployment at a later time, it is important that you can also reproduce the Hudson jobs. In order to do this you need to store and version your Hudson job configurations. This is naturally best done in an SCM like Git or Subversion. Doing so also gives you the option to do branched development of your jobs.
Testing and more than one Hudson instance. Before making changes to the jobs being actively used, the changes should naturally be tested somewhere. This normally means creating the same set of jobs somewhere else, or targeting a different environment. Having the jobs defined as templates makes it easy for a developer to load the jobs into a private Hudson instance to experiment, or to share them on the testing instance. Overall Of course Hudson can be managed via the web interface, but for me this is just another step in the automation. Friday, 13 January 2012 I have finally had the chance to pull together the last changes before announcing the first version of the Hudson Job Creator tool. The idea behind the tool is that you can write FreeMarker-based templates and combine those with properties defined in a "pipeline" specification in order to generate Hudson's job config.xml files. This is mainly useful if you maintain a number of similar jobs, or have a series of jobs that you need to specify for multiple environments. I know working with Hudson's XML files directly can seem counter-intuitive, since one of Hudson's main strengths is its approachable and easy-to-use web interface, so later this week I will post a more in-depth blog post explaining why I chose to go this route. This bug has now been fixed and a new version of the release plugin has been released. So if you update the release plugin to version 2.2.2, Hudson's Maven 3 integration will now work for releases also. Every hour, deploy and test the latest successfully built artefacts to dev #1. Every day, deploy and test the latest artefacts which have been successfully tested in dev #1 to dev #2. On demand, do a Maven release of the latest artefacts that have passed testing in dev #2. The tricky part of this pipeline comes in the dev #2 environment, because it needs a way to select the latest artefacts which have been tested in dev #1, instead of just the latest artefacts produced.
The same goes for making the release: I need some way to identify which artefacts have been tested successfully in dev #2. This means I cannot rely on just picking the latest snapshot deployed to the internal repository. We have considered different options: Publish the snapshot to an internal repository and carry around the timestamp of the specific snapshot version. This would allow us to pin the version, but gives us a problem with regard to cleaning up unused snapshots, as Nexus can only keep X days or X versions, but cannot remove a set of artefacts based on a timestamp. It also has the downside of people questioning why we should even do Maven releasing if we already have a unique identifier. A second approach would be to use the "copy artefacts" Hudson feature, but then the way we get artefacts would be different between dev and the upper environments. The approach we have settled on is to not deploy the snapshots to the Nexus repo, but to use the Hudson Maven repository plugin. This plugin exposes each build as its own repository. In order to get the right artefacts we use a custom settings.xml to mirror the snapshot repository to the URL of the specific build as exposed by the Hudson plugin. This plugin only works with either Maven 2 jobs or the new Maven 3 integration, since it relies on Hudson understanding the build and the artefacts. We use the promoted builds plugin to identify the correct build, and we use the promotion status for easy clean-up of unused artefacts. We haven't looked too much into using Nexus Pro's features such as staging repositories or adding metadata, since the above approach works fairly well for us. The new challenge: The reason for writing this post and asking for help is a possible change to our process which I am not sure how best to integrate. There is a wish to integrate a component (a Play-framework-based webapp) built using Ivy into this framework, and specifically into this project.
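As a rough sketch of the custom settings.xml trick described above — the repository id, host name, job name, build number and URL layout are all placeholders, since the exact path depends on the Hudson Maven repository plugin version in use:

```xml
<settings>
  <mirrors>
    <mirror>
      <id>hudson-build-repo</id>
      <!-- "snapshots" is an assumed repository id from the project POMs -->
      <mirrorOf>snapshots</mirrorOf>
      <!-- points at one specific build's repository as exposed by the
           Hudson Maven repository plugin; host/job/build are placeholders -->
      <url>http://hudson.example.com/job/my-app/123/repository/</url>
    </mirror>
  </mirrors>
</settings>
```

Because the mirror pins a single build, dev #2 resolves exactly the artefacts produced (and later promoted) by that build, not merely the newest snapshot in the repository.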
I can see this causing some integration pains. Firstly, the project is one big Maven multi-module project and we prefer to keep it that way. At the very least we want to keep things as one build, i.e. one Hudson job for building. Is it possible to have a Maven submodule defer execution to Ivy? If we get the component built using Ivy only, then Hudson will not be able to see the outcome as a Maven artefact, and thus won't be able to expose it as part of the "repository per build". That is at least my strong suspicion. What to do? So does anyone have a suggestion on how to resolve this? Can we cleanly integrate Ivy into our current build, and if so, how? Do we need to find another approach than using the "repository per build" plugin? If so, what is the suggested alternative? Would the Nexus/Artifactory paid editions make this easier? Just to be clear, my requirements are: From the outside it must appear as one build, i.e. one Hudson job. For a developer's desktop build it is okay to be a multi-step process. We need to be able to specify a particular build to deploy in dev #2 and for releasing. We would like to continue to use the promoted builds plugin to visualize good builds. Wednesday, 26 October 2011 This is a review of Apache Maven 3 Cookbook, written by "Srirangan". I got a free copy from Packt Publishing for the purpose of the review. I have been using Maven for some years now and this book is an introductory book, so it was clear from the beginning that I am not in the target audience. The style of splitting the book into 50 recipes makes for a good format which is easy to read and breaks the book into small achievements for the reader. Instead of focusing solely on how Maven is configured, the author tries to tie some of the subjects to software development practices, e.g. covering Nexus and Hudson while explaining team collaboration. It serves the book well to put Maven into a development perspective, but it doesn't always fit with the recipe format.
For instance, in the Nexus case from above, the "How it Works" section becomes more a "why it is good" section. I like the fact that the book covers a wide range of different project types and topics. Many times when you read tutorials or other documentation, only the simplest project types are covered, leaving the reader to add plugins as needed. This book covers many project types and frameworks and some non-Java areas. It also covers things like setting up Nexus, Hudson, and various IDEs. It even has a single chapter on plugin development. In the first chapter the level of information in each recipe is appropriate, but as the chapters get more complex the level of information does not keep up. This results in many of the recipes being too simplistic. A prime example is the set-up of remote repositories: it describes in a great many screenshots how to install Tomcat 7 and deploy Nexus, but has only a single line of information on how to set up the remote repository, and it mentions (incorrectly) changing only the settings.xml and not the required changes to the project object model. So it fails to help the reader define and use the remote repository, possibly leaving the reader with a broken set-up. This simplistic approach has another side effect. In many of the recipes there are clear copy-pasteable examples but very little explanation of why things are the way they are. An example would be the first recipe which introduces multi-module projects. In the top-level project definition the dependencies are placed in the "<dependencyManagement>" section instead of the normal "<dependencies>" section, without an explanation of why. Conclusion: While I like the style and the long list of topics covered in this book, I think the decision not to explain the details of why things work like they do, e.g. <dependencies> vs <dependencyManagement>, or how repositories work, does the book a big disservice.
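The unexplained distinction boils down to this (two illustrative POM fragments; the junit coordinates are just an example, not taken from the book):

```xml
<!-- Parent POM: <dependencyManagement> only pins versions for children;
     it does not put anything on any module's classpath. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.10</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- Child module: declaring the dependency in a plain <dependencies>
     section actually pulls it onto the classpath; the version can be
     omitted because it is inherited from the parent's management section. -->
<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
  </dependency>
</dependencies>
```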
I would not recommend learning Maven from this book alone, since I think explaining the "why" is an essential part of learning a new tool. If you want to learn Maven, use the free Sonatype book Maven: The Complete Reference, and buy a copy of this book if you would like a quick introduction to the various project types and plugins. Thursday, 13 October 2011 The tool itself is pretty good and makes it very easy to test that our SOAP-based web services are working as intended. The fact that it actually provides Maven and JUnit integration out of the box is even better, and fits very nicely with our CI environment. There are however a few things that are not obvious when using the plugin. The documentation page is really old; e.g. it refers to an old (2.5.1) version of the plugin. The trick here is that you should in general use the same version as the desktop version you are running. In my case that is 4.0.0. It isn't documented on the page, but there is both a "maven-soapui-plugin" and a "maven-soapui-pro-plugin" version of the plugin. In order to fully use a project created with the pro version you need the pro version of the plugin. Version 4.0.0 of the plugin has a misconfiguration, so you will need to manually add some dependencies to the plugin. The version I got working looks like this.
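For orientation only: a minimal maven-soapui-plugin declaration generally looks something like the sketch below. The eviware groupId and the soapui.org plugin repository reflect common usage for the 4.0.0-era plugin but are assumptions here, and the extra dependencies that must be added manually are deliberately left out rather than guessed.

```xml
<pluginRepositories>
  <pluginRepository>
    <id>soapui-plugin-repo</id>
    <url>http://www.soapui.org/repository/maven2/</url>
  </pluginRepository>
</pluginRepositories>

<build>
  <plugins>
    <plugin>
      <groupId>eviware</groupId>
      <artifactId>maven-soapui-plugin</artifactId>
      <version>4.0.0</version>
      <configuration>
        <!-- path to the soapUI project file; placeholder -->
        <projectFile>src/test/soapui/my-service-soapui-project.xml</projectFile>
        <junitReport>true</junitReport>
      </configuration>
      <!-- the manually added dependencies mentioned above would go in a
           <dependencies> element here -->
    </plugin>
  </plugins>
</build>
```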
How the US Pushed Sweden to Take Down the Pirate Bay - pawal https://torrentfreak.com/how-the-us-pushed-sweden-to-take-down-the-pirate-bay-171212/ ====== cup-of-tea The copyright industry has had far too much power for many years now. But when I talk to people about this nobody cares. For most people the products of this industry are just "content" which they use to waste their time, so I suppose it makes sense that they don't care too much about it. The tragedy is that the copyright industry controls a large and continually growing part of our culture, and their power is only increasing. I was there when a UK music tracker called OiNK's Pink Palace was shut down. The police raided the home of the site owner before dawn, and even the home of his father, who had no idea what his son was up to. Copyright industry writers wrote the news article, claiming it was "extremely lucrative" and including gems such as "Within a few hours of a popular pre-release track being posted on the OiNK site, hundreds of copies can be found". The site's owner was found not guilty in court several years later, but not before the copyright industry essentially ruined his life. But how does this happen? If you talk to most people they don't understand copyright at all. They think it's some kind of privileged status that you have to pay for, like a trademark or something. Most people are not even aware that they hold copyrights. And why would they? Can the average person summon the police to help protect their copyright? Of course not. It's not even a criminal matter. The police being involved seems nothing short of corruption. ~~~ marcoperaza He was running a website that revolved around violating millions of copyrights. Why shouldn't he go to jail? What gives you the right to take someone else's painstakingly created artistic creation and give it away for free to thousands of people, depriving them of the exclusive right to sell their own work? Copyright is both a criminal and civil matter.
The civil court system is useful for many things, but it is limited to monetary damages, which is not very helpful when the damages are in the millions and the defendant isn't very wealthy. The penal power of the criminal system is not appropriate for individual people downloading music, but it certainly is for a sophisticated operation involving the illegal distribution of millions of copyrighted works to hundreds of thousands of users. == Edit == Some responses, since I'm rate-limited: > _In most cases I read about, it's more a matter of the current copyright > holder versus the facilitator. Not a matter of the creator versus the actual > downloader._ Two points. 1. How do you think the current copyright holder got the copyright? They acquired it from the creator, by either paying in advance or after the fact, or as part of some ongoing deal. 2. If you run a market that you know is used almost exclusively by people selling contraband, do you think that's legal just because you're not the buyer or the seller? In case you don't know, it's not, and you'll go to jail just as if you had sold the contraband. > _If the defendant isn't wealthy after distributing all that content, is the > content worth millions? Or is the government-enforced business model worth > millions?_ Yes, intellectual property isn't worth anything without government enforcement. But we've decided, as individual societies and as an entire world by treaty, to provide such enforcement, because we think recognizing such property rights is good for our society. And as for the first point, how much you make by violating other people's rights isn't that relevant. If I steal a truckload of iPhones and give them away for free, I still stole them. I realize IP is very different from physical property, but the profit of the crook isn't that relevant in either case. ~~~ gatmne Legality aside, whether sharing copyrighted information is amoral or not is determined by one's own values.
Some people, myself included, see a person's right to share as far more important to humanity than the author's ability to employ an ill-suited business model to profit off his or her work. There are many ways to generate profit other than to infringe on others' right to share. Humanity does not owe you a successful business model, and certainly not at the expense of its right to share. > What gives you the right to take someone else's painstakingly created > artistic creation and give it away for free to thousands of people, > depriving them of the exclusive right to sell their own work. Users sharing copyrighted work does nothing to prevent authors from profiting off their work. Conflating sharing and business is what got us in this mess in the first place. ~~~ msc1 Think about 3rd world countries. I'm relatively better off than my peers (2 cars, own a house etc.) but I can in no way afford Hex-Rays IDA Pro, Burp Suite Professional, Navicat Premium or JetBrains, and this list goes on... They cost more than two or three months of my rent. My parents are both medical doctors, and their medical books would not be affordable if they were sold at US prices, but they have 3rd world print editions and they can legally buy these copies. Software vendors have to adapt to this too. Gaming companies have already adapted, and I've never pirated any games since Steam. I'm a paying Netflix and Spotify customer because they are priced for the country they operate in, and as you can guess I'm not torrenting music or movies either. The Internet is global but purchasing power is not. Ethically, I see no problem in torrenting. Human knowledge stands "on the shoulders of giants", and from a philosophical perspective -I'm not advocating this- even copyright is on shaky grounds (Property is theft! - Pierre-Joseph Proudhon) ~~~ freeflight > Gaming companies already adapted this and I've never pirated any games since > Steam.
The first to adopt this, very successfully, had actually been Apple, with their approach to selling mp3s. While everybody else was still busy trying to sell overpriced physical albums, complaining about "digital thievery", Apple took this as an opportunity with iTunes. iTunes made buying music digitally as convenient as pirating it, and at the same time iTunes allowed customers to buy only specific songs (at reasonable prices), instead of forcing them to buy whole albums. Valve did something similar for gaming with Steam, that's true, but it took Steam way longer to get there than it took iTunes. Imho Steam has also regressed quite a bit in that regard; it used to be a place for good deals but increasingly feels like a platform for shovelling around shovelware for badges and trading cards. ~~~ drewmol Apple is an interesting case. While they may have flirted with 'legal' music as a revenue stream, the big bucks came from adding utility and simplicity to the ubiquitous collections of 'stolen' music. A very small subset of iPods were filled up with the $10K-plus cost of 'legal' music. No strong opinion, but it's a somewhat unique situation in the economics of IP. ~~~ freeflight > A very small subset of iPods were filled up with the $10K-plus cost of > 'legal' music. But they were actually filled with some legal music; prior to iTunes there wasn't really "one unified place" for purchasing digital music, and most mp3s came from physical CDs people ripped privately. A couple of flat-rate services popped up before/around the same time, but these mostly turned out to be illegal offerings, so it was mostly iTunes which stuck around in the beginning and formed the market.
> While they may have flirted with 'legal' music as a revenue stream They still have an impressive market share in digital music distribution. They have started to lose ground to streaming services like Spotify and to music labels finally adapting to the digital age, but afaik iTunes was and still is a major player in digital music distribution. ~~~ drewmol Certainly. I wanted to provide some insight into the dynamics of the iPod/iTunes situation. Interestingly, as you noted: >A couple of flat-rate services popped up before/around the same time, but these mostly turned out to be illegal offerings, so it was mostly iTunes which stuck around in the beginning and formed the market. I think Apple's success at creating this market was a byproduct of it being a fundamental pairing for the iPod's success. Without the iPod, iTunes would likely have gone the same way as the rest of the early legal digital music sellers. Without the existence of a large collection of mostly 'pirated' mp3s sitting on home desktops and office networks across the globe, the iPod probably would not have taken off. Apple provided great utility for those collections by selling the iPod. Apple only briefly had any barriers to allowing the seamless transfer/sharing of entire iPod collections of copyrighted music, before concluding it would be much more lucrative to embrace the prevalence of 'pirated' music collections by investing in software to clean & organize it, and simple-to-use hardware that makes it portable. ~~~ freeflight > Apple only briefly had any barriers to allowing the seamless > transfer/sharing of entire iPod collections of copyrighted music, before > concluding it would be much more lucrative to embrace the prevalence of > 'pirated' music collections by investing in software to clean & organize it, > and simple-to-use hardware that makes it portable.
To me, iTunes was mostly a great example how usability, pricing, and ease of legal access to content matters. Much earlier versions of iTunes UI was very reminiscent of mp3 sharing clients popular at that time (Limewire/Napster/Whatnot) by sorting titles in long lists and making getting them as easy as pressing a "download" button right next to it. The choice of pricing, single songs for $.99 [0], also felt like it contributed a lot to a paradigm shift how music is sold and consumed, acknowledging established trends in priacy by allowing legitimate customers more freedom in paying for only those songs they want. [0] [https://apple.slashdot.org/story/03/04/28/1723226/apple- intr...](https://apple.slashdot.org/story/03/04/28/1723226/apple-introduces- itunes-music-store-itunes-4-new-ipod) ------ coldtea > _At the time there were some rumors that Sweden would be placed on the US > Trade Representative’s 301 Watch List. This could possibly result in > negative trade implications. However, in a cable written April 2006, the US > Embassy in Sweden was informed that, while there were concerns, it would not > be listed. Not yet at least. “We understand that a specialized organization > for enforcement against Internet piracy currently is under consideration,” > the cable reads, while mentioning The Pirate Bay once again._ Typical, not so subtle, blackmail. One wonders what would happen if, say, the leader of some disclosure website was residing in Sweden and a superpower wanted him... (From a comment below on TPB case: "The judge was Thomas Norström. Swedish public radio revealed that the judge, Thomas Norström, is a member of several copyright protection associations, whose members include Monique Wadsted and Peter Danowsky – attorneys who represented the music and movie industries in the case. According to the report, Judge Norström also serves as a board member on one of the groups of which Mrs. Wadsted, the Motion Picture Association of America’s attorney, is a member." 
-- hurray for independent justice in any case..) ~~~ robert_foss Overall the whole series of events was pretty offensive, and it paints the picture of the US being a schoolyard bully. ~~~ RobertoG Business as usual, but better the bully than the psychopath. From Wikipedia's "1954_Guatemalan_coup_d'état": "[..] The United Fruit Company (UFC), whose highly profitable business had been affected by the end to exploitative labor practices in Guatemala, engaged in an influential lobbying campaign to persuade the U.S. to overthrow the Guatemalan government. U.S. President Harry Truman authorized Operation PBFORTUNE to topple Árbenz in 1952; although the operation was quickly aborted, it was a precursor to PBSUCCESS." Reading about those things, one gets the impression that the Department of State works for the Chamber of Commerce instead of the USA's citizens. (1). [https://en.wikipedia.org/wiki/1954_Guatemalan_coup_d%27%C3%A...](https://en.wikipedia.org/wiki/1954_Guatemalan_coup_d%27%C3%A9tat) ~~~ paganel > Reading about those things, one gets the impression that the Department of > State works for the Chamber of Commerce, instead of the USA citizens. If I'm not mistaken, the first permanent "embassies" were set up by the Venetians (mostly) and the Genoese, and their role was essentially just that, i.e. protecting the economic interests of their "home" entities. It so happened that most of the time protecting the citizens who happened to reside in foreign countries also meant protecting their home city's economic interests, but that mainly happened because the citizens involved were traders themselves. So, in a way, you could say that what the Department of State is doing now is just the continuation of the initial idea of a "foreign embassy". ~~~ RobertoG Surely, the role of the embassy of a power ruled by an oligarchy of merchants and aristocrats is very different from the expected role of the embassy of a democratic federal republic. Just joking.
As you say, business as usual. ------ ckastner It never ceases to amaze me how much influence the MPAA has. Movies, while extremely popular, don't generate _that_ much money: in 2016, total box office results in the US were under $12bn [1]. That's _the entire industry_. Apple alone makes that much money in three weeks' time. Amazing, that you can apply such pressure to politics, with so little. [1] [https://www.statista.com/statistics/187069/north-american- bo...](https://www.statista.com/statistics/187069/north-american-box-office- gross-revenue-since-1980/) ~~~ digi_owl It's because so few care about copyright. It is seen as something dry and stodgy that only affects artists and their publishers/labels. This is perhaps because once the cassette recorder, never mind the VCR, came to be, most nations on the western side of the wall decided not to go full police state and thus added a "friends and family" clause to their copyright laws. This meant that a person could create a copy if it was meant for a direct friend or a relative. This avoided having to park a copyright cop in every home in the nation. Never mind that producing analog copies from tape to tape caused a noticeable loss of quality with each generation removed from the original. But the computer, never mind the internet, changed all that. It made mass copying not something that required massive machinery in a warehouse, but something every kid could do in their own home. Especially as bandwidth and storage capacity kept improving at a massive rate. And digital copies do not degrade like analog ones do. ~~~ icebraining The organizations also play a game of the Boy Who Cried Wolf. Remember _Home Taping Is Killing Music_? By their propaganda, the music industry should have died multiple times in the past few decades. ------ jakobegger And despite all these efforts, I'm still a happy user of the Pirate Bay whenever I want to watch something that I can't find on iTunes or Amazon.
For me, the Pirate Bay has been the most reliable way to find stuff over the last few years; for many things it's still better than all the paid alternatives that I use. So much money wasted on futile attempts to suppress a website... ~~~ sveme There's an extremely annoying tendency, at least among German streaming providers (iTunes, Amazon/Google Video), to remove rental access to movies about nine months after the DVD release, or when a second movie of a series is about to arrive in theatres. Only buy access remains. Now that physical video rental stores are in terminal decline, online stores have an effective oligopoly without real competition, and push customers towards paying a maximum. The only alternative in this case remains thepiratebay. ~~~ madez Aren't you afraid of receiving an 'Abmahnung' for torrenting? ~~~ tekmate I'm still dumbfounded that the practice of setting up torrent honeypots by agencies like waldorf&frommer is actually legal ------ ploggingdev If you're interested in learning more about The Pirate Bay, the founders and the trial, watch the documentary called TPB AFK (The Pirate Bay: Away From Keyboard): [https://www.youtube.com/watch?v=eTOKXCEwo_8](https://www.youtube.com/watch?v=eTOKXCEwo_8) One of the founders of TPB, Peter Sunde, started: * Njalla ([https://njal.la/](https://njal.la/)) - a privacy-focused domain registration service * Flattr ([https://flattr.com/](https://flattr.com/)) - a tipping/micropayment service to support content creators * A VPN service - [https://ipredator.se/](https://ipredator.se/) Another link that you might find interesting, his interview with Vice: [https://motherboard.vice.com/en_us/article/qkjpbd/pirate- bay...](https://motherboard.vice.com/en_us/article/qkjpbd/pirate-bay-founder- peter-sunde-i-have-given-up) ~~~ Marazan Does it talk much about the financier Carl Lundström's role in TPB?
He never gets mentioned much for some reason* * Because of his far right connections ~~~ ploggingdev IIRC it does, but only briefly. He bought advertising space on TPB and it became a controversy. The TPB guys were falsely accused of being right-wing extremists for doing business with Carl Lundström. ~~~ Marazan Woah, woah. He was a co-defendant at the trial. He was more than just a dude who bought some ad space. ------ beloch I'm sure there are many reading this who have absolutely no sympathy for pirates. They're stealing and that's that. Well, how do you feel about your government blackmailing, extorting, or otherwise "strong-arming" other sovereign nations in order to foist its laws upon them, and then hiding that from you? (It really is a minor miracle this cable was released _at all_.) Is it truly worth stooping to such measures to ensure that Mickey Mouse remains copyright protected for all time _everywhere_? Don't other nations have the right to make their own laws? How would you feel if some other nation foisted its laws on the U.S. in such a manner? Why does the U.S. government go to such extremes for private enterprise anyway?[1] Piracy is bad. What the U.S. government has done in response is worse. [1] I suggest you google the United Fruit Company's history the next time you're eating a Chiquita banana for a _real_ eye-opener. ~~~ Daycrawler Copyright holders lose customers because of piracy, but that's not stealing. If I sell some object and my logistics are such that I sell 3 per month, then I manufacture 3 objects per month and wait for customers to buy them. If someone steals one, then I have only 2 remaining objects. I have the choice between telling the 3rd expected customer that I'm out, or manufacturing one extra, which in either case results in a direct loss of money. I'm a victim of theft.
If I'm a film producer and my logistics is that I sell N viewings per month, and someone pirates the movie, then this doesn't interfere with my ability to sell the N viewings to my expected customers. So this isn't theft. Of course, I would like the pirate to be my customer so that I can step up to N+1 viewings per month, but if I want to enforce that I need to turn to whoever made the copy available to the pirate, which is counterfeiting. ------ realusername I remember the piratebay trial being a gigantic farce where some of the judges had ties to copyright organisations. It's crazy how much power these mafia-like organisations have. (edit: spelling) ~~~ draugadrotten The judge was Thomas Norström. Swedish public radio revealed that the judge, Thomas Norström, is a member of several copyright protection associations, whose members include Monique Wadsted and Peter Danowsky – attorneys who represented the music and movie industries in the case. According to the report, Judge Norström also serves as a board member on one of the groups of which Mrs. Wadsted, the Motion Picture Association of America’s attorney, is a member. That this passed without causing a conflict of interest is astonishing. [https://www.csmonitor.com/World/Global-News/2009/0423/pirate...](https://www.csmonitor.com/World/Global-News/2009/0423/pirate-bay-judge-under-fire-for-conflict-of-interest) Also worth mentioning is that the lead investigating police officer got a job from Warner Brothers very soon after the trial was successful. Thank you, job well done. [https://techcrunch.com/2008/04/18/officer-who-investigated-p...](https://techcrunch.com/2008/04/18/officer-who-investigated-pirate-bay-took-job-with-warner-brothers-will-still-testify-against-pirate-bay/) In recent news, the chair of Sweden's Supreme Court, judge Stefan Lindskog, has been implicated in shady financial transactions, and is under investigation by the police.
The belief we once had that Sweden had a low level of corruption can be put to history. And of course even having a low level still means there is some corruption. [https://www.expressen.se/nyheter/polisen-utreder-hogsta-doms...](https://www.expressen.se/nyheter/polisen-utreder-hogsta-domstolens-ordforande/) YMMV. ~~~ Cthulhu_ > Also worth mentioning is that the lead investigating police got a job from > Warner Brothers very soon after the trial was successful. Can you blame them? Thanks to that case the guy got a lot of experience in the area of copyright violations and online piracy; that's valuable knowledge to have, and they could use someone to advise them. You're implying that he did it for the cushy job he got for it, but I have my doubts. Maybe if you can prove he got the offer before the investigations started? ~~~ tonyedgecombe It's hard to prove but there is still a dirty smell around it all. ------ thomastjeffery Let's get one thing straight: Torrent sites _do not host content_. They host _community_. The only thing thepiratebay.org, what.cd, kickasstorrents.cr, etc. did or continue to do is the _same_ thing that a forum or news site like reddit or hackernews does: provide a community with a purpose. While hackernews is a community for discussing news and interesting things, WhatCD was a place for discussing music, quality releases, and sharing good encodings, rather than the transcoded lossy->lossy formats you see flying around most places. Naturally, WhatCD _as a community_ wasn't concerned with things like copyright owners' profits, even though many of its users certainly were, but _simply couldn't find an alternative_, as a lot of music is not even to be found, let alone sold, in particularly high quality lossless formats. When what.cd was taken down, _none_ of the copies of _copyrighted content_ were deleted. The _community_ was broken up.
If piracy is to be considered such a serious crime, taking down torrent trackers is like going to a meeting of known criminals, and - rather than arresting them - evicting them. It has only a minimal effect, as they are free to gather elsewhere. What bothers me the most is that the only thing being dismantled is the thing that clearly contains the most value to individuals, and society at large. Community is a _good thing_. When WhatCD was taken down, a countless amount of valuable data that could be found practically nowhere else was suddenly destined to be hidden from society at large, and the community it had cultivated was scattered, without a care for what that meant. Sure, quite a few people find that, while using copyright enforcement as a business model, piracy significantly detracts from sales. Sure, there is a culture that undervalues creators, but it is not a black and white problem, and most popular solutions have serious consequences that go practically ignored. ------ pferde Got to love the 'privacy' instead of 'piracy' typo in the first cable screenshot: "2\. Summary. In a visit to Sweden last month to raise the growing concerns about Internet privacy in Sweden, the Motion Picture Association of America (MPA), together with ..." ~~~ gcb0 was it a typo or a huge backdoor of piracy into privacy talks? ------ upofadown Canada has been on the 301 watch list for a long time now. There have been some attempts to get off it (theatre camcording law) but it turned out that the real reason a country is put on the list is a lack of fawning obedience to the US copyright cartel. A country that is perceived to not be toeing the line is put on the list. If there is no actual policy reason to be there, the copyright cartel just makes stuff up. So these days the list is meaningless and is roundly ignored by Canada. Sweden probably should have done the same thing.
------ fsloth "The new confessions of an economic hitman" by John Perkins is a very good exposition of the close ties of state and corporate powers in the US and how they co-operate to increase the capital wealth of the elite. It's more autobiographical than a research document, and has some unproven claims, but no one has punched holes in the important claims there AFAIK. ------ jesperlang Wow, 10 years ago already? This was quite big here in Sweden back when it happened. It's scary how quickly these things slip out of our consciousness (at least mine). It's chilling what you can get away with by just staying cool for a while... Or is the short-term damage in PR not worth waiting it out for the long term? ------ dghughes People aren't stupid; they know it's wrong to download a movie or music they didn't buy. But everyone agrees the response by US law enforcement is overreaching and out of proportion. Convenience is the real reason people went to websites such as The Pirate Bay, not stealing; people don't buy fast food for their health. The rise of cheap and reliable streaming video websites such as Netflix changed that. That's all anyone wanted: a convenient, reliable way to legally watch and pay a reasonable amount. ------ wimagguc To remove pirated movies from the interwebs there are two options really: either attack content providers / trackers etc., or find the users directly. In Germany, as soon as you start a torrent client, your traffic is being monitored by bots and agents, and if you upload something inappropriate you (or your host) will get a letter from a law firm with a heavy fine. (I know of two friends who had to pay $600 and $3000.) ~~~ _Codemonkeyism "In Germany, as soon as you start a torrent client, your traffic is being monitored by bots and agents" How is traffic monitored when I start a client? Don't I need to download/upload something to get monitored? Is the monitoring connected to trackers I download from or ISP monitored? "[...] with a heavy fine."
Was it a fine or some kind of fee? ("Abmahngebühr") ~~~ wimagguc I'm not familiar with the German legal system but the sum did depend on what they'd uploaded. (It was detailed in the letter, if I remember correctly: $600 for half an episode-of-whatever and $3000 for multiple movies.) As for the traffic monitoring, indeed, I'd imagine it to be a honeypot tracker where all content/traffic is visible rather than something installed on the ISP side. ~~~ JohnStrange No, it's not the tracker; it also works for magnet links, and people get letters for downloading. There are companies who join the download swarm and register all other downloading parties. That's very easy with bittorrent, since the protocol is (originally) designed for fast download sharing without any regard to anonymity or pseudonymity.[1] The process is not reliable for providing evidence of copyright infringement, though, and the German system mostly works by the scare tactics of lawyers - many people don't want to risk a lawsuit even if they could win it. [1] [https://torrentfreak.com/thousands-of-spies-are-watching-tra...](https://torrentfreak.com/thousands-of-spies-are-watching-trackerless-torrents-151004/) ~~~ zaarn These companies are the scum of the scum, tbh. I recall I once got a letter claiming I must pay about 6000€ for illegally downloading "Debian 5 Linux Netboot ISO" and "Ubuntu 12.04 x86 Full ISO" or something along those lines. They sent some awfully scary letters for what amounts to legally obtaining an ISO file. ~~~ notzorbo3 I used to run an abandoned warez site when I was young. I received a lot of cease and desist letters from "lawyers". They usually failed to identify the infringing material, failed to show they had the right to act on the copyright holder's behalf, and a staggering amount of them confused trademark infringement with copyright infringement. Also, every last one I received via email. Yeah, right, like that's going to hold up.
I ignored all of them and never got even so much as a follow up. In other words, such things are considered low-hanging fruit by these companies. Just throw it out there and see what sticks. ~~~ zaarn Luckily the German system is less strict than the DMCA: you can fact-check any letters you get, and you only need to act if you know (for certain) it's illegal ------ vinceguidry The copyright industry is a direct arm of American soft power projection into the rest of the world. For the content industry it's about money, but for policy makers and the geopolitical strategists who have the ear of those policy makers, it's about furthering the nation's position in the world. The content industry punches above its weight in getting the government to protect it overseas for this reason. ------ koliber Interesting aside: is the redacting technique vulnerable to an analogue of the "timing attack" on certain crypto? The name of the employee in the wires has been redacted. I wonder if the physical size of the redacted box, together with the fact that this is a name, together with a database of public employees, could be used to uncover the identity of the person. By comparing the size of the redacting box with the lines above and below, we can guess that 6-9 characters are masked out (including the space). This is a rough parallel to a timing attack used against crypto. The DB of public employees could be thought of as a list of candidate inputs. Weak redacting? This reminds me of a law in Poland where a person accused of a crime cannot be named. Media will blur out photos and state something to the effect of "Mark W., an executive at XYZ Corp., stands accused of ...". If the accused is a well known actor with a unique first name, this becomes a running joke. ~~~ andrewla The 2008 Underhanded C Contest [1] had an exercise in "leaky" redaction. The winner, [2], used a very fun approach.
[1] [http://www.underhanded-c.org/_page_id_17.html](http://www.underhanded-c.org/_page_id_17.html) [2] [http://notanumber.net/archives/54/underhanded-c-the-leaky-re...](http://notanumber.net/archives/54/underhanded-c-the-leaky-redaction) ------ ksk It's interesting that even after all the scummy things the movie industry has done, people still desperately want to pirate their content. People who base their opinion on a principled opposition to copyright should be leading the charge in promoting other means of compensating content creators. Stop signalling how much you desire the copyrighted product, and start signalling how much you desire the non-copyrighted one! IMHO the only people who should be pirating are the people who don't have a principled stance on copyright. ------ thriftwy I wonder why there isn't torrent-search-over-DHT yet? I mean, this is a known point of vulnerability. Maybe it's because owners of popular bittorrent software don't want that feature? ~~~ jokoon btdb.io and btdig. btdig seems better as it doesn't have annoying pop-ups, which are constantly brought up on btdb, even with ublock origin and noscript. I would not be surprised if btdb is buying ads from an ad provider that sells js injections to an MPAA-operated third party. btdb is nice because you can sort by seeds; you cannot with btdig. To be honest I stopped using classic torrent indexers entirely since I started using DHT indexes. They have a much larger choice. The issue is that you cannot "post" magnet links on the DHT automatically (I think you cannot), so the DHT works as long as people are finding magnets or torrents elsewhere. It's bringing more decentralization, which means more chaos but much less traceability. ------ belorn Many interesting points which contradict the behavior of the lawyers of said MPA during the court hearings. “However, it is not clear to us what constraints Sweden and even U.S.
authorities would be under in pursuing a case like this when the site is legally well advised and studiously avoids storing any copyrighted material.” A focus by the prosecutor was the claim that the founders did not have good legal advice. The idea was to prove to the court that the accused did the infringement knowingly and were aware that what they did was illegal. Here we can read that this supposed obviousness of wrongdoing was not so clear to the very highly paid lawyers arguing it. _" Both Bodström and Eliasson denied any direct involvement of the Justice Ministry with the work of the police and prosecutors in the Pirate Bay case."_ That they surely did. It is very illegal for them to directly act in any specific legal case. If it ever was proven it would directly end any political career. When a similar document was unearthed it was said that just because the US believe they influenced Swedish politicians it still doesn't mean that they did it, so no proof of foul play has been made. ------ implosificated I wouldn’t be as miserably ashamed of this as I am, if it weren’t for the fact that the popular artists my country has produced since, oh... 1996? Aren’t worth defending from piracy. You can harp on how there’s no accounting for taste, but the truth is that the industry this sort of thing protects certainly does account for taste, and only invests in the kind of lowest-common-denominator/mass-appeal trash that makes them the most money. And so, we are left to suffer the guilt trip that because we don’t adhere to an honor system of donating funds for better artists (paying and not pirating, copying, stealing, sharing, music and movies), we get the artists we deserve. But that’s clearly not true, because the money made off the garbage produced today doesn’t make it into any kind of honor system that benefits the interests of better artists. How about producers of bad music and movies demonstrate that they are willing to donate into the honor system first?
The profits that the industry sees are not reinvested. The artists, mysteriously, continue to worsen. ------ spodek The framers of the U.S. Constitution knew the risks of the government creating and granting monopolies, however limited. The incentives are to remove the limitations and expand them. The industries formed by these government-granted and defended monopolies have removed most of their limitations and keep growing. We see the benefit to them. They make big blockbusters that people enjoy watching, so we see that benefit. The costs keep growing too, such as this article and the deprivation from the public domain of nearly a century of work. Meanwhile, technology has lowered the costs of production and distribution, making investment for most works unnecessary, obviating the need for a monopoly. Have the costs grown to outweigh the benefits? The monopolists' power can maintain the monopolies past that point so it's hard to tell, and people with different values will disagree, but this article points in that direction. ------ Feniks Still up though. I use it every once in a while because it's on Tor. My ISP has to block some pirate sites now. I'm from the generation that grew up with digital piracy. I am accustomed to having all media available, from nineties anime shows to strategy guides for videogames. ------ frabbit Why is the name of the official who spearheaded this initiative REDACTED? Is this undercover, spy-type work as opposed to public, legal actions carried out by a legitimate government agency? ------ louhike I discovered some days ago that the thepiratebay.org domain was available again. Is it linked to the original one or is it just a proxy? ~~~ kowdermeister I use it often so somebody keeps it running; actually there are dozens of mirrors running on various ccTLD-s. The source must be available somewhere online so you can spin up your own instance. ~~~ Mayzie It is.
As The Pirate Bay doesn't host any .torrent files, only magnet links, from memory the entire site came to under 200 MB. ------ l33tbro Funny, the "brand equity" of The Pirate Bay is really something. After over 10 years of use, there's almost a nostalgic bond to the site for me now which makes the downloading of material a familiar ritual with only positive associations. Not necessarily proud of this, just something I've noticed. ------ casualtech I don't care indeed, but someone has to think about the potential and the ecosystem it could lead to. Don't block the possibilities, and help them to go the right way. ------ kwhitefoot > U.S. authorities provide concrete suggestions for improvement I don't think improvement is quite the right word. ------ parski "In your face, Hollywood." ------ paul7986 With net neutrality repealed, say hello to the blocking of sites like this one... well, without a VPN. ------ scopecreep You mean the one I used last night to watch Dunkirk? Great detective work there, Lou. ------ ketsa Amazing how Yankees are annoying the whole world... ------ antigirl do people still use piratebay? there are better alternatives now ~~~ mac01021 For example? ~~~ jokoon DHT indexes, btdig and btdb.io ------ ronjouch Honest question: why is this surprising / newsworthy? ~~~ jacobush At least to me, a Swede, this [datacenter] _"... was raided by 65 Swedish police officers"_ is so incredibly out of touch with normal reality in this country, on so many levels. * Copyright infringement case assigned to that many officers? Unheard of. High profile murder investigations don't get that many. * We have this peculiar law that ministers are NOT TO meddle in the running of government agencies. Yet, this is what we got.
* From a cautious "see what happens" attitude among prosecutors with regards to copyright infringement and copying for personal use - to a big leap: not only an attempted (though only partially successful) witch hunt of Pirate Bay founders, but _also_ inventing a whole new crime, called "accessory to copyright infringement". Not that I don't agree that what Pirate Bay did was at times shady, but the whole thing made me believe without a doubt a few things: \- US as a case of "wag the dog". The trade associations (RIAA etc.) in the US can easily make the state do their bidding. And the US state as an institution is quite weak, when it does these things so quickly. What that implies is that there is no thinking things through. No serious cost/benefit analysis can possibly have been made. "How much ill will from foreign countries is this move worth? Fuck that, do it now." \- That Sweden would be pushed around so quickly. I must have been naive, but it _was_ surprising how not even a symbolic attempt at saving face was made here. Our domestic response was decisive and swift. Can't help but make you wonder what we could be made to do to ourselves over something more serious than fucking copyright infringement. Dance, monkey, dance. ~~~ staticelf Yes, especially when pretty much all other crimes except murder and stuff like that are disregarded nowadays. Sweden's judicial system is completely broken. ~~~ bionoid Norway is the same, for the record. There was a local case recently where the police knocked down an innocent man on the street, handcuffed him, and charged him with assaulting a police officer. There were something like 30 eye witnesses; still he lost in court, the police clearly giving false testimony. Luckily he did win the appeal. ~~~ staticelf The difference is that in Sweden the police don't do anything. Even if you give them a lot of evidence they drop the cases all the time. I have personal experience of this.
~~~ digi_owl "Henlagt grunnet bevisets stilling" (effectively claiming that the case will not be investigated due to lack of evidence) has become a running joke in Norway. ------ redm I don't understand why there is so much controversy over this. I've been reading about the reasons TPB is a bastion of freedom for years, and they read like a list of reasons it's ok to cheat on taxes, or put recyclable materials in the compost bin. We all know it's illegal in the US, we know TPB knows it, and instead of changing copyright laws, it is continually justified. It feels disingenuous. I'm tired of the same conversation for the last 20 years. ~~~ spraak 20 years seems like a huge exaggeration unless you're talking about something more than just TPB. ~~~ redm On the Internet, the conversation about mainstream piracy goes all the way back to Napster (1999). ------ jmull (rant, sorry in advance) Eh, f--- Pirate Bay and everyone else who makes a living stealing the efforts of others. (And F-you too if you're a supporter/user of theirs.) Of course the various governments were stupid, clumsy, ham-fisted, and in the pockets of corporations. So what else is new? How does that make it OK to steal stuff? People want to talk about what total hypocrite jackasses they are (which is true) to deflect attention from the fact that they are casually and constantly taking stuff they don't have a right to (also true, come on, why don't you want to talk about that?!?). If you don't like the terms, prices, availability, etc., of the Taxi reruns they are selling, well, then, don't watch the Taxi reruns. Trust me, despite Danny Devito, you aren't missing much. Likewise for all the pop music, old software, movies and virtually all the other content people are stealing through PB and similar. Is this the stuff you really want to sell your integrity out for? Think about it.
If you all were mainly -- well, even just somewhat sporadically -- taking enlightening, high-quality stuff with an ounce or 1/2 of cultural importance that was otherwise too expensive, then I might be able to understand. But no. You're just mainly swiping bad superhero movies and video editing software that you'll never learn to use. I think we need to proceed on all the right paths here: 1\. yes, the governments and their associated law-enforcement and regulatory bodies are a-holes who are beholden to petty, stupid, obsolete, obnoxious corporate ip holders. AND 2\. Stealing is wrong (and that doesn't change if you are stealing from 1.) ~~~ executesorder66 Piracy is not stealing. If I make an exact copy of your car and drive off in the copy, did I just steal your car? ~~~ jmull Your analogy is terrible, but let's go with it: If I spend a million dollars inventing a nice car that can be freely replicated, I don't have to sell it for a million dollars to break even. I could sell a copy of it to 1100 people for $1000 each and everyone wins: Nice cars are inexpensive for everyone and I make a living, so I can keep inventing nice things. But if there are 200 pirates among the 1100 who take a copy of the car but don't pay the $1000, things are different. Now I'm selling 900 cars and losing money so I have to do something. Such as: * charge $1200 per car. Pirates win but car buyers lose, to the tune of $200 per car. Don't call it stealing if you don't want to, but your pirates are getting something and someone else has less money as a result. And of course you can't simply raise the price without losing some customers, so this can only go so far. * invest less to make up the difference (making a crappier car). Pirates win and car buyers lose. The pirates don't win as much, though, since they have to drive the crappy cars too. * enact stringent anti-copying mechanisms to try to prevent unauthorized copying by pirates.
This costs money, raising the price of the cars, and is inevitably user hostile. So, again pirates win and customers lose. But again, the pirates don't win as much because they have to deal with the user-hostile anti-copying features as well. Note this is a vicious circle. As piracy makes the product more expensive and crappier, more people will be motivated to pirate rather than pay, causing the product to get even crappier or more expensive. And anything that makes buyers lose also makes my car company lose, with fewer sales at lower prices to less satisfied customers. * Ultimately, I might find I can't make money doing this at all: that there isn't a price high enough to make up for the piracy and low enough that anyone will pay for my crappy cars, and I just stop making things altogether. Here pirates lose, buyers lose, and, of course, I lose. Don't call it stealing if you don't want, but you are getting something without paying for it and it is costing other people more money as a result. Not only is your piracy making stuff more expensive for everyone else, it's also making it crappier for everyone and ultimately leads to less nice stuff being available at all. ~~~ executesorder66 > If I spend a million dollars inventing a nice car that can be freely > replicated I don't have to sell it for a million dollars to break even. So do filmmakers make movies or an implementation of the bittorrent protocol? That analogy is terrible. ------ marcoperaza And why shouldn't the US have pressured Sweden to take down the Pirate Bay? The people running that site are openly and proudly flouting copyright laws and allowing American-owned (among other) content to be downloaded without payment to the owners. Very large portions of the US economy are dependent on international enforcement of copyright and patent law. If the US isn't using its leverage over other countries to make them enforce intellectual property laws, then it is failing to protect its citizens' economic security.
~~~ Strom Yes, it might be very much fine from the US perspective to do this. Things change, however, once you look from the other side. It can easily be in the economic interest of other countries to not pay the US copyright holders, especially 100 years after something was created. So this is not so much about claiming the US is doing something against the interest of US citizens. This is about other countries' politicians/judges/police being corrupt, taking benefits from the USA and acting against the best interests of the people they promised to defend.
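An aside on the swarm monitoring discussed earlier in the thread: it works because BitTorrent peer discovery is public by design. Any client that announces itself for a given infohash gets back the IP addresses and ports of the other peers in the swarm, so a monitoring firm only has to join and log what it receives. As a minimal, hedged sketch (using only the standard library, with made-up sample addresses), this is how the compact peer list a tracker returns can be decoded; the 6-bytes-per-peer wire format (4-byte IPv4 address followed by a 2-byte big-endian port) is the one defined in BEP 23:

```python
import socket
import struct

def decode_compact_peers(peers: bytes) -> list[tuple[str, int]]:
    """Decode a BEP 23 compact peer list: each peer is 6 bytes,
    a 4-byte IPv4 address followed by a 2-byte big-endian port."""
    if len(peers) % 6 != 0:
        raise ValueError("compact peer list length must be a multiple of 6")
    result = []
    for off in range(0, len(peers), 6):
        ip = socket.inet_ntoa(peers[off:off + 4])        # 4 bytes -> dotted quad
        (port,) = struct.unpack(">H", peers[off + 4:off + 6])  # 2 bytes -> int
        result.append((ip, port))
    return result

# Illustrative only: build a blob the way a tracker response would encode
# two peers (addresses invented for the example).
blob = (socket.inet_aton("93.184.216.34") + struct.pack(">H", 6881)
        + socket.inet_aton("10.0.0.7") + struct.pack(">H", 51413))
print(decode_compact_peers(blob))  # → [('93.184.216.34', 6881), ('10.0.0.7', 51413)]
```

This is the whole trick: every participant, monitor or not, receives the same list, which is why the protocol offers no anonymity to downloaders.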
At 4935, he remembers he doesn’t have the keys. His mother, Veena, who bankrolled the restaurant, had taken them one morning after she and restaurant staff walked in on her son and two women in the upstairs catering venue. A night of drinking and the drug Molly had turned into a naked slip-and-slide thanks to upturned restaurant-size jugs of olive oil. Ashish yanks the locked panic bar door open, loots the safe and stuffs his pockets with cash. He notices blood on the floor and all over his shirt. His chin is split wide open. He stops at a bodega on the way back to his apartment to buy Super Glue, a hack cooks use to close gashes, stop bleeding and keep working. At his apartment, the prostitute helps Ashish glue his chin shut. He showers and passes out. When he wakes, she and the cash are gone. A week later, Veena takes her son on a four-hour silent drive to Pennsylvania and drops him off at a 28-day rehabilitation program. Four-and-a-half years later, Ashish, 33, is sober and owns three restaurants, Duck Duck Goose and George’s Chophouse in Bethesda and a second Duck Duck Goose in Baltimore. In November, he was invited to cook at the illustrious James Beard House in New York City and did, just before Christmas. After having served a five-course meal that included line-caught halibut with scallop mousseline and osetra caviar, he posed for a picture arm in arm with his mother, he in immaculate chef whites, she in a stunning, intricately embroidered beige chiffon sari over a navy-blue velvet blouse. A portrait of icon James Beard hung behind them. For Ashish, a nightmare had become a dream come true. Why tell his story? “I want for someone to read it and say, ‘I have a friend who needs to read this,’ ” he said. “I’m not an uber-religious person, but if there wasn’t someone looking out for me, I wouldn’t be here right now.” Drug and alcohol abuse plagues the restaurant business.
According to the 2015 National Survey on Drug Use and Health from the Department of Health and Human Services, hospitality and food service workers had the highest rate of substance abuse among all the industries studied, at 16.9 percent. The rate of alcohol abuse was higher only in the mining and construction fields. The website Chefs with Issues and programs such as Ben’s Friends and Restaurant Recovery provide safe spaces for people in the industry to deal with issues that probably have roots in the past. As with many addicts, Ashish’s journey began before he was born. His mother, a Seventh-day Adventist, was a professor in Pune, India. Rather than remain silent in an abusive marriage, she got divorced, taboo in India. She also began a courtship with Rajish Alfred, a student and her third cousin. “My brothers told me very clearly that we do not have divorces in our family,” says Veena. “What choice did I have? I came to America.” She settled in Silver Spring in 1982, staying with an American woman she knew from India. She had six suitcases, including bedsheets and a rug, $1,600 (the maximum the United States allowed) and two boys, Shane, 12, and George, 6, to raise. Rajish planned to follow once he could get a visa. An immigration lawyer took $800 to apply for her H1 visa. Six months later, she received a letter saying her visitor visa had expired. The lawyer had never filed her papers. Rather than return to India and now with expired documents, she took a job with the lawyer — and a second job at a senior care facility. Rajish arrived in 1984 and held a string of jobs, while keeping his drinking binges under wraps; unbeknown to Veena, he had become a severe alcoholic. They married and bought a condominium in 1985. Ashish was born in 1986. And so his troubles began. In 1989, Rajish had three DUIs, including one totaling the car, with George inside. He would leave for long periods, then return unannounced. “I was loud and violent.
[Veena] had a couple of episodes where she had to go into a shelter with the boys. There were lots of apologies, regrets,” Rajish says. “When Ashish was 2½, he came charging into our room and bounced on my chest. I was half asleep, hung over. I grabbed him by his shoulders and pushed him down hard, and his femur shattered.” The injuries put Ashish in a body cast for months. “I haven’t even begun to think about how I can ask forgiveness for this.” Ashish didn’t know the truth about the incident until his teenage years. “I was told I had fallen down the steps,” he says. Veena got a green card in 1991 and took in three foster children with disabilities to help raise the money to found what would become a chain of AlfredHouse assisted-living facilities, starting in Rockville. (There are now nine.) In 1995, she bought a house in Derwood. Shane, who also struggled with alcohol, had long moved out; George was in California for college. “That’s where Ashish grew up, where he saw the worst of his father, where he called 911 on him because he was throwing things at me and abusing me,” Veena said. School provided no respite. In fourth grade, at Spencerville Adventist Academy in Beltsville, he didn’t fit in and was bullied. He stopped doing homework. In eighth grade, he was buying pot, stealing liquor and drinking all day. He hacked into his teacher’s computer to change a grade and got suspended. In ninth grade, he got kicked out, repeating the grade at the public Magruder High School in Rockville. “I made friends with some real criminals there,” he says. “That’s when it clicked that the more I drank, the more drugs I did, the more fights I got in, the more popular I was.” His father was in and out of his life. Veena divorced Rajish in 2001, remarrying him in 2006, and during his binges she would send Ashish to make sure he hadn’t burned his apartment down with a cigarette or passed out and hit his head. “I stopped being a kid while I was still a kid,” Ashish says.
He worked various jobs, including at Starbucks, where he got fired for stealing $50 to buy Ecstasy. By 21, he was a VIP host at a gentleman’s club in Baltimore and arranging for drugs for guests. His booze and cocaine habit ballooned. His mother kicked him out, then so did a girlfriend he stayed with, then the friends who let him couch surf. “I burned every bridge, owed everyone money and went home with my tail between my legs,” he says. He spent his days watching Food Network, where he caught the cooking bug. In 2008, he made a lavish Thanksgiving dinner for his mother to persuade her to pay for him to go to the French Culinary Institute in New York. She knew he just wanted to get out of Maryland, and she feared drugs would kill him, but she paid the $40,000-plus tuition plus rent, and let him go. The day before he was supposed to go to New York and sign an apartment lease, he blew lines of coke until 4 a.m. He took the train up, signed the lease, passed out on the floor of the unfurnished apartment and came back home. Once at the FCI, he excelled. The program included working for the school’s in-house restaurant, where a samosa-like appetizer he created made it on the lunch menu, an honor. Veena was proud, despite cultural misgivings. “Nowadays, chefs have great prestige, but if my grandfather saw this, my father, they would weep, because I come from the ruling class in India, where they have their own kitchens and chefs,” she said. “But because my son was happy, I was happy.” Ashish worked in restaurants around New York, such as DBGB, Lupa Osteria Romana and Bar Pitti. Chefs told him he could go far if he put his demons behind him. Instead, he partied on, sometimes missing work and getting fired. He sold coke. On two trips home to see his parents, he was arrested for DUIs. He returned to Maryland in 2012, where the 4935 space in Bethesda had become available. Veena sold a property to finance the business.
She knew that Ashish was still drinking — he would disappear for days at a time — but not that heroin had entered the picture. “I liked the idea that I could die from it,” he says. “That seemed like a comfortable death.” The best-case scenario, as he saw it, was that he would get a good buzz; the worst-case, that he wouldn’t wake up. “I didn’t do it to the point where I’d get sick if I didn’t have it. If it wasn’t dope, it was [oxycodone] — whatever I could get my hands on. I was doing so much blow, the oxy would even me out, people said. And it worked.” It didn’t work. One day in 2014, Veena gave Ashish an ultimatum. “I told my son, ‘If you are willing to go into rehab, I am with you. I will pay for it. Otherwise, you will not see me and I’ll have nothing to do with you. Call and let me know by Friday night,’ ” Veena says. “I left my son standing there on Cordell Avenue, miserably thin, injured emotionally and physically, with the fear in my heart that I will not see him again, that he will kill himself.” Ashish’s father, recently sober and living in Philadelphia, encouraged his son to go. “Like any good drug addict, I told them to f--- off and I’d figure it out on my own,” Ashish recalls. “But that lasted only a few days.” At the Caron Treatment Center in Wernersville, Pa., counselors asked Ashish to get rid of anything he might have on him. He found some cocaine in his wallet and threw it out. “And that was it,” he said. “I was done. They put me in detox, and I never wanted to go there again. I never wanted to feel that again.” Ashish calls it the best 28 days of his life. “For me, the most value was in the family part of it. There was a three-day workshop with the parents that really changed my life.” Pent-up thoughts and memories surfaced that made the work valuable even if it was excruciating. 
He recalled that in Silver Spring when he was very young — 5, 6, 7 years old — various workers and tenants were going through the house, and “they got handsy.” He didn’t mention it to his parents until rehab. “What could they say?” he asked. “It wasn’t their fault. I knew as a child that it was a secret not to be told. I don’t think it went too far. No, not too far.” Says Rajish: “We shared a lot of things we never talked about. . . . Ashish and I were in it. Our bond was elevated from father and son.” It wasn’t until years later, though, that Rajish fessed up to the violence and asked forgiveness. “I said, ‘Yes, fine, thank you,’ ” Ashish says, “and kind of kept it moving. And that was the end of it. I mean, I spank my dog and feel terrible. I can only imagine how he must feel.” When he returned to Maryland after rehab, Ashish lived in a halfway house for two months. On the first night, he was slinging drinks at his restaurant, which remained open while he was gone, because a bartender had called in sick. “What was I going to do? I was short-staffed, and no one wanted to work for me.” Ashish now attends Alcoholics Anonymous meetings and sees his sponsor once a week. Until a year and a half ago, he submitted to random drug testing four or five times a week to reassure his mother and staff. He told them if they ever thought they had something to worry about, he would happily submit to testing again. He opened Duck Duck Goose in April 2016 and a Baltimore outpost in June 2018. He bought a house in Baltimore in July and lives there with his dogs, Otis and Marco (after the British chef Marco Pierre White). There have been challenges. George, his half-brother and surrogate father, died suddenly of a heart attack in 2015. Ashish had George’s birth and death dates tattooed on his right hand, and when he rebranded the struggling 4935 Restaurant in 2017, he paid further tribute by naming it George’s Chophouse. He recently went to court for a young woman who works for him.
She is new to sobriety, and her mother, with whom she lived, had left. The rent hadn’t been paid in three months, and his employee was going to be evicted. Ashish got her an apartment and took money out of her paycheck for rent. “The judge said, ‘I don’t like the fact that you work in restaurants, because I think that’s a terrible environment for somebody trying to stay sober,’ ” Ashish said. “I don’t necessarily agree with that. I think it’s the environment you create. I think, as a chef and owner, that if I foster an environment where it’s cool to be blowing lines and doing shots at the bar at the end of the night, then, yeah, that’s what’s going to happen there. It’s not acceptable in any of my places. I’ll take them out for a drink after work and buy them one drink and swipe my card and get out of there.” Ashish sees his mother every Saturday, either for lunch or at the Southern Asian Seventh-day Adventist Church in Silver Spring. He says she has made peace with his career. “It really took a long time for her to understand and respect what it is that I actually do,” he says. “But I know she’s happy and she’s happy with me and how far I’ve come. I mean, five years ago I was almost . . . dead, and now I’m cooking at the Beard House. That’s a big deal.” Hagedorn is a Washington writer and cookbook author.
Welcome to the Nillumbik Plumbing & Gasfitting Guide, your local guide to Plumbing & Gasfitting in the Shire of Nillumbik. The Nillumbik Plumbing & Gasfitting Guide brings together information from a wide range of sources to make it easy for you to find what you're looking for. Established in 1993 and based in Research, Clear Water Plumbing and Gas Fitting is owned and operated by Simon LARKIN, a self-employed plumber who has lived in the Shire of Nillumbik for over 25 years. Simon is a licensed plumber and gas fitter with over 14 years of experience who is fully qualified. Terence Wray is a local tradesman with over 30 years of expertise in the plumbing industry, who has resided in the Shire of Nillumbik for over 25 years. Terry is licensed in all aspects of general plumbing, including gas fitting, mechanical services, roofing, storm water, sanitary, water supply and drainage. John COX operates Komplete Plumbing Services Pty. Ltd, a Greensborough based organisation, established in 1987, and servicing the Shire of Nillumbik. John has over 21 years of experience in the industry and is licensed in all aspects of plumbing. S.A. Edwards Plumbing was established in 1992. Scott is qualified in, and offers, all general plumbing services including roof plumbing, maintenance of hot water and gas services, and sanitary drainage for both commercial and residential properties. He specialises in clearing blocked drains. Aardee Plumbing is a family-oriented business committed to providing an efficient, reliable and courteous service to the public. Russel Dower is a fully licensed plumber and gas fitter with over 26 years of experience who services commercial, industrial and residential properties, specialising in maintenance contracts and high pressure water cleaning of drains. All Times Plumbing specialises in the installation, maintenance and repair of gas appliances as well as providing services relating to sanitary and water supply, drains, sewers, spouting, roofing and duct fitting.
Ayr Plumbing is owned and operated by Raymond Stekel, servicing both commercial and residential properties within the Nillumbik Shire. Services include installation and service of all gas appliances, including gas and electric hot water service and stove changeovers, clearing of blocked sewers and below ground drainage works, repair of burst pipes, tap re-washering and roof repairs. Guardian Plumbing and Gas Services has been in the plumbing trade for 17 years. Our highly developed skills span all facets of plumbing and gas maintenance. So passionate about his vocation, the owner has become a member of the Master Plumbers Association, attending a wide range of seminars to enhance his knowledge. He is also a Bosch Hot Water Hero. The Local Plumbers of Taylor and Sons Plumbing have over 25 years of experience in the residential and commercial sectors providing superior plumbing and gasfitting services. We have been working with home improvement companies, architects, builders and home owners for all their plumbing needs. Our goal is to develop long-lasting relationships with our clients based on trust, honesty, reliability and satisfaction. No job is too big or too small for us. Servicing Ringwood, Croydon, Donvale and surrounding suburbs. Plumber Ashburton - Your local plumber in Ashburton and surrounding areas. We are friendly and detail oriented. With Plumber Ashburton you can always expect the finest quality and expert workmanship from the finest plumbing technicians Ashburton has to offer. Welcome to Unibond Plumbing - Your Local Plumber in Essendon. We are fully licensed plumbers based in the Northern Suburbs who work in all areas of Melbourne. We are members of the Master Plumbers and Mechanical Services Association and have all the accredited Occupational Health and Safety Cards. We specialise in many types of roofing, gas fitting, renovations, new construction, maintenance of gas appliances and hot and cold water installations.
Taylor & Sons Plumbing, the blockage specialists, has over 20 years' experience unblocking your drains, sewers and storm water lines. Whatever the drain, whatever the plumbing problem, our experienced staff can unblock it. We service most areas of Melbourne and can cater for your needs 24 hours a day, 7 days a week. Welcome to Schultz Plumbing. Need an emergency plumber in Vermont? Call 1300 SCHULTZ (1300 724 858). We're on time, every time. Schultz Plumbing is an established company that offers every kind of plumbing and water service for homes and businesses in Melbourne's north and north eastern suburbs. From Heidelberg to Eltham, Doncaster to Greensborough and Balwyn to Bundoora, Schultz Plumbing are the "professionals around the corner" for all areas in the Yarra Valley region of Victoria. At Schultz Plumbing, nothing is more important to us than honouring our commitment to professionalism. Our Customer Response Team keeps track of our workforce at all times - so you can be sure we'll be there when we say we will, every time. If you have a blocked drain problem in Balwyn then call us today. Balwyn Plumbing, the blockage specialists, has over 20 years' experience unblocking your drains, sewers and storm water lines. Whatever the drain, whatever the plumbing problem, our experienced staff can unblock it. If you have a blocked drain problem in Camberwell then call us today. Taylor & Sons Plumbing, the blockage specialists, has over 20 years' experience unblocking your drains, sewers and storm water lines. Whatever the drain, whatever the plumbing problem, our experienced staff can unblock it. If you have a blocked drain problem in Ivanhoe then call us today. Taylor & Sons Plumbing, the blockage specialists, has over 20 years' experience unblocking your drains, sewers and storm water lines. Whatever the drain, whatever the plumbing problem, our experienced staff can unblock it. If you have a blocked drain problem in Kew then call us today.
Taylor & Sons Plumbing, the blockage specialists, has over 20 years' experience unblocking your drains, sewers and storm water lines. Whatever the drain, whatever the plumbing problem, our experienced staff can unblock it. Welcome to Schultz Plumbing. Blocked drain? Burst water pipe? No hot water? Running tap? Schultz Plumbing offers a 24 hour Immediate Responsive Service, 7 days a week, 365 days a year. Schultz Plumbing is an established company that offers every kind of plumbing and water service for homes and businesses in Melbourne's north and north eastern suburbs. From Heidelberg to Eltham, Doncaster to Greensborough and Balwyn to Bundoora, Schultz Plumbing are the "professionals around the corner" for all areas in the Yarra Valley region of Victoria. At Schultz Plumbing, nothing is more important to us than honouring our commitment to professionalism. Our Customer Response Team keeps track of our workforce at all times - so you can be sure we'll be there when we say we will, every time. Welcome to Schultz Plumbing - Water Tank Eltham. Need a water tank? Burst water pipe? No hot water? Running tap? Schultz Plumbing offers a 24 hour Immediate Responsive Service, 7 days a week, 365 days a year, and is the "professionals around the corner" for all areas from Heidelberg to Eltham, Doncaster to Greensborough and Balwyn to Bundoora. If you are looking for a professional, reliable and qualified plumber, Taylor & Sons Plumbing is here to assist you. We strive to provide our customers with exceptional quality workmanship delivered at competitive prices.
Commercial Services - Property Maintenance Albert Park is one of very few cleaning businesses in Albert Park that provide their customers with a one-stop service for property maintenance, from gardening to roof leaks. Commercial Services - Property Maintenance South Melbourne is one of very few cleaning businesses in South Melbourne that provide their customers with a one-stop service for property maintenance, from gardening to roof leaks. Property Maintenance North Melbourne is one of very few cleaning businesses in North Melbourne that provide their customers with a one-stop service for property maintenance, from gardening to roof leaks. Property Maintenance West Melbourne is one of very few cleaning businesses in West Melbourne that provide their customers with a one-stop service for property maintenance, from gardening to roof leaks. Wall Rendering Altona North | Welcome to SHP Wall Boards SHP Wall Boards provides the ultimate rendered outcome in Altona North. Our full time wall rendering team are experts in our field. We use techniques and specialist knowledge that is European in origin. Wall Rendering Toorak | Welcome to SHP Wall Boards SHP Wall Boards provides the ultimate rendered outcome in Toorak. Our full time wall rendering team are experts in our field. We use techniques and specialist knowledge that is European in origin. Wall Rendering Hawthorn | Welcome to SHP Wall Boards SHP Wall Boards provides the ultimate rendered outcome in Hawthorn. Our full time wall rendering team are experts in our field. We use techniques and specialist knowledge that is European in origin. Home Extensions Bayside | Cronin Builders Cronin Builders build home extensions in Bayside and surrounding areas. Our home extensions in Bayside are often praised, and they have established a large referral base over the last 30 years. Home Extensions Bentleigh | Cronin Builders Cronin Builders build home extensions in Bentleigh and surrounding areas.
Their home extensions in Bentleigh are often praised, and they have established a large referral base over the last 30 years. Home Extensions Mornington | Cronin Builders Cronin Builders build home extensions in Mornington and surrounding areas. Their home extensions in Mornington are often praised, and they have established a large referral base over the last 30 years. Home Extensions Camberwell | Cronin Builders Cronin Builders build home extensions in Camberwell and surrounding areas. Their home extensions in Camberwell are often praised, and they have established a large referral base over the last 30 years. Home Extensions Waverley | Cronin Builders Cronin Builders build home extensions in Waverley and surrounding areas. Their home extensions in Waverley are often praised, and they have established a large referral base over the last 30 years. Taylor & Sons Plumbing offers a SAME DAY SERVICE for any East Melbourne suburb. If you need a reliable plumber who can be at your aid in just a matter of hours, then Taylor & Sons Plumbing is the perfect solution. As one of the leading plumbing companies servicing Melbourne's eastern suburbs, including Rosanna, Lower Plenty, Balwyn North & Ivanhoe, Taylor & Sons Plumbing are highly regarded amongst their clientele and peers for providing quality service at an affordable price. Taylor & Sons Plumbing are a locally owned and operated plumbing company that service the entire eastern and south-eastern suburbs of Melbourne, including the outer east region of Heathmont, Kilsyth, Bayswater and Boronia. Our commitment at Jim’s Pest Control – Pest Control Melbourne is to protect the environment, and with an outstanding safety record, you can trust Jim’s Pest Control – Pest Control Melbourne with all your pest control needs in Melbourne. For the past 17 years, Guardian Plumbing and Gas Services has been servicing the South Yarra community.
We have highly developed skills that vary through all facets of plumbing and gas maintenance. So passionate about our vocation, Guardian Plumbing and Gas Services has become a member of the Master Plumbers Association, attending a wide range of seminars to enhance our knowledge. We are also a Bosch Hot Water Hero. GStore is your one stop Green Shop. Our aim is to provide you with a single design destination to find all products you need to make your home more environmentally sustainable. G is for Green as well as Gottieb's, a family owned building and plumbing supplies business that has been trading for nearly 50 years, so we understand our products. And as a member of the Plumbtec Group, we offer very competitive prices. Visit our eco-showroom in Melbourne to view our wide range of products. The Local Plumbers of Taylor and Sons Plumbing have over 25 years of experience in the residential and commercial sectors providing superior plumbing and gasfitting services. We have been working with home improvement companies, architects, builders and home owners for all their plumbing needs. Our goal is to develop long-lasting relationships with our clients based on trust, honesty, reliability and satisfaction. No job is too big or too small for us. Servicing Kew, Balwyn, Hawthorn and surrounding suburbs. The Local Plumbers of Taylor and Sons Plumbing have over 25 years of experience in the residential and commercial sectors providing superior plumbing and gasfitting services. We have been working with home improvement companies, architects, builders and home owners for all their plumbing needs.
Our goal is to develop long-lasting relationships with our clients based on trust, honesty, reliability and satisfaction. No job is too big or too small for us. Servicing Burwood, Camberwell, Ashburton and surrounding suburbs. Emergency Plumber Balwyn North, the blockage specialists, has over 20 years' experience unblocking your drains, sewers and storm water lines. Whatever the drain, whatever the plumbing problem, our experienced staff can unblock it. We service most areas of Melbourne and can cater for your needs 24 hours a day, 7 days a week. Need an emergency plumber in Blackburn? Call 1300 SCHULTZ (1300 724 858). We're on time, every time. Schultz Plumbing is an established company that offers every kind of plumbing and water service for homes and businesses in Melbourne's north and north eastern suburbs. From Heidelberg to Eltham, Doncaster to Greensborough and Balwyn to Bundoora, Schultz Plumbing are the "professionals around the corner" for all areas in the Yarra Valley region of Victoria. At Schultz Plumbing, nothing is more important to us than honouring our commitment to professionalism. Our Customer Response Team keeps track of our workforce at all times - so you can be sure we'll be there when we say we will, every time. Need an emergency plumber in Box Hill? Call 1300 SCHULTZ (1300 724 858). We're on time, every time. Schultz Plumbing is an established company that offers every kind of plumbing and water service for homes and businesses in Melbourne's north and north eastern suburbs. From Heidelberg to Eltham, Doncaster to Greensborough and Balwyn to Bundoora, Schultz Plumbing are the "professionals around the corner" for all areas in the Yarra Valley region of Victoria. At Schultz Plumbing, nothing is more important to us than honouring our commitment to professionalism. Our Customer Response Team keeps track of our workforce at all times - so you can be sure we'll be there when we say we will, every time.
Welcome to Schultz Plumbing Need emergency plumber in Doncaster? Call 1300 SCHULTZ (1300 724 858) We're on time, every time. Schultz Plumbing is an established company that offers every kind of plumbing and water services for homes and businesses in Melbourne's north and north eastern suburbs. From Heidelberg to Eltham, Doncaster to Greensborough and Balwyn to Bundoora, Schultz Plumbing are the "professionals around the corner" for all areas in the Yarra Valley region of Victoria. At Schultz Plumbing, nothing is more important to us than honouring our commitment to professionalism. Our Customer Response Team keeps track of our workforce at all times - so you can be sure we'll be there when we say we will, every time. Need emergency plumber in Glen Waverley? Call 1300 SCHULTZ (1300 724 858) We're on time, every time. Schultz Plumbing is an established company that offers every kind of plumbing and water services for homes and businesses in Melbourne's north and north eastern suburbs. From Heidelberg to Eltham, Doncaster to Greensborough and Balwyn to Bundoora, Schultz Plumbing are the "professionals around the corner" for all areas in the Yarra Valley region of Victoria. At Schultz Plumbing, nothing is more important to us than honouring our commitment to professionalism. Our Customer Response Team keeps track of our workforce at all times - so you can be sure we'll be there when we say we will, every time. Looking for an experienced 24 hour emergency plumber in Bentleigh? Then look no further than Taylor & Sons Plumbing, your friendly local plumber - call us today for more information. Based in Victoria, Taylor & Sons Plumbing covers all aspects of plumbing and heating including 24 hour emergency call outs. Looking for an experienced 24 hour emergency plumber in Malvern? Then look no further than Y2K Plumbing, your friendly local plumber - call us today for more information. 
Based in Victoria, Y2K Plumbing covers all aspects of plumbing and heating including 24 hour emergency call outs. A professional and friendly service, all work carried out by Y2K Plumbing is done to the highest standard of workmanship. With experience in all aspects of plumbing, we strive to ensure all work is completed promptly. Need an emergency plumber in Mt Waverley? Call 1300 SCHULTZ (1300 724 858). We're on time, every time. Schultz Plumbing is an established company that offers every kind of plumbing and water service for homes and businesses in Melbourne's north and north eastern suburbs. From Heidelberg to Eltham, Doncaster to Greensborough and Balwyn to Bundoora, Schultz Plumbing are the "professionals around the corner" for all areas in the Yarra Valley region of Victoria. At Schultz Plumbing, nothing is more important to us than honouring our commitment to professionalism. Our Customer Response Team keeps track of our workforce at all times - so you can be sure we'll be there when we say we will, every time. Looking for an experienced 24 hour emergency plumber in Sandringham? Then look no further than Y2K Plumbing, your friendly local plumber - call us today for more information. Based in Victoria, Y2K Plumbing covers all aspects of plumbing and heating including 24 hour emergency call outs. A professional and friendly service, all work carried out by Y2K Plumbing is done to the highest standard of workmanship. With experience in all aspects of plumbing, we strive to ensure all work is completed promptly. Go Go Gas and Plumbing - Plumbers Brunswick is a 100% Australian-owned and operated business, which has been in the Melbourne plumbing industry for over 1 year, building a local reputation for 'supreme service'. We will test for gas leaks, run gas lines and get rid of your LPG bottles by connecting your bbq to your household gas.
Opinion issued September 11, 2003

In The Court of Appeals For The First District of Texas

NO. 01-02-00106-CV

IP PETROLEUM COMPANY, INC., Appellant

V.

WEVANCO ENERGY, L.L.C.; DAVID L. NEAL, INDIVIDUALLY AND AS ADMINISTRATOR OF THE ESTATE OF FRANCES NEAL; MARK SCHOOMAKER; JANE SCHOOMAKER; BONNIE VAUGHAN; AND MARTIN PHILLIPS, Appellees

* * *

WEVANCO ENERGY, L.L.C.; DAVID L. NEAL, INDIVIDUALLY AND AS ADMINISTRATOR OF THE ESTATE OF FRANCES NEAL; MARK SCHOOMAKER; JANE SCHOOMAKER; BONNIE VAUGHAN; AND MARTIN PHILLIPS, Appellants

V.

IP PETROLEUM COMPANY, INC., Appellee

On Appeal from the 129th District Court, Harris County, Texas, Trial Court Cause No. 99-24160

OPINION ON REHEARING

We withdraw our opinion of May 8, 2003 and issue the following in its stead. The plaintiffs’ motion for rehearing is denied.

A jury found that IP Petroleum Company, Inc., appellant, was grossly negligent when it breached its contract with Wevanco Energy, L.L.C.; David L. Neal, Individually and as Administrator of the Estate of Frances Neal; Mark Schoomaker; Jane Schoomaker; Bonnie Vaughan; and Martin Phillips (collectively, “the plaintiffs”). In 11 points of error, IP argues that it did not breach the contract and that the award of lost profits and attorneys’ fees was improper. The plaintiffs appeal the trial court’s refusal to award prejudgment interest on purely economic damages. We reverse and render judgment that the plaintiffs take nothing.

Factual and Procedural Background

The Kerans Theory

In 1993, Dr. Don Snyder read a scientific article in an issue of a professional association bulletin. The article, entitled “Karst-Controlled Reservoir Heterogeneity in Ellenburger Group Carbonates of West Texas,” was written by a geologist, Charles Kerans. It proposed an unconventional new approach to drilling for oil in West Texas.
The article was based on a theory of “karsting,” or cave formation, in the Ellenburger Formation in West Texas. According to the Kerans theory, millions of years ago, caves were formed in West Texas and were slowly filled with sediment. Over time, new rock was formed on top of the caves, collapsing the roofs of the caves and slowly burying the collapsed caves far below the surface. Kerans theorized that the buried, collapsed caves created three distinct strata in the Ellenburger Formation—a “cave roof” zone, a “cave fill” zone, and a “cave floor” zone. Both the cave roof and cave floor zones contain oil, Kerans opined, but existing wells had tapped only the oil in the cave roof zone. According to Kerans, if oil wells were drilled deeper into the Ellenburger Formation, through the cave roof zone and the unproductive cave fill zone, those wells might produce oil if they tapped into a productive area of the cave floor zone.           The Kerans theory marked a significant departure from the conventional approach to drilling in the Ellenburger Formation. The conventional approach, known as the “scratch and sniff” method, was to drill only to the very top of what Kerans called the cave roof zone and siphon off any oil. The concern was that if a well was drilled any deeper, it would be inundated by a zone of water. The Millard E-2 Well           Dr. Snyder decided to test the Kerans theory on the Millard E-2 well in the Penwell Field in Ector County. In 1955, Phillips Petroleum Company had drilled the Millard E-2 to a total depth of 8600 feet—just a few hundred feet short of where Kerans theorized the oil-rich cave floor zone might be found. Dr. Snyder believed that the Millard E-2 presented an opportunity to test the Kerans theory at relatively low cost. Phillips executed a “farmout agreement” with Dr. Snyder, allowing him to deepen the well. The Investors           Dr. Snyder approached Richard Reeve, the owner of Cleveland Oil Company. 
The two men had previously worked together on several projects. Cleveland Oil paid Snyder $40,000, and Reeve agreed to have Cleveland Oil find investors in exchange for a four percent overriding royalty interest. Cleveland Oil did not have to pay any of the drilling costs, but would receive four percent of the profits.

The promotional materials Cleveland Oil sent to prospective investors indicated that “this is a wildcat test.” The materials also indicated that “mechanical risk is present in re-entries.” This risk disclosure was made because the Millard E-2 had been abandoned for 50 years and it was impossible to predict how badly the well had deteriorated.

Frank Cox, the managing member of Wevanco Energy, L.L.C., had invested with Reeve in the past, and Cox decided that Wevanco would invest in the Millard E-2. Cox’s accountant, David Neal, also invested, as did several of Neal’s friends and family members.

Selection of IP Petroleum

Snyder, Reeve, and Cox chose IP as the operator of the well. Snyder presented the proposal to Dr. Mike Senich, a project geologist at IP. Dr. Senich reviewed the promotional materials and found one sentence in the solicitation letter to be of particular interest—“Other than for faulting, the Cave Roof is thought to be reasonably continuous across a field, while the Cave Floor zones are more heterogeneous much like Permian age rock.” Dr. Senich testified that he understood this to mean that if the cave floor contained any oil at all, it might be productive in some areas but unproductive in others. He estimated that the chances of success for the Millard E-2 were one in ten, or possibly one in five.

Scott Nonhof, an engineer, conducted a petroleum engineering analysis on the well. Nonhof’s handwritten notes indicate: “*Bottom Line - Don’t have any production data worth a flip. . . . This is a WILDCAT not supported by Production Data but risk to reward is very high.
Supper [sic] low cost to do reentry.”           IP agreed to participate, and IP and Cleveland Oil signed a participation letter agreement that described the objective of the project to “deepen the Phillips Petroleum Millard E #2 from its current TD of 8626' to +/- 9000' to test the Ellenburger Cave Floor Zone.” IP also agreed to pay 50 percent of the drilling costs, in exchange for a 50 percent share of the profits if the project were to succeed. The Joint Operating Agreement           Before drilling began, Cleveland Oil, IP, and all the investors executed a comprehensive Joint Operating Agreement (JOA). The JOA is a form contract promulgated by the American Association of Petroleum Landmen and is used throughout the industry. The two provisions of the JOA that are at issue in this case provide as follows: [IP, as the operator would] continue the drilling of the well with due diligence to a depth of 9125' below the surface of the ground or a depth sufficient to test the Lower Ellenburger Formation, whichever is lesser, unless granite or other practically impenetrable substance or condition in the hole, which renders further drilling impractical, is encountered at a lesser depth, or unless all parties agree to complete or abandon the well at a lesser depth.           . . . .   [IP] shall have no liability as Operator to the other parties for losses sustained or liabilities incurred, except such as may result from gross negligence or willful misconduct. The Drilling           On December 3, 1997, IP began drilling the Millard E-2, and, as warned in the promotional materials, IP encountered mechanical difficulties that ultimately increased the costs of drilling to $383,000, instead of $282,625 as originally projected. IP paid all drilling expenses as they were incurred, expecting a pro rata reimbursement from the other investors in accordance with the JOA.           
On the evening of December 31, 1997, Reeve, Senich, and Snyder were losing hope because they were nearing the target depth, and there were no signs of oil. At about 10 p.m., the mudlogger approached the three men and asked what they wanted him to do with the oil and gas “shows” in the sample. Dr. Senich testified he was surprised to have had shows at this depth—he thought they had already drilled too deep and were past the area where he expected to find oil and gas. Likewise, Snyder testified he thought they had a dry hole “because we had drilled past the point that I had predicted that we would encounter this lower collapse zone.” Dr. Senich testified that, in general, operators know that it is critical not to drill past the spill point because, once a well taps into water, it will produce only water and not oil. Dr. Snyder was convinced, based on the Kerans theory, that the well was already past its spill point, and Dr. Senich also thought it was too deep. Most of IP’s employees testified that the Millard E-2 was drilled to a depth sufficient to test the Lower Ellenburger, but the plaintiffs’ experts testified to the contrary. The Agreement to Complete the Well           The shows were good news, but they presented a dilemma. In order to run a test and determine whether the shows actually indicated significant quantities of oil, it would be necessary to set pipe to reinforce the side of the well hole. Setting pipe would narrow the hole, and because of the depth and the age of Millard E-2, the hole was narrow already. After a test was run and the hole was narrowed, it would remain physically possible to deepen the well further, but additional drilling would be pointless. By then, the well would be so narrow that it would be virtually impossible to extract oil from the well in paying quantities.           Cox consulted with Neal and then urged Dr. Senich to run the test immediately. Dr. 
Senich testified that Cox and Snyder “wanted to stop immediately” at about the 9000' mark, but Senich wanted to drill a little bit further to allow room for the tools that test the well. IP ultimately drilled the well to 9015'.           IP delivered “completion letters” to each investor, which stated in pertinent part that “IP . . . hereby recommends . . . attempting an open hole completion in the Ellenburger formation in the interval of 8,947' - 9,010' . . . . Should you elect to participate, please evidence your election in the space provided.” Each investor signed the completion letters. Cox, however, testified that he thought the E-2 could still be deepened if IP had not reached the Lower Ellenburger. David Neal testified that he signed the letters because he understood them to mean IP had “found the zone.”           The tests indicated that the well would produce three percent oil and 97 percent water, a mix that would not produce oil in paying quantities. The Aftermath           Upon hearing the test results, Cox demanded that IP drill the well deeper. Glynn Broussard, a land man and team leader for IP, testified that Cox refused to take “no” for an answer and, at one point, Broussard told Cox that “IP felt like we had a dry hole and it was—we were done and we weren’t going to pursue drilling the well any deeper. If he wanted to drill the well deeper, he had every right to.” Neither Wevanco nor any other investor exercised its right under the JOA to take over the well. Cox, however, testified that IP misled him into believing IP intended to deepen the well.           In July 1998, IP gave notice of its intent to plug and abandon the well. In accordance with the JOA, at that time, the plaintiffs were given an election either to agree to the abandonment or to disagree and take over the well. The plaintiffs refused to select either option. 
Thus, the plaintiffs had at least two opportunities—when the completion was proposed and when the abandonment was proposed—to deepen the Millard E-2 themselves and to retain all the profits. They rejected both opportunities.

The plaintiffs argued that they had no motivation to produce oil from the Millard E-2 without a P-4—a regulatory form filed with the Railroad Commission that allows an operator to sell oil from a well. The P-4 on the Millard E-2 gave IP the sole authority to sell oil from the well. The plaintiffs contended that IP was “intentionally holding the rights to the well hostage” until the plaintiffs reimbursed IP for the drilling expenses it believed it was due under the JOA.

The Trial

The plaintiffs sued IP, alleging that IP breached its alleged obligation to further deepen the Millard E-2, and IP counterclaimed, seeking reimbursement for its drilling expenses under the JOA. The jury found that IP failed to drill to a depth sufficient to test the Lower Ellenburger Formation and that the failure was the result of gross negligence or willful misconduct. The jury also found that IP had breached the participation letter agreement and that none of the plaintiffs had agreed to the completion of the well.

The jury found that, had IP deepened the Millard E-2 to a sufficient depth, the plaintiffs would have realized a profit of $534,274. The jury then found that, had the Millard E-2 been a success, the plaintiffs would have realized an additional profit of $3,560,000. The jury also awarded $1,424,000 in attorneys’ fees through trial and $178,000 in appellate attorneys’ fees.

JOA Breach

In issue four, IP asserts that the evidence was legally and factually insufficient to support the jury’s finding that IP’s alleged failure to drill to a sufficient depth under the JOA was the result of gross negligence or willful misconduct.
Exculpatory Clause

The exculpatory clause in the JOA establishes a standard of care applicable to drilling operations, but then provides that, if an operator falls short of this standard, it will not be liable in the absence of gross negligence or wilful misconduct. The clause provides in pertinent part that: IP Petroleum Company, Inc. shall be the Operator of the Contract Area, and shall conduct and direct and have full control of all operations on the Contract Area as permitted and required by, and within the limits of this agreement. It shall conduct such operations in a good and workmanlike manner, but it shall have no liability as Operator to the other parties for losses sustained or liabilities incurred, except as may result from gross negligence or willful misconduct . . . . (Emphasis added.) The plaintiffs globally contend that this clause can never apply to any breach of contract claim against an operator and that, therefore, the clause does not apply to their claims against IP. We disagree.

Generally, exculpatory clauses in a contract are utilized to exempt one party from future liability for negligence. See Allright, Inc. v. Elledge, 515 S.W.2d 266, 267 (Tex. 1974). We have found only two cases discussing exculpatory clauses with respect to liability for breach of contract. See Cone v. Fagadau Energy Corp., 68 S.W.3d 147 (Tex. App.—Eastland 2001, pet. filed); Abraxas Petroleum Corp. v. Hornburg, 20 S.W.3d 741 (Tex. App.—El Paso 2000, no pet.).

In Cone, the court was faced with an exculpatory clause in a joint operating agreement identical to the one that we have before us in this case. See Cone, 68 S.W.3d at 154. The court noted that, in the operating agreement, the language requiring a showing of gross negligence and wilful misconduct to establish liability immediately followed the provision requiring the operator to conduct its drilling operations in a good and workmanlike manner. Id. at 155.
Cone’s breach of contract claim, however, did not allege the failure of the operator to operate in a good and workmanlike manner. In Cone, the operator was alleged to have breached the joint operating agreement by improperly assessing certain charges against an investor’s account, and the breach of contract claims were in the nature of an accounting. Id. Therefore, the Eastland Court of Appeals held that the exculpatory clause did not apply to Cone’s breach of contract claims against the operator. Id.

In Abraxas, the court determined that an exculpatory clause provision identical to the one before us was unambiguous. Abraxas Petroleum Corp., 20 S.W.3d at 759. In determining the scope of this exculpatory clause, the court noted that the clause was found in an article concerning the operator’s authority to conduct operations in the contract area. Id. More significantly, in the clause, the operator’s limitation of liability is directly linked to the imposition of the duty to act as a reasonably prudent operator, which strictly concerns the manner in which the operator conducts drilling operations on the lease. Id. The breach of contract claims, however, did not allege that Abraxas failed to act as a reasonably prudent operator, nor did they allege any misconduct arising from the manner in which the operator conducted drilling operations on the lease. In Abraxas, the operator was alleged to have breached the joint operating agreement by improperly sending “authorization for expense” letters to investors for expenses associated with routine repairs, and the breach of contract claims concerned the operator’s administrative duties under the JOA. Id. Therefore, the El Paso Court of Appeals held that the exculpatory clause did not pertain to the breach of contract claims against the operator.

Here, the basis of the plaintiffs’ claims is alleged misconduct arising from the manner in which IP, as operator, conducted drilling operations on the lease.
Unlike in Cone and Abraxas, the plaintiffs alleged that IP failed to conduct operations in a good and workmanlike manner and failed to act as a reasonably prudent operator. In paragraphs 22 and 23 of their Second Amended Original Petition, the plaintiffs alleged the following:

BREACH OF CONTRACT

22. Pursuant to the express terms of the Farmout Agreement, Joint Operating Agreement, and Participation and Purchase and Sale Agreement, [IP] was required to conduct its activities as a reasonably prudent operator, in a good and workmanlike manner, and with due diligence. [IP] breached these duties, and further, acted with gross negligence or with wilful misconduct.

23. [IP] further breached its agreement with [the plaintiffs] by failing to drill and deepen the oil and gas well in question to the Contract Depth as defined in the Farmout Agreement, Joint Operating Agreement, and the Participation and Purchase and Sale Agreement. Specifically, with regard to the contract depth, [IP] as successor to the Cleveland Oil Company, had the duty to (1) prosecute the re-entry of the test well to its objective depth (as described in Exhibit “A” to the Participation and Purchase and Sale Agreement) with due diligence and in a good and workmanlike manner; and (2) accept responsibility for losses sustained by [the plaintiffs] resulting from [IP’s] gross negligence or from breach of its obligations under the Participation and Purchase Sale Agreement.

(Emphasis added.) Accordingly, the exculpatory clauses in the JOA applied, and the plaintiffs had to establish that IP was grossly negligent or acted with wilful misconduct when it breached the contract. See Cone, 68 S.W.3d at 155; Abraxas Petroleum Corp., 20 S.W.3d at 759.

Standard of Review

In reviewing a legal sufficiency challenge, we must view the evidence in a light that tends to support the finding of the disputed fact and disregard all evidence and inferences to the contrary.
Weirich v. Weirich, 833 S.W.2d 942, 945 (Tex. 1992). If more than a scintilla of evidence exists, the evidence is legally sufficient. Browning-Ferris, Inc. v. Reyna, 865 S.W.2d 925, 928 (Tex. 1993). To rise above a scintilla, the evidence offered to prove a vital fact must do more than create a mere surmise or suspicion of its existence. Kindred v. Con/Chem, Inc., 650 S.W.2d 61, 63 (Tex. 1983). In determining legal sufficiency, we consider whether the evidence rises to a level that would enable reasonable and fair-minded people to differ in their conclusions. Transp. Ins. Co. v. Moriel, 879 S.W.2d 10, 25 (Tex. 1994).

Gross Negligence or Willful Misconduct

The jury affirmatively answered the following question:

Did IP Petroleum Company, Inc.’s failure to drill to a depth sufficient to test the Lower Ellenburger Formation result from gross negligence or willful misconduct?

“Gross negligence or willful misconduct” means:

(a) a specific intent by IP Petroleum Company Inc. to cause substantial injury to Plaintiffs; or

(b) an act or omission by IP Petroleum Company, Inc.,

(i) which, when viewed objectively from the standpoint of IP Petroleum Company, Inc. at the time of its occurrence, involved an extreme degree of risk, considering the probability and magnitude of the potential harm to others; and

(ii) of which IP Petroleum Company, Inc. had actual, subjective awareness of the risk involved, but nevertheless proceeded with conscious indifference to the rights, safety, or welfare of others.

This was a significant finding because, under the JOA, for IP to be found liable, the jury had to have found that IP was grossly negligent or acted with willful misconduct.

To support a finding of gross negligence, there must be evidence that IP had “actual subjective knowledge of an extreme risk of serious harm.” Moriel, 879 S.W.2d at 22. The magnitude of the risk is judged from the viewpoint of the defendant at the time the events occurred. Id. at 23.
The harm anticipated must be extraordinary harm, not the type of harm ordinarily associated with breaches of contract or even with bad faith denials of contract rights; harm such as “death, grievous physical injury, or financial ruin.” Id. at 24; Bluebonnet Sav. Bank, F.S.B. v. Grayridge Apartment Homes, Inc., 907 S.W.2d 904, 911 (Tex. App.—Houston [1st Dist.] 1995, writ denied).

Here, the plaintiffs, who were seeking monetary lost profits, summarize the evidence to support the jury’s finding of gross negligence as follows:

- IP apparently did not prepare a written drilling plan for the E-2, which is the normal procedure, and could not explain its failure to do so.
- IP got stuck during the drilling operation because of its failure to use drilling mud and spent over $100,000 trying to get unstuck.
- IP did not run a drill stem test on the E-2, which many witnesses testified would be the customary, reasonable, and prudent thing to do and Wevanco’s witnesses testified would have shown that the well was not in the Lower Ellenburger.
- IP failed to tell Cox and Wevanco it thought setting pipe would mean the E-2 could not be deepened if it were not productive.
- IP told Cox and others that it was considering deepening the E-2 well when it had absolutely no intention of doing so.
- IP allowed the farmout to expire while it was still operating, discovered that fact, and did not do anything about it, requiring Cox to obtain a second farmout to protect the investors’ rights in the E-2.
- IP, against the advice of Cox, unsuccessfully attempted to set an inflatable bridge plug down hole.
- IP obtained the P-4 on the well, which the jury could reasonably conclude was at best intended to hold up Wevanco over the issue of the unpaid drilling expenses and at worst a deliberate attempt to prevent Cox from deepening the E-2.
While this may be legally sufficient evidence of negligence, this evidence does not rise to the level of gross negligence as defined in the jury charge. Nor was there evidence of willful misconduct. “Throughout the history of Texas law, ‘wilful misconduct’ has been defined in a manner akin to ‘gross negligence.’” Marshall Indep. Sch. Dist. v. U.S. Gypsum Co., 790 F. Supp. 1291, 1300 (E.D. Tex. 1992). Thus, as the jury instruction indicated, a finding of willful misconduct required evidence of “a specific intent by IP Petroleum Company Inc. to cause substantial injury to Plaintiffs.” We hold that the evidence is legally insufficient to support the jury’s finding. We sustain issue four. JOA           In issue one, IP argues the plaintiffs’ claim for breach of the JOA was without merit as a matter of law. The JOA states that IP “shall have no liability as Operator to the other parties for losses sustained or liabilities incurred, except such as may result from gross negligence or willful misconduct.”           Having held there was no evidence of gross negligence or willful misconduct, we sustain IP’s issue one.           Because we have sustained IP’s issues one and four, which are dispositive of the plaintiffs’ claims that IP breached the JOA, we need not consider IP’s issues two, three, five, and six as those issues are moot. Participation Letter Agreement           In issue seven, IP contends that Texas law precludes any liability to the plaintiffs under the participation letter agreement.           The jury affirmatively answered the following question: Did IP Petroleum, Inc. fail to comply with the following agreement? Deepen the Phillips Petroleum Millard E #2 from its current TD of 8626' to +/- 9000' to test the Ellenburger Cave Floor Zone. . . . The well will be deepened from the depleted Ellenburger Cave Roof, through the dense Ellenburger Cave Fill interval, then test the objective, the Ellenburger Cave Floor.   
The “agreement” from which the excerpt was taken was the participation letter agreement—a contract entered into between IP and Cleveland Oil. IP argues that, for two reasons, this was an improper question to submit to the jury: (1) the plaintiffs were not parties to the participation letter and cannot seek to enforce it, and (2) the participation letter agreement was superseded by the JOA.           The participation letter agreement was drafted by Cleveland Oil and signed by a land manager from IP. The plaintiffs contend that they signed essentially identical agreements with Cleveland Oil, but they were unable to produce any copies of the signed agreements at trial. They did, however, produce an unsigned copy which was admitted into evidence.           Generally, a plaintiff may not enforce a contract to which he is not a party. Stine v. Stewart, 80 S.W.3d 586, 589 (Tex. 2002). However, a third party may recover on a contract made between other parties only if the parties intended to secure a benefit to that third party, and only if the contracting parties entered into the contract directly for the third party’s benefit. Id. “The intention to contract or confer a direct benefit to a third party must be clearly and fully spelled out or enforcement by the third party must be denied.” MCI Telecomm. Corp. v. Texas Utils. Elec. Co., 995 S.W.2d 647, 651 (Tex. 1998). Any doubt is resolved against a finding that the party was intended to be a third-party beneficiary. Mandell v. Hamman Oil & Ref. Co., 822 S.W.2d 153, 161 (Tex. App.—Houston [1st Dist.] 1991, writ denied). To determine the parties’ intent, courts must examine the entire agreement when interpreting a contract and give effect to all the contract’s provisions so that none are rendered meaningless. Stine, 80 S.W.3d at 589.           
Shortly after IP signed the participation letter agreement with Cleveland Oil, Richard Reeve, Cleveland Oil’s owner, sent IP a letter outlining the “burden clarification of the [Penwell] prospect (as described in that participation agreement dated 10/31/97 between The Cleveland Oil Company, L.L.C. . . . and IP. . . .).” In this letter, Reeve explains that “IP shall serve as Operator. IP will maintain the same spirit of operations as would Cleveland, where the Operator is operating for the benefit of all parties . . . .” Stephen Guidry, IP’s land manager, signed this burden clarification. We hold that this clarification indicates that the plaintiffs were third-party beneficiaries of the participation agreement between IP and Cleveland Oil.

IP argues that the participation agreement was superseded by the JOA because the two agreements contain inconsistent drilling obligations and gross negligence provisions.

Instruments pertaining to the same transaction may be read together to ascertain the parties’ intent, even if the parties executed the instruments at different times. Ft. Worth Indep. Sch. Dist. v. City of Ft. Worth, 22 S.W.3d 831, 840 (Tex. 2000). Only when the terms of one contract are so inconsistent with those of the other that the two cannot subsist together is there a presumption that the second superseded the first. Willeke v. Bailey, 189 S.W.2d 477, 479 (Tex. 1945).

We must examine the two agreements and their respective drilling obligations:

Participation Letter Agreement: “Deepen the Phillips Petroleum Millard E #2 from its current TD of 8626' to +/- 9000' to test the Ellenburger Cave Floor Zone.”

JOA: “[IP] shall thereafter continue the drilling of the well with due diligence to a depth of 9125' below the surface of the ground or a depth sufficient to test the Lower Ellenburger Formation, whichever is lesser . . . unless all parties agree to complete or abandon the well at a lesser depth.”

The participation letter agreement only addresses drilling to a depth necessary to test the Ellenburger Cave Floor Zone. The JOA, however, authorized IP to stop drilling at 9125' even if it believed it had not yet reached the Lower Ellenburger Formation. Furthermore, the JOA authorized IP to stop drilling at a lesser depth if all parties agreed to complete or abandon the well.

The letter agreement does not allow a consensual completion of the drilling at any point above the Ellenburger Cave Floor Zone. If the parties had signed the completion letters with the understanding that IP had not yet reached the Ellenburger Cave Floor Zone, IP would have been in violation of the participation letter agreement. As such, the two agreements contain mutually inconsistent terms.

We hold that the JOA superseded the participation letter agreement.

We sustain issue seven.

In issue eight, IP argues that, even under the participation letter agreement, it is not liable to the plaintiffs.

Having held that the JOA superseded the participation letter agreement, issue eight is moot.

Damages

In issue nine, IP asserts that the plaintiffs failed to establish lost-profits damages with “reasonable certainty” where the lost-profits damages were based on (1) a “wildcat” prospect drilled to test a new and unproven scientific theory and (2) seven additional hypothetical wells. In issue 10, IP contends that the trial court erred by awarding attorneys’ fees to the plaintiffs. In issue 11, IP argues that the trial court erred by awarding prejudgment interest to the plaintiffs.

Having held that there was legally insufficient evidence to establish that IP breached the JOA, and having held that the JOA superseded the participation letter agreement, we hold that the plaintiffs are not entitled to damages, attorneys’ fees, or prejudgment interest.
We sustain issues 9, 10, and 11.

Future Damages

In their sole point of error, the plaintiffs argue that the trial court erred when it refused to award prejudgment interest on the future damages awarded for lost profits.

Damages, in this case, are contingent on the breach of contract finding. We have held there was legally insufficient evidence to establish that IP breached its contract.

We overrule the plaintiffs’ sole point of error.

Conclusion

We reverse the judgment of the trial court and render judgment that the plaintiffs take nothing. We affirm the remainder of the trial court’s judgment.

George C. Hanks, Jr.
Justice

Panel consists of Chief Justice Radack and Justices Nuchia and Hanks.
1. Introduction {#sec1}
===============

Globally, approximately 34 million people were living with HIV in 2011 \[[@B1], [@B2]\], and there were still about 2.2 million new infections \[[@B3]\]. Since the beginning of the epidemic, nearly 30 million people have died of AIDS (Acquired Immunodeficiency Syndrome)-related causes \[[@B1], [@B2], [@B4]\]. About 22.9 million people, 67% of those living with HIV/AIDS globally, are in Africa, though only about 12% of the world\'s population lives in the region \[[@B2]\]. In terms of mortality, the region accounts for about 79% of AIDS deaths globally \[[@B5]\]. According to the 2011 Ethiopian Demographic and Health Survey, the overall national adult HIV prevalence is 1.5%; the survey showed an HIV prevalence of 2.2% in the Amhara region, which is found in Northwest Ethiopia \[[@B6]\]. Human Immunodeficiency Virus (HIV) infection leads to acquired immunodeficiency syndrome (AIDS), and opportunistic infections (OIs) are major causes of morbidity and mortality in such patients \[[@B7]\]. OIs can occur in up to 40% of PLWHA with a CD4 count less than 250 cells/mm^3^ \[[@B8]\]. A national study in Ethiopia showed that HIV patients had OIs such as Herpes Zoster scar (19.3%), pulmonary tuberculosis (5.2%), and pneumonia (5.2%) \[[@B9]\]. The prevalence of OIs among pre-ART HIV patients in two studies in Northwest Ethiopia was 88.9% \[[@B10]\] and 82.4% \[[@B11]\], respectively. The burden of HIV and OIs remains high in the study area, yet there is no prior local evidence; the current study therefore estimates how long patients stay free of recurrent opportunistic infections after treatment, and the determinants of that duration. The output of the study will be used to plan the resources needed for chronic HIV/AIDS care and to identify groups of PLWHA who should be given special attention during care.
The evidence is expected to be used by governmental and nongovernmental organizations working on HIV/AIDS, or mainstreaming it, in order to inform policy makers and medical practitioners.

2. Methods and Materials {#sec2}
========================

2.1. Study Setting and Source Population {#sec2.1}
----------------------------------------

The study was conducted in the public health institutions of Debre Markos town among adult pre-ART PLWHA enrolled in chronic HIV care between 25 March 2008 and 24 March 2013. Debre Markos town is located 299 kilometers from Addis Ababa (the capital city of Ethiopia) and has one referral hospital, three public health centers, two NGO clinics, and ten other private clinics, though only the referral hospital and one health center provide chronic HIV care for HIV/AIDS patients. We therefore conducted a retrospective cohort study in the health institutions of the town that provide chronic HIV care. The source population was all adult PLWHA (older than 17 years) receiving chronic HIV care in the town\'s public health institutions. PLWHA were excluded from the study if they had an incompletely documented follow-up form; did not develop an OI while registered in chronic HIV care; did not take standard treatment after developing an OI, according to the Ethiopian Ministry of Health guideline; or were pre-ART pregnant or lactating mothers taking zidovudine for prevention of mother-to-child transmission of HIV.

2.2. Sampling and Data Collection Procedure {#sec2.2}
-------------------------------------------

The sample size was calculated based on a two-sided 95% confidence interval, a 3.5% margin of error, and the proportion of pre-ART HIV patients with an OI reported in a Northwest Ethiopia study, 88.9% \[[@B10]\]. The sample size calculated using Open-Epi Version 2.3 (May 2009) was 310; after adding a 10% contingency, the final sample size was 341.
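The sample-size arithmetic above can be reproduced in a few lines of Python. This is a minimal sketch assuming the standard single-proportion formula, n = z^2 p(1 - p) / d^2, without finite-population correction; the function name is mine, not part of the study.

```python
import math

def single_proportion_sample_size(p, margin, z=1.96):
    # n = z^2 * p * (1 - p) / d^2, rounded up to a whole participant.
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Inputs from the study: p = 88.9%, 3.5% margin of error, 95% CI (z = 1.96).
n = single_proportion_sample_size(p=0.889, margin=0.035)  # -> 310
# 10% contingency added to the calculated size.
final_n = n + round(n * 0.10)                             # -> 341
```

Running this reproduces the reported figures of 310 and, with the 10% contingency, 341.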
About 2712 PLWHA who fulfilled the inclusion criteria were identified from the available list of PLWHA on chronic HIV care in the ART clinics, and participants were then selected by simple random sampling using a random number table. The needed data are available on study participants\' treatment cards and chronic HIV care follow-up forms kept in the ART clinic, but patients may occasionally seek treatment outside their follow-up clinic. Thus, to reduce false inflation of survival, study participants were asked by data collectors about any treatment history outside the follow-up health institution, and for those reporting such treatment, the record was checked and abstracted at the respective health institutions. Participants were asked about outside treatment when they came to the health institution for follow-up or treatment, or were reached using the address registered on the follow-up form, such as phone number or kebele and house number. The data collection instrument was developed from the Federal Ministry of Health chronic HIV care follow-up form used in the ART clinic and from the patient\'s card. Data were collected by reviewing the chronic HIV care follow-up form and patients\' cards. Among the series of laboratory measurements (such as CD4 count, hemoglobin value, height, and weight), those nearest to the start of the study period were taken as baseline characteristics, and among the serial measurements performed while a participant was in the study, those nearest to the OI recurrence or censoring were taken as the end-line (follow-up) values. Ascertainment of OI-free duration was done by reviewing the chronic HIV care follow-up form or patient card in the ART clinic, or outside the ART clinic if study participants sought treatment elsewhere. 
Study participants who started ART, dropped out, were lost to follow-up, were transferred out, died of any cause other than an OI, or had an unconfirmed cause of death while in the study, or who had not developed an OI by the end of the study period, were censored. Selected and trained health professionals working in the ART clinics of each health institution served as data collectors and supervisors. 2.3. Operational Definitions {#sec2.3} --------------------------- *Survival*. Duration free of OI recurrence. *Censored*. No relapse of OI in a study participant during follow-up, with future relapse uncertain. *Recurrence/Relapse*. Occurrence or diagnosis of any type of OI by health personnel working in the ART clinic after completion of the preceding treatment of any type of OI. *Drop Out*. A PLWHA on HIV care lost to follow-up for more than three months, as recorded by ART health personnel. *Lost to Follow-Up*. A PLWHA on HIV care not seen for one month or more, as recorded by ART health personnel. *Transferred Out*. A PLWHA on HIV care in one health institution who shifted to another health institution. *Good Adherence*. Adherence ≥95%, that is, fewer than 2 missed doses of 30 doses or fewer than 3 missed doses of 60 doses, as documented by ART health personnel. *Fair Adherence*. Adherence of 85--94%, that is, 3--5 missed doses of 30 doses or 3--9 missed doses of 60 doses, as documented by ART health personnel. *Poor Adherence*. Adherence \<85%, that is, ≥6 missed doses of 30 doses or \>9 missed doses of 60 doses, as documented by ART health personnel. 2.4. Data Quality Management and Statistical Analysis {#sec2.4} ----------------------------------------------------- To maintain data quality, training was given to data collectors and supervisors. A properly designed data collection instrument was developed from the Ethiopian Federal Ministry of Health chronic HIV care follow-up form and patients\' cards. 
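The adherence categories above reduce to simple percentage bands of doses taken. A minimal sketch (the helper name is ours, not from the guideline) classifying adherence from dose counts:

```python
def classify_adherence(doses_taken, doses_prescribed):
    """Map percentage adherence to the study's operational categories:
    good >= 95%, fair 85-94%, poor < 85%."""
    pct = 100.0 * doses_taken / doses_prescribed
    if pct >= 95:
        return "good"
    if pct >= 85:
        return "fair"
    return "poor"

print(classify_adherence(29, 30))  # 1 missed dose of 30 -> good
print(classify_adherence(27, 30))  # 3 missed doses of 30 -> fair
```

Note that the percentage bands are the primary definition; the dose-count cut-offs quoted in the text are approximations of these bands for 30- and 60-dose regimens.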
To check for correct data collection, 10% of the sample was reabstracted by supervisors. The data were double entered by a trained data clerk to verify correct data entry. After data entry was complete, outliers and missing values were checked using frequencies, listing, and sorting, and any identified error was corrected by revisiting the original data abstraction format. After coding, each abstraction format was entered into the Epi Info version 3.5.1 statistical package. Data were analyzed using OpenEpi Version 2.3 (May 2009), SPSS version 20, and STATA version 11. The incidence rate was calculated by dividing the total number of events by the person-weeks at risk. OI-free duration was estimated using the actuarial life table and Kaplan-Meier survival methods. The proportional-hazards assumption was checked with Schoenfeld residuals at *P* value ≥0.1 (*α* = 10%), and the assumption was not violated. Multicollinearity was checked using the Pearson correlation and tolerance/variance inflation factor, and no collinearity was found. To determine independent predictors of OI-free duration, the Cox proportional hazards model was used to calculate hazard rates. Variables with a *P* value \<0.05 in bivariate analysis that were not collinear were entered into the multivariate Cox proportional hazards model to determine adjusted hazard rates. The cut-off point for significant association was a *P* value of 0.05. 2.5. Ethical Consideration {#sec2.5} -------------------------- Ethical approval and clearance were given by the ethical committee of the School of Public Health, Addis Ababa University. Permission was also obtained from the concerned bodies of the East Gojam zone and Debre Markos town Health Department and from the responsible bodies of the hospital and health centers. To maintain the confidentiality of PLWHA, only health professionals working in the ART clinic abstracted the data. In addition, no personal identifier was extracted from medical records, and the recorded data were not accessible to a third person. 3. 
Results {#sec3} ========= During the five-year study period, among the 341 study participants, the median duration of follow-up was 41 weeks (95% CI: 37--47.97), and the minimum, maximum, and interquartile range of follow-up were 1, 234, and 50 weeks, respectively. The majority of study participants were female 234 (68.6%), Orthodox Christian 316 (92.7%), urban residents 229 (67.2%), not educated 153 (44.9%), married 130 (38.1%), and not employed in governmental or private organizations 291 (85.3%). Their mean age was 33.3 (±10.6) years, and almost all of them, 318 (93.3%), were below 50 years old ([Table 3](#tab3){ref-type="table"}). 3.1. The Baseline and Follow-Up Laboratory, Clinical, and Prophylaxis Characteristics {#sec3.1} ------------------------------------------------------------------------------------ At baseline, the median values for CD4 count (cells/uL) and hemoglobin (g/dL) were 383 and 11.6, respectively, and the respective end-line values were 382.5 and 12.5. The baseline and end-line mean body mass index values were 19.1 (±3.1) and 19.7 (±3.1) kg/m^2^, respectively. At the start of the study, the largest group of participants had WHO stage II OIs, 165 (48.4%). About 11 (3.2%) participants had concomitant chronic diseases such as hypertension, cardiac disease, and diabetes mellitus. With regard to functional status, most participants had a working status both at baseline 283 (83%) and at the end line 297 (87.1%) ([Table 3](#tab3){ref-type="table"}). About three quarters of participants were taking prophylaxis both at baseline 244 (71.6%) and at follow-up 255 (74.8%), and almost all of these had good drug adherence both at baseline 225 (92.2%) and at follow-up 231 (90.6%); nearly all were taking cotrimoxazole both at baseline 225 (92.2%) and at follow-up 243 (95.3%) ([Table 3](#tab3){ref-type="table"}). 3.2. 
Incidence of Recurrence and OI-Free Duration {#sec3.2} ------------------------------------------------- The cumulative incidence of OI recurrence was 75.37% (95% CI: 70.6--79.7%), and the incidence rate was 15.04 (95% CI: 13.1--16.97) per 1000 person-weeks. Of the recurrent OIs, about 12.8% (95% CI: 9.16--17.36%) were self-relapses, and the incidence rate of self-relapse was 1.93 (95% CI: 1.35--2.68) per 1000 person-weeks. The most frequently rediagnosed OI was recurrent upper respiratory tract infection 44 (17.1%), whereas chronic diarrhea was the most frequently self-relapsed OI (23.7%) ([Table 1](#tab1){ref-type="table"}). According to the Kaplan-Meier survival estimation, the median duration free of OI recurrence was 54 weeks (95% CI: 46.9--61.1) ([Figure 1](#fig1){ref-type="fig"}). Employed participants survived longer without recurrence than unemployed participants ([Figure 2](#fig2){ref-type="fig"}). The actuarial life table analysis showed that about 91% of participants had not acquired an OI by the end of 10 weeks, and that the probability of remaining free of OI recurrence at the end of 220 and 230 weeks was about 1% and \<0.01%, respectively ([Table 2](#tab2){ref-type="table"}). In the bivariate Cox proportional hazards model, the predictor variables that showed significant (*P* \< 0.05) association with the outcome variable were marital status, occupational status, educational status, baseline and follow-up functional status, exposure to prophylaxis at baseline and adherence to it both at baseline and at follow-up, baseline hemoglobin value, follow-up CD4 count, follow-up body mass index, number of OIs diagnosed at one time at the start of the study, and a diagnosis of wasting syndrome or Herpes Zoster at the start of the study ([Table 3](#tab3){ref-type="table"}). 
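The actuarial life-table quantities reported in Table 2 follow the standard formulas: exposed to risk = entering − withdrawals/2 (the half-withdrawal convention), conditional survival = 1 − events/exposed, and interval hazard = d / (w·(exposed − d/2)) for interval width w. A sketch (our illustration) reproducing the first 10-week interval of Table 2:

```python
def actuarial_interval(entering, withdrawing, events, width):
    """One actuarial life-table interval: returns (number exposed to risk,
    conditional proportion surviving the interval, interval hazard rate)."""
    exposed = entering - withdrawing / 2.0           # half-withdrawal convention
    surviving = 1.0 - events / exposed               # conditional survival
    hazard = events / (width * (exposed - events / 2.0))
    return exposed, surviving, hazard

# First interval of Table 2: 341 enter, 7 withdraw, 32 OIs diagnosed, width 10 weeks
exposed, surviving, hazard = actuarial_interval(341, 7, 32, width=10)
print(round(exposed, 1), round(surviving, 2), round(hazard, 2))  # 337.5 0.91 0.01
```

The computed 337.5 exposed, 0.91 surviving, and 0.01 hazard match the first row of Table 2; later rows follow by multiplying the conditional survival proportions cumulatively.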
After adjustment for potential confounders in the multivariate Cox proportional hazards model, the significant (*P* \< 0.05) predictors of a longer time to OI recurrence were being employed in governmental or private sectors, being divorced rather than married, taking prophylaxis at baseline, having a follow-up CD4 count above 100 cells/*μ*L, and having a hemoglobin value of 10 g/dL and above, whereas not adhering to prophylaxis, both at baseline and at follow-up, was a risk factor for early recurrence of OIs ([Table 3](#tab3){ref-type="table"}). 4. Discussion {#sec4} ============= During the five-year study period, OI recurrence was seen in about three quarters (75.37%) of participants. Different studies report diverse figures for the proportion of OIs. In North India, tuberculosis was the commonest OI (71%), followed by candidiasis (39.3%), *PCP* (7.4%), and cryptococcal meningitis and cerebral toxoplasmosis (3.7% each) \[[@B12]\]. In southern India, the proportion of pulmonary tuberculosis was 14% \[[@B13]\]. A national study in Ethiopia showed that HIV patients had OIs such as Herpes Zoster scar (19.3%), pulmonary tuberculosis (5.2%), and pneumonia (5.2%) \[[@B9]\]. In Northwest Ethiopia, about 88.9% of pre-ART HIV patients had OIs \[[@B10]\], and another study in a similar area showed that 82.4% of pre-ART HIV patients had OIs \[[@B11]\]. The respective prevalence of pulmonary tuberculosis and cryptococcal meningitis among PLWHA in Northwest Ethiopia was about 7.5% \[[@B14]\] and 8.3% \[[@B15]\]. Chronic diarrhea was seen in about a quarter (22.7%) of PLWHA in Southern Ethiopia \[[@B16]\]. 
The cumulative incidence of recurrent OIs in the current study, for recurrent upper respiratory tract infection, chronic diarrhea, bacterial pneumonia, oral candidiasis, Herpes Zoster, extrapulmonary tuberculosis, PCP, and pulmonary tuberculosis, was 17.1%, 14.8%, 9.8%, 9.8%, 9.3%, 9.3%, 7%, and 5.4%, respectively. This finding is broadly in line with some of the OI figures in previous studies \[[@B9], [@B10], [@B12]--[@B14]\], though it is lower than in some other studies \[[@B11], [@B15], [@B16]\]. Possible reasons for the discrepancy are differences in study design (the prior studies are cross-sectional), study population (the prior studies used PLWHA presenting to initiate ART, which will increase the observed prevalence since ART is initiated using WHO stage of disease and CD4 count), study area, and other sociocultural practices. In this study, chronic diarrhea was the most frequently self-relapsed OI (23.7%), which might be attributed to poor access to improved drinking water sources and sanitation facilities, since only 50.8% and 8.8% of Ethiopians used improved drinking water sources and sanitation facilities, respectively, according to the 2011 Ethiopian Demographic and Health Survey \[[@B6]\]. The current finding that being employed in governmental or private sectors lengthened the time between health institution visits for recurrent OIs is in harmony with a cohort study in the United States \[[@B17]\]. The current finding that a follow-up CD4 count above 100 cells/*μ*L, compared with ≤100 cells/*μ*L, protected against repeated OI recurrence is in agreement with other studies \[[@B18]--[@B22]\]. The HIV cohort study in Switzerland showed that CD4 count is one of the predictors of OI progression; a rise in CD4 count of 50 × 10^6^/L or more by 6 months reduced subsequent OIs, with a hazard ratio of 0.32 \[[@B19]\]. 
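Hazard ratios such as these come from Cox regression coefficients via HR = exp(β), with the 95% confidence interval given by exp(β ± 1.96·SE). A small sketch of the transform; the coefficient β = −1.08 and SE = 0.379 are our assumptions, chosen only to roughly reproduce the employment AHR of 0.34 (0.16--0.71) reported in Table 3:

```python
import math

def hazard_ratio_ci(beta, se, z=1.96):
    """Convert a Cox coefficient and its standard error into a
    hazard ratio with a 95% confidence interval."""
    hr = math.exp(beta)
    lower = math.exp(beta - z * se)
    upper = math.exp(beta + z * se)
    return hr, lower, upper

hr, lo, hi = hazard_ratio_ci(beta=-1.08, se=0.379)  # illustrative values
print(round(hr, 2), round(lo, 2), round(hi, 2))  # 0.34 0.16 0.71
```

A hazard ratio below 1 (β < 0) indicates a lower instantaneous risk of recurrence relative to the reference category; an interval excluding 1 corresponds to *P* < 0.05.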
Another cohort study also showed that a higher CD4 cell count was associated with a reduced risk of new OI progression, with hazard ratios, compared with counts below 100 cells/mL, of 0.35 for counts of 100 to 200 cells/mL, 0.81 for counts of 200 to 350 cells/mL, 0.74 for counts of 350 to 500 cells/mL, and 0.96 for counts of 500 cells/mL or above \[[@B22]\]. In this study, taking prophylaxis at baseline prolonged the time to recurrent OI diagnosis, and this finding is supported by other studies \[[@B23]--[@B26]\]. Primary prophylaxis with trimethoprim-sulfamethoxazole prevents life-threatening OIs such as PCP, toxoplasmosis, and bacterial infections \[[@B23]\]. Taking cotrimoxazole reduces the risk of PCP and tuberculosis \[[@B24]\]. Cotrimoxazole prophylaxis prevented OIs such as diarrhea in an experimental study among Ugandan adults living with HIV \[[@B25]\], and the prevention of OIs such as PCP using cotrimoxazole was also confirmed in another experimental study \[[@B26]\]. 5. Conclusion and Recommendation {#sec5} ================================ During the historical follow-up period, OIs were rediagnosed in about three quarters (75.37%) of participants. Of the rediagnosed OIs, nearly one in every ten rediagnoses was a self-relapse (12.8%). In each week, the probability of a new recurrence of any OI and of a self-relapsed OI was about 15.04 and 1.93 per 1000 person-weeks, respectively. The most commonly recurring OIs were recurrent upper respiratory tract infection (17.1%), chronic diarrhea (14.8%), bacterial pneumonia (9.8%), oral candidiasis (9.8%), Herpes Zoster (9.3%), and extrapulmonary tuberculosis (9.3%). According to the Kaplan-Meier survival estimation, the median duration free of OI recurrence was 54 weeks. 
After adjustment for potential confounders in the multivariate Cox proportional hazards model, the significant (*P* \< 0.05) predictors of a longer time to OI recurrence were being employed in governmental or private sectors, being divorced rather than married, taking prophylaxis at baseline, having a follow-up CD4 count above 100 cells/*μ*L, and having a hemoglobin value of 10 g/dL and above, whereas not adhering to prophylaxis, both at baseline and at follow-up, was a risk factor for frequent OI recurrence. 5.1. Based on This Study Finding, the Following Recommendations Can Be Forwarded {#sec5.1} -------------------------------------------------------------------------------- Providing prophylaxis and counseling to adhere to it should be further enhanced. Treatment and other supportive measures should be given to enhance the CD4 count and hemoglobin value. During chronic HIV care, special attention should be given to those not adhering to prophylaxis drugs, since they experience OI recurrence within short periods. Governmental and nongovernmental organizations should adopt special criteria that help PLWHA compete for job vacancies, since being employed lengthens the interval between repeated OI diagnoses. Finally, we recommend further observational studies with a prospective design to confirm the current findings. The authors would like to thank Professor Mitkie Getnet, Dr. Worku Alemayehu, and Dr. Enquselassie Fikre of Addis Ababa University, and the Debre Markos town health managers and ART clinic staff, for their contribution to the success of this work. Conflict of Interests ===================== The authors declare that there is no conflict of interests regarding the publication of this paper. 
![Kaplan-Meier survival estimation of progression to OI rediagnosis among pre-ART PLWHA in Debre Markos town between 2008 and 2013.](BMRI2015-146306.001){#fig1} ![Kaplan-Meier survival estimation of progression to OI recurrence among employed and unemployed pre-ART PLWHA in Debre Markos town between 2008 and 2013.](BMRI2015-146306.002){#fig2} ###### The rediagnosed OIs of any type and self-relapse among participants in Debre Markos town between 2008 and 2013.

| Rediagnosed OI | Overall rediagnosis, n (%) | Self-relapse, n (%) | Not self-relapse, n (%) |
|---|---|---|---|
| Recurrent upper respiratory tract infection | 44 (17.1) | 9 (20.5) | 35 (79.5) |
| Chronic diarrhea | 38 (14.8) | 9 (23.7) | 29 (76.3) |
| Oral candidiasis | 25 (9.7) | 1 (4) | 24 (96) |
| Pneumonia | 25 (9.7) | 4 (16) | 21 (84) |
| Herpes Zoster | 24 (9.3) | 3 (12.5) | 21 (87.5) |
| Extrapulmonary tuberculosis | 24 (9.3) | 1 (4.2) | 23 (95.8) |
| Minor mucocutaneous manifestation | 15 (5.8) | 2 (13.3) | 13 (86.7) |
| Wasting syndrome | 9 (3.5) | 2 (22.2) | 7 (77.8) |
| Persistent generalized lymphadenopathy | 8 (3.1) | 1 (12.5) | 7 (87.5) |
| Viral infection | 6 (2.3) | 1 (16.7) | 5 (83.3) |
| Others | 39 (15.2) | 0 | 39 (100) |
| Total | 257 (100) | 33 (12.8) | 224 (87.2) |

###### The actuarial life table estimation of participants\' duration to diagnosis of recurrent OI in Debre Markos town between 2008 and 2013. 
| Interval start time (weeks) | Number entering interval | Number withdrawing during interval | Number exposed to risk | Number of OIs diagnosed in interval | Cumulative proportion surviving at end of interval | Hazard rate |
|---|---|---|---|---|---|---|
| 0 | 341 | 7 | 337.5 | 32 | 0.91 | 0.01 |
| 10 | 302 | 11 | 296.5 | 41 | 0.78 | 0.01 |
| 20 | 250 | 9 | 245.5 | 31 | 0.68 | 0.01 |
| 30 | 210 | 11 | 204.5 | 20 | 0.61 | 0.01 |
| 40 | 179 | 11 | 173.5 | 18 | 0.55 | 0.01 |
| 50 | 150 | 10 | 145.0 | 21 | 0.47 | 0.02 |
| 60 | 119 | 8 | 115.0 | 31 | 0.34 | 0.03 |
| 70 | 80 | 7 | 76.5 | 18 | 0.26 | 0.03 |
| 80 | 55 | 1 | 54.5 | 15 | 0.19 | 0.03 |
| 90 | 39 | 2 | 38.0 | 2 | 0.18 | 0.01 |
| 100 | 35 | 1 | 34.5 | 8 | 0.14 | 0.03 |
| 110 | 26 | 1 | 25.5 | 2 | 0.13 | 0.01 |
| 120 | 23 | 2 | 22.0 | 0 | 0.13 | 0.00 |
| 130 | 21 | 0 | 21.0 | 3 | 0.11 | 0.02 |
| 140 | 18 | 1 | 17.5 | 4 | 0.08 | 0.03 |
| 150 | 13 | 0 | 13.0 | 2 | 0.07 | 0.02 |
| 160 | 11 | 0 | 11.0 | 4 | 0.05 | 0.04 |
| 170 | 7 | 1 | 6.5 | 0 | 0.05 | 0.00 |
| 180 | 6 | 0 | 6.0 | 1 | 0.04 | 0.02 |
| 190 | 5 | 0 | 5.0 | 0 | 0.04 | 0.00 |
| 200 | 5 | 1 | 4.5 | 1 | 0.03 | 0.02 |
| 210 | 3 | 0 | 3.0 | 0 | 0.03 | 0.00 |
| 220 | 3 | 0 | 3.0 | 2 | 0.01 | 0.10 |
| 230 | 1 | 0 | 1.0 | 1 | 0.00 | 0.20 |

###### The cumulative incidence of OI diagnosis, Kaplan-Meier estimation of the median duration free of OI recurrence, and Cox proportional hazards model of the association between characteristics and OI recurrence among PLWHA in Debre Markos town between 2008 and 2013. 
| Variables | Recurrent OI, n (%) | Censored, n (%) | Median KMS | CHR (95% CI) | AHR (95% CI) |
|---|---|---|---|---|---|
| Marital status | | | | | |
| Married | 101 (39.3) | 29 (34.5) | 60 | 1 | 1 |
| Single | 68 (26.5) | 11 (13.1) | 46 | 1.65 (1.21--2.26) | 1.19 (0.72--1.96) |
| Divorced | 60 (23.3) | 32 (38.1) | 64 | 1.02 (0.74--1.41) | ***0.57 (0.33--0.99)*** |
| Widowed | 28 (10.9) | 12 (14.3) | 42 | 1.69 (1.10--2.59) | 1.64 (0.79--3.40) |
| Educational status | | | | | |
| Not educated | 110 (42.8) | 43 (51.2) | 55 | 1 | 1 |
| Grades 1--8 | 82 (31.9) | 16 (19) | 41 | 1.06 (0.79--1.41) | 1.17 (0.70--1.94) |
| Grades 9--12 | 44 (17.1) | 14 (16.7) | 55 | 0.84 (0.59--1.19) | 1.38 (0.71--2.68) |
| Above grade 12 | 21 (8.2) | 11 (13.1) | 67 | 0.49 (0.30--0.80) | 1.55 (0.69--3.47) |
| Occupational status | | | | | |
| Unemployed | 221 (86) | 70 (83.3) | 53 | 1 | 1 |
| Employed | 36 (14) | 14 (16.7) | 76 | 0.48 (0.33--0.69) | ***0.34 (0.16--0.71)*** |
| Functional status^B^ | | | | | |
| Working | 206 (80.2) | 77 (91.7) | 60 | 1 | 1 |
| Ambulatory/bed-ridden | 51 (19.8) | 7 (8.3) | 32 | 1.7 (1.25--2.32) | 1.59 (0.81--3.12) |
| Functional status^F^ | | | | | |
| Working | 215 (83.7) | 82 (97.6) | 60 | 1 | 1 |
| Ambulatory/bed-ridden | 42 (16.3) | 2 (2.4) | 37 | 1.65 (1.17--2.31) | 0.97 (0.52--1.81) |
| Number of OIs treated at baseline | | | | | |
| 1 | 203 (79) | 62 (73.8) | 59 | 1 | 1 |
| ≥2 | 54 (21) | 22 (26.2) | 44 | 1.38 (1.01--1.87) | 1.19 (0.69--2.07) |
| Prophylaxis adherence^B^ | | | | | |
| Good | 170 (91.4) | 55 (94.8) | 64 | 1 | 1 |
| Fair | 6 (3.2) | 0 (0) | 43 | 2.69 (1.19--6.13) | ***14.92 (1.03--215)*** |
| Poor | 10 (5.4) | 3 (5.2) | 23 | 1.76 (0.92--3.35) | ***5.96 (1.21--29.41)*** |
| Prophylaxis adherence^F^ | | | | | |
| Good | 173 (88.3) | 58 (98.3) | 64 | 1 | 1 |
| Fair | 9 (4.6) | 0 (0) | 55 | 1.06 (0.54--2.08) | ***5.39 (1.77--16.36)*** |
| Poor | 14 (7.1) | 1 (1.7) | 22 | 2.2 (1.27--3.82) | ***5.79 (1.86--17.98)*** |
| Prophylaxis exposure^B^ | | | | | |
| No | 71 (27.6) | 26 (31) | 37 | 1 | 1 |
| Yes | 186 (72.4) | 58 (69) | 63 | 0.64 (0.49--0.85) | ***0.31 (0.19--0.49)*** |
| CD4 count (cells/*μ*L)^F^ | | | | | |
| ≤100 | 4 (2.2) | 1 (1.4) | 17 | 1 | 1 |
| 101--199 | 19 (10.6) | 3 (4.2) | 60 | 0.19 (0.06--0.58) | ***0.12 (0.029--0.49)*** |
| 200--350 | 50 (27.8) | 27 (37.5) | 69 | 0.14 (0.048--0.39) | ***0.21 (0.057--0.79)*** |
| 351--499 | 58 (32.2) | 24 (33.3) | 66 | 0.17 (0.058--0.47) | ***0.16 (0.045--0.59)*** |
| ≥500 | 49 (27.2) | 17 (23.6) | 70 | 0.13 (0.046--0.38) | ***0.17 (0.046--0.62)*** |
| Hemoglobin value (g/dL)^B^ | | | | | |
| \<10 | 61 (31.1) | 5 (16.7) | 26 | 1 | 1 |
| ≥10 | 135 (68.9) | 25 (83.3) | 53 | 0.55 (0.40--0.75) | ***0.49 (0.25--0.97)*** |
| Body mass index (kg/m^2^)^F^ | | | | | |
| ≤18.4 | 122 (47.7) | 36 (42.9) | 46 | 1 | 1 |
| 18.5--22.9 | 103 (40.2) | 35 (41.7) | 54 | 0.93 (0.71--1.21) | 0.89 (0.54--1.46) |
| ≥23 | 31 (12.1) | 13 (15.5) | 69 | 0.54 (0.37--0.79) | 0.61 (0.33--1.14) |
| Herpes Zoster diagnosis^B^ | | | | | |
| No | 202 (78.6) | 73 (86.9) | 55 | 1 | 1 |
| Yes | 55 (21.4) | 11 (13.1) | 48 | 1.38 (1.02--1.86) | 1.18 (0.65--2.13) |
| Wasting syndrome diagnosis^B^ | | | | | |
| No | 247 (96.1) | 84 (100) | 55 | 1 | 1 |
| Yes | 10 (3.9) | 0 (0) | 23 | 2.39 (1.21--4.34) | 1.29 (0.36--4.71) |

^B^Baseline value. ^F^Follow-up value. KMS: Kaplan-Meier survival in weeks. CHR: crude hazard rate. AHR: adjusted hazard rate. [^1]: Academic Editor: Domingo Pere
"Lear 251 Delta Lima, this is Burbank tower." "Do you read?" "Lear 251 Delta Lima, this is Burbank tower." "Do you read?" "Acknowledge," "Lear 251 Delta Lima." "They're off course and under the glide slope." "Lear 251 Delta Lima, do you read?" "Come right to 020." "Delta Lima, do you read?" "It's seconds out." "Delta Lima, abort." "Delta Lima, abort!" "Delta Lima, abort!" "Abort!" "Run!" "LA 5x02 ♪ Impact Original air date on October 1, 2013" "How's he doing?" "Passed his evals, right?" "Oh, with flying colors." "So he's good to go." "Have you talked to Deeks?" "He appears to be screening my calls." "That's brazen." "Indeed." "Oh, my God." "Emoticon overload." "These guys from last night are kind of driving me nuts." "Which one?" "Jesse?" "Alex." "What is that?" "An ear of corn?" "A pickle?" "That actually looks like a-a..." "Oh, my God, here's Jesse." ""Good morning, beautiful."" ""Well, good morning to you," smiley face." "You heard from all three?" "Haven't you?" "Uh, no." "My, uh, phone's off." "I mean, who came up with this Groupster thing anyway?" "You know, three times the rejection doesn't seem psychologically sound." "One-on-one is bad enough." "I know, but Rose was so excited, and, you know, she really needs to get out and meet guys." "Yeah, that have a pulse." "It's supposed to be fun." "Three guys, three girls, no pressure, no expectations." "I'm sorry." "Three's a crowd." "Is that a heart or a butt?" "You know what?" "Here's an example." "Three bears, burgled." "Three little piggies, houses obliterated." "Three blind mice, tails cut off." "I am telling you, people start killing each other when the equation is three." "Wonder if Rose got any calls." "Yeah, only if one of them dropped dead." "You're bad." "Case on deck." "Oh, here we go." "Haircut?" "No, I think it's a new shirt." "Wait a second." "Are those...?" "Yep, I am wearing pants." "Sad face." "What, you got Old MacDonald's entire farm in there?" 
"Uh, it's just my mom." "Just her mom." "Well, well, well." "Look who's wearing big-boy pants." "Hetty got me these." "I mean, you still have the thongs, but it's a start." "Might as well be wearing a thong." "Stop whining." "Yes, ma'am." "Early this morning, a private jet, on its way from Washington, D.C., crashed at Burbank Airport." "There's no information as to why the plane went down, but at this early hour, it does appear no one on board survived." "What did air traffic control say?" "The tower lost contact with the jet upon approach." "The plane appeared to be on a collision course before veering off at the last moment, crashing." "Pilot error?" "Could have overshot the runway." "Maybe, or whoever was flying the plane had a clear target in mind." "Or maybe they missed a target." "It's who's on board that interests us." "Former Vice Admiral William Gardner." "He was a key player in the War on Terror." "Forced into early retirement ten months ago." "Gardner's uncensored criticism of the administration lost him his job and a seat on the Joint Chiefs of Staff." "What was he doing in L.A.?" "According to this, he was brokering a book deal." ""Unbroken Warrior, a riveting account of the truth behind the headlines."" "Sam and I will check out the airport." "Kensi..." "Uh, you'll be taking Kensi with you," "Mr. Callen." "Oh, great, my third-wheel status made official." "Not today." "Sam has an appointment." "Oh." "With who?" "That would be me." "Nate." "Good morning." "You want me to see another shrink?" "Uh, I don't think" "Mr. Getz is "another shrink."" "He knows you, your past." "And he knows I bounce back fast." "Even the most durable fabric wears out eventually." "Is that what you think," "Hetty?" "You think I'm worn out?" "I worry that you will be if you don't take care of yourself." "Sit down," "Mr. Hanna." "I don't know what more you want from me, Hetty." "I passed my physical my psych assessment." "Have I ever told you about the time I went blind?" 
"It was in Cambodia." "I was so committed to my assignment that I went for weeks existing on little more than insects and lemongrass." "So when, at last, my target presented itself," "I could barely see to complete my mission." "Vitamin A deficiency." "I take a multivitamin." "Oh, come on." "Sorry, Hetty." "I get it, but I wouldn't be here if I didn't think I could do a good job." "Then, your visit with Mr. Getz will be a mere formality." "How'd you get your sight back?" "Carrots." "Always eat your carrots, Mr. Hanna." "Sorry." "I know you were looking forward to getting back out with Sam." "Yeah, it's the same for you and Deeks." "Well, let's stay positive." "Good idea." "He won't return my calls." "Don't take it personally." "Sam's been staying close to home as well." "I couldn't get him to go to a Lakers game." "Yeah, I bought him a Cronut." "I had courtside..." "You bought him a what?" "A Cronut." "It's a croissant-doughnut hybrid;" "Deeks loves them." "I can only get them in this little bakery in New York City, and I left it on his doorstep, and it's still there." "He'll be okay." "Yeah." "They both will." "I'm gonna call Eric, see if he spoke to the I.T. guys." "NCIS." "Investigating the death of Vice Admiral Gardner." "Chief Howard," "National Transportation Safety Board." "How are you?" "Good luck with that." "Come again?" "Take a look; not much left." "So you haven't been able to find anything that helps explain the crash?" "Well, actually, we've pretty much found everything except the one thing that could help." "The black box?" "The wreck area is pretty small." "But we can't find the box anywhere." "Really?" "Really." "Yeah, you think you can get onto those ATC computers?" "They're not your average laptops, Kensi." "I'm a geek, not a god." "Eric, okay, let us know what else you find." "Fine." "I.T. guys confirmed the tower systems were operating properly." "Did the pilots give any indication that they were having problems?" 
"No, there was no response to any of the communication attempts." "Total radio silence." "I'm trying to get Eric to verify it, but something's got his panties in a twist." "Maybe it's his new pants." "Busy morning." "Is that your, uh, your mom again?" "A friend." "Friend of you and Kensi's?" "Yeah, just someone I hung out with last night." "With Kensi?" "On a date?" "I don't mean a date with Kensi." "You know what I mean." "Like, Kensi and the guy she's into, you and the guy you're into." "Not into him, not into any of them, and neither is Kensi." "Whoa, whoa, whoa, them?" "Yeah, okay, there's three of them and three of us, but we only went because Rose really needs to get out more, so..." "Oh, Rose came, too." "Yeah, it was like a girls' night." "Ah, girls' night with guys." "Who we are not into." "And yet you hung out with them all evening." "Hey, so should we move this interrogation into the boatshed?" "Oh, sorry, just, uh, curious how this whole thing works." "Why?" "'Cause you want to go on one?" "What?" "Oh, on a date?" "Whoa, with you?" "No, no." "With, like, other people." "Just, come on, not with me." "Oh." "No." "I mean, that's three times the heartache, right?" "For them, I mean." "Right." "Right." "right,ring on the questions." "Is that what you want?" "Me running through a list of questions, seeing if any of them trigger you?" "Trigger me?" "I'm a ticking time bomb?" "Well, is that how you feel?" "I feel fine." "It's just everybody acts like I'm gonna explode." "In what way?" "You know, tiptoeing around," ""watching for signs."" "Can you blame us?" "You went through quite an ordeal." "I've been through a lot worse." "That's not how trauma works, Sam." "Somebody might survive a tsunami no problem, only to be scarred for life by a trip to the dentist." "You might not want to use that example on Deeks." "Noted." "Look, Sam, you and I both know you're more than well-equipped to handle this, and you've got a great support network here." 
"The other night, Michelle and I got into it over whose turn it was to do the dishes." "That sounds normal." "She wouldn't let me do 'em." "That sound normal to you?" "It sounds like she cares." "And Callen got us these amazing tickets at the Lakers game." "Probably sold a kidney for 'em." "Did you go?" "No, it didn't feel right." "Because he's being too nice to you?" "Everybody is." "It's like they're trying to make me feel better when I'm fine." "Remember to relax and concentrate on the next exercise." "You must breathe very slowly." "Fill what is empty and empty what is full." "Fill what is empty and empty what is full." "Ah-hum-rumas-me." "I am the universe." "Ah-hum-rumas-me." "I am the universe." "My head is relaxing." "My head is relaxing." "My arms are relaxing." "My arms are relaxing." "My abdomen is relaxing." "My abdomen is relaxing." "Relax the buttocks." "Relax the buttocks." "What am I doing?" "Clench, release." "Clench, release." "Clench, release." "Clench, release." "I am one with the universe." "I am one with the universe." "God, Hetty, what are you doing?" "Well, I thought I'd brave the monsoon to come check on you." "Storm sounds-- supposed to make it easier to fall asleep, so..." "You having trouble sleeping?" "Yeah, I'd say I have a little case of insomnia." "Probably all that clenching and releasing." "Wow-wow-wow- wow-wow-wow-wow, you've been busy." "Well, when you don't sleep, you realize how many hours there are in the day you have to fill." "Well, if you're bored... perhaps you could come back to work." "I didn't even know that you were, um," "I didn't know you were coming." "If I knew..." "Do, do you want something?" "Do you want some milk?" "Oh, no, no, no, no, no, I-I can't stay long." "I just came to... see if your phone was working." "43... missed calls." "Fancy that." "Like you said, I've been busy." "So has Kensi." "For the past few weeks." "Without a partner." "I'm gonna need a decision soon." 
"Especially if I need to find a replacement." "Of course." "I'll leave you to your storm." "Hopefully it'll pass without too much damage." "Feel your troubles melting away as you drift off to a world abstract." "I'll show you, Hetty." "Oh!" "Hello." "Uh, did you ever hear of knocking?" "Sorry." "Last I checked, this was the burn room, not the locker room." "What are you doing?" "What's it look like I'm doing?" "Something really weird?" "Are those your pants?" "Uh... no." "Oh, my..." "God!" "Those are not my pants." "I do not own pants." "Those are Hetty's pants." "Interesting, and you were going to incinerate them?" "Do you have a death wish?" "I didn't have a choice." "Did you have an accident?" "Ew, no." "Those things are driving me nuts, they're so constrictive." "It's like my legs are trapped in a straitjacket." "Eric, they're pants." "People have been wearing them for thousands of years." "Oh, no, no, not my people." "The Beales of the Clan McBeale." "And now you're Scottish?" "As heather and haggis." "So why don't you wear a kilt?" "I do." "I did." "I used to." "Until this little incident with Hetty." "It's easy to forget how short she is." "Her eye line is lower than you think." "Yup, got it!" "Thanks." "Okay, I suggest you take your bag-o-pants and put them back on your body before Hetty finds out, or else it'll be your butt in the incinerator." "And there was a last-minute passenger added to the flight's manifest." "Jason Carter?" "How do I know that name?" "Jason Carter was a journalist." "He had written a number of high profile pieces on a variety of different topics." "He was even nominated for a Pulitzer for an article he wrote on hydraulic fracturing and the oil companies." "That's how I know him." "For the past year or so, he had been writing about the war in Afghanistan, embedding himself in several different units." "Hmm." "Maybe he was interviewing the vice admiral on the flight." "Reasonable assumption." "So I contacted his publisher." 
"Turns out, he was the ghost writer for the admiral's memoir." "Somebody didn't want this book being published." "Listen, Nate." "I wouldn't jeopardize Callen and the rest of the team if I didn't think I could hold my own." "Look, I appreciate that, Sam, and I believe you." "In fact, I know you put your partner and the rest of the team above your own safety." "Okay, then you know pretty much all there is to know." "Hetty doesn't think so." "Then, maybe you should go talk to Hetty." "You know, you're probably right." "Is that it?" "I'm only here because Hetty worries about you." "Nate..." "The only way to survive is to let go." "I keep a little something behind in case there's a chance to escape or attack, but... the rest of me is gone." "I see 'em wailing on that guy in the chair." "I can't help him." "When it's over, I reconnect." "And the only thing left are some scars." "I'm afraid one day I may drift off... and never reconnect." "Then what happens to the guy in the chair?" "Yeah?" "Special Agent Callen." "Special Agent Blye, NCIS." "We were wondering if we could take a look inside of Jason Carter's apartment." "Yeah, I guess." "I heard he died." "Shame." "Nice guy." "Good tenant." "This have anything to do with the fire?" "I'm sorry, the fire?" "In his apartment." "The place was gutted day before yesterday." "Fire marshal said it could take a week or more to determine what happened." "Had an insurance company out here this morning." "They wanted to take a look, too." "Not much left." "Fortunately nobody was home at the time." "This guy's had a bit of a run of bad luck, huh?" "This is Jason's girlfriend." "Julie, these are the agents from, um..." "NCIS." "His insurance company?" "No..." "Naval Criminal Investigative Service." "Oh." "Shall we?" "Yeah." "Need a hand?" "Oh." "Thank you." "I didn't even know Jason was on his way home until I saw the message on my phone." "What did the message say?" "Just that he was able to get a ride back to L.A. 
with the vice admiral, and was gonna use the time to interview him." "I fell asleep waiting up for him." "I kept expecting him to crawl into bed and kiss me good night." "When I woke up in the morning and he wasn't there," "I knew something was wrong." "And he wasn't answering his phone, so I came here and the fire department was just leaving." "I was standing here, already in shock when the police called to tell me" "Jason was killed in the plane crash." "Uh, four days ago." "He give you any indication that something might've been wrong?" "He seemed a little stressed maybe?" "Callen?" "Excuse me." "So, super wasn't kidding when he said the place was gutted." "Forensics will be able to tell us more, but the fire was hot and fast." "Pro job." "Julie?" "Do you know anybody that would've wanted to hurt Jason?" "No." "Some of his articles earned him hate mail." "Did he tell you what he was currently working on?" "No, he didn't talk much about work." "Did he ever give you anything to keep for him?" "No." "Why, do you think something he was working on played a part in his death?" "We're considering a lot of possibilities." "Please tell me your presence here is because of your excitement over a startling and revealing piece of valuable evidence that solves this case." "Well..." "You know what, I'll settle for a run-of-the-mill clue." "Actually, I have more bad news." "Is it worse than Jason Carter's apartment being torched?" "Virtually." "What is it?" "Virtually." "What?" "Virtually." "I think he's stuck." "I knew Hetty was a robot, but now him?" "No... virtually as in cyberspace." "As in somebody's been scrubbing through his electronic life." "As in they hijacked his cloud and wiped it clean an hour after he died." "This is some serious voodoo." "I'm talking black bag kung fu, ninja warrior assassin level hacking." "Do you have any idea what he's saying?" "I really don't, but I think it's bad." "Either that or his motherboard was fried." 
"So, who do we know with this level of cyber warcraft?" "I may have a guy." "This is the security cam footage from the airport." "And I think this is the black box." "Orange." "See?" "That's why we didn't find it, because somebody stole it." "Why would someone steal a plane's flight recorder?" "Million-dollar question." "No, the million-dollar question is why is it called the black box if it's always orange?" "If somebody wanted to find out what happened to the aircraft in its last few moments." "Or if they didn't want someone to know." "Then they would get away in the confusion." "Yup." "And I believe the same vehicle entered the airport just 24 hours earlier." "See?" "Plain, inconspicuous." "They probably parked it in a hangar and then re-skinned it as an emergency vehicle." "And then all they had to do was wait for the crash to happen." "So they show up on the scene as emergency workers, and when everyone else is busy, they walk off with the flight recorder." "Which also means that they knew that the crash was going to happen." "Which proves it was sabotage." "I'm running the hangar rental and owner lists now, but still no luck facial rec-ing these guys." "Okay, what about the vehicle?" "Working on that, too." "I doubt you're gonna find it." "These guys haven't left much to chance." "Someone deliberately crashes their plane in what was described as an aborted kamikaze flight into the tower." "There could have been a struggle on board." "Hijacking and a struggle would be the most logical explanation." "But all of these guys are top-drawer." "None of them have anything in their profile or background that would even remotely suggest that they could be responsible for this." "Which is where our black-box-stealing, mystery emergency workers come in." "Could they be responsible for the sabotage?" "It's not likely." "The plane originated in Washington." "They would have had to have sabotaged it a few days before to have been waiting here." 
"Which would suggest accomplices." "This is starting to sound like a conspiracy nut's fantasy." "Only this might be real." " Okay, good luck." " Thank you." "How was the zoo?" "Did you get a churro?" "That's funny." "It's a good one." "Hey, you sure you want him back?" "Can I have the rest of the week to think about it?" "Ha-ha." "So, just give me the greatest hits." "He's as stubborn as he is big." "I consider both of those to be assets in an agent." "And they don't make them any tougher." "That's also why he's here." "But the trauma and damage he experienced, it's cumulative, physically and psychologically, as you well know." "If it happens too many more times, he could reach a breaking point where he can't take it anymore." "You also know what can happen then." "Thank you, Nate." "Now could you turn your attentions to Detective Deeks?" "Anything I should know?" "I don't want him back if he's not the man he was." "So, what'd Nate have to say?" "Ah, same old shrink mumbo jumbo." "Yeah. "You ever have sexual fantasies about your mom?" "You ever wear her clothes when she's not home?"" "That sort of thing?" "What?" "The hell are you talking about?" "He asked you that kind of stuff before?" "Yeah, but..." "I mean, that's normal... shrink stuff." "Are you messing with me?" "I'm not messing with you." "Don't be messing with me on my first day back, man." "Anyone hear an explosion prior to the plane crash?" "No... and no sign of any sort of explosive devices from what I can see." "This all happened from this plane hitting the ground at several hundred miles per hour." "With a crew with a flawless record." "In a plane that was just as safe." "What do you got?" "Looks like it was a digital recorder in its former life." "Well?" "I'm not sure." "I mean, it's digital, so it should be there." "It's just a matter of the damage." "What else...?" "Afghanistan... never..." "conquered by a foreign army." "The Russians learned..." "the hard..." "Absolutely not... 
not at war with Afghanistan." "...the country trying to weed out persistent terrorists, bastards... to Pakistan." "...semantics?" "What would it... take to put...?" "Money... money and more troops... private contractors..." "billions." "Whoa, what was that when he said "money, private contractors"?" "Can you dig it out more?" "I can try." "Same... said of... military." "Our people... properly trained." "Theirs are not." "They're a... risk that re..." "lots... cover-ups." "Sounded like he said "cover-ups."" "...risk that re..." "lots... cover-ups." "Hell... proof of what..." "considered war crimes." "That's it." "He was talking about war crimes being committed." "Without the entire recording, we can only ever guess what was actually said, but that's what it sounded like to me, too." "Well, if you had proof that Americans with war contracts committed atrocities overseas while employed by the U.S. government," "I'd say there are those who would kill to keep that buried." "You need to hear this." "Let me guess-- you're stuck in traffic." "No." "Uh, hi, Hetty." "No, I-I thought you were going to be Deeks." "No, he hasn't showed yet." "In fact, he's not answering his phone calls or e-mails." "An amateur plane spotter and his buddy sent this to the Burbank Police Department." "Yeah, they were parked here, listening in on the tower, when they picked up on this from a different radio frequency." "Lear 251 Delta Lima, this is Burbank tower." "Do you read?" "Tower said they never responded." "They thought they were responding to the tower, but they actually weren't." "Burbank tower." "Lear 251 Delta Lima." "We have you loud and clear." "Little foggy down there." "That's the pilot of Admiral Gardner's plane." "Delta Lima, we're still above minimums here, unbroken with a 300-foot ceiling." "If that's not the tower, then who the hell is it?" "Roger." "Glad to hear." "I don't know, I ran it" "Burbank tower, I may have some instrument issues." 
"Altimeter isn't matching my other displays." "Computer's got me at a thousand feet and altimeter is reading 750." "Altimeter setting is 2886." "We have you descending through 900 feet." "Glide slope is spot-on." "Come left to 024 degrees." "You're cleared to land." "Wilco." "024 and cleared to land." "Look out, look out, pull up, pull up!" "We're going down!" "Three seconds later they crashed." "Somebody intercepted the tower's radio system as well as that of the plane." "And were able to sabotage the jet's instrument system." "Like I said... serious voodoo." "So, an outspoken navy admiral is speaking with a journalist known for controversial stories." "And they want to keep him quiet so they kill the admiral and the writer..." "Making it look like a plane crash..." "Wiping out any evidence of what the journalist was doing." "And we have nothing but a few seconds of interview." "What if the journalist gave his girlfriend copies of his work for safekeeping?" "But he didn't." "Maybe those responsible don't know that." "More evidence is surfacing about the fiery plane crash in Burbank yesterday that killed retired Navy Vice Admiral Gardner." "Another victim has been identified as controversial journalist Jason Carter." "According to sources close to Carter, he was worried about his own safety recently because of the story he was working on, excerpts of which have been released by his girlfriend." "The following conversation between Carter and the vice admiral is believed to have been recorded just moments before the fatal crash." "Our people are properly trained." "Theirs are not." "They're a security risk that resulted in lots of cover-ups." "Hell, we've even seen proof of what would be considered war crimes." "The investigation into the crash that killed" "Vice Admiral Gardner and Jason Carter is ongoing, and this latest development suggests this story is far from over." "Will this be enough to draw them out?" "Whoever's responsible went to extremes to bury this." 
"I doubt they have any intentions of stopping now." "Our people are properly trained." "Theirs are not." "They're a security risk that resulted in lots of cover-ups." "Hell, we've even seen proof of what would be considered war crimes." "What else do you want to know?" "Afghanistan has never been conquered by a foreign army." "The Russians learned that the hard way." "What are we waiting for?" "Let's go." "Showtime." "You put a surveillance camera inside a garden gnome?" "Yeah, we call it the Hetty-cam." "Hey now, these guys are real pros." "Wouldn't you say so, Hetty?" "I could say many things, many, many." "Look, they're picking the lock." "Absolutely not the same situation." "We're not at war with Afghanistan." "We're in the country trying to weed out persistent terrorists, bastards who migrated to Pakistan." "Isn't that just semantics?" "What would it really take to put an end to...?" "Money." "It'd take more money and more troops, but we're inundated with private contractors who waste billions." "They call Washington corrupt;" "these bastards take the cake." "So now you're talking about multinational corporations." "I've had my flu shot." "What the...?" "Kitty Corner?" "I only read it for the articles." "Federal agent." "Don't even think about it." "What'd I just say?" "You're still thinking about it." ""What if I distract him with the magazine, dive for cover, draw, and fire?"" "That might work, but they'd fill you full of holes." "Good call." "Wish you were out there?" "No such thing as a bad day in the water." "I came." "Even had my hand on the door." "I don't know what happened," "I just couldn't come in." "Pretty sure you'd feel better if we talked." "Pretty sure I wouldn't." "Look... even though I'm here at Hetty's request, and... well, I've got my own opinions, the only one who matters in all this is you." "I have no agenda beyond making sure you're in the best place you can be right now." 
"And how can you possibly know what that place is when I don't even know?" "Perspective?" "Seldom do we know what we need for ourselves." "What I need... is sleep." "Why do you think you can't sleep?" "Because every time I close my eyes, my mind just keeps running." "With what?" "All sorts of stuff, man." "The abduction?" "Yes, the abduction." "Torture?" "The abduction... torture, when I was shot... falling off my bike when I was eight years old, stepping on a bee, nursery rhymes, grocery lists, infomercials." "It's like someone took all my memories and just put 'em into a blender." "You went through a traumatic experience." "Yeah, but this is not my first traumatic experience." "No, but maybe something about this one had more impact." "Your brain could be trying to make sense of what happened by comparing it to past experiences, but you got nothing that comes close, so it's working a little harder to resolve it." "Okay, so how long's this supposed to last?" "I don't think I have a definitive answer for that, but the more you talk about it out here, the less you're gonna have to work on it in here." "So, what, in the meantime," "I just walk around with the mind of a schizophrenic?" "I don't think you have to worry about being a..." "You know, it's funny, 'cause I already get the Shaggy jokes and how I'm halfway to homeless." "You know, what's crazy is that I see these guys and I hear them talking to themselves and it's scaring the hell out of me because if I were to say what's going on in my mind," "it wouldn't be that different." "Well, that's the real difference." "You're worried about it." "I'd be more concerned if you weren't." "So I'm not crazy?" "Not yet." "If you don't start getting some sleep, you're gonna start to act and feel like it." "What about Kensi?" "What about her?" "You two obviously have something special." "Who told you that?" "You're partners." "That's a special relationship." "Look at Callen and Sam." "Right, of course." 
"What is it about your partnership that's... unique?" "What do you mean?" "What do you mean, "unique"?" "Different from Callen and Sam or the others." "What's the one ingredient you'd say makes your partnership distinct from the rest?" "I don't know." "Well... once you can answer that truthfully to yourself, everything else will become much clearer." "They're not talking." "Lawyered up before we could zip-tie 'em." "Both are former Special Forces." "Both work for D7-- it's a private company with a couple of billion in government contracts supplying support services in hot spots around the world." "Yeah, we've already called in for a search warrant." "The evidence we'd need will be long gone by then, if it isn't already." "We have to work with what we have." "Well, the guys in the boatshed are small fish." "They're well-trained, they're well-funded, but they didn't order this." "That was somebody higher up." "Somebody who has access to technology that allows them to intercept air traffic control and sabotage a private jet's in-flight computer." "Yeah, Hetty, this is a lot bigger than we initially thought." "I mean, we're talking war crimes by private contractors." "We'll get them, but today you caught two small fish, and sometimes small fish are the most perfect bait for big fish." "This is far from over." "You did well today." "Eric even managed to keep his pants on." "You can drop them off at wardrobe." "I can go back to wearing shorts?" "For a while." "It's a process." "Uh-huh." "Hmm." "I want a job where it's an accomplishment to leave my pants on." "Hmm." "I have sensitive thighs." "Oh..." "He has sensitive thighs." " Yeah." " Hey, come on, guys." "Guys, it's not funny-- it's like restless legs syndrome times a zillion." "Good luck with that." "I'm dead." "I'm the one who cut them." "I couldn't bear to see you suffer." "What are we gonna do?" "Looks like we're going shopping, Beale." "So... what's the good word on our Mr. Deeks?" "He's hurting." 
"Can he return to work?" "Yes." "Whether he will or not is a question for him." "He's not sleeping." "He can't work through this if he doesn't get some rest." "And his partner?" "It's a complicated relationship." "Aren't they all?" "You asked me if he could come back to work." "And now I'm asking you about his relationship with Kensi." "Are you playing semantics with me?" "He's very close..." "to his partner." "Too close to return?" "I'm co" " I'm co." "Oh." "Hi." "Hey." "I've been calling." "Yeah, I think I must have had my phone off." "Guess what reopened." "Is that Yummy Yummy Heart Attack?" "Yep, three Fs from the health department and still going strong." "Yeah, if "F" stood for "fabulous."" "Did you get the, uh, Drunken Pigs?" "With extra kimchi-- you're welcome." "I think I just felt a shiver." "Want me to, uh, grab something to drink?" "Oh, no, no, no, no, I have got you covered, my friend." "Wow, one day you are going to make somebody the perfect personal assistant." "Got a fork?" "I got a spork." "Yeah, yeah, you do." "All right." "Oh, I have been waiting for this-- the smell in the car..." "Oh, are you kidding me?" "I forgot how good this was." "This is so good." "You think it's bad for us?" "Hmm, ah, you only live once, ha." "Yeah, probably a lot shorter when you eat like this." "You got napkins?" "Uh-huh." "Is that dessert?" "Uh, no, it's nothing." "You got me a Cronut?" "Um, I did, but that was a while ago, and I left it at your doorstep, so that's old, don't eat it." "N-N-N-No, d-don't throw it away." "It's the thought that counts-- I'm gonna frame this thing." "You're so weird." "I mean, look at that." "It's like America and France made slow, sweet love and then had a pastry baby." "Sure you don't want a bite?" "No, seriously it's been out there for a while." "I'll probably still eat that." "Okay." "So, Burnt Offerings is on at 11:00." "It's the bottom of the eighth..." "I don't really know if I'm up for a movie." 
"Oh, yes, you are because I cannot watch this alone." "It is rated triple-B." "What's that?" "Blood, breasts, and beasts." "What was the last one?" "Beasts." "Well, you know how I like big beasts." "Either way, you're watching it with me." "I thought you loved horror movies." "I do, just not by myself." "Watch it with me." "Watch it with me?" "Watch it with me." "You won me over with the pastry, baby." "Awesome, okay." "Want to use my cat pillow?" "It's pink, very masculine." "So..." "Oliver Reed and Karen Black move into this mansion with their son and their elderly aunt, played by Bette Davis, who I absolutely love, and then Burgess Meredith, who played Mickey in the Rocky films-- it was so sad when he dies," "oh, my God, it's the best scene ever-- um, and his sister play the caretakers of this mansion, and then their mother-- she's like an elderly recluse in the attic, and then flying monkeys from The Wizard of Oz show up" "with guns, and there's a big shoot-out." "Mm, those monkeys are scary." "What happens next?" "It's a love story." "What?"
/*
 * Copyright 2014-2017 Red Hat, Inc. and/or its affiliates
 * and other contributors as indicated by the @author tags.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.hawkular.metrics.api.jaxrs.handler;

import static javax.ws.rs.core.MediaType.APPLICATION_JSON;
import static org.hawkular.metrics.api.jaxrs.util.ApiUtils.badRequest;
import static org.hawkular.metrics.api.jaxrs.util.ApiUtils.noContent;
import static org.hawkular.metrics.api.jaxrs.util.ApiUtils.serverError;
import static org.hawkular.metrics.model.MetricType.AVAILABILITY;

import java.net.URI;
import java.util.List;
import java.util.Map;
import java.util.regex.PatternSyntaxException;

import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.Consumes;
import javax.ws.rs.DELETE;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

import org.hawkular.metrics.api.jaxrs.QueryRequest;
import org.hawkular.metrics.api.jaxrs.handler.observer.MetricCreatedObserver;
import org.hawkular.metrics.api.jaxrs.handler.observer.ResultSetObserver;
import org.hawkular.metrics.api.jaxrs.handler.template.IMetricsHandler;
import org.hawkular.metrics.api.jaxrs.param.TimeAndBucketParams;
import org.hawkular.metrics.api.jaxrs.param.TimeAndSortParams;
import org.hawkular.metrics.api.jaxrs.util.ApiUtils;
import org.hawkular.metrics.api.jaxrs.util.Logged;
import org.hawkular.metrics.core.service.Functions;
import org.hawkular.metrics.core.service.Order;
import org.hawkular.metrics.core.service.transformers.MinMaxTimestampTransformer;
import org.hawkular.metrics.model.ApiError;
import org.hawkular.metrics.model.AvailabilityBucketPoint;
import org.hawkular.metrics.model.AvailabilityType;
import org.hawkular.metrics.model.Buckets;
import org.hawkular.metrics.model.DataPoint;
import org.hawkular.metrics.model.Metric;
import org.hawkular.metrics.model.MetricId;
import org.hawkular.metrics.model.MetricType;
import org.hawkular.metrics.model.param.BucketConfig;
import org.hawkular.metrics.model.param.Duration;
import org.hawkular.metrics.model.param.TagNames;
import org.hawkular.metrics.model.param.Tags;
import org.hawkular.metrics.model.param.TimeRange;
import org.jboss.resteasy.annotations.GZIP;

import com.fasterxml.jackson.databind.annotation.JsonDeserialize;

import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import io.swagger.annotations.ApiParam;
import io.swagger.annotations.ApiResponse;
import io.swagger.annotations.ApiResponses;
import rx.Observable;
import rx.schedulers.Schedulers;

/**
 * @author Stefan Negrea
 *
 */
@Path("/availability")
@Consumes(APPLICATION_JSON)
@Produces(APPLICATION_JSON)
@GZIP
@Api(tags = "Availability")
@ApplicationScoped
@Logged
public class AvailabilityHandler extends MetricsServiceHandler implements IMetricsHandler<AvailabilityType> {

    @POST
    @Path("/")
    @ApiOperation(value = "Create availability metric.", notes = "Same notes as creating gauge metric apply.")
    @ApiResponses(value = {
            @ApiResponse(code = 201, message = "Metric created successfully"),
            @ApiResponse(code = 400, message = "Missing or invalid payload", response = ApiError.class),
            @ApiResponse(code = 409, message = "Availability metric with given id already exists",
                    response = ApiError.class),
            @ApiResponse(code = 500, message = "Metric creation failed due to an unexpected error",
                    response = ApiError.class)
    })
    public void createMetric(
            @Suspended final AsyncResponse asyncResponse,
            @ApiParam(required = true) Metric<AvailabilityType> metric,
            @ApiParam(value = "Overwrite previously created metric configuration if it exists. "
                    + "Only data retention and tags are overwritten; existing data points are unaffected. "
                    + "Defaults to false.", required = false)
            @DefaultValue("false") @QueryParam("overwrite") Boolean overwrite,
            @Context UriInfo uriInfo
    ) {
        if (metric.getType() != null && MetricType.UNDEFINED != metric.getType()
                && MetricType.AVAILABILITY != metric.getType()) {
            asyncResponse.resume(badRequest(new ApiError("Metric type does not match "
                    + MetricType.AVAILABILITY.getText())));
            // The response has already been resumed with 400; stop here instead of creating the metric anyway.
            return;
        }
        URI location = uriInfo.getBaseUriBuilder().path("/availability/{id}").build(metric.getId());
        metric = new Metric<>(
                new MetricId<>(getTenant(), AVAILABILITY, metric.getMetricId().getName()),
                metric.getTags(), metric.getDataRetention());
        metricsService.createMetric(metric, overwrite).subscribe(new MetricCreatedObserver(asyncResponse, location));
    }

    @GET
    @Path("/")
    @ApiOperation(value = "Find tenant's metric definitions.", notes = "Does not include any metric values. ",
            response = Metric.class, responseContainer = "List")
    @ApiResponses(value = {
            @ApiResponse(code = 200, message = "Successfully retrieved at least one metric definition."),
            @ApiResponse(code = 204, message = "No metrics found."),
            @ApiResponse(code = 400, message = "Invalid type parameter type.", response = ApiError.class),
            @ApiResponse(code = 500, message = "Failed to retrieve metrics due to unexpected error.",
                    response = ApiError.class)
    })
    public void getMetrics(
            @Suspended AsyncResponse asyncResponse,
            @ApiParam(value = "List of tags filters", required = false) @QueryParam("tags") String tags,
            @ApiParam(value = "Fetch min and max timestamps of available datapoints")
            @DefaultValue("false") @QueryParam("timestamps") Boolean fetchTimestamps) {

        Observable<Metric<AvailabilityType>> metricObservable = null;
        if (tags != null) {
            metricObservable = metricsService.findMetricIdentifiersWithFilters(getTenant(), AVAILABILITY, tags)
                    .flatMap(metricsService::findMetric);
        } else {
            metricObservable = metricsService.findMetrics(getTenant(), AVAILABILITY);
        }

        if (fetchTimestamps) {
            metricObservable = metricObservable
                    .compose(new MinMaxTimestampTransformer<>(metricsService));
        }

        metricObservable
                .toList()
                .map(ApiUtils::collectionToResponse)
                .subscribe(asyncResponse::resume, t -> {
                    if (t instanceof PatternSyntaxException) {
                        asyncResponse.resume(badRequest(t));
                    } else {
                        asyncResponse.resume(serverError(t));
                    }
                });
    }

    @GET
    @Path("/{id}")
    @ApiOperation(value = "Retrieve single metric definition.", response = Metric.class)
    @ApiResponses(value = {
            @ApiResponse(code = 200, message = "Metric's definition was successfully retrieved."),
            @ApiResponse(code = 204, message = "Query was successful, but no metrics definition is set."),
            @ApiResponse(code = 500, message = "Unexpected error occurred while fetching metric's definition.",
                    response = ApiError.class)
    })
    public void getMetric(@Suspended final AsyncResponse asyncResponse, @PathParam("id") String id) {
        metricsService.findMetric(new MetricId<>(getTenant(), AVAILABILITY, id))
                .compose(new MinMaxTimestampTransformer<>(metricsService))
                .map(metric -> Response.ok(metric).build())
                .switchIfEmpty(Observable.just(noContent()))
                .subscribe(asyncResponse::resume, t -> asyncResponse.resume(serverError(t)));
    }

    @DELETE
    @Path("/{id}")
    @ApiOperation(value = "Deletes the metric and associated uncompressed data points, and updates internal indexes."
            + " Note: compressed data will not be deleted immediately. It is deleted as part of the normal"
            + " data expiration as defined by the data retention settings. Consequently, compressed data will"
            + " be accessible until it automatically expires.")
    @ApiResponses(value = {
            @ApiResponse(code = 200, message = "Metric deletion was successful."),
            @ApiResponse(code = 500, message = "Unexpected error occurred trying to delete the metric.")
    })
    public void deleteMetric(@Suspended AsyncResponse asyncResponse, @PathParam("id") String id) {
        MetricId<AvailabilityType> metric = new MetricId<>(getTenant(), AVAILABILITY, id);
        metricsService.deleteMetric(metric).subscribe(new ResultSetObserver(asyncResponse));
    }

    @GET
    @Path("/tags/{tags}")
    @ApiOperation(value = "Retrieve availability type's tag values", response = Map.class)
    @ApiResponses(value = {
            @ApiResponse(code = 200, message = "Tags successfully retrieved."),
            @ApiResponse(code = 204, message = "No matching tags were found"),
            @ApiResponse(code = 500, message = "Unexpected error occurred while fetching tags.",
                    response = ApiError.class)
    })
    public void getTags(@Suspended final AsyncResponse asyncResponse,
            @ApiParam("Tag query") @PathParam("tags") Tags tags) {
        metricsService.getTagValues(getTenant(), AVAILABILITY, tags.getTags())
                .map(ApiUtils::mapToResponse)
                .subscribe(asyncResponse::resume, t -> asyncResponse.resume(ApiUtils.serverError(t)));
    }

    @GET
    @Path("/{id}/tags")
    @ApiOperation(value = "Retrieve tags associated with the metric definition.", response = String.class,
            responseContainer = "Map")
    @ApiResponses(value = {
            @ApiResponse(code = 200, message = "Metric's tags were successfully retrieved."),
            @ApiResponse(code = 204, message = "Query was successful, but no metrics were found."),
            @ApiResponse(code = 500, message = "Unexpected error occurred while fetching metric's tags.",
                    response = ApiError.class)
    })
    public void getMetricTags(
            @Suspended final AsyncResponse asyncResponse,
            @PathParam("id") String id
    ) {
        metricsService.getMetricTags(new MetricId<>(getTenant(), AVAILABILITY, id))
                .map(ApiUtils::mapToResponse)
                .subscribe(asyncResponse::resume, t -> asyncResponse.resume(ApiUtils.serverError(t)));
    }

    @PUT
    @Path("/{id}/tags")
    @ApiOperation(value = "Update tags associated with the metric definition.")
    @ApiResponses(value = {
            @ApiResponse(code = 200, message = "Metric's tags were successfully updated."),
            @ApiResponse(code = 500, message = "Unexpected error occurred while updating metric's tags.",
                    response = ApiError.class)
    })
    public void updateMetricTags(
            @Suspended final AsyncResponse asyncResponse,
            @PathParam("id") String id,
            @ApiParam(required = true) Map<String, String> tags
    ) {
        Metric<AvailabilityType> metric = new Metric<>(new MetricId<>(getTenant(), AVAILABILITY, id));
        metricsService.addTags(metric, tags).subscribe(new ResultSetObserver(asyncResponse));
    }

    @DELETE
    @Path("/{id}/tags/{tags}")
    @ApiOperation(value = "Delete tags associated with the metric definition.")
    @ApiResponses(value = {
            @ApiResponse(code = 200, message = "Metric's tags were successfully deleted."),
            @ApiResponse(code = 400, message = "Invalid tags", response = ApiError.class),
            @ApiResponse(code = 500, message = "Unexpected error occurred while trying to delete metric's tags.",
                    response = ApiError.class)
    })
    public void deleteMetricTags(
            @Suspended final AsyncResponse asyncResponse,
            @PathParam("id") String id,
            @ApiParam(value = "Tag names", allowableValues = "Comma-separated list of tag names")
            @PathParam("tags") TagNames tags
    ) {
        Metric<AvailabilityType> metric = new Metric<>(new MetricId<>(getTenant(), AVAILABILITY, id));
        metricsService.deleteTags(metric, tags.getNames()).subscribe(new ResultSetObserver(asyncResponse));
    }

    @POST
    @Path("/{id}/raw")
    @ApiOperation(value = "Add data for a single availability metric.")
    @ApiResponses(value = {
            @ApiResponse(code = 200, message = "Adding data succeeded."),
            @ApiResponse(code = 400, message = "Missing or invalid payload", response = ApiError.class),
            @ApiResponse(code = 500, message = "Unexpected error happened while storing the data",
                    response = ApiError.class)
    })
    public void addMetricData(
            @Suspended final AsyncResponse asyncResponse,
            @PathParam("id") String id,
            @ApiParam(value = "List of availability datapoints", required = true)
            List<DataPoint<AvailabilityType>> data
    ) {
        Observable<Metric<AvailabilityType>> metrics = Functions.dataPointToObservable(getTenant(), id, data,
                AVAILABILITY);
        Observable<Void> observable = metricsService.addDataPoints(AVAILABILITY, metrics);
        observable.subscribe(new ResultSetObserver(asyncResponse));
    }

    @Deprecated
    @POST
    @Path("/{id}/data")
    @ApiOperation(value = "Deprecated. Please use /raw endpoint.")
    public void deprecatedAddAvailabilityForMetric(
            @Suspended final AsyncResponse asyncResponse,
            @PathParam("id") String id,
            @ApiParam(value = "List of availability datapoints", required = true)
            List<DataPoint<AvailabilityType>> data) {
        addMetricData(asyncResponse, id, data);
    }

    @POST
    @Path("/raw")
    @ApiOperation(value = "Add metric data for multiple availability metrics in a single call.")
    @ApiResponses(value = {
            @ApiResponse(code = 200, message = "Adding data succeeded."),
            @ApiResponse(code = 400, message = "Missing or invalid payload", response = ApiError.class),
            @ApiResponse(code = 500, message = "Unexpected error happened while storing the data",
                    response = ApiError.class)
    })
    public void addData(
            @Suspended final AsyncResponse asyncResponse,
            @ApiParam(value = "List of availability metrics", required = true) @JsonDeserialize()
            List<Metric<AvailabilityType>> availabilities
    ) {
        Observable<Metric<AvailabilityType>> metrics = Functions.metricToObservable(getTenant(), availabilities,
                AVAILABILITY);
        Observable<Void> observable = metricsService.addDataPoints(AVAILABILITY, metrics);
        observable.subscribe(new ResultSetObserver(asyncResponse));
    }

    @POST
    @Path("/raw/query")
    @ApiOperation(value = "Fetch raw data points for multiple metrics. This endpoint is experimental and may "
            + "undergo non-backwards compatible changes in future releases.")
    @ApiResponses(value = {
            @ApiResponse(code = 200, message = "Successfully fetched metric data points."),
            @ApiResponse(code = 204, message = "Query was successful, but no data was found."),
            @ApiResponse(code = 400, message = "No metric ids are specified", response = ApiError.class),
            @ApiResponse(code = 500, message = "Unexpected error occurred while fetching metric data.",
                    response = ApiError.class)
    })
    @Override
    public void getData(
            @Suspended AsyncResponse asyncResponse,
            @ApiParam(required = true, value = "Query parameters that minimally must include a list of metric ids or "
                    + "tags.
The standard start, end, order, and limit query parameters are supported as well.") QueryRequest query) { findMetricsByNameOrTag(query.getIds(), query.getTags(), AVAILABILITY) .toList() .flatMap(metricIds -> TimeAndSortParams.<AvailabilityType>deferredBuilder(query.getStart(), query.getEnd()) .fromEarliest(query.getFromEarliest(), metricIds, this::findTimeRange) .sortOptions(query.getLimit(), query.getOrder()) .toObservable() .flatMap(p -> metricsService.findDataPoints(metricIds, p.getTimeRange().getStart(), p.getTimeRange().getEnd(), p.getLimit(), p.getOrder()) .observeOn(Schedulers.io()))) .subscribe(createNamedDataPointObserver(asyncResponse, AVAILABILITY)); } @Deprecated @POST @Path("/data") @ApiOperation(value = "Deprecated. Please use /raw endpoint.") public void deprecatedAddAvailabilityData( @Suspended final AsyncResponse asyncResponse, @ApiParam(value = "List of availability metrics", required = true) @JsonDeserialize() List<Metric<AvailabilityType>> availabilities ) { addData(asyncResponse, availabilities); } @Deprecated @GET @Path("/{id}/data") @ApiOperation(value = "Deprecated. 
Please use /raw or /stats endpoints.", response = DataPoint.class, responseContainer = "List") public void deprecatedFindAvailabilityData( @Suspended AsyncResponse asyncResponse, @PathParam("id") String id, @ApiParam(value = "Defaults to now - 8 hours") @QueryParam("start") String start, @ApiParam(value = "Defaults to now") @QueryParam("end") String end, @ApiParam(value = "Total number of buckets") @QueryParam("buckets") Integer bucketsCount, @ApiParam(value = "Bucket duration") @QueryParam("bucketDuration") Duration bucketDuration, @ApiParam(value = "Set to true to return only distinct, contiguous values") @QueryParam("distinct") @DefaultValue("false") Boolean distinct, @ApiParam(value = "Limit the number of data points returned") @QueryParam("limit") Integer limit, @ApiParam(value = "Data point sort order, based on timestamp") @QueryParam("order") Order order ) { if ((bucketsCount != null || bucketDuration != null) && (limit != null || order != null)) { asyncResponse.resume(badRequest(new ApiError("Limit and order cannot be used with bucketed results"))); return; } TimeRange timeRange = new TimeRange(start, end); if (!timeRange.isValid()) { asyncResponse.resume(badRequest(new ApiError(timeRange.getProblem()))); return; } BucketConfig bucketConfig = new BucketConfig(bucketsCount, bucketDuration, timeRange); if (!bucketConfig.isValid()) { asyncResponse.resume(badRequest(new ApiError(bucketConfig.getProblem()))); return; } MetricId<AvailabilityType> metricId = new MetricId<>(getTenant(), AVAILABILITY, id); Buckets buckets = bucketConfig.getBuckets(); if (buckets == null) { if (limit == null) { limit = 0; } if (order == null) { order = Order.defaultValue(limit, start, end); } metricsService .findAvailabilityData(metricId, timeRange.getStart(), timeRange.getEnd(), distinct, limit, order) .toList() .map(ApiUtils::collectionToResponse) .subscribe(asyncResponse::resume, t -> asyncResponse.resume(serverError(t))); } else { metricsService.findAvailabilityStats(metricId, 
timeRange.getStart(), timeRange.getEnd(), buckets) .map(ApiUtils::collectionToResponse) .subscribe(asyncResponse::resume, t -> asyncResponse.resume(serverError(t))); } } @GET @Path("/{id}/raw") @ApiOperation(value = "Retrieve availability data.", response = DataPoint.class, responseContainer = "List") @ApiResponses(value = { @ApiResponse(code = 200, message = "Successfully fetched availability data."), @ApiResponse(code = 204, message = "No availability data was found."), @ApiResponse(code = 400, message = "buckets or bucketDuration parameter is invalid, or both are used.", response = ApiError.class), @ApiResponse(code = 500, message = "Unexpected error occurred while fetching availability data.", response = ApiError.class) }) public void getMetricData( @Suspended AsyncResponse asyncResponse, @PathParam("id") String id, @ApiParam(value = "Defaults to now - 8 hours") @QueryParam("start") String start, @ApiParam(value = "Defaults to now") @QueryParam("end") String end, @ApiParam(value = "Use data from earliest received, subject to retention period") @QueryParam("fromEarliest") Boolean fromEarliest, @ApiParam(value = "Set to true to return only distinct, contiguous values") @QueryParam("distinct") @DefaultValue("false") Boolean distinct, @ApiParam(value = "Limit the number of data points returned") @QueryParam("limit") Integer limit, @ApiParam(value = "Data point sort order, based on timestamp") @QueryParam("order") Order order ) { MetricId<AvailabilityType> metricId = new MetricId<>(getTenant(), AVAILABILITY, id); TimeAndSortParams.<AvailabilityType>deferredBuilder(start, end) .fromEarliest(fromEarliest, metricId, this::findTimeRange) .sortOptions(limit, order) .toObservable() .flatMap(p -> metricsService.findAvailabilityData(metricId, p.getTimeRange().getStart(), p .getTimeRange().getEnd(), distinct, p.getLimit(), p.getOrder())) .toList() .map(ApiUtils::collectionToResponse) .subscribe(asyncResponse::resume, t -> asyncResponse.resume(ApiUtils.error(t))); } @GET 
@Path("/{id}/stats") @ApiOperation(value = "Retrieve availability data.", notes = "Based on buckets or bucketDuration query parameter" + ", the time range between start and end will be divided in buckets of equal duration, and " + "availability statistics will be computed for each bucket.", response = AvailabilityBucketPoint.class, responseContainer = "List") @ApiResponses(value = { @ApiResponse(code = 200, message = "Successfully fetched availability data."), @ApiResponse(code = 204, message = "No availability data was found."), @ApiResponse(code = 400, message = "buckets or bucketDuration parameter is invalid, or both are used.", response = ApiError.class), @ApiResponse(code = 500, message = "Unexpected error occurred while fetching availability data.", response = ApiError.class) }) public void getMetricStats( @Suspended AsyncResponse asyncResponse, @PathParam("id") String id, @ApiParam(value = "Defaults to now - 8 hours") @QueryParam("start") String start, @ApiParam(value = "Defaults to now") @QueryParam("end") String end, @ApiParam(value = "Use data from earliest received, subject to retention period") @QueryParam("fromEarliest") Boolean fromEarliest, @ApiParam(value = "Total number of buckets") @QueryParam("buckets") Integer bucketsCount, @ApiParam(value = "Bucket duration") @QueryParam("bucketDuration") Duration bucketDuration) { MetricId<AvailabilityType> metricId = new MetricId<>(getTenant(), AVAILABILITY, id); TimeAndBucketParams.<AvailabilityType>deferredBuilder(start, end) .fromEarliest(fromEarliest, metricId, this::findTimeRange) .bucketConfig(bucketsCount, bucketDuration) .toObservable() .flatMap(p -> metricsService.findAvailabilityStats(metricId, p.getTimeRange().getStart(), p.getTimeRange().getEnd(), p.getBucketConfig().getBuckets())) .flatMap(Observable::from) .skipWhile(bucket -> Boolean.TRUE.equals(fromEarliest) && bucket.isEmpty()) .toList() .map(ApiUtils::collectionToResponse) .subscribe(asyncResponse::resume, t -> 
asyncResponse.resume(ApiUtils.error(t))); } @GET @Path("/tags/{tags}/raw") @ApiOperation(value = "Retrieve availability data on multiple metrics by tags.", response = DataPoint.class, responseContainer = "List") @ApiResponses(value = { @ApiResponse(code = 200, message = "Successfully fetched metric data points."), @ApiResponse(code = 204, message = "Query was successful, but no data was found."), @ApiResponse(code = 400, message = "No metric ids are specified", response = ApiError.class), @ApiResponse(code = 500, message = "Unexpected error occurred while fetching metric data.", response = ApiError.class) }) public void getRawDataByTag( @Suspended AsyncResponse asyncResponse, @PathParam("tags") String tags, @ApiParam(value = "Defaults to now - 8 hours") @QueryParam("start") String start, @ApiParam(value = "Defaults to now") @QueryParam("end") String end, @ApiParam(value = "Use data from earliest received, subject to retention period") @QueryParam("fromEarliest") Boolean fromEarliest, @ApiParam(value = "Limit the number of data points returned") @QueryParam("limit") Integer limit, @ApiParam(value = "Data point sort order, based on timestamp") @QueryParam("order") Order order ) { metricsService.findMetricIdentifiersWithFilters(getTenant(), AVAILABILITY, tags) .toList() .flatMap(metricIds -> TimeAndSortParams.<AvailabilityType>deferredBuilder(start, end) .fromEarliest(fromEarliest, metricIds, this::findTimeRange) .sortOptions(limit, order) .toObservable() .flatMap(p -> metricsService.findDataPoints(metricIds, p.getTimeRange().getStart(), p.getTimeRange().getEnd(), p.getLimit(), p.getOrder()) .observeOn(Schedulers.io()))) .subscribe(createNamedDataPointObserver(asyncResponse, AVAILABILITY)); } }
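The deprecated `/{id}/data` handler above rejects requests that combine bucketing parameters (`buckets`, `bucketDuration`) with raw-query options (`limit`, `order`) before touching the metrics service. That validation step can be sketched as a standalone helper; note that the class name, method name, and the simplified parameter types below are illustrative assumptions, not the handler's real signature:

```java
// Sketch of the mutual-exclusion rule enforced by the deprecated
// "/{id}/data" endpoint: bucketed queries (buckets/bucketDuration)
// cannot be combined with raw-query options (limit/order).
// Names and parameter types are illustrative, not the handler's API.
public class BucketParamCheck {

    /** Returns an error message, or null when the combination is valid. */
    public static String validate(Integer bucketsCount, Long bucketDurationMs,
                                  Integer limit, String order) {
        boolean bucketed = bucketsCount != null || bucketDurationMs != null;
        boolean rawOptions = limit != null || order != null;
        if (bucketed && rawOptions) {
            return "Limit and order cannot be used with bucketed results";
        }
        return null; // valid combination
    }

    public static void main(String[] args) {
        // Mixing buckets with limit is rejected, as in the endpoint above.
        System.out.println(validate(10, null, 100, null));
        // A purely bucketed query is fine and yields no error.
        System.out.println(validate(10, null, null, null));
    }
}
```

In the actual endpoint the same check short-circuits the `AsyncResponse` with a 400 `badRequest` before any query is issued, which is why it runs first.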
Sacha Baron Cohen returned to the Stern Show on Tuesday to talk to Howard about his newest comedy “The Brothers Grimsby.” The conversation covered an array of topics ranging from his surprise Ali G appearance at the Oscars to a failed attempt at a Freddie Mercury biopic and why he refused to stand down on a Donald Trump joke. Check out all the details below. Becoming a Soccer Hooligan Once Sacha entered the studio, Howard introduced the British comedian’s newest action comedy “The Brothers Grimsby,” which is about two English brothers separated at a very young age. One brother grows up to become the world’s best secret agent (Mark Strong) while the other becomes a soccer hooligan (Sacha Baron Cohen). Howard wanted to know more about what it meant to be a hooligan, so Sacha explained that to get into his role he had to visit a few soccer clubs (another name for pubs), not the safest places to be hanging around. Sacha hired a bodyguard to go with him to visit one of the most dangerous groups, the Special Crew. “Listen, just so you know I’m going to be blending in … If anyone knows you’ve got a bodyguard, you’re in trouble,” the guard told Sacha. All of a sudden the bodyguard had one beer after another in his hand, Sacha told Howard. Within a half hour of entering the pub, Sacha’s trusty protection was passed out on the floor. The guard was a local guy and apparently got a little too into character while going “undercover.” Curious about the lifestyle, Sacha asked one Special Crew member for a funny story and this is what he was told: “There’s one time I was in Manchester and we kidnapped this man. We brought him back to his house and we put barbed wire on places on his body that we shouldn’t have and then we blew up his garage,” the hooligan told Sacha. 
Sacha replied, “Uh, have you got any even funnier stories?” Howard was amazed at the lengths to which Sacha will go to get into his characters, but that was only the tip of the iceberg in this rowdy English bar. Getting Past the MPAA Howard wanted to know what drove Sacha to create wild and over-the-top movies like “Borat” and “Bruno.” “The reality is this, personally I get very bored seeing almost every movie. I find it hard to sit there, I find it hard to get to the end,” Sacha told him. “I try to make these films engaging.” Howard pointed out that in Sacha’s new flick there are about five or six moments that are really over the top. Howard asked Sacha if he dreads showing the movie to Sony executives once it’s finished, to which Sacha replied, “Yes, but the studio actually loved the movie when they screened it.” The real test for Sacha is getting the movie past the MPAA (the Motion Picture Association of America) since they’re the ones who designate the official movie rating. Sacha recounted how he convinced the MPAA to give “Bruno” an R rating when by his own admission it should’ve been NC-17 due to a 45-second continuous shot of a penis. “I know that I have a game with the MPAA,” he said, revealing that there is a method to his perceived madness. In his latest flick there’s an insane bit where the two brothers need to “hide inside of an elephant’s vagina” and Sacha knew heading into his meeting with the MPAA that he wanted the scene to be about three minutes long. To get his wish he delivered a cut with a nine-minute version of this scene so that when he cut out six minutes it would seem like the MPAA got a small victory. Using the Top Special Effects Team in the Industry Howard was impressed by how realistic the elephant’s penis looked in the movie. Sacha informed Howard that he hired Ridley Scott’s prosthetics team. Cohen used the same special effects masters from movies like “Star Wars” to replicate the pachyderm’s phallic member. 
Months of preparation went into an “animatronic cock,” he told Howard. Another memorable scene in the movie involved a large amount of elephant ejaculate, and Sacha revealed his secret for making realistic-looking animal semen. “I’m going to tell you something about the elephant cum,” Sacha began. “The fluid that most approximates elephant cum is McDonald’s sauce.” And that is indeed what Sacha said he used for the film. A Run-In With Wladimir Klitschko Howard wanted to know if Sacha had ever offended someone so badly that he feared physical harm. Sacha told Howard a story of how he ran into Olympic gold medalist and former boxing heavyweight champion Wladimir Klitschko at a restaurant a few years ago. Normally, that wouldn’t be a big deal, but Mr. Klitschko is from Kazakhstan, the country Sacha’s Borat character claims as home. A waiter approached Sacha at the restaurant and told him that Klitschko would love to meet him. Sacha walked over to his table and extended his hand, which Klitschko firmly gripped with no intention of letting go. Soon Sacha’s wife, fellow actor Isla Fisher, came over pretending to be a massive fan and asked for an autograph so Wladimir would release her husband’s hand, but that didn’t work, as the former heavyweight pulled Sacha closer and quietly told him: “You humiliated my country. I’m from Kazakhstan. Why did you do that?” “I’m actually looking for stuff on the table to grab hold of and like, get a fork and stab him,” Sacha half-joked to Howard. The actor pleaded with him that it was only a joke and after a painful three minutes, Wladimir started to laugh and told Sacha he was joking and didn’t mind the film at all. In fact, he said he thought it was funny. Dave Chappelle Supported Ali G’s Oscar Appearance At last month’s Academy Awards, Sacha once again pulled one over on the powers that be during Hollywood’s most prestigious award show. 
They tried to forbid him from doing anything off-script, which only pushed him further into the bit. At this year’s event, the lack of diversity amongst nominees was a hot-button topic in the weeks leading up to the show. Sacha figured this would be a great impetus for Ali G to make an appearance. Sacha told Howard he knew what he wanted his opening line to be: “I know what yous all are thinking, the academy has brought on another token black presenter.” But before going through with it, Cohen found an English friend to run it past. The friend told him it was a bad idea, so he found someone more fitting to give him the green light to do the joke: Dave Chappelle. Sacha saw Chappelle backstage and quickly pitched him the line he wanted to say when he walked out. Dave told him that it was “great” and gave him the proverbial thumbs up to do it. Sacha admits that had Dave not given him the go-ahead he probably wouldn’t have said the controversial line. It was ultimately Cohen’s wife Isla Fisher who smuggled the Ali G costume in. And to truly prepare, he and his wife had to disappear into a bathroom for 40 minutes so Sacha could apply his character’s facial hair. As producers frantically searched for the actor, Sacha’s publicist told them he had diarrhea and wasn’t feeling well. When he finally came out of the bathroom, Sacha said he had to cover his beard with his hand so no one would be wise to his plan before walking out on stage. He informed his co-presenter, Olivia Wilde, seconds before walking out and she was totally into it, helping him put the final touches on his appearance. Sacha explained that once he walked on stage, though, things felt off: “I was worried because I went out there I didn’t hear any … reaction,” he admitted. “And then I realized actually eventually the camera was not on me” yet. Once the camera did land on Cohen, he heard a nice gasp from the audience. “I’m always trying to get fired,” he joked. 
A Failed Freddie Mercury Biopic Howard also asked about the Freddie Mercury biopic Cohen had reportedly been working on for six years. Sacha said in the end, it just didn’t pan out as he craved a grittier, edgier film than what the remaining Queen band members desired. “There are amazing stories about Freddie Mercury … the guy was wild,” he explained. The band, though, seemed to be more interested in protecting their legacy than promoting the true embodiment of arguably one of the greatest voices rock has ever known, in Cohen’s opinion. Sacha admitted he should’ve dropped out of the project after the first meeting when one of the band members – Sacha wouldn’t say which one – told him that Freddie Mercury’s character would die halfway through the movie. Sacha assumed they wanted to make a “Pulp Fiction” type movie where everything was out of order. But the truth was the band wanted a linear movie that showed how well they all did after Freddie’s untimely death. At the end of the day, Sacha chalked it up to an “artistic difference.” A Controversial Donald Trump Scene In Sacha’s newest movie “The Brothers Grimsby,” there’s a controversial scene that depicts G.O.P. presidential candidate Donald Trump contracting H.I.V. “We actually had an argument with the studio about it,” Sacha told Howard. “They said you have to put in a card at the end saying, ‘Donald Trump was not involved in the movie and is not H.I.V. positive.’ And I fought them on it.” The studio was presumably afraid that the Donald would sue them, not to mention that he might be the next U.S. president. In the end, Sacha pointed out that he couldn’t prove that Donald Trump didn’t have H.I.V. so he wasn’t comfortable putting that card up. Make sure to go out and see “The Brothers Grimsby” in theaters everywhere Friday, March 11.
/* * Copyright (c) 1996, 2017, Oracle and/or its affiliates. All rights reserved. * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. * * This code is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License version 2 only, as * published by the Free Software Foundation. Oracle designates this * particular file as subject to the "Classpath" exception as provided * by Oracle in the LICENSE file that accompanied this code. * * This code is distributed in the hope that it will be useful, but WITHOUT * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License * version 2 for more details (a copy is included in the LICENSE file that * accompanied this code). * * You should have received a copy of the GNU General Public License version * 2 along with this work; if not, write to the Free Software Foundation, * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. * * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA * or visit www.oracle.com if you need additional information or have any * questions. */ /* * The Toolkit class has two functions: it instantiates the AWT * ToolkitPeer's native methods, and provides the DLL's core functions. * * There are two ways this DLL can be used: either as a dynamically- * loaded Java native library from the interpreter, or by a Windows- * specific app. The first manner requires that the Toolkit provide * all support needed so the app can function as a first-class Windows * app, while the second assumes that the app will provide that * functionality. Which mode this DLL functions in is determined by * which initialization paradigm is used. If the Toolkit is constructed * normally, then the Toolkit will have its own pump. 
If it is explicitly * initialized for an embedded environment (via a static method on * sun.awt.windows.WToolkit), then it will rely on an external message * pump. * * The most basic functionality needed is a Windows message pump (also * known as a message loop). When a Java app is started as a console * app by the interpreter, the Toolkit needs to provide that message * pump if the AWT is dynamically loaded. */ #ifndef AWT_TOOLKIT_H #define AWT_TOOLKIT_H #include "awt.h" #include "awtmsg.h" #include "Trace.h" #include "sun_awt_windows_WToolkit.h" class AwtObject; class AwtDialog; class AwtDropTarget; typedef VOID (CALLBACK* IDLEPROC)(VOID); typedef BOOL (CALLBACK* PEEKMESSAGEPROC)(MSG&); // Struct for _WInputMethod_enable|disableNativeIME method struct EnableNativeIMEStruct { jobject self; jobject peer; jint context; jboolean useNativeCompWindow; }; /* * class JNILocalFrame * Push/PopLocalFrame helper */ class JNILocalFrame { public: INLINE JNILocalFrame(JNIEnv *env, int size) { m_env = env; int result = m_env->PushLocalFrame(size); if (result < 0) { DASSERT(FALSE); throw std::bad_alloc(); } } INLINE ~JNILocalFrame() { m_env->PopLocalFrame(NULL); } private: JNIEnv* m_env; }; /* * class CriticalSection * ~~~~~ ~~~~~~~~~~~~~~~~ * Lightweight intra-process thread synchronization. Can only be used with * other critical sections, and only within the same process. 
*/ class CriticalSection { public: INLINE CriticalSection() { ::InitializeCriticalSection(&rep); } INLINE ~CriticalSection() { ::DeleteCriticalSection(&rep); } class Lock { public: INLINE Lock(const CriticalSection& cs) : critSec(cs) { (const_cast<CriticalSection &>(critSec)).Enter(); } INLINE ~Lock() { (const_cast<CriticalSection &>(critSec)).Leave(); } private: const CriticalSection& critSec; }; friend class Lock; private: CRITICAL_SECTION rep; CriticalSection(const CriticalSection&); const CriticalSection& operator =(const CriticalSection&); public: virtual void Enter() { ::EnterCriticalSection(&rep); } virtual BOOL TryEnter() { return ::TryEnterCriticalSection(&rep); } virtual void Leave() { ::LeaveCriticalSection(&rep); } }; // Macros for using CriticalSection objects that help trace // lock/unlock actions /* Use THIS_FILE when it is available. */ #ifndef THIS_FILE #define THIS_FILE __FILE__ #endif #define CRITICAL_SECTION_ENTER(cs) { \ J2dTraceLn4(J2D_TRACE_VERBOSE2, \ "CS.Wait: tid, cs, file, line = 0x%x, 0x%x, %s, %d", \ GetCurrentThreadId(), &(cs), THIS_FILE, __LINE__); \ (cs).Enter(); \ J2dTraceLn4(J2D_TRACE_VERBOSE2, \ "CS.Enter: tid, cs, file, line = 0x%x, 0x%x, %s, %d", \ GetCurrentThreadId(), &(cs), THIS_FILE, __LINE__); \ } #define CRITICAL_SECTION_LEAVE(cs) { \ J2dTraceLn4(J2D_TRACE_VERBOSE2, \ "CS.Leave: tid, cs, file, line = 0x%x, 0x%x, %s, %d", \ GetCurrentThreadId(), &(cs), THIS_FILE, __LINE__); \ (cs).Leave(); \ J2dTraceLn4(J2D_TRACE_VERBOSE2, \ "CS.Left: tid, cs, file, line = 0x%x, 0x%x, %s, %d", \ GetCurrentThreadId(), &(cs), THIS_FILE, __LINE__); \ } // Redefine WinAPI values related to touch input, if OS < Windows 7. 
#if (!defined(WINVER) || ((WINVER) < 0x0601)) /* * RegisterTouchWindow flag values */ #define TWF_FINETOUCH (0x00000001) #define TWF_WANTPALM (0x00000002) #define WM_TOUCH 0x0240 /* * Touch input handle */ typedef HANDLE HTOUCHINPUT; typedef struct tagTOUCHINPUT { LONG x; LONG y; HANDLE hSource; DWORD dwID; DWORD dwFlags; DWORD dwMask; DWORD dwTime; ULONG_PTR dwExtraInfo; DWORD cxContact; DWORD cyContact; } TOUCHINPUT, *PTOUCHINPUT; typedef TOUCHINPUT const * PCTOUCHINPUT; /* * Touch input flag values (TOUCHINPUT.dwFlags) */ #define TOUCHEVENTF_MOVE 0x0001 #define TOUCHEVENTF_DOWN 0x0002 #define TOUCHEVENTF_UP 0x0004 #define TOUCHEVENTF_INRANGE 0x0008 #define TOUCHEVENTF_PRIMARY 0x0010 #define TOUCHEVENTF_NOCOALESCE 0x0020 #define TOUCHEVENTF_PEN 0x0040 #define TOUCHEVENTF_PALM 0x0080 #endif /************************************************************************ * AwtToolkit class */ class AwtToolkit { public: enum { KB_STATE_SIZE = 256 }; /* java.awt.Toolkit method ids */ static jmethodID getDefaultToolkitMID; static jmethodID getFontMetricsMID; static jmethodID insetsMID; /* sun.awt.windows.WToolkit ids */ static jmethodID windowsSettingChangeMID; static jmethodID displayChangeMID; BOOL m_isDynamicLayoutSet; AwtToolkit(); ~AwtToolkit(); BOOL Initialize(BOOL localPump); BOOL Dispose(); void SetDynamicLayout(BOOL dynamic); BOOL IsDynamicLayoutSet(); BOOL IsDynamicLayoutSupported(); BOOL IsDynamicLayoutActive(); BOOL areExtraMouseButtonsEnabled(); void setExtraMouseButtonsEnabled(BOOL enable); static UINT GetNumberOfButtons(); bool IsWin8OrLater(); bool IsTouchKeyboardAutoShowEnabled(); bool IsAnyKeyboardAttached(); bool IsTouchKeyboardAutoShowSystemEnabled(); void ShowTouchKeyboard(); void HideTouchKeyboard(); BOOL TIRegisterTouchWindow(HWND hWnd, ULONG ulFlags); BOOL TIGetTouchInputInfo(HTOUCHINPUT hTouchInput, UINT cInputs, PTOUCHINPUT pInputs, int cbSize); BOOL TICloseTouchInputHandle(HTOUCHINPUT hTouchInput); INLINE BOOL localPump() { return m_localPump; } 
INLINE BOOL VerifyComponents() { return FALSE; } // TODO: Use new DebugHelper class to set this flag INLINE HWND GetHWnd() { return m_toolkitHWnd; } INLINE HMODULE GetModuleHandle() { return m_dllHandle; } INLINE void SetModuleHandle(HMODULE h) { m_dllHandle = h; } INLINE static DWORD MainThread() { return GetInstance().m_mainThreadId; } INLINE void VerifyActive() throw (awt_toolkit_shutdown) { if (!m_isActive && m_mainThreadId != ::GetCurrentThreadId()) { throw awt_toolkit_shutdown(); } } INLINE BOOL IsDisposed() { return m_isDisposed; } static UINT GetMouseKeyState(); static void GetKeyboardState(PBYTE keyboardState); static ATOM RegisterClass(); static void UnregisterClass(); INLINE LRESULT SendMessage(UINT msg, WPARAM wParam=0, LPARAM lParam=0) { if (!m_isDisposed) { return ::SendMessage(GetHWnd(), msg, wParam, lParam); } else { return NULL; } } static LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam); static LRESULT CALLBACK GetMessageFilter(int code, WPARAM wParam, LPARAM lParam); static LRESULT CALLBACK ForegroundIdleFilter(int code, WPARAM wParam, LPARAM lParam); static LRESULT CALLBACK MouseLowLevelHook(int code, WPARAM wParam, LPARAM lParam); INLINE static AwtToolkit& GetInstance() { return theInstance; } INLINE void SetPeer(JNIEnv *env, jobject wToolkit) { AwtToolkit &tk = AwtToolkit::GetInstance(); if (tk.m_peer != NULL) { env->DeleteGlobalRef(tk.m_peer); } tk.m_peer = (wToolkit != NULL) ? env->NewGlobalRef(wToolkit) : NULL; } INLINE jobject GetPeer() { return m_peer; } // is this thread the main thread? INLINE static BOOL IsMainThread() { return GetInstance().m_mainThreadId == ::GetCurrentThreadId(); } // post a message to the message pump thread INLINE BOOL PostMessage(UINT msg, WPARAM wp=0, LPARAM lp=0) { return ::PostMessage(GetHWnd(), msg, wp, lp); } // cause the message pump thread to call the function synchronously now! 
INLINE void * InvokeFunction(void*(*ftn)(void)) { return (void *)SendMessage(WM_AWT_INVOKE_VOID_METHOD, (WPARAM)ftn, 0); } INLINE void InvokeFunction(void (*ftn)(void)) { InvokeFunction((void*(*)(void))ftn); } INLINE void * InvokeFunction(void*(*ftn)(void *), void* param) { return (void *)SendMessage(WM_AWT_INVOKE_METHOD, (WPARAM)ftn, (LPARAM)param); } INLINE void InvokeFunction(void (*ftn)(void *), void* param) { InvokeFunction((void*(*)(void*))ftn, param); } INLINE CriticalSection &GetSyncCS() { return m_Sync; } void *SyncCall(void*(*ftn)(void *), void* param); void SyncCall(void (*ftn)(void *), void *param); void *SyncCall(void *(*ftn)(void)); void SyncCall(void (*ftn)(void)); // cause the message pump thread to call the function later ... INLINE void InvokeFunctionLater(void (*ftn)(void *), void* param) { if (!PostMessage(WM_AWT_INVOKE_METHOD, (WPARAM)ftn, (LPARAM)param)) { JNIEnv* env = (JNIEnv *)JNU_GetEnv(jvm, JNI_VERSION_1_2); JNU_ThrowInternalError(env, "Message not posted, native event queue may be full."); } } // cause the message pump thread to synchronously synchronize on the handle INLINE void WaitForSingleObject(HANDLE handle) { SendMessage(WM_AWT_WAIT_FOR_SINGLE_OBJECT, 0, (LPARAM)handle); } /* * Create an AwtXxxx C++ component using a given factory */ typedef void (*ComponentFactory)(void*, void*); static void CreateComponent(void* hComponent, void* hParent, ComponentFactory compFactory, BOOL isParentALocalReference=TRUE); static void DestroyComponentHWND(HWND hwnd); // constants used to PostQuitMessage static const int EXIT_ENCLOSING_LOOP; static const int EXIT_ALL_ENCLOSING_LOOPS; // ... 
void QuitMessageLoop(int status); UINT MessageLoop(IDLEPROC lpIdleFunc, PEEKMESSAGEPROC lpPeekMessageFunc); BOOL PumpWaitingMessages(PEEKMESSAGEPROC lpPeekMessageFunc); void PumpToDestroy(class AwtComponent* p); void ProcessMsg(MSG& msg); BOOL PreProcessMsg(MSG& msg); BOOL PreProcessMouseMsg(class AwtComponent* p, MSG& msg); BOOL PreProcessKeyMsg(class AwtComponent* p, MSG& msg); /* Checks that a free ID exists. */ jboolean isFreeIDAvailable(); /* Create an ID which maps to an AwtObject pointer, such as a menu. */ UINT CreateCmdID(AwtObject* object); // removes cmd id mapping void RemoveCmdID(UINT id); /* Return the AwtObject associated with its ID. */ AwtObject* LookupCmdID(UINT id); /* Return the current application icon. */ HICON GetAwtIcon(); HICON GetAwtIconSm(void* pAwtWindow = NULL); // Calculate a wave-like value out of the integer 'value' and // the specified period. // The argument 'value' is an integer 0, 1, 2, ... *infinity*. // // Examples: // Period == 3 // Generated sequence: 0 1 2 1 0 ..... // // Period == 4 // Generated sequence: 0 1 2 3 2 1 0 ..... static inline UINT CalculateWave(UINT value, const UINT period) { if (period < 2) { return 0; } // -2 is necessary to avoid repeating extreme values (0 and period-1) value %= period * 2 - 2; if (value >= period) { value = period * 2 - 2 - value; } return value; } HICON GetSecurityWarningIcon(UINT index, UINT w, UINT h); /* Turns on/off dialog modality for the system. 
 */
INLINE AwtDialog* SetModal(AwtDialog* frame) {
    AwtDialog* previousDialog = m_pModalDialog;
    m_pModalDialog = frame;
    return previousDialog;
};
INLINE void ResetModal(AwtDialog* oldFrame) { m_pModalDialog = oldFrame; };
INLINE BOOL IsModal() { return (m_pModalDialog != NULL); };
INLINE AwtDialog* GetModalDialog(void) { return m_pModalDialog; };

/* Stops the current message pump (normally a modal dialog pump) */
INLINE void StopMessagePump() { m_breakOnError = TRUE; }

/* Debug settings */
INLINE void SetVerbose(long flag) { m_verbose = (flag != 0); }
INLINE void SetVerify(long flag)  { m_verifyComponents = (flag != 0); }
INLINE void SetBreak(long flag)   { m_breakOnError = (flag != 0); }
INLINE void SetHeapCheck(long flag);

static void SetBusy(BOOL busy);

/* Set and get the default input method Window handler. */
INLINE void SetInputMethodWindow(HWND inputMethodHWnd) { m_inputMethodHWnd = inputMethodHWnd; }
INLINE HWND GetInputMethodWindow() { return m_inputMethodHWnd; }

static VOID CALLBACK PrimaryIdleFunc();
static VOID CALLBACK SecondaryIdleFunc();
static BOOL CALLBACK CommonPeekMessageFunc(MSG& msg);
static BOOL activateKeyboardLayout(HKL hkl);

static INLINE BOOL EnableNcDpiScaling(HWND hwnd) {
    return lpEnableNonClientDpiScaling != NULL
        ? lpEnableNonClientDpiScaling(hwnd) : FALSE;
}

static INLINE BOOL AdjustWindowRectExForDpi(LPRECT lpRect, DWORD dwStyle,
                                            BOOL bMenu, DWORD dwExStyle, UINT dpi) {
    return lpAdjustWindowRectExForDpi != NULL
        ? lpAdjustWindowRectExForDpi(lpRect, dwStyle, bMenu, dwExStyle, dpi)
        : ::AdjustWindowRectEx(lpRect, dwStyle, bMenu, dwExStyle);
}

HANDLE m_waitEvent;
DWORD eventNumber;

private:
    HWND CreateToolkitWnd(LPCTSTR name);

    void InitTouchKeyboardExeFilePath();
    HWND GetTouchKeyboardWindow();

    BOOL m_localPump;
    DWORD m_mainThreadId;
    HWND m_toolkitHWnd;
    HWND m_inputMethodHWnd;
    BOOL m_verbose;
    BOOL m_isActive;    // set to FALSE at beginning of Dispose
    BOOL m_isDisposed;  // set to TRUE at end of Dispose
    BOOL m_areExtraMouseButtonsEnabled;

    typedef BOOL (WINAPI *RegisterTouchWindowFunc)(HWND hWnd, ULONG ulFlags);
    typedef BOOL (WINAPI *GetTouchInputInfoFunc)(HTOUCHINPUT hTouchInput,
        UINT cInputs, PTOUCHINPUT pInputs, int cbSize);
    typedef BOOL (WINAPI *CloseTouchInputHandleFunc)(HTOUCHINPUT hTouchInput);
    BOOL m_isWin8OrLater;
    BOOL m_touchKbrdAutoShowIsEnabled;
    TCHAR* m_touchKbrdExeFilePath;
    RegisterTouchWindowFunc m_pRegisterTouchWindow;
    GetTouchInputInfoFunc m_pGetTouchInputInfo;
    CloseTouchInputHandleFunc m_pCloseTouchInputHandle;

    BOOL m_vmSignalled;  // set to TRUE if QUERYENDSESSION has successfully
                         // raised SIGTERM

    BOOL m_verifyComponents;
    BOOL m_breakOnError;

    BOOL m_breakMessageLoop;
    UINT m_messageLoopResult;

    class AwtComponent* m_lastMouseOver;
    BOOL m_mouseDown;

    HHOOK m_hGetMessageHook;
    HHOOK m_hMouseLLHook;
    UINT_PTR m_timer;

    class AwtCmdIDList* m_cmdIDs;
    BYTE m_lastKeyboardState[KB_STATE_SIZE];
    CriticalSection m_lockKB;

    static AwtToolkit theInstance;

    /* The current modal dialog frame (normally NULL). */
    AwtDialog* m_pModalDialog;

    /* The WToolkit peer instance */
    jobject m_peer;

    HMODULE m_dllHandle;  /* The module handle. */

    CriticalSection m_Sync;

    static EnableNonClientDpiScalingFunc *lpEnableNonClientDpiScaling;
    static AdjustWindowRectExForDpiFunc *lpAdjustWindowRectExForDpi;

    /* track display changes - used by palette-updating code.
       This is a workaround for a windows bug that prevents
       WM_PALETTECHANGED event from occurring immediately after
       a WM_DISPLAYCHANGED event.
     */
private:
    BOOL m_displayChanged;  /* Tracks displayChanged events */
    // 0 means we are not embedded.
    DWORD m_embedderProcessID;

public:
    BOOL HasDisplayChanged() { return m_displayChanged; }
    void ResetDisplayChanged() { m_displayChanged = FALSE; }
    void RegisterEmbedderProcessId(HWND);
    BOOL IsEmbedderProcessId(const DWORD processID) const {
        return m_embedderProcessID && (processID == m_embedderProcessID);
    }

private:
    static JNIEnv *m_env;
    static DWORD m_threadId;
public:
    static void SetEnv(JNIEnv *env);
    static JNIEnv* GetEnv();

    static BOOL GetScreenInsets(int screenNum, RECT * rect);

    // If the DWM is active, this function uses
    // DwmGetWindowAttribute()/DWMWA_EXTENDED_FRAME_BOUNDS.
    // Otherwise, fall back to regular ::GetWindowRect().
    // See 6711576 for more details.
    static void GetWindowRect(HWND hWnd, LPRECT lpRect);

private:
    // The window handle of a toplevel window last seen under the mouse cursor.
    // See MouseLowLevelHook() for details.
    HWND m_lastWindowUnderMouse;
public:
    HWND GetWindowUnderMouse() { return m_lastWindowUnderMouse; }

    void InstallMouseLowLevelHook();
    void UninstallMouseLowLevelHook();

/* AWT preloading (early Toolkit thread start) */
public:
    /* Toolkit preload action class.
     * Preload actions should be registered with
     * AwtToolkit::getInstance().GetPreloadThread().AddAction().
     * The AwtToolkit thread calls the InitImpl method at the beginning
     * and CleanImpl(false) before exiting for all registered actions.
     * If an application provides its own Toolkit thread
     * (sun.awt.windows.WToolkit.embeddedInit), that thread calls Clean(true)
     * for each action.
     */
    class PreloadThread;  // forward declaration
    class PreloadAction {
        friend class PreloadThread;
    public:
        PreloadAction() : initThreadId(0), pNext(NULL) {}
        virtual ~PreloadAction() {}

    protected:
        // called by PreloadThread or as a result of an
        // EnsureInited() call (on the Toolkit thread!).
        virtual void InitImpl() = 0;

        // called by PreloadThread (before exiting).
        // reInit == false: normal shutdown;
        // reInit == true:  PreloadThread is shutting down because an
        //                  external Toolkit thread was provided.
        virtual void CleanImpl(bool reInit) = 0;

    public:
        // Initializes the action on the Toolkit thread if not yet initialized.
        bool EnsureInited();

        // returns the thread ID which the action was inited on (0 if not inited)
        DWORD GetInitThreadID();

        // Allows the action to be deinitialized earlier.
        // The method must be called on the Toolkit thread only.
        // returns true on success,
        // false if the action was inited on another thread.
        bool Clean();

    private:
        unsigned initThreadId;
        // lock for Init/Clean
        CriticalSection initLock;

        // Chain support (for the action chain used by PreloadThread)
        PreloadAction *pNext;
        void SetNext(PreloadAction *pNext) { this->pNext = pNext; }
        PreloadAction *GetNext() { return pNext; }

        // wrapper for AwtToolkit::InvokeFunction
        static void InitWrapper(void *param);

        void Init();
        void Clean(bool reInit);
    };

    /** Toolkit preload thread class. */
    class PreloadThread {
    public:
        PreloadThread();
        ~PreloadThread();

        // adds an action & starts the thread if not yet started
        bool AddAction(PreloadAction *pAction);

        // sets the termination flag; returns true if the thread is running.
        // wrongThread specifies the cause of the termination:
        //   false means termination on application shutdown;
        // wrongThread is used as the reInit parameter for action cleanup.
        bool Terminate(bool wrongThread);
        bool InvokeAndTerminate(void(_cdecl *fn)(void *), void *param);

        // waits for the thread completion;
        // use the method after Terminate() only if Terminate() returned true
        INLINE void Wait4Finish() {
            ::WaitForSingleObject(hFinished, INFINITE);
        }

        INLINE unsigned GetThreadId() {
            CriticalSection::Lock lock(threadLock);
            return threadId;
        }
        INLINE bool IsWrongThread() {
            CriticalSection::Lock lock(threadLock);
            return wrongThread;
        }

        // returns true if the current thread is the "preload" thread
        bool OnPreloadThread();

    private:
        // data access lock
        CriticalSection threadLock;

        // the thread status
        enum Status {
            None = -1,        // initial
            Preloading = 0,   // preloading in progress
            RunningToolkit,   // running as the Toolkit thread
            Cleaning,         // exited from the Toolkit thread proc, cleaning
            Finished
        } status;

        // "wrong thread" flag
        bool wrongThread;

        // thread proc (calls ((PreloadThread*)param)->ThreadProc())
        static unsigned WINAPI StaticThreadProc(void *param);
        unsigned ThreadProc();

        INLINE void AwakeThread() { ::SetEvent(hAwake); }

        // if threadId != 0 -> we are running
        unsigned threadId;

        // ThreadProc sets the event on exit
        HANDLE hFinished;
        // ThreadProc waits on the event for NewAction/Terminate/InvokeAndTerminate
        HANDLE hAwake;

        // function/param to invoke (InvokeAndTerminate)
        // if execFunc == NULL => just terminate
        void(_cdecl *execFunc)(void *);
        void *execParam;

        // action chain
        PreloadAction *pActionChain;
        PreloadAction *pLastProcessedAction;

        // returns the next action in the list (NULL if no more actions)
        PreloadAction* GetNextAction();
    };

    INLINE PreloadThread& GetPreloadThread() { return preloadThread; }

private:
    PreloadThread preloadThread;
};

/* Creates an instance of T and assigns it to the argument, but only if the
   argument is initially NULL. Supposed to be thread-safe.
   Returns the new value of the argument. I'm not using volatile here as
   InterlockedCompareExchange ensures volatile semantics and acquire/release.
   The function is useful when used with static POD NULL-initialized pointers,
   as they are guaranteed to be NULL before any dynamic initialization takes
   place. This function turns such a pointer into a thread-safe singleton,
   working regardless of dynamic initialization order. The destruction problem
   is not solved; we don't need it here.
 */
template<typename T> inline T* SafeCreate(T* &pArg) {
    /* This implementation has no locks; it just destroys the object if it
       fails to be the first to init. Another way would be using a special
       flag pointer value to mark the pointer as "being initialized". */
    T* pTemp = (T*)InterlockedCompareExchangePointer((void**)&pArg, NULL, NULL);
    if (pTemp != NULL) return pTemp;

    T* pNew = new T;
    pTemp = (T*)InterlockedCompareExchangePointer((void**)&pArg, pNew, NULL);
    if (pTemp != NULL) {
        // we failed it - another thread has already initialized pArg
        delete pNew;
        return pTemp;
    } else {
        return pNew;
    }
}

#endif /* AWT_TOOLKIT_H */
DGAP-Media / Company: Maricann Group Inc. WKN: A2DQR6
Reason for report: Update
Recommendation: Strong Buy
Medium-term price target: EUR 3.00
Long-term price target: EUR 5.00

Multimillion cash flow secured by another cannabis supply agreement - share massively undervalued!

Our cannabis top pick, the Canadian Maricann (WKN A2DQR6), has announced another supply agreement. This is already the third supply deal within a very short time and, in our view, a guarantee of multimillion revenues in the near term. As Maricann (WKN A2DQR6) reported, a letter of intent was signed with the Liquor Distribution Branch ("BCLDB", the liquor distribution agency of the province of British Columbia). As a preferred licensed producer, the company will initially supply approximately 3,621,900 grams (about 3,622 kg) of non-medical cannabis to the BCLDB in the first 12 months after legalization.

Key points:
- Maricann was selected to supply recreational cannabis to the British Columbia market.
- The company agrees to provide at least 3,621,900 grams per year to the BCLDB.
- Maricann now has confirmed recreational-cannabis allocations for Manitoba, Alberta and British Columbia with an annualized volume of 10,923,100 grams (about 10,923 kg).

"Maricann is excited to enter into a letter of intent with the BCLDB, and we are proud to work with them as a partner during this exciting market launch phase," said Geoff Kosar, Vice President Sales and Marketing, Maricann Group Inc.

Existing supply agreements: Two supply agreements to serve the Alberta and Manitoba markets with cannabis products were recently concluded. On June 29, 2018, the company announced that it had concluded a supply agreement with the Manitoba Liquor & Lotteries Corporation (MLCC), an important milestone for the company. The supply agreement covers 550,000 grams of various cannabis products within the first 12 months. On July 5, 2018, the company announced that it had concluded a supply agreement with the Alberta Gaming, Liquor & Cannabis Commission. The supply volume amounts to 3,375 kilograms over a period of 6 months!

Comprehensive company update: The company is in talks with several other provinces for 2018 and 2019 and will, where permitted, announce when these agreements are concluded. Although there is no guarantee that additional supply agreements will be signed, the ongoing talks with certain provinces have confirmed a high regard for the company's capabilities and competitiveness, in line with much larger companies in this industry.

Expansion of the Langton facility: Construction work on the expansion of the company's facility at 138 8th Concession Road, Langton, Ontario (the "Langton facility") continues to progress. By mid-November 2018 (subject to receipt of the relevant licenses from Health Canada), Maricann expects to be able to produce 706 kg of dried cannabis per week during Phase 1 of the expansion of its Langton facility. By April 2019, Maricann expects to be able to produce 2,023 kg of dried cannabis per week at a total cost of goods sold of approximately $1.30 per gram. These figures assume that all building permits are obtained and that long-lead-time items such as HVAC equipment are delivered to the Langton facility on time.

Operations update, Germany: Maricann has completed the conversion of the 4,600 m² (approximately 49,000 square feet) facility for narcotics import and distribution in Ebersbach, Germany. Inspections of the facility by the European Medicines Agency were completed without significant findings.
The company will provide a further update once final regulatory approval has been granted. This certification, once received, will allow Maricann to import its own EMA-GMP-certified products, and certified products of others, for distribution in Germany and the rest of Europe where legal, while preserving a profit margin that in most cases is around 50% of the wholesale price.

Malta and VESIsorb(R) update: As expected, Maricann has received confirmation from Malta Enterprise (the country's official economic development agency) in the form of a letter of intent to proceed with the licensing of finished-goods production facilities for medical cannabis. Malta is an integral part of Maricann's long-term development strategy, including the manufacture of finished products using the patented VESISorb(R) delivery technology. The superiority of VESISorb(R) has been demonstrated repeatedly in well-designed pilot studies and in peer-reviewed published pharmacokinetic absorption and bioavailability studies. Maricann has now consistently produced THC distillates at its facilities in Langton, Ontario, a building block for future recreational-market products as well as pre-dosed pharmaceutical products. Maricann used thin-film distillation to process the material.

"We are executing and developing all aspects of the company according to our plan to deliver forward momentum for our shareholders, with a focus on preserving margins by building business units and additional capabilities as a company for a fraction of the cost seen in this sector. Government regulators continue to support our company by trusting us as one of a handful of companies with licenses for medical cannabis and product supply. We are proud of the progress achieved to date and will increase the company's value over the long term through strategic segmentation of our business by geography, strategic investments, quality, and scientifically differentiated products developed by knowledgeable and qualified experts," said Ben Ward, CEO.

The recently concluded supply agreement: The agreement signed with the Manitoba Liquor & Lotteries Corporation (MLCC) on June 29, 2018 marks Maricann's first provincial supply contract and represents further external validation of Maricann's operations and management team. The company is committed to delivering a high-quality, safe product produced at its state-of-the-art cultivation facility in Langton, Ontario.

Expansion into the European market progresses: In recent weeks, the Maricann Group (WKN A2DQR6) has intensified its activities in the European market, making significant progress both in the expansion of the Ebersbach facility in Germany and in the final takeover of the Haxxon facilities in Switzerland. With the additional acquisition of the Maltese Medican and its associated license, the Maricann Group (WKN A2DQR6) can now import cannabis from Canada and extract and manufacture on site in Malta. The license also permits the distribution of medical cannabis products into the European Union. On a 164-hectare open-air site in Saxony, near the headquarters in Ebersbach, Maricann's subsidiary Mariplant has sown the first seed for growing non-THC cannabis plants.

Mariplant focuses on producing CBD- and CBG-rich cannabis, which is processed into gel capsules using the VesiSorb emulsion technology developed by the Maricann Group (WKN A2DQR6). As a synergy from the takeover of the Swiss Haxxon, additional substances such as terpenoids or flavonoids can be added to the capsules at low cost. The products are then sold Europe-wide as food supplements via Mariplant's online shop. In addition, the Ebersbach facility is currently being expanded over an area of 8,000 m². Once completed, the resulting drying plant will be able to dry about 1 tonne of cannabis per hour. The extraction plant and storage capacity will additionally be used to process 200 kg of cannabis flowers per day. With this renewed expansion in Europe, the Maricann Group (WKN A2DQR6) can report another important milestone. The ability to produce cannabis in the European Union represents a major competitive advantage over rivals. Furthermore, the award of a license for cannabis production in Germany remains on the Maricann Group's (WKN A2DQR6) agenda. We expect positive news flow and a marked recovery in the share price in the coming weeks!

LINK to our comprehensive initial recommendation with all background information

True Research is a product of BlackX GmbH, Schwetzingerstr. 3, 69190 Walldorf
E-Mail: info@true-research.de * Website: www.true-research.de

Disclaimer / exclusion of liability: The market assessments, background information and securities analyses that True Research publishes on its websites and in newsletters constitute neither an offer to sell the securities mentioned nor a solicitation to buy or sell securities. Nor do the market assessments, background information and securities analyses constitute securities-analyst advice.
The statements are based on sources that the publisher considers trustworthy. Nevertheless, True Research is not liable for damages of a material or immaterial nature caused directly or indirectly by the use or non-use of the information presented, or by the use of incorrect or incomplete information, unless it can be proven to have acted with intent or gross negligence. This information is a marketing communication and contains neither investment-strategy recommendations nor investment recommendations pursuant to § 34b WpHG and Article 20 of the Market Abuse Regulation. It therefore does not meet the legal requirements for ensuring the objectivity of investment-strategy recommendations/investment recommendations. We further point out that shares are always associated with economic risks. Due to political, economic or other changes, considerable price losses can occur, and in the worst case even a total loss of the invested capital. With derivatives, the probability of extreme losses is at least as high as with small-cap shares. Any liability claim, including for foreign share recommendations, is therefore excluded without exception. Although the assessments and statements contained in True Research's share analyses and market assessments were prepared with appropriate care, we accept no responsibility or liability for any errors or incorrect statements. The same applies to all figures, presentations and assessments expressed by our partners in interviews. True Research strives to keep the information provided on this website accurate and up to date. Nevertheless, errors and ambiguities cannot be completely ruled out.

True Research therefore accepts no guarantee for the timeliness, accuracy, completeness or quality of the information provided. All statements underlying the present analyses should be understood as forward-looking statements that may well fail to materialize due to various significant risks (e.g., political, economic or other changes). True Research therefore gives neither an assurance nor a guarantee that the forward-looking statements made will actually come true. Readers of True Research should therefore not rely on these statements and should not buy or sell the securities mentioned solely on the basis of the analyses. Capital increases that become necessary could also cause short-term dilution effects at the expense of investors. All texts presented here, in particular market assessments, share assessments and chart analyses, reflect the personal opinion of Mr. Nicholas Hornung, which is protected by Article 5 of the German Basic Law, and must in no way be interpreted as investment advice. They are purely individual views with no claim to a balanced treatment of the subject. True Research is not a registered or recognized financial adviser. Before investing in securities, you should contact a professional investment adviser. Furthermore, we reserve the right to change, expand or remove, without notice, any materials presented on our website. We again expressly point out that the published analyses are not financial analyses under German capital-market law, but journalistic/promotional contributions in the form of texts, videos and graphics. The contents of our pages were created with the greatest care.

However, we cannot guarantee the accuracy, completeness and timeliness of the contents. As a service provider, we are responsible for our own content on these pages under the general laws pursuant to § 7 (1) TMG. Under §§ 8 to 10 TMG, however, we as a service provider are not obliged to monitor transmitted or stored third-party information or to investigate circumstances indicating illegal activity. Obligations to remove or block the use of information under the general laws remain unaffected. Liability in this respect is, however, only possible from the time of knowledge of a specific infringement. Upon becoming aware of such infringements, we will remove the content in question immediately. True Research has no knowledge of illegal or offensive content on the third-party sites linked from its website. Should the linked third-party sites nevertheless contain illegal or offensive content, True Research expressly distances itself from such content. Our offering contains links to external third-party websites over whose content we have no influence. We therefore cannot accept any guarantee for this third-party content. The respective provider or operator of the pages is always responsible for the content of the linked pages. The linked pages were checked for possible legal violations at the time of linking; illegal content was not apparent at that time. Permanent monitoring of the content of the linked pages is, however, unreasonable without concrete indications of an infringement. Upon becoming aware of legal violations, we will remove such links immediately.

Copyright: The content and works created by the site operators on these pages are subject to German copyright law.
Reproduction, editing, distribution and any kind of exploitation beyond the limits of copyright require the written consent of the respective author or creator. Insofar as the content on this page was not created by the operator, the copyrights of third parties are respected; in particular, third-party content is marked as such. Should you nevertheless become aware of a copyright infringement, please notify us accordingly. Upon becoming aware of legal violations, we will remove such content immediately.

Disclosure of conflicts of interest: The recommendations, interviews and company presentations published on the websites or other advertising media of TRUE RESEARCH serve advertising purposes and are paid for by the respective companies or so-called third parties. For this reason, the independence of the analyses may be called into doubt; by definition, they are information only. This also applies to the present study. The preparation and distribution of the report was commissioned and paid for by the company or by parties close to the company. A corresponding conflict of interest therefore exists, which we expressly bring to your attention as a reader. We further point out that the parties commissioning this study may intend in the near future to dispose of shareholdings or to buy additional shares in the market. The commissioning party could profit from rising share prices; this, too, gives rise to a corresponding conflict of interest. True Research accordingly acts in cooperation with, and on the basis of paid engagement by, other persons who in turn hold significant share positions. This, too, gives rise to a corresponding conflict of interest.

Because other research houses and stock-market newsletters also cover the stock, symmetrical information and opinion generation occurs during this period. True Research, employees of the company, and persons or companies involved in the preparation (the commissioning parties) hold shares, at the time of publication, in the company reported on within the internet offerings. True Research or the author reserves the right, like any other shareholder, to buy further shares (adding to positions) or sell shares of the company covered on www.true-research.de at any time, including at short notice, and could in particular profit from increased trading liquidity. A price increase in the shares of the companies presented can lead to an increase in the assets of True Research or the author. This, too, gives rise to a corresponding conflict of interest.

Full disclaimer and further notes pursuant to § 34b (1) WpHG in conjunction with FinAnV (Germany). It should of course be noted that the company discussed is listed in the highest conceivable risk class for shares. The company has no revenues yet and is at an early stage, which is at once attractive and risky. There is no guarantee that the forecasts of the experts and of management will actually come true; the company is thus a bet on the future. As with any explorer, there is a risk of total loss if management's high expectations cannot be realized in the foreseeable future. The company presented is therefore suitable only as a dynamic addition to an otherwise well-diversified portfolio. Investors should follow the news flow closely. The narrow market typical of this segment results in high volatility.

The experienced professional trader - and our recommendation is addressed only to such traders, not to inexperienced investors or low-risk investors - will, however, find here a highly attractive speculative stock with extreme multi-bagger potential. Canopy Growth Corporation (WKN A140QA), Aurora Cannabis (WKN A12GS7), Cannabis Wheaton (WKN A2DRE4), Cannabis Science (WKN A0RM6Z).
Introduction {#s1}
============

*ACTN3* is a gene that encodes alpha-actinin-3, a protein expressed only in type-II muscle fibers (North et al., [@B52]). A common polymorphism in this gene is R577X (rs1815739), where a C-to-T base substitution converts an arginine codon (R) into a premature stop codon (X). X allele homozygotes are deficient in the alpha-actinin-3 protein, which is associated with a lower fast-twitch fiber percentage (Vincent et al., [@B69]) but does not result in disease (MacArthur and North, [@B44]). The XX genotype frequency differs across ethnic groups, with approximately 25% of Asians, 18% of Caucasians, 11% of Ethiopians, 3% of Jamaicans and US African Americans, and 1% of Kenyans and Nigerians possessing the XX genotype (Yang et al., [@B75]; MacArthur et al., [@B46]; Scott et al., [@B60]).

*ACTN3* genotype is associated with speed and power phenotypes. Yang et al. ([@B74]) reported that elite sprint athletes had significantly higher frequencies of the R allele than controls, a finding that has been replicated multiple times in speed, power and strength athletes (Druzhevskaya et al., [@B26]; Roth et al., [@B58]; Eynon et al., [@B28]; Ahmetov et al., [@B2]; Cieszczyk et al., [@B14]; Kikuchi et al., [@B37]; Papadimitriou et al., [@B53]; Weyerstraß et al., [@B71]; Yang et al., [@B76]), although these findings are not unequivocal (Scott et al., [@B60]; Gineviciene et al., [@B31]; Sessa et al., [@B61]). Whilst Yang et al. ([@B74]) found a trend toward an increased XX genotype frequency in endurance athletes vs. controls, this relationship is less robust, with most studies reporting a lack of association between the XX genotype and endurance performance (Lucia et al., [@B42]; Saunders et al., [@B59]; Döring et al., [@B25]; Kikuchi et al., [@B37]).
In addition, whilst Kenyan and Ethiopian endurance runners are highly successful (Wilber and Pitsiladis, [@B72]), the frequency of the XX genotype within this group is very low, at 8% (Ethiopian) and 1% (Kenyan) (Yang et al., [@B75]). As such, the general consensus is that the *ACTN3* X allele likely does not modify elite endurance athlete status (Vancini et al., [@B66]). Much of the attention on *ACTN3* has focused on the robust relationship between the R allele and the strength/power phenotype, with a number of reviews further exploring this relationship (Eynon et al., [@B29]; Ma et al., [@B43]; Ahmetov and Fedotovskaya, [@B3]). Indeed, a number of papers have referred to *ACTN3* as a "gene for speed" (MacArthur and North, [@B44]; Chan et al., [@B13]; Berman and North, [@B8]). However, emerging evidence suggests that this polymorphism may influence a number of other traits, including exercise recovery, injury risk, and training adaptation (Delmonico et al., [@B21]; Pimenta et al., [@B55]; Massidda et al., [@B50]). The purpose of this mini-review is to further explore these potential relationships, as an increased understanding of the role played by *ACTN3* in these traits may lead to improvements in the utilization of genetic information in exercise training.

*ACTN3* as a modulator of training response {#s2}
===========================================

Over the last 20 or so years, the consistent underlying impact of genetics on exercise adaptation has been well explored (Bouchard et al., [@B11]; Bouchard, [@B10]). Whilst it is clear that genetics has an undoubted influence on both exercise performance (Guth and Roth, [@B34]) and adaptation (Mann et al., [@B49]), fewer studies examine the influence of individual single nucleotide polymorphisms (SNPs) (Delmonico et al., [@B21]), or a combination of SNPs (Jones et al., [@B36]), on this process. In this section, we explore the evidence regarding the impact of *ACTN3* on the post-exercise adaptive response.
Following a structured literature search, we found five studies that examined the influence of *ACTN3* on adaptation to a standardized training programme (Table [1](#T1){ref-type="table"}). Four of these studied resistance training (Clarkson et al., [@B15]; Delmonico et al., [@B21]; Pereira et al., [@B54]; Erskine et al., [@B27]), and one focused on aerobic training (Silva et al., [@B64]). An additional study (Mägi et al., [@B48]) monitored changes in VO~2peak~ over a five-year period in elite skiers, with no significant *ACTN3* genotype differences; however, the exercise intervention in this study was not controlled, so we did not include it within Table [1](#T1){ref-type="table"}. There was considerable variation in the findings. For resistance training, two studies reported that the RR genotype was associated with the greatest increase in strength (Pereira et al., [@B54]) and power (Delmonico et al., [@B21]). One study reported no effect of *ACTN3* genotype on adaptations to resistance training (Erskine et al., [@B27]). Another reported greater improvement in one-repetition maximum (1RM) in X allele carriers compared to RR genotypes (Clarkson et al., [@B15]). A further study utilized *ACTN3* within a 15-SNP total genotype score (TGS), finding that individuals with a higher number of power alleles (such as *ACTN3* R) exhibited greater improvements following high-intensity resistance training than following low-intensity resistance training (Jones et al., [@B36]). However, because subjects could have the *ACTN3* XX genotype and still be classed as best responders to high-intensity training (due to possessing a higher number of alleles in other power-associated SNPs), we did not include this study within Table [1](#T1){ref-type="table"}.

###### Studies examining the interaction between *ACTN3* genotype and exercise adaptation.
| **Study** | **Method** | **Sample characteristics** | **Main outcome** |
|---|---|---|---|
| Clarkson et al., [@B15] | 12 weeks of progressive resistance training on the non-dominant arm. Progression from 3 sets of 12 repetitions to 3 sets of 6 repetitions, with a concurrent increase in load. | 602 subjects (355 females) aged 18--40 (*n* = 133 XX genotype). | In females, the X allele was associated with greater absolute and relative improvements in 1RM vs. RR genotypes. |
| Pereira et al., [@B54] | 12-week high-speed power training programme. Progression from 3 sets of 10 repetitions @ 40% 1RM to 3 sets of 4 repetitions @ 75% 1RM. | 139 older (mean = 65.5 years) Caucasian females (*n* = 54 XX genotype). | RR genotypes exhibited greater performance improvements (maximal strength, CMJ) compared to X allele carriers. |
| Erskine et al., [@B27] | 9-week unilateral knee extension resistance training programme. | 51 previously untrained young males (*n* = 7 XX genotype). | Responses to resistance training were independent of *ACTN3* genotype. |
| Silva et al., [@B64] | 18-week (3 sessions per week) endurance training programme, comprised primarily of 60-min runs, individually controlled by heart rate monitor. | 206 male police recruits (*n* = 33 XX genotype). | At baseline, XX genotypes had greater VO~2~ scores than RR genotypes. Following training, this difference disappeared; i.e., RR genotypes improved more than XX. |
| Delmonico et al., [@B21] | 10-week (3 sessions per week) unilateral knee extensor strength training comprised of 4--5 sets of 10 repetitions. | 155 older (50--85 years) subjects (*n* = 86 females; *n* = 39 XX genotype). | Change in absolute peak power greater in RR vs. XX (*p* = 0.07) for males. Relative peak power change greater in RR vs. XX (*p* = 0.02). |

The variation between studies is likely due to heterogeneity at baseline between genotypes, and to differences in exercise prescription. Given the prevalence of the R allele in elite speed-power and strength athletes (Yang et al., [@B74]; Vincent et al., [@B69]), it has been speculated that R allele carriers would respond best to speed-power and strength training (Kikuchi and Nakazato, [@B38]). However, as illustrated here, there is at present a paucity of data to support this position. Nevertheless, there are some potential molecular mechanisms that could underpin this proposition. Norman et al. ([@B51]) reported that mammalian target of rapamycin (mTOR) and p70S6k phosphorylation was greater in R allele carriers than in XX genotypes following sprint exercise. Both mTOR and p70S6k regulate skeletal muscle hypertrophy (Bodine et al., [@B9]; Song et al., [@B65]), providing mechanistic support for the proposition that hypertrophy, and hence strength and power improvements, should be greater in R allele carriers following resistance training. In addition, Ahmetov et al. ([@B1]) reported that testosterone levels were higher in male and female athletes with at least one R allele compared to XX genotypes. Whilst the direction of this association is not clear, it again supplies a possible mechanism explaining why R allele carriers may experience greater training-induced strength improvements. A single study examined the impact of this polymorphism on the magnitude of VO~2~ improvements following endurance training (Silva et al., [@B64]). Here, VO~2~ scores at baseline were greater in XX genotypes, but following training this difference was eliminated, indicating that RR genotypes had a greater percentage improvement following training.
The participants in this cohort were police recruits. Given that the X allele is potentially associated with elite endurance athlete status (Yang et al., [@B74]), it is not clear whether these results would be mirrored in elite endurance athletes. Clearly, further work is required to fully understand what relationship, if any, exists between *ACTN3* and improvements in aerobic capacity following training.

*ACTN3* as a modulator of post-exercise recovery {#s3}
================================================

*ACTN3* R577X has also been associated with exercise-induced muscle damage; as increased muscle damage will likely slow recovery, this suggests a potential modifying effect of this polymorphism on between-session recovery. Of the eight studies identified that examined the impact of this polymorphism on post-exercise muscle damage (Table [2](#T2){ref-type="table"}), six reported that the X allele and/or the XX genotype was associated with higher levels of muscle damage markers (Vincent et al., [@B70]; Djarova et al., [@B24]; Pimenta et al., [@B55]; Belli et al., [@B7]; Del Coso et al., [@B18],[@B19]). One study found no effect of the polymorphism (Clarkson et al., [@B17]), and one found that RR genotypes experienced a greater exercise-induced reduction in force than XX genotypes (Venckunas et al., [@B68]). An additional study (Del Coso et al., [@B20]) examined the impact of *ACTN3*, as part of a TGS, on the creatine kinase (CK) response following a marathon race. Within this TGS, the R allele was considered protective against increased CK concentrations. The results indicated that athletes with a higher TGS, and therefore greater genetic protection, had a lower CK response to the marathon. Whilst this is not direct evidence of the R allele's protective effect (the other SNPs in the TGS may have conveyed it), it nevertheless strengthens the supporting argument.
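The total genotype score (TGS) approach mentioned above combines allele counts across a panel of SNPs into a single 0--100 score, in the style popularized by Williams and Folland. A minimal sketch of the general method; the SNP names and genotype scores below are illustrative placeholders, not the actual panels used by Jones et al. or Del Coso et al.:

```python
def total_genotype_score(genotype_scores):
    """Total genotype score (TGS).

    genotype_scores maps each SNP to 0, 1, or 2 depending on how many
    'favorable' alleles the individual carries at that SNP. The sum is
    rescaled so that carrying every favorable allele scores 100.
    """
    max_score = 2 * len(genotype_scores)  # two favorable alleles possible per SNP
    return 100 * sum(genotype_scores.values()) / max_score

# Hypothetical athlete: ACTN3 RX carries one 'power' (R) allele, so scores 1.
# SNP_B and SNP_C are invented placeholders for other panel SNPs.
athlete = {"ACTN3_R577X": 1, "SNP_B": 2, "SNP_C": 0}
print(round(total_genotype_score(athlete), 1))  # 3 of 6 favorable alleles -> 50.0
```

In the studies discussed here, a higher TGS (more "protective" or "power" alleles across the panel) was associated with better responses to high-intensity training or smaller post-race CK rises; the score itself is simply this normalized allele count.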
###### Studies examining the interaction between *ACTN3* genotype and exercise recovery.

| **Study** | **Method** | **Sample characteristics** | **Main outcome** |
|---|---|---|---|
| Pimenta et al., [@B55] | Eccentric-contraction based training session. | 37 male professional soccer players based in Brazil (*n* = 9 XX genotype). | Greater creatine kinase (CK) activity in XX genotypes vs. RR. |
| Clarkson et al., [@B17] | 50 maximal eccentric contractions of the elbow flexors. | 157 male (*n* = 78) and female subjects of various ethnicities (*n* = 115 Caucasians; *n* = 48 XX genotype). | No association of R577X with increases in CK and myoglobin (Mb) following eccentric exercise. |
| Vincent et al., [@B70] | 4 × 20 maximal single-leg eccentric knee extensions. | 19 healthy young males (*n* = 10 XX genotype). | XX genotypes had greater peak CK activity post-training compared to RR genotypes, and reported greater increases in muscle pain. |
| Venckunas et al., [@B68] | Two bouts of 50 drop jumps. | 18 young males (*n* = 9 XX genotype). | RR genotypes showed the greatest decrease in voluntary force, and slower recovery, compared to XX genotypes. |
| Djarova et al., [@B24] | Resting blood sample. | 31 South African Zulu males (*n* = 14 cricketers and *n* = 17 controls). No XX genotypes. | R allele associated with lower CK levels (RR vs. RX). |
| Del Coso et al., [@B19] | Marathon race, pre- and post-race countermovement jump (CMJ). | 71 experienced runners (*n* = 8 XX genotype). | X allele carriers had higher CK and Mb levels post-race compared to RR homozygotes, and a greater reduction in leg muscle power. |
| Del Coso et al., [@B18] | Triathlon competition (1.9 km swim, 75 km cycle, 21.1 km run), pre- and post-race CMJ. | 23 healthy, experienced triathletes (*n* = 19 males; *n* = 5 XX genotype). | X allele carriers had a more pronounced jump height reduction compared to RR genotypes, and a tendency toward higher post-race Mb (*P* = 0.10) and CK (*P* = 0.06) concentrations compared to RR homozygotes. |
| Belli et al., [@B7] | 37.1 km adventure race (22.1 km mountain biking, 10.9 km trekking, 4.1 km water trekking, 30 m rope course). | 20 well-trained athletes (*n* = 15 males; *n* = 4 XX genotype). | XX genotypes had higher concentrations of serum Mb, CK, lactate dehydrogenase (LDH), and AST compared to R allele carriers. |

The increase in post-exercise muscle damage is likely due to structural changes associated with this polymorphism. Alpha-actinin-3 is expressed only in fast-twitch muscle fibers, and X allele homozygotes are alpha-actinin-3 deficient; instead, they upregulate production of alpha-actinin-2 in these fast-twitch fibers (MacArthur et al., [@B47]; Seto et al., [@B62]). Both alpha-actinin-3 (encoded by *ACTN3*) and alpha-actinin-2 are major structural components of the Z-disks within muscle fibers (Beggs et al., [@B6]). The Z-disk itself is vulnerable to injury during eccentric contractions (Friden and Lieber, [@B30]), and knock-out mouse models illustrate that Z-disks with increased alpha-actinin-2 concentrations are less stable during contraction (Seto et al., [@B62]). A number of the studies in Table [2](#T2){ref-type="table"} exclusively utilized eccentric contractions, whilst others focused on prolonged endurance events that include running, which incorporates eccentric contractions as part of the stretch-shortening cycle with each stride (Komi, [@B41]).
The overall consensus of these studies is that the X allele, and/or the XX genotype, is associated with elevated markers of muscle damage following exercise with an eccentric component, whether through direct eccentric muscle action (Vincent et al., [@B70]), sport-specific training (Pimenta et al., [@B55]), or a competitive event requiring eccentric contractions (Belli et al., [@B7]; Del Coso et al., [@B18],[@B19]). However, there are a number of weaknesses to these studies that potentially limit the strength of these findings. The overall subject number is modest, with a total of 376 participants (mean 47 per study) across the eight studies; indeed, the study with the greatest number of subjects, Clarkson et al. ([@B17]), reported no modifying effect of this polymorphism on post-exercise muscle damage. The total number of XX genotypes was also low, with 85 reported across the studies. This is partly a function of the lower prevalence (\~18%) of this genotype, but again, the study with the largest number of XX genotypes (*n* = 48) found no effect of this polymorphism (Clarkson et al., [@B17]). It is clear that, in order to increase the robustness of this association, further work with greater subject numbers is required.

*ACTN3* as a modulator of exercise-associated injury risk {#s4}
=========================================================

We found six studies examining the association between *ACTN3* genotype and sports injury risk (Table [3](#T3){ref-type="table"}). Three of these examined ankle sprains (Kim et al., [@B40]; Shang et al., [@B63]; Qi et al., [@B57]), with one each for non-contact injuries (Iwao-Koizumi et al., [@B35]), injuries in professional soccer players (Massidda et al., [@B50]), and exertional rhabdomyolysis (ER) (Deuster et al., [@B23]). Whilst ER is strongly related to increased CK following exercise (Clarkson and Ebbeling, [@B16]; Brancaccio et al., [@B12]), because it requires medical treatment we classified it as an injury.
Of these papers, five reported a protective effect of the R allele and/or the RR genotype against injury (Deuster et al., [@B23]; Kim et al., [@B40]; Shang et al., [@B63]; Qi et al., [@B57]; Massidda et al., [@B50]). Specifically, Deuster et al. ([@B23]) found that XX genotypes were almost three times more likely to be ER patients than R allele carriers. Qi et al. ([@B57]) reported a significantly lower frequency of the RR genotype in a group of ankle sprain patients vs. controls. Kim et al. ([@B40]) found that XX genotypes were 4.7 times more likely to suffer an ankle injury than R allele carriers in their cohort of ballerinas. Shang et al. ([@B63]) reported the R allele as significantly under-represented in a cohort of military recruits reporting ankle sprains. Finally, Massidda et al. ([@B50]) demonstrated that XX genotypes were 2.6 times more likely to suffer an injury than RR genotypes, and that these injuries were more likely to be of increased severity. Only one study (Iwao-Koizumi et al., [@B35]) reported that the R allele was associated with an increased risk (OR = 2.52) of muscle injury compared to X allele carriers, in a female cohort.

###### Studies examining the interaction between *ACTN3* genotype and sports injury.

| **Study** | **Method** | **Sample characteristics** | **Main outcome** |
|---|---|---|---|
| Iwao-Koizumi et al., [@B35] | Sports injury data survey. | 99 female students (*n* = 34 XX genotype). | R allele associated with an increased odds ratio (OR = 2.52) of muscle injury compared to the X allele. |
| Deuster et al., [@B23] | Controls: lower-body exercise test. Cases: anonymous blood or tissue sample collected after an exertional rhabdomyolysis (ER) incident. | 134 controls and 47 ER patients (*n* = 38 XX genotype). | XX genotypes 2.97 times more likely to be ER cases compared to R allele carriers. |
| Qi et al., [@B57] | Ankle sprain case-control analysis. | 100 patients with non-acute ankle sprain vs. 100 healthy controls (*n* = 89 XX genotype). | Significantly lower frequency of the RR genotype in the ankle sprain group compared to controls (*p* = 0.001). |
| Kim et al., [@B40] | Ankle injury case-control analysis. | 97 elite ballerinas and 203 normal female adults (*n* = 65 XX genotype). | XX genotypes 4.7 times more likely to suffer an ankle injury than R allele carriers. |
| Shang et al., [@B63] | Ankle injury case-control analysis. | 142 non-acute ankle sprain patients and 280 physically active controls (*n* = 87 XX genotype). All military recruits. | RR genotype and R allele significantly under-represented in the acute ankle injury group. |
| Massidda et al., [@B50] | Case-control, genotype-phenotype association study. | 257 male professional Italian soccer players and 265 non-athletic controls. | XX players were 2.6 times more likely to suffer a sports injury than RR genotypes. Severe injuries were also more likely in X allele carriers compared to RR genotypes. |

Regarding ER, the likely mechanism is similar to that discussed in the post-exercise muscle damage section: increased damage at the Z-disk during exercise. For ankle sprains, the mechanism is potentially related to muscle function. R allele carriers tend to have greater levels of muscle mass (MacArthur and North, [@B45]), and specifically of type-II fibers (Vincent et al., [@B69]), indicating that both the RX and RR genotypes tend to have increased strength capabilities (Pimenta et al., [@B56]). For other soft-tissue injury types, again, the decreased potential for damage at the Z-disk likely reduces injury risk.
This would be particularly true for eccentric contractions; given the importance of this contraction type in the etiology of hamstring injuries, this could be a further causative mechanism (Askling et al., [@B4]), alongside that of reduced muscle strength (Yamamoto, [@B73]). Alongside the modifying role of *ACTN3* in muscle strength and injury risk, emerging evidence suggests this SNP may also impact flexibility and muscle stiffness. Two studies reported an association between the RR genotype and a decreased flexibility score in the sit-and-reach test (Zempo et al., [@B77]; Kikuchi et al., [@B39]). Conversely, Kim et al. ([@B40]) reported that XX genotypes had decreased flexibility in the same test. This lack of consensus is largely due to the small number of studies, with greater clarity expected as research in the area evolves. It also mirrors the lack of consensus as to whether flexibility increases or decreases the risk of injury (Gleim and McHugh, [@B32]), indicating the complex, multifactorial nature of injuries and their development (Bahr and Holme, [@B5]). In summary, it appears that the R allele of *ACTN3* is somewhat protective against injuries. The mechanisms underpinning this are likely varied, and related to a combination of the modifying effects of this SNP on strength (particularly eccentric strength), exercise-induced muscle damage, and flexibility.

Discussion {#s5}
==========

The results of this mini-review indicate that, aside from its established role in sporting performance, the *ACTN3* R577X polymorphism also potentially modifies exercise adaptation, exercise recovery, and exercise-associated injury risk. As this polymorphism directly influences both muscle structure and muscle fiber phenotype, this is perhaps unsurprising, and it points to the potential use of knowledge of this polymorphism in the development of personalized training programmes. However, it is important to consider the limitations surrounding many of these studies.
The subject numbers in the considered studies tended to be low, with large heterogeneity between study cohorts, ranging from untrained subjects to professional sports people, as well as differences in sex. Both of these aspects will impact the study findings; the effect of this polymorphism may be smaller in untrained subjects, for example, whereas in elite, well-trained athletes, who are likely closer to their genetic ceiling, the effect may be greater. The low subject numbers are troubling given the relatively low XX genotype frequency, which is \~18% in Caucasian cohorts and even lower in African and African-American cohorts. As such, XX genotypes are considerably under-represented across the research considered here. These limitations indicate that further work is required to fully understand the impact of this polymorphism on these phenotypes. That said, there is some consistency between trials, allowing speculative guidelines to be developed for the use of genetic information in personalized training. XX genotypes potentially experience increased muscle damage following exercise that includes an eccentric component (Pimenta et al., [@B55]; Belli et al., [@B7]; Del Coso et al., [@B18],[@B19]). This information may, consequently, be used to guide between-session recovery and, during the competitive season, post-competition recovery times. For example, in an elite soccer club, *ACTN3* genotype could be utilized alongside other well-established markers to determine training intensity in the days following a match, with players genetically predisposed to increased muscle damage either having a longer recovery period or receiving additional recovery interventions such as cold-water immersion. In addition, recent research has illustrated the positive impact of Nordic hamstring exercises on hamstring injury risk (van der Horst et al., [@B67]), making these exercises increasingly common in professional sports teams.
These exercises have a large eccentric component, upon which this polymorphism may have a direct effect. As such, it would be expected that XX genotypes would experience increased muscle soreness and damage following these exercises, potentially impacting the timing of their use within a training programme. Focusing on sporting injuries, the general consensus from the studies identified is that the X allele increases the risk of ankle injuries (Kim et al., [@B40]; Shang et al., [@B63]; Qi et al., [@B57]) and general sporting injury (Massidda et al., [@B50]). Again, this information could guide training interventions; in this case, X allele carriers might undertake additional general strengthening exercises and neuromuscular training targeting injury risk reduction. Furthermore, knowledge of this information could increase athlete motivation to undertake these exercises (Goodlin et al., [@B33]). Finally, maximizing the training response is crucial, both to elite athletes looking to improve by fractions of a second and to beginners looking to decrease their risk of disease. Increasingly, there is evidence that polymorphisms, including *ACTN3* R577X, can impact this adaptive process (Delmonico et al., [@B21]; Pereira et al., [@B54]). If further research replicates these early findings, then this information could also be used in the development of training programmes. Regarding *ACTN3*, at present it appears that R allele carriers potentially exhibit greater increases in strength and power following high-load resistance training (Delmonico et al., [@B21]). As such, Kikuchi and Nakazato ([@B38]) speculate that R allele carriers should prioritize high-load, low-repetition resistance training when improvements in muscle strength are required, and high-intensity interval (HIT) training to specifically elicit improvements in VO~2max~.

Conclusion {#s6}
==========

There is a clear, undoubted impact of genetics on both sporting performance and exercise adaptation.
In this regard, one of the most well-studied genes is *ACTN3*, which has reliably been shown to impact speed-power and strength phenotypes. However, emerging research indicates that this polymorphism may also impact other exercise-associated variables, including training adaptation, post-exercise recovery, and exercise-associated injuries; this research is summarized in Figure [1](#F1){ref-type="fig"}. This information is important, not just because it illustrates the wide-ranging impact SNPs can have, but also because it represents an opportunity to personalize, and therefore enhance, training guidelines. At present, there are no best-practice guidelines for the use of genetic information in either elite sport or the general public. However, sports teams have been using genetic information for over 10 years (Dennis, [@B22]), and continue to do so. Consequently, the development of such guidelines represents an important step from lab to practice. Clearly, further research is required to fully develop these guidelines, and at present such information remains speculative. Nevertheless, the use of genetic information represents an opportunity to enhance training prescription and outcomes in exercisers of all abilities.

![A summary of the potential wider implications of *ACTN3* genotype on outcomes from exercise.](fphys-08-01080-g0001){#F1}

Author contributions {#s7}
====================

CP: Conceived the idea for this manuscript, and wrote the initial draft. JK: Provided feedback on the initial draft, and made valuable changes to the manuscript, as well as providing direction. All the authors made contributions in drafting the manuscript and have approved the final version.

Conflict of interest statement
------------------------------

CP is an employee of DNAFit Ltd., a genetic testing company. He received no payment for the production of this article, which was completed as part of his Professional Doctorate studies at the University of Central Lancashire.
The other author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Both authors would like to thank Tshepo Mofokeng for his assistance in the design of Figure [1](#F1){ref-type="fig"}. [^1]: Edited by: Kimberly Huey, Drake University, United States [^2]: Reviewed by: Rudy Valentine, Iowa State University, United States; Moh H. Malek, Wayne State University, United States [^3]: This article was submitted to Exercise Physiology, a section of the journal Frontiers in Physiology
g. 1 Let s be (2/6 + 3)*30/20. Suppose 5*g - 78 = -4*c, s*g = 5*c - 8*c + 81. Solve 4*t - g = 2*i, -2*i - 25 = -5*t - 4 for t. 3 Let l(d) = 2*d. Let y be l(2). Suppose 20*c - 86 = 194. Let i(q) = q**2 - 10*q + 10. Let m be i(9). Solve 3*w + y*v + 4 = -m, -5*w - v = c for w. -3 Suppose -30 = -19*p + 8. Suppose -x + 2*c + 9 = 0, 5*x - 5*c = -p*c + 31. Solve 0 = -x*t - 2*a + 15, 3*t + 5 = -a + 15 for t. 5 Let u(t) = -t**3 - 6*t**2 + 5*t - 2. Let j be u(-7). Let l be (-15)/(-3) + -1 + 12 + -14 + 3. Solve -18 = -3*r - l*i + 8, 3*i = j for r. 2 Let z be 3/(-18) + 939/18. Suppose -a + p + 29 = 0, 2*a - 3*p = z + 10. Let v = 3846 + -3840. Solve -l - v + 2 = 0, a = -3*o - 4*l for o. -3 Let d = -34 - -19. Let q = 443 - 456. Let o = q - d. Solve 4 = 3*u - 4*u, -o*u = 5*g + 23 for g. -3 Suppose -b = -3*m - 26 + 28, -4*b - 3*m = -37. Suppose 1 = 2*k + k - l, -l = k - b. Solve 3 = -4*n - 4*x - 33, -3*n + 2*x = k for n. -4 Let j = -6605 - -6608. Solve 0 = 3*r + 5*n - 0 + 3, 0 = 4*r - j*n + 4 for r. -1 Let x be 2/(-12) - ((-295)/(-30) - 9). Let z be x/(-4) + (-2 - 60/(-16)). Solve -7 = 3*o - 0*g + z*g, -3*o = 4*g + 5 for o. -3 Suppose -5*j = -3*s - 43, -3*s = 3*j + 75 - 72. Solve -4*m = j*q + 4, -2*m - 4 = -2*q - 2 for m. -1 Suppose -5*m = 2*u + 27, 30 = 4*u - 4*m + 2*m. Solve 2*a - 5*r = -3*r - u, 0 = -5*a - 5*r - 40 for a. -5 Let a be (1 + 6)*360/252. Suppose a = 6*u - 20. Solve -3*r - 5*y = 13, -u*y - 8 = -4*r + 2*r for r. -1 Let n = -65 - -83. Let h(a) = -19 - n*a + 2*a**2 + 12*a + 26*a. Let y be h(-11). Solve p = -y*p - 4*k - 4, p = 3*k - 13 for p. -4 Let y be 33/77 + 148/14. Suppose 12*u - 18*u = 48. Let a = u + y. Solve n + 3 = 5*o, -3*o - a*n + 3 = -6*o for o. 1 Suppose 19*c - 38 + 26 = 45. Solve 6 = -u - 4*r - r, 3*r = -c for u. -1 Let l = 27071 + -27057. Solve 2*f + 3*c = l, 3*f = 5*c - 8 - 9 for f. 1 Let b(k) = -k**3 + 11*k**2 - 24*k + 146. Let l be b(10). Solve 4*h - 3*c - 35 = 0, 0 = -7*h + l*h - 5*c - 20 for h. 5 Let p be ((-74)/296)/((1/8)/(-1)). 
Let m(z) = -51*z + 106. Let k be m(p). Solve -5*r - 3*y - 1 = 0, -k*y = -3*r - 3 + 14 for r. 1 Suppose -2*s - 8 = -16. Suppose -s*r - 2*n - 3*n + 65 = 0, 5*r = -2*n + 77. Suppose 17*o - r = 12*o. Solve -3*x = -0*x - 4*a + 14, -x + o*a - 13 = 0 for x. 2 Let i(f) = -45*f + 4. Let p = 266 + -266. Let c be i(p). Solve -3*b = 2*m - 20, 8 = 3*b - c for m. 4 Suppose 0 = -2*c - 5*u + 99, 4*c - 3*u - 232 = -c. Let a(v) = -4*v + c - 30 + 3*v. Let t be a(15). Solve j + 2*m + 2*m = 14, -t*m = -4*j - 16 for j. -2 Let w(h) = h**2 - 8*h - 8. Let c be w(-3). Suppose -a - 4*a - c = 0, 5 = 5*o + 3*a. Solve 0 = i + o*m + 12, -36 = 2*i - 7*i + 4*m for i. 4 Let q = -99 - -127. Suppose q*x = -21 + 189. Solve 4*t + z + 9 = 0, z + x = 3*z for t. -3 Let z = 10684 + -10665. Solve -4*g = -5*x + 6*x - z, 0 = -4*g + 16 for x. 3 Let k be 2 - 1 - (-71 - 0). Suppose -4*i - 2*l = -i - 20, -3*l = -3*i. Solve 3*j = -w - 7, 0 = i*j - 5*w - 31 + k for j. -4 Suppose 5 - 5 = 83*u - 0. Solve 3*c + u + 10 = w, 14 = 3*w - c for w. 4 Let a = -6267 - -6267. Solve 3*w + 3*x = a, 0 = w + 2*x - 7 + 3 for w. -4 Let l(u) = 33*u - 460. Let j be l(14). Solve -h + 94 = 90, 3*v = h + j for v. 2 Let v = 1079 + -1054. Solve -18 - 1 = -5*l - 3*r, 3*l - 5*r - v = 0 for l. 5 Suppose x + x = 12. Let a(d) = 45*d - 4 - 115*d + 41*d + 36*d - d**2. Let m be a(x). Solve -5*n - m*t = -22, 4 - 6 = -2*t for n. 4 Suppose 0 = -14*y + 15*y - 88. Let t = y + -79. Suppose 9 = 2*z - t. Solve 2*o + z + 11 = -4*q, 0 = 4*q + 4*o + 24 for q. -4 Let v(s) = -s**2 - 56*s - 781. Let k be v(-29). Solve c = -2*o - 5, -2*c + 6*c = k*o for c. -1 Let q be -6 - (7 + -8 + -33). Suppose q + 53 = 27*d. Solve -3*o + 15 = 0, 3*g - d*o + 0 + 27 = 0 for g. -4 Let h be 18 - (42 + -7 + -21). Solve 0 = -u - p + 14 - 13, 12 = h*u + 2*p for u. 5 Let h(m) = -m**2 - 5*m - 1. Let k be h(-4). Let q = 404 - 364. Let a(l) = l**2 - 44*l + 160. Let s be a(q). Solve 24 = 5*u - j, s = -u + k*j + 2 for u. 5 Let c be 14 + ((-18)/24 - 13/4). Let o(l) = l**2 - 12*l + 22. 
Let v be o(c). Solve 2*f = -0*f - v*h - 16, 2*h - 4 = 3*f for f. -4 Let m(s) = -s**3 - 4*s**2 + 2*s - 3. Let f be m(-5). Let a be ((-25)/(-5) - -4) + -4. Let o be 20/a*-1 - -7. Solve 10 = -2*r + 4*r + o*p, r + 5*p = f for r. 2 Suppose -2122*q + 2146*q - 1008 = 0. Solve o + 4*t + 10 = 0, 0 = -q*o + 44*o + t + 6 for o. -2 Let t be (-1 + (-42)/(-12))*(22 - 0). Let p = -41 + t. Let d(s) = -s + 14. Let j be d(p). Solve -5*q + j*q = -3*o + 3, -5*o = -q - 5 for o. 1 Let j be 6/27 + 100/36. Suppose 5*x - j*x + d = 3, 26 = 4*x - 2*d. Suppose 16*b - 43 = 117. Solve -3*w + 2*c + b = -w, -2*w - x = 5*c for w. 3 Suppose 14041 = -0*r - 20*r + 15181. Solve 2*d + 59 - r = 5*b, -3*d = -5*b - 2 for d. 4 Let n = 35 - 29. Let j be (n + -5)/(4*(-2)/(-32)). Solve j - 5 = 2*y + 5*d, 4 = 4*d for y. -3 Let r = -18258 - -18135. Suppose 670 = -v + 6*v. Let p = r + v. Solve -2*h + 4*l = -l + p, 4*h = -l + 11 for h. 2 Suppose -4*v + 346 = -2722. Let y = v + -746. Solve 5*b = 2*g + y, -3*g + 4*b = 9 + 12 for g. -3 Let v = 1720 - 1715. Solve -2*n + 26 = 3*z + 38, -v*n - 25 = 5*z for n. -3 Suppose l - 22005 = -26*l. Let i = 817 - l. Solve -20 = i*s + 2*v, 6*v = 5*s + 5*v + 20 for s. -5 Let t be 2/(-4 + (-50)/(-15)). Let q be (-1)/(1*t/6) + 5. Let d be 41/4 - q/28. Solve -3*i = 2*i + n + d, -16 = 2*i - 2*n for i. -3 Let z(u) = -10*u + 6. Let g be z(-2). Let r(k) = -8*k**2 - k**3 + 19*k**2 - 9*k + k - g + 9. Let o be r(10). Solve 4*a + 13 = s, -7*a = -3*a + s + o for a. -2 Let z be ((-7)/2)/((4 + 1)/(-10)) + -13. Suppose 5 = 2*p - 35. Let d be z/9*-1 - p/(-15). Solve d*h = -5*o + 6, -o + 3*h + 2 = -6 for o. 2 Let r = 2 + 3. Let a = 32 + 9. Suppose 4*i + 25 = a. Solve -r*s + 45 = 5*w, 0 = 2*w + s + i*s - 30 for w. 5 Suppose 5*b + 3*b = 5*b. Suppose 0 = j, -17*v + 12*v + 4*j + 55 = b. Solve 2*a - 3*p - 12 = 0, 5*a - 2*p = -4*p + v for a. 3 Let u be (0 - 3) + -3 + 10. Suppose 0 = r + 4*r - 10. Suppose -4*s + r*d + 11 = 3, -4*s + 2 = -5*d. Solve 1 = f, -s*k - 4 = -u*f - 6 for k. 
2 Let w be (-35)/(-42)*(-108)/(-18). Solve -25 = -w*n + 4*a, -25 = -2*n - 3*n - 4*a for n. 5 Suppose s - 182 = -13*s. Let h(i) = -10*i - 4*i + s*i - 7. Let a be h(-9). Solve 3*q + 4*c + 1 + 20 = 0, 9 = a*q - 5*c for q. -3 Let z(k) = 2*k - 3. Let d be 3/((-3)/(-2) + -3). Let x be z(d). Let u(c) = c + 10. Let s be u(x). Solve -s*a - m + 12 = -0*m, 0 = -3*a + 2*m + 12 for a. 4 Let t(d) be the first derivative of -d**3/3 - 2*d**2 + 3*d - 24. Let v be t(-4). Let a = -6 + 14. Solve 5*m + 33 = -2*n, a = v*n + 2*m - 6*m for n. -4 Let x = 2238 - 2233. Solve -x*o - 87*d - 28 = -88*d, 3*o + 21 = 2*d for o. -5 Let s(v) = 6*v + 15. Let d be s(-2). Let u be 0/(1 + (-5)/(-5)). Suppose 5*h - h + 12 = 3*g, -12 = 4*g + 4*h. Solve z + 0*z - d*b + 4 = u, g = -3*b for z. -4 Let r = 63 - 43. Let x = r - 10. Suppose 2*s + 5*i = 20, 2*i + 2 - x = -5*s. Solve s = 3*g + 3*q + 18, -2*g + 5*q - 4*q - 6 = 0 for g. -4 Let z be (-9 - (-855)/60)/(12/16). Solve 2*c = 3*r + 8, -z + 13 = -3*r + 3*c for r. -4 Suppose 210*c + 2*d = 212*c - 106, -4*d + 12 = 0. Solve -c*m + 59*m - 15 = 0, -2*f = -m - 1 for f. 3 Suppose 5*k - 21 = 3*q, 10*k - 11 = 7*k + q. Solve -k*u + 9*v = 8*v - 7, u = v + 5 for u. 1 Let l(f) = -f**3 + 17*f. Let r be l(7). Let d be 2/8 + (-840)/r. Solve -25 = w + d*w, 2*i - 2*w = 10 for i. 0 Suppose -3*b = -4*b. Suppose -3*c + b*c - 4*m = 3, 3*c + 3*m = 0. Suppose -363*r + 18 = -354*r. Solve 0*l - r*h + 3 = c*l, 2*h - 1 = -l for l. 1 Let t(c) = -51*c + 637. Let u be t(12). Solve 3 = -2*o + 6*o - q, -5*o = 3*q - u for o. 2 Let w(b) = b**3 - 4*b**2 - 3*b - 12. Let a = -85 + 90. Let x be w(a). Let v be (-1)/(-2)*(x + 2). Solve 3*d - 3*r = 5 + 4, -r + 2 = v for d. 5 Suppose -2*p - 5*g + 3 - 8 = 0, p - 8 = g. Let j = 26188 - 26177. Solve -5*b + p = 5*r, 0 = 2*r - 3*r + 3*b - j for r. -2 Suppose 3*k = 4*w + 72, 443*k - 444*k = 2*w - 34. Solve -3*u = 2*t - 4*t - 8, 0 = -4*t - 5*u + k for t. 2 Suppose -u + 5 + 1 = 0. Suppose -u*i = -288 + 276. Solve n + i = 2*s + 5*n, s = 2*n + 9 for s. 
5 Let n(z) = -3*z**2 - 118*z + 80. Let w be n(-40). Suppose w = -23*f - 11 + 11. Solve 0 = -4*d + 2*r + 2, 5*d - 2*r + 5*r + 25 = f for d. -2 Let x(s) = 1 - 6*s + 10*s - 1 + 8*s**2 - 2. Let o be x(-1). Solve -o*f - 3*v - v = -6, 0 = 3*f + 2*v - 9 fo
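Each "Solve … for …" problem above reduces to a pair of linear equations in two unknowns, with the integer answer given inline. Answers of this kind can be checked mechanically with Cramer's rule over exact rationals; the sketch below is illustrative only (solve_2x2 is a hypothetical helper, not part of the problem set), worked against one problem from the text.

```python
# Minimal checker for two-variable linear problems, using Cramer's rule
# with exact rational arithmetic (no floating-point error).
from fractions import Fraction

def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 exactly.

    Returns (x, y) as Fractions; raises ZeroDivisionError if the
    system is singular (zero determinant).
    """
    det = a1 * b2 - a2 * b1
    x = Fraction(c1 * b2 - c2 * b1, det)
    y = Fraction(a1 * c2 - a2 * c1, det)
    return x, y

# One problem from the text: z = 10684 - 10665 = 19, then
#   -4*g = -5*x + 6*x - z   rearranges to   x + 4*g = 19
#   0 = -4*g + 16           rearranges to   0*x + 4*g = 16
x, g = solve_2x2(1, 4, 19, 0, 4, 16)
# x comes out to 3, matching the stated answer for x in the text
```

The exact-rational arithmetic matters here: these datasets grade on exact integer answers, so `Fraction` avoids the false mismatches that float division would introduce.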
TO BE PUBLISHED IN THE OFFICIAL REPORTS

OFFICE OF THE ATTORNEY GENERAL
State of California
DANIEL E. LUNGREN, Attorney General

OPINION No. 95-302, December 20, 1995
of DANIEL E. LUNGREN, Attorney General
ANTHONY S. Da VIGO, Deputy Attorney General

THE HONORABLE MILTON MARKS, MEMBER OF THE CALIFORNIA STATE SENATE, has requested an opinion on the following questions:

1. May a county rent space to a private, non-profit organization for the operation of a contemporary art museum in a building maintained by the county for use of a veterans' association, where the veterans' association has not violated the terms or conditions of the county's dedication for such use, has not consented to the revocation of the dedication for such use, and has not abandoned its use of the building?

2. May a charter county revoke its dedication of a building for use of a veterans' association without substituting alternative facilities, where the veterans' association has not violated the terms or conditions of the county's dedication for such use, has not consented to the revocation of the dedication for such use, and has not abandoned its use of the building?

CONCLUSIONS

1. A county may rent space to a private, non-profit organization for the operation of a contemporary art museum in a building maintained by the county for use of a veterans' association, even though the veterans' association has not violated the terms or conditions of the county's dedication for such use, has not consented to the revocation of the dedication for such use, and has not abandoned its use of the building, provided that the use of the building as a museum is incidental to, consistent with, and does not unduly interfere with the reasonable use of the building by the veterans' association.

2.
A charter county may not revoke its dedication of a building for use of a veterans' association without substituting alternative facilities, where the veterans' association has not violated the terms or conditions of the county's dedication for such use, has not consented to the revocation of the dedication for such use, and has not abandoned its use of the building.

ANALYSIS

Pursuant to the provisions of Military and Veterans Code section 1262,1 the board of supervisors of a county may "provide, maintain or provide and maintain buildings, memorial halls, meeting places, memorial parks, or recreation centers for the use or benefit of one or more veterans' associations." (See Wall v. State of California (1946) 73 Cal.App.2d 838, 840-841; 62 Ops.Cal.Atty.Gen. 655 (1979).) The provision of such a facility pursuant to section 1262 and its acceptance by the veterans' association constitutes a dedication for such use and benefit. (§ 1266.) The two questions presented for consideration require an interpretation of section 1266. Specifically, it must be determined whether space within a building dedicated for use of a veterans' association may be rented for the operation of a contemporary art museum and whether a charter county may revoke a dedication for such use without providing an alternative facility to the veterans' association in the absence of a violation, consent, or abandonment by the association.
Section 1266 provides: "Whenever a county has provided, maintained, or provided and maintained any building, memorial hall, meeting place, memorial park, or recreation center for the use or benefit of one or more veterans' associations, pursuant to Section 1262, the provision of that facility and its acceptance by the veterans' association constitutes a dedication of that property to a public purpose, and the county may not revoke the dedication, so long as the veterans' association has not violated the terms and conditions of the dedication, unless it dedicates substitute facilities or unless the veterans' organization has either consented to the proposed county action or has abandoned its use of the facilities."

1. Rental of Dedicated Space

In Allied Architects' Assn. v. Payne (1923) 192 Cal. 431, the Supreme Court held that the dedication of a building by a county for the exclusive use of a veterans' association did not violate the constitutional prohibition against the making of gifts of public funds. After a lengthy analysis the court concluded: "It follows from what has been said that the board of supervisors, acting in pursuance of section 4041f of the Political Code [now section 1262], is empowered to erect as a memorial hall the building to be known and designated as Victory Hall. Furthermore, it is empowered to limit the use of the hall to a meeting place for associations of veterans who have served the United States honorably in its wars. We do not, however, wish to be understood as holding that the legislature has not the power to enlarge the scope of the use of the hall to include kindred organizations or to open it to the public at large if future exigencies indicate that a wider use of the hall would enhance its usefulness and better subserve the public needs." (Id., at p. 440.)

1 All section references herein are to the Military and Veterans Code.

Six years later in Gridley Camp No. 104 v. Board of Supervisors (1929) 98 Cal.App.
585, the Court of Appeal explained the Payne decision as follows: "In behalf of the petitioners it is argued that the case of Allied Architects' Association of Los Angeles v. Payne, 192 Cal. 431, holds and adjudges that the Board of Supervisors, or the committee or custodian placed in charge of such building possesses no authority to permit the use thereof by any other than a veteran association. While there is language in the opinion in that case which is open to the construction that the author of the opinion entertained the idea that such buildings should properly be limited to the exclusive use of such organizations, and the case does hold that the Board of Supervisors may erect a building under the act which is devoted exclusively to the use of such organizations, the case does not go to the extent contended for by the petitioners. It is to be observed also that the case primarily was based upon a resolution of the Board of Supervisors of the county of Los Angeles, which specified that the building to be erected should be exclusively used as a meeting place for patriotic, fraternal and benevolent associations `whose membership shall be composed only of veterans,' etc. "While we are of the opinion that under the section of the Political Code authorizing the construction of memorial buildings, boards of supervisors possess the authority and power to limit the use thereof and of every room therein, exclusively to veterans, it does not follow that the language should be so narrowly construed as to prevent the incidental use thereof or the incidental use of the rooms therein, such as the main auditorium, in such a manner as not to interfere with the main purposes for which the building has been erected, and at times when such rooms would otherwise be vacant. While the section of the code authorizes the construction of such buildings, in the nature of things and in the ordinary course of events associations of veterans of wars that are past will cease to be. 
This is peculiarly applicable to cities no larger than the town of Gridley and it would be carrying the idea further than we think the language of the act requires to hold that when the veteran associations referred to have ceased to be, the building must remain vacant or stand as an empty monument. The complaint in this case is directed particularly to the use permitted of the auditorium, but from the facts before us it is apparent that the auditorium having a seating capacity of 1200 is designed for use only on special occasions, and under such circumstances we see no reason why the Board of Supervisors or the committee placed by it in control of the building, should not allow the incidental use of the auditorium for purposes not inconsistent with the objects for which the building was erected, or which do not obstruct or interfere with the use of the building by the different veteran organizations, either free of charge or for a stated compensation, and thus aid in defraying the expenses of maintaining the building." (Id., at pp. 591-592.) On numerous occasions we have alluded to the issue of the incidental use of dedicated facilities. In 5 Ops.Cal.Atty.Gen. 155 (1945), for example, we concluded that a board of supervisors may allow the use of a veterans' memorial hall by a non-veterans organization, where such use is not specifically restricted and would not interfere with its use by veterans. Similarly, in 13 Ops.Cal.Atty.Gen. 56 (1949), respecting the use of such a facility in Imperial County, we stated in part: ". . .
We think, therefore, that the construction should be that where a veterans' association is given the use of the building for a regular meeting place or recreation center, such organization must be composed solely of veterans, but that this should not prohibit the supervisors from permitting the use of such buildings or parts thereof for other purposes incidental to the main purpose by the State and Federal Governments if the board, in the exercise of a sound discretion, determines that such use will not interfere with the main use or purpose for which such buildings were constructed." (Id., at p. 60; see also 16 Ops.Cal.Atty.Gen. 64 (1950).) In 1955 the Legislature essentially codified the Gridley decision by enacting section 1264. (Stats. 1955, ch. 1604, § 1.) Section 1264 provides: "The governing body maintaining any facilities constructed or maintained pursuant to this chapter may provide for the use of such facilities by persons or organizations other than veterans, either free of charge or for stated compensation to aid in defraying the cost of maintenance, for any purpose not inconsistent with the continued use pursuant to this chapter, when such use will not unduly interfere with the reasonable use of the facilities by veterans' associations." In 1989, however, the Legislature enacted section 1266, as set forth at the outset. (Stats. 1989, ch. 102, § 1.) Section 1266 speaks of the provision by a county of a facility and its acceptance by a veterans' association as a "dedication" to a public purpose. Does such terminology affect or modify in any respect the right of a board of supervisors under the terms of section 1264 to provide for the use of a dedicated facility by persons other than veterans? In our view neither the term "dedication" nor the language of section 1266 as a whole has any such effect.
First, section 1266 pertains specifically to the revocation of a dedication, as distinguished from a county's authority to allow consistent uses of a dedicated facility. Second, the words of a statute must be construed in context, keeping in mind the statutory purpose, and statutes or sections relating to the same subject must be harmonized, both internally and with each other, to the extent possible. (Walnut Creek Manor v. Fair Employment & Housing Com. (1991) 54 Cal.3d 245, 268; 78 Ops.Cal.Atty.Gen. 253, 260 (1995); 78 Ops.Cal.Atty.Gen. 247, 251 (1995).) We may assume that had the Legislature intended to modify the meaning of section 1264 and the longstanding judicial precedent embodied therein, it would have amended that section so as to conform with such intention. (Cf. 78 Ops.Cal.Atty.Gen., supra, at 260; 75 Ops.Cal.Atty.Gen. 256, 260 (1992).) Third, the Legislature, at the time of the enactment of section 1266, expressly stated in an uncodified provision: "It is the intent of the Legislature, in enacting Section 1266 of the Military and Veterans Code, to codify the holding of the court in the case of Gridley Camp No. 104 v. Board of Supervisors, 98 Cal.App. 585." (Stats. 1989, ch. 102, § 2.) The grant of authority contained in section 1264 thus remains unchanged by the enactment of section 1266. Accordingly, it is concluded that a county may rent space to a private, non-profit organization for the operation of a contemporary art museum in a building maintained by the county for the use of a veterans' association, even though the veterans' association has not violated the terms or conditions of the county's dedication for such use, has not consented to the revocation of the dedication for such use, and has not abandoned its use of the building, provided that the use as a museum is incidental to, consistent with, and does not unduly interfere with the reasonable use of the building by the veterans' association. 2.
Revocation of Dedication

The second inquiry concerns a charter county's revocation of the dedication for use of a county building by a veterans' association. Section 1266 prescribes the conditions under which a dedication may be revoked where the veterans' association has not violated the terms and conditions of the dedication, has not consented to the revocation, and has not abandoned its use of the facility. The principal obligation of the county under these circumstances would be to dedicate a substitute facility.2 The question remains, however, whether the requirements of sections 1262-1266 are applicable to a charter county. Does having a charter mean that a county need not follow the Legislature's statutory requirements? The scope of authority reserved to a county by charter is more restricted than in the case of charter cities. (64 Ops.Cal.Atty.Gen. 234, 237-238 (1981); 61 Ops.Cal.Atty.Gen. 31, 33 (1978).) Specifically, the rule with respect to a charter county is that its legislation may supersede conflicting state law only as to those matters which are actually contained in the charter pursuant to express constitutional authorization. (Cal. Const., art. XI, §§ 3, 4; 67 Ops.Cal.Atty.Gen. 402, 403-404 (1984); 61 Ops.Cal.Atty.Gen. 512, 519 (1978).) Here no such express authorization may be found in the Constitution. Further, with respect to matters of statewide concern, county charters are always subordinate to general laws. (Shean v. Edmunds (1948) 89 Cal.App.2d 315; 61 Ops.Cal.Atty.Gen., supra, at 33-34.) We entertain no doubt that the establishment and maintenance of facilities for veterans' associations throughout California are matters of statewide interest. The welfare of veterans and the promotion of patriotism are not peculiarly incidental to any particular local concern. On the contrary, the broader public purposes of veterans' associations were clearly envisioned by the Supreme Court in Allied Architects' Assn. v. Payne, supra, 192 Cal.
at 434-435: "It is settled beyond question that the promotion of patriotism, involving as it does the sense of self-preservation, is not only a public purpose but the most elemental of public purposes. [Citations.] The continuity of our governmental institutions is dependent in a large measure upon the perpetuation of a patriotic impulse which is but the willingness to sacrifice all for the ideas and the ideals which form the foundation stones of our republic . . . . It is conceded, as indeed it must be, that the erection of a building as a memorial hall, to the extent that it would serve as a stimulus to patriotism, would be for a public purpose. [Citations.] ". . . To permit the men whose associations have been effected for the purpose of inculcating and promoting patriotism to use the vacant halls as a rendezvous for revivifying and broadcasting the spirit of patriotism throughout the length and breadth of the land is clearly a public . . . purpose. ". . . And the determination of the legislature to make the memorial hall in question a living force for the promulgation of patriotic principles by dedicating its use to associations created to foster the spirit of patriotism . . . was certainly a wise and commendable exercise of legislative power."

2 We have previously determined that a dedication may, in the event of a substantial abandonment, be withdrawn without the necessity of dedicating a substitute facility. (20 Ops.Cal.Atty.Gen. 199 (1952).)

Inasmuch as the power to establish and maintain a building for the use and benefit of a veterans' association is not expressly included in the Constitution among those which may be contained in a county charter and is further a matter of statewide concern, the terms and requirements of sections 1262-1266 govern charter counties. (Cal. Const., art. XI, § 1, subd.
(b).)3 It is concluded that a charter county may not revoke its dedication of a building for use of a veterans' association without substituting alternative facilities, where the veterans' association has not violated the terms or conditions of the county's dedication for such use, has not consented to the revocation of the dedication for such use, and has not abandoned its use of the building.

*****

3 A charter city and county has the same authority with respect to municipal affairs as does a charter city. (Cal. Const., art. XI, § 6; West Coast Adver. Co. v. San Francisco (1939) 14 Cal.2d 516, 520-522; 61 Ops.Cal.Atty.Gen., supra, at 518, fn. 6.) Nevertheless, the same result would ensue for purposes of this inquiry. Where a charter city legislates with respect to municipal affairs, its charter prevails over the general laws. (Cal. Const., art. XI, § 5, subd. (a).) As to matters of statewide concern, however, charter cities remain subject to state law. (Johnson v. Bradley (1992) 4 Cal.4th 389, 397-400; California Fed. Savings & Loan Assn. v. City of Los Angeles (1991) 54 Cal.3d 1, 16-18; 78 Ops.Cal.Atty.Gen. 143, 148-150 (1995).)
If this opinion indicates that it is "FOR PUBLICATION," it is subject to revision until final publication in the Michigan Appeals Reports.

STATE OF MICHIGAN COURT OF APPEALS

DAVID SUTTON, Plaintiff-Appellant, v ADVANCE PHARMACEUTICAL, INC, Defendant-Appellee.
UNPUBLISHED, January 14, 2020. No. 345716, Oakland Circuit Court, LC No. 2014-144679-CZ.

Before: RIORDAN, P.J., and SAWYER and JANSEN, JJ.

PER CURIAM.

In this products liability based personal injury action, plaintiff appeals as of right the order granting summary disposition in favor of defendant under MCR 2.116(C)(10). We affirm.

I. RELEVANT FACTUAL AND PROCEDURAL BACKGROUND

This is the third appeal in this matter. See Sutton v Advance Pharmaceutical, Inc. (Sutton I), unpublished per curiam opinion of the Court of Appeals, issued October 25, 2016 (Docket No. 328038); Sutton v Advance Pharmaceutical, Inc. (Sutton II), unpublished per curiam opinion of the Court of Appeals, issued March 13, 2018 (Docket No. 336526). In plaintiff's first appeal, this Court summarized that, This action arises from plaintiff's claim that he suffered severe injuries when he ingested acetaminophen pills manufactured by defendant. Plaintiff, in pro per, filed his complaint on December 22, 2014, alleging claims of failure to warn, improper labeling, and manufacturing defect; defendant answered on January 29, 2015. On March 27, 2015, plaintiff filed a motion to amend his complaint, in which he moved to state similar claims plus a breach of the implied warranty of marketability. [Sutton I, unpub op at 2.] In his first appeal, plaintiff appealed the trial court's dismissal of his case for failure to pay a $500 sanction, and this Court reversed and remanded, concluding that the $500 sanction was improper, and the dismissal was an inadequate remedy for failure to pay. Sutton I, unpub op at 4. The case was then remanded for further proceedings consistent with that opinion. Id. After being remanded to the trial court, defendant "moved . . .
to compel plaintiff to sign authorization forms for release of his medical records.” Sutton II, unpub op at 1. “Over plaintiff’s physician-patient privilege objections, the trial court granted defendant’s motion[.]” Id. However, plaintiff refused to sign any authorization forms for release of his medical records. The trial court thus “entered an order dismissing plaintiff’s case without prejudice.” Id. Plaintiff’s second appeal to this case followed. Again, this Court reversed and remanded for further proceedings. This Court noted that “defendant sought medical records from plaintiff’s treating physicians to determine whether plaintiff was taking any other medication at the time of the alleged injury that may have caused side effects similar to the ones identified in his complaint.” Id. at 2. Defendant sought this information under MCR 2.314(C)(1) and MCR 2.314(A). However, plaintiff “asserted that the medical records sought by defendant were privileged, and therefore not discoverable. MCR 2.314(C)(1)(b); MCR 2.314(A)(1)(b). Specifically, plaintiff asserted the physician-patient privilege, which has its roots in MCL 600.2157.” Id. at 2-3. This Court correctly concluded that plaintiff was entitled to assert the physician-patient privilege. This Court also explained that under MCL 600.2157, quoted below in relevant part: If the patient brings an action against any defendant to recover for any personal injuries, or for any malpractice, and the patient produces a physician as a witness in the patient’s own behalf who has treated the patient for the injury or for any disease or condition for which the malpractice is alleged, the patient shall be considered to have waived the privilege provided in this section as to another physician who has treated the patient for the injuries, disease, or condition. This provision is called the patient-litigator exception. Landelius v Sackellares, 453 Mich 470, 474; 556 NW2d 472 (1996). 
However, this Court concluded that the patient-litigator exception did not apply in this case, where: plaintiff is the patient and has brought an action for personal injury against defendant. However, nothing in the record indicates that plaintiff has produced his own treating physician or "produced another treating physician as a witness" which would trigger the application of the patient-litigator exception. In fact, the only witness plaintiff has identified at this point in the litigation is Regan D. Carney, who states in her affidavit that she lived with plaintiff and observed him ingest the pills. Although defendant argues that plaintiff intends to rely on his treating physician, plaintiff's discovery responses do not support that assertion. Accordingly, because plaintiff has asserted a valid privilege to his medical records, the trial court abused its discretion by ordering plaintiff to sign authorization forms so that defendant could obtain those records. [Sutton II, unpub op at 3-4.] Again, this case returned to the trial court, where the parties continued to engage in written and oral discovery. Eventually, on July 27, 2018, defendant moved for summary disposition under MCR 2.116(C)(10). Defendant explained that it manufactures and packages over-the-counter (OTC) medications such as baby aspirin and acetaminophen, and then distributes these medications to wholesalers. The medications it manufactures are never sold to the public, nor are they packaged for sale to the public. Rather, they are purchased by hospitals, nursing homes, and pharmacies. Defendant admitted that in June 2013, it issued a nationwide recall for its baby aspirin because one pharmacist in Rhode Island noticed that one bottle of baby aspirin looked "markedly different than [it] should," and actually contained acetaminophen instead of baby aspirin.
Now, defendant argued, plaintiff claimed that he ingested acetaminophen instead of baby aspirin after filling prescriptions at a CVS pharmacy in Farmington, Michigan. However, plaintiff admitted at his deposition that he no longer has the bottle or the pills that he ingested because he destroyed them. Moreover, although plaintiff puts his physical condition into controversy, plaintiff refused to disclose his medical records related to his claims, and also refused to sign authorizations for the release of medical records, citing privacy. Although "plaintiff did provide approximately 200 random, piecemealed, and incomplete records from various medical providers ranging from 2005 through 2013, most . . . are handwritten and illegible, without any verification from a custodian of records as to the accuracy or completeness of the records." Thus, defendant argued, plaintiff is unable to meet his burden and establish multiple essential elements of his claim: liability, damages, and causation. Accordingly, defendant argued it was entitled to summary disposition under MCR 2.116(C)(10). The trial court agreed with defendant, concluding on the record that plaintiff was unable to present any evidence of causation or damages, and therefore defendant was entitled to summary disposition. An order granting defendant's motion for summary disposition for the reasons stated on the record was entered on September 12, 2018. This appeal followed.

II. STANDARD OF REVIEW

Defendant moved for summary disposition under MCR 2.116(C)(10). This Court reviews a trial court's determination on a motion for summary disposition de novo. Cove Creek Condo Assoc, ___ Mich App at ___; slip op at 13.
"Summary disposition is proper under MCR 2.116(C)(10) if the affidavits and other documentary evidence show that there is no genuine issue concerning any material fact and that the moving party is entitled to judgment as a matter of law." Id., quoting Rataj v City of Romulus, 306 Mich App 735, 746; 858 NW2d 116 (2014). Upon being presented with a motion for summary disposition under MCR 2.116(C)(10), a trial court's "task is to review the record evidence, and all reasonable inferences therefrom, and decide whether a genuine issue of any material fact exists to warrant a trial." Skinner v Square D Co, 445 Mich 153, 162; 516 NW2d 475 (1994) (footnote omitted).

III. ANALYSIS

In this case, plaintiff filed a personal injury claim, rooted in products liability, against defendant, the manufacturer of various OTC medications, including baby aspirin and acetaminophen. Plaintiff claims that he filled a prescription for baby aspirin at his local drugstore, and was given a product manufactured by defendant and labeled as baby aspirin. However, instead of baby aspirin in the prescription bottle, it was actually acetaminophen, which upon ingestion, caused plaintiff to suffer "confusion, sweating, extreme fatigue, diarrhea, pain in stomach (especially upper right portion), irregular heartbeat, and pale stools." Plaintiff claimed that defendant failed to warn plaintiff of an unreasonably dangerous condition where acetaminophen was labeled and sold as baby aspirin, and that ingestion of acetaminophen caused his injury. The trial court concluded that plaintiff was unable to establish causation or damages relating to his claim, and granted summary disposition in favor of defendant. We agree. As our Supreme Court articulated in Skinner, Under Michigan products liability law, as part of its prima facie case, a plaintiff must show that the manufacturer's negligence was the proximate cause of the plaintiff's injuries.
Brisboy v Fibreboard Corp, 429 Mich 540, 547; 418 NW2d 650 (1988) (emphasis added). We have previously explained that proving proximate cause actually entails proof of two separate elements: (1) cause in fact, and (2) legal cause, also known as “proximate cause.” Moning v Alfono, 400 Mich 425, 437; 254 NW2d 759 (1997). The cause in fact element generally requires a showing that “but for” the defendant’s actions, the plaintiff’s injury would not have occurred. Prosser & Keeton, Torts (5th ed), § 41, p 266. On the other hand, legal or “proximate cause” normally involves examining the foreseeability of consequences, and whether a defendant should be held legally responsible for such consequences. Moning, [400 Mich] at 439. See also Charles Reinhart Co v Winiemko, 444 Mich 579, 586 n 13; 513 NW2d 773 (1994). A plaintiff must adequately establish cause in fact in order for legal cause or “proximate cause” to become a relevant issue. [Skinner, 445 Mich at 162-163.] Additionally, a plaintiff can establish causation through circumstantial evidence. Id. at 164. However, “[t]o be adequate, a plaintiff’s circumstantial proof must facilitate reasonable inferences of causation, not mere speculation.” Id. We conclude that plaintiff is unable to establish cause in fact, through either direct or circumstantial evidence. Therefore, not only does proximate causation not become an issue in this matter, but plaintiff is unable to make his prima facie case, and summary disposition in favor of defendant was appropriate. First, plaintiff is unable to present any physical evidence of legal causation. In his deposition, plaintiff testified that he destroyed both the pills he ingested and the mislabeled bottle that the pills came in. Second, plaintiff is unable to present any documentary evidence to support his claim. Plaintiff admitted in his reply brief filed in this Court that he never sought medical treatment for his alleged injuries. 
Thus, there are no medical records to suggest what plaintiff may have ingested to cause his injuries, or the extent to which he was injured. Finally, plaintiff cannot present any testimonial evidence regarding causation. Plaintiff planned to present the testimony of his roommate, as well as his own testimony, to establish both causation and damages. Although plaintiff's testimony and the testimony of his roommate is fact-based, it is merely lay witness testimony, and does not take the place of medical witness testimony or expert witness testimony that would support plaintiff's theory of causation. Indeed, any testimonial evidence on causation offered by plaintiff or his roommate is speculative and merely conjecture. We note that the parties in this case focus most of their arguments on the applicability of MCR 2.314(B)(2). Here, plaintiff has repeatedly refused to sign authorizations for release of his medical records relating to the injuries claimed in this case. Thus, because plaintiff's medical history and physical condition are relevant, yet plaintiff claims his medical information is subject to privilege and has prevented discovery of his medical information, MCR 2.314(B)(2) would prevent plaintiff from "present[ing] or introduc[ing] any physical, documentary, or testimonial evidence relating to the party's medical history or mental or physical condition." However, plaintiff has admitted there are no medical records to be discovered because he never sought medical treatment for his alleged injuries. The applicability of MCR 2.314(B)(2) in this matter is therefore a moot issue. Because plaintiff cannot establish causation, he cannot establish a prima facie products liability case. Defendant was therefore entitled to summary disposition under MCR 2.116(C)(10), and the trial court did not err by granting summary disposition in favor of defendant.

Affirmed.

/s/ Michael J. Riordan
/s/ David H. Sawyer
/s/ Kathleen Jansen
Contents Ilya I. Ilyin and Arkadi D. Ursul Evolutionary Approach to Global Research and Education: Theoretical and Methodological Issues The article presents an evolutionary approach to Globalistics, which studies global processes and systems, primarily globalization and global problems, with regard to their development and their relation to the individual and society. The main subject of Evolutionary Globalistics is global development as an evolution of global processes and systems. The concept of Evolutionary Globalistics is considered within the context of universal (global) evolutionism, a possible transition to sustainable development, and the formation of various stages of the noosphere. Education is likely to undergo transformations caused by the evolutionary changes of the whole civilization and by the interaction between society and nature. There will appear an evolutionary range of global models of educational processes and systems, starting from the currently existing experimental options of global education, continuing with education for the global process of sustainable development, and extending to the formation of both noospheric and ‘global-evolutionary’ forms of education. The present article discusses some general issues connected with the definition of Global Studies: its subject, status, terminology, and the main directions and problems of this field. The author's position on a number of fundamental issues is formulated, namely: what Global Studies is; whether this branch is a science; what its status is; what place Global Studies holds in the system of modern sciences; and whether Global Studies belongs to the political sciences, to the sciences of international relations, or to the philosophical disciplines.
The main issue of this article is how religion has in recent years become a major subject of world politics and international relations, not least because religion has appeared at the center of some dangerous global conflicts. It is shown that modern global history, which developed in the context of the contentious debate on globalization, gives us the chance to understand, on the one hand, how the subject of religion has been ignored in the study of world politics, and on the other, how the study of religion, particularly in its sociological form, has neglected international relations because of its constant and inappropriate preoccupation with secularization. Both parts of the equation thus ignore each other in many respects. This is characteristic of the structure of academic disciplines, particularly in the Western world. Various directions of human activity have become more closely connected in the modern global world. This is well illustrated by the recent world crisis, which, having started in the financial and economic spheres, spread very quickly to the social, political and other spheres of public life, affecting not only the material but also the spiritual foundations of public life: culture, science, and education. To promote a better understanding of the complex nature of the problems of this subject, the editorial board of the journal ‘Age of Globalization’ asked competent experts to express their opinions on this point. This paper will focus on the basic social contradiction arising from the deep, often overlooked conflict between capitalism, the main motor of the greater part of the world, which can be represented as a so-called economic blockhead, and an influential form of Islam, which I will call ‘Islamic fundamentalism’, applying the Protestant term ‘religious fundamentalism’.
I believe that this contradiction, which results from the unfettered expansion of increasingly globalized economic capitalism, lies at the root of the 9/11 events. Migration has always been a most important component of economic development and social progress in many countries. Labor migration becomes one of the major resources of regional integration when its regulation is conducted at the region-wide level, since only large integration unions, using the advantages of merged markets, resource bases and labor potentials, can withstand the growing competition of the globalized world. However, if migration is not regulated by adequate laws and rules, it poses a danger of violating the human rights of those participating in it and of creating social tension. Today the discussion about migration reflects a contradiction between the economic logic of globalization, on the one hand, and the moral values represented by the concept of human rights, on the other. In view of these disagreements, there are often diametrically opposed views on how to protect the rights of migrants, especially those who have no legal status, while ensuring safety and social stability when foreign citizens are under the protection of national legislation. In the reality of daily life this contradiction focuses on migration as a dominant theme of the discussion about the relationship of labor and capital, the distribution of income from economic activity, the regulation of working and living conditions, and how foreign workers and civil society can organize themselves to formulate and defend their rights clearly. The article presents a critical analysis of the concept of ‘political globalization’, used by the geopolitical hegemon to displace the principles of national sovereignty, national interests and the balance of forces, as well as of the category of ‘expansion’.
The article demonstrates the peculiar role of the state as the main subject of international relations and a guarantor of the variety of the world system, as well as the role of the state border as a measure of real sovereignty under the crisis of the Westphalian system created by informational and economic globalization. Some general foundations for shaping the border policy of the state in the post-Westphalian era are considered. The scientific validity of the ideological foundations of globalization is considered in the article as the prime condition for globalization to develop toward the formation of a dynamic global system. The article explores the reasons for the crisis currently observed in three subsystems: political, legal and economic. The author advances the thesis that the main problem lies in the system of norms and basic values of the global system. Basic principles of the corresponding activity toward solving this problem are also discussed. The outstanding Russian thinker, philosopher, literary scholar, publicist and politician Yury Fedorovich Karyakin marks his 80th anniversary. To honor this remarkable date we publish an article the philosopher wrote in the last century, one that has not lost its significance today. This article considers human rights as a possible consensus between cultures within an intellectual dialogue. The author seeks to contribute to an intellectual overcoming of the clash of civilizations and searches for ways to avoid imposing human rights on other cultures. It is shown that the method of cross-cultural affirmation of human rights, independent of the particular aspects of separate cultures and civilizations, fosters people's resistance to humiliation and abuse.
The first part of the article is devoted to a short review of the range of problems of the discussion and dialogue; the second deals with the cross-cultural and inter-civilizational nature of this discussion; in the third, human rights are considered on the basis of the above-mentioned facts; and the final part concerns legal documents on human rights. The idea of a united Europe emerged long ago, at the beginning of the 16th century. Later many outstanding thinkers, scientists, philosophers, politicians and writers developed it in various directions. In real practice Europe has never been united either ethnically or politically. The formation of a common European educational space assumes not unification but coordination, the mutual enrichment of national education systems and educational cultures with the preservation of national identity and civilizational distinctions. The role of Russia in this process is very significant. Being the largest Eurasian country, Russia has significant geopolitical and geo-economic advantages and objectively should develop a multi-vector foreign policy. In the article the author analyzes modern Russian policy in the Caucasus. It is shown that today Islamic terrorist organizations are the main political force fighting against Russia. They draw on different resources: personnel, financial, ideological and political. The youth of the Caucasian Muslim republics is the main social group of interest to the terrorist networks in terms of recruitment. Islamism has its own ideology distinguishing it from separatism, with different functioning mechanisms, regions of distribution, participants and sympathizers. Policy, accordingly, also had to change. During a pause in the Caucasian policy the enemy, represented by radical Islamism, managed to become stronger organizationally, financially and ideologically, and began a new round of armed stand-off with Russia.
Now, when the novelty of the problem has become clearer, a qualitatively new Russian policy in the Caucasus is rapidly taking shape. The article is devoted to exploding the myth of the identity of Russia and the Soviet Union through a social and philosophical analysis of the formation and development of Russia, through the dialectics of its objective conditions and subjective factor. Dispelling this myth can be the first step on the way to establishing good neighborly relations between Europe, Asia and Russia. Today we can observe signs of the weakening of ‘the third wave’ of democracy and the beginning of a process of rollback. We can see this democratic backsliding not only in Russia but also in other countries of the world. This article is devoted to a generalization of the collected facts and an analysis of the reasons for the backsliding of ‘the third wave’ of democracy. Using materialist methodology, the author shows that the crash of the totalitarian systems known in history as socialism was connected with their economic crisis. A considerable part of the world's authoritarian regimes rest upon the armed forces. The refusal to develop democracy further in a number of countries is caused not only by the economic strengthening of the bureaucracy. In many countries freedom brought instability, separatism and the spread of terrorism. The author associates progress towards democracy with processes of economic liberalization, and draws the conclusion that authoritarianism rests on state paternalism and the poverty accompanying it. It can be defeated only by a policy of social liberalism that involves the greater part of the population in labor and business activity while supporting that part of the population which cannot carry out these functions.
The present invention relates to a method for solidifying radioactive wastes, and more particularly to a method for improving the long-term storage characteristics of solidified radioactive wastes comprising compact blocks which are to be disposed in transporting or final storage containers. The compact blocks comprise prefabricated ceramic tablets containing radioactive substances and an inactive matrix which continuously surrounds these tablets and is solid in its final state. Radioactive wastes must be conditioned for permanent storage, i.e. they must be converted with the aid of matrix materials into solidified products. Such solidified products must have a high resistance to leaching of the radioactive substances by aqueous solutions. For waste concentrates containing medium and highly radioactive wastes and/or actinides, or for fine-grained solid wastes which are present as suspensions in water or acids, or for muds, ceramic matrix materials have been used, among others, to form the solidified products. The radioactive wastes are mixed with these matrix materials, shaped, and sintered to form mechanically stable bodies. For reasons of workability of such ceramic materials, the tablet shape has been selected for the ceramic solidification products. In principle, radioactive wastes that have been conditioned in this manner can be stored in suitable containers in a permanent storage facility. There do exist, however, some considerable drawbacks with the tablet shaped solidification products. Thus, if the transporting or final storage container is damaged, the tablets may be scattered about. The danger of contamination is then augmented considerably. Moreover, the bulk of such tablets constitutes a very large surface area. In the case of the entry of liquids, for example, water or aqueous salt solutions, the leaching of radioactive substances per unit time is relatively high. Further, heat dissipation from the bulk tablet fill is limited.
These drawbacks can be overcome if bulk fills of ceramic tablets, whose individual volume is in the milliliter range, are solidified with the aid of a filler or binder into compact and mechanically stable blocks. The volume of these blocks is in the liter range. Such fillers or binders will hereinafter be called the continuous matrix. DE-PS No. 2,726,087 and corresponding U.S. Pat. No. 4,297,304 disclose a method for solidifying such radioactive wastes. In particular, these documents disclose a method for solidifying high and medium radioactivity and/or actinide containing aqueous waste concentrates or fine-grained solid wastes suspended in water for final noncontaminating storage in which the waste concentrates or the suspensions are subjected together with absorbing and/or hydraulically binding inorganic material, to a ceramic firing process so as to produce a solid sintered body. The method comprises a plurality of steps, including (a) treating the waste concentrates or suspensions by evaporation, to form an evaporate having a water content in the range between 40 and 80 percent by weight and a solid content whose metal ion and/or metal oxide concentration lies between 10 and 30 percent by weight of the evaporate being formed, and adjusting the pH of the evaporate to between 5 and 10; (b) kneading the evaporate obtained from step (a) with a clay-like substance containing a small quantity of cement, or with such a clay-like substance or mixture of a clay-like substance with a small quantity of cement containing an additive for suppressing the volatility of alkali metals or alkaline earth metals which may be present in the evaporate and/or an additive for suppressing the volatility of any decomposable anions which may be present in the evaporate selected from sulfate, phosphate, molybdate and uranate ions, at a weight ratio range of evaporate to clay-like substance of 1:1 to 2:1, the clay-like substance being at least one substance selected from pottery clays,
stoneware clays, porcelain clay mixtures and kaolin; (c) producing molded bodies from the kneaded mass obtained in step (b); (d) heat treating the molded bodies, including drying at temperatures between room temperature and 150°C, calcining at temperatures up to 800°C, and subsequently firing at temperatures between 800°C and 1400°C to form practically undissolved mineral phases; and (e) enclosing the molded bodies containing the fired mineral phases on all sides in a dense, continuous ceramic or metallic matrix. The molded bodies of step (d) can be comminuted to a grain size range of about 1 to 10 mm, and thus be in the form of small particles or chips before being enclosed in the matrix of step (e). The continuous matrix can be a fired ceramic produced from at least one clay substance and at least one cement. It has been found, however, that if a continuous ceramic matrix is used that is produced from at least one clay-like substance, e.g. from the group including pottery clays, porcelain clay mixtures or kaolin, and cement, and particularly if this mass has been processed into a fired ceramic, the solidified product does not have the desired properties. No clay-like material, with or without the addition of cement, has been found thus far for use as a continuous matrix which, in its sintered state, has a coefficient of thermal expansion at least very similar to that of the ceramic tablets and which shrinks uniformly and tightly onto the ceramic tablets during firing, so that in the past solidified blocks were obtained which were penetrated by extensive cracks. The cracks permitted the access of liquids into the interior. Moreover, the mechanical stability of such blocks was limited. These drawbacks could also not always be overcome by the use of a hot pressing technique.
In contrast to mixtures of particulate bodies which can be optimally compressed and sintered, there are limited possibilities for compressing mixtures of sinterable, clay-like or ceramic powders. The compression limit is reached when the ceramic tablets contact one another and support themselves on one another. Beginning with this state, the pressure no longer acts on the ceramic powder disposed in the interstices. Sintering then takes place practically without pressure, i.e. compression occurs only by the shrinkage caused by the sintering process. Thus, the same or similar results can be expected as in the above-mentioned process of DE-PS No. 2726087 and U.S. Pat. No. 4,297,304 which is a pressure-free sintering process. If it is attempted to effect compression beyond the stated limits, this unavoidably results in destruction of the ceramic tablets. At the customary sintering temperatures, the ceramic matrix material does not flow so plastically that it is able to cover the resulting fragments on all sides, and accordingly the pressure surfaces remain practically open. One advantage of embedding the ceramic tablets in a matrix, namely the reduction of the surface area accessible to leaching when the transporting or permanent storage container is damaged, is thus eliminated. A more extensive compression than described above without the danger of destruction of the ceramic tablets can be realized if it is assured, by a high mixing ratio of ceramic powder to ceramic tablet, that in the compressed state there will always be matrix material between the ceramic tablets. Independently of whether this state can be realized with sufficient reliability under conditions applicable to working with highly radioactive substances, there exists the drawback that the volume of the container holding the block with the solidified tablets cannot be utilized optimally with respect to the tablets, since the matrix material enforces a certain "spacing" of the tablets. 
This drawback is connected with the fact that unavoidably expensive permanent storage volume must be occupied with inactive substances.
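The prior-art process summarized above fixes several numeric windows for steps (a) through (d). Purely as an illustrative aid, and not part of the patent disclosure, those windows can be sketched as simple parameter checks (all function names are hypothetical):

```python
# Illustrative only: parameter windows for the prior-art solidification
# process, steps (a)-(d), as described in the text above.

def evaporate_ok(water_wt_pct, solids_wt_pct, ph):
    """Step (a): 40-80 wt% water, 10-30 wt% metal ion/oxide solids,
    pH adjusted to between 5 and 10."""
    return 40 <= water_wt_pct <= 80 and 10 <= solids_wt_pct <= 30 and 5 <= ph <= 10

def knead_ratio_ok(evaporate_parts, clay_parts):
    """Step (b): evaporate-to-clay weight ratio between 1:1 and 2:1."""
    ratio = evaporate_parts / clay_parts
    return 1.0 <= ratio <= 2.0

def heat_treatment_ok(drying_c, calcining_c, firing_c):
    """Step (d): dry between room temperature and 150 C, calcine up to
    800 C, then fire between 800 and 1400 C."""
    return 20 <= drying_c <= 150 and calcining_c <= 800 and 800 <= firing_c <= 1400

print(evaporate_ok(60, 20, 7), knead_ratio_ok(3, 2), heat_treatment_ok(120, 750, 1200))
# True True True
```

A ratio of 5:1 evaporate to clay, for example, falls outside the claimed 1:1 to 2:1 window and would fail the step (b) check.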
--- abstract: 'Recent advances in deep reinforcement learning (RL) have demonstrated its potential to learn complex robotic manipulation tasks. However, RL still requires the robot to collect a large amount of real-world experience. To address this problem, recent works have proposed learning from expert demonstrations (LfD), particularly via inverse reinforcement learning (IRL), given its ability to achieve robust performance with only a small number of expert demonstrations. Nevertheless, deploying IRL on real robots is still challenging due to the large number of *robot* experiences it requires. This paper aims to address this scalability challenge with a robust, sample-efficient, and general meta-IRL algorithm, SQUIRL, that performs a new but related long-horizon task robustly given only a single video demonstration. First, this algorithm bootstraps the learning of a task encoder and a task-conditioned policy using behavioral cloning (BC). It then collects real-robot experiences and bypasses reward learning by *directly* recovering a Q-function from the combined robot and expert trajectories. Next, this algorithm uses the Q-function to re-evaluate all cumulative experiences collected by the robot to improve the policy quickly. In the end, the policy performs more robustly (90%+ success) than BC on new tasks while requiring no trial-and-errors at test time. Finally, our real-robot and simulated experiments demonstrate our algorithm’s generality across different state spaces, action spaces, and vision-based manipulation tasks, e.g., pick-pour-place and pick-carry-drop.' author: - 'Bohan Wu, Feng Xu, Zhanpeng He, Abhi Gupta, and Peter K. Allen [^1]' bibliography: - 'IEEEabrv.bib' title: '**SQUIRL: Robust and Efficient Learning from Video Demonstration of Long-Horizon Robotic Manipulation Tasks**' --- Introduction ============ We aspire robots to become generalists who acquire new complex skills robustly and quickly. 
The robotic system, whether planned or learned, needs to leverage its existing knowledge to solve a new but related task in an efficient yet high-performance manner. Thanks to recent advances in machine learning and sim-to-real transfer mechanisms, short-horizon robotic manipulation such as grasping has improved in performance. However, many real-world robotic manipulation tasks are long-horizon, diverse, and abundant in volume. In the absence of a scalable and systematic way to construct simulation environments for a large number of tasks, the robot needs to learn a new task directly in the physical world from only a handful of trials, due to the high cost of collecting real-robot trial-and-errors and experiences. ![**Learning from a single video demonstration of a long-horizon manipulation task via Soft Q-functioned Meta-IRL (SQUIRL).** In the pick-pour-place example above, the robot needs to approach, pick-up and carry the grey bottle, pour the iron pebble inside the bottle into a specific container, and finally place the bottle back on the table. During training, the robot is given a single video demonstration for *each* of the 117 training tasks. After learning from these 117 videos, the robot also practices 90 trial-and-errors *in total* on these tasks. From such combined expert and robot trajectories, the robot learns the general skills of pouring robustly. At test time, given a single video demonstration of pouring into a *new, unseen* red container at a *new* position, the robot successfully replicates this new task without the need for any trial-and-errors.[]{data-label="fig:intro"}](figures/intro.png){width="1\linewidth"} We observe that real-world robotic skill acquisition can become more sample-efficient in several important ways. First, we notice that humans learn tasks quickly by watching others perform similar tasks. 
Among many forms of task representations such as rewards, goal images, and language instructions, human demonstrations guide exploration effectively and can lead to significant sample efficiency gains. Furthermore, learning from video demonstrations sidesteps hand-designing a proper reward function for every new task. In the case of a vision-based task, video demonstrations also conveniently provide the same pixel state space for the robot. In learning from demonstrations (LfD), the robot should be sample-efficient in two dimensions – it should use as few expert demonstrations (“demonstrations” hereafter) as possible and take as few trial-and-errors (practices) as possible on its own to learn a robust policy. Among LfD methods, behavioral cloning (“BC” hereafter) is sample-efficient but susceptible to compounding errors. Here, compounding errors refer to the problem in which every time a behavioral-cloned robot makes a small error, it makes a larger error down the road as it drifts away from the expert state distribution. In contrast, IRL alleviates compounding errors by allowing the robot to try the tasks out in the real world and measure its behavior against the expert. However, due to the need to learn a reward function, IRL can require many trial-and-errors in the real world, while BC does not require such robot experiences. We posit that leveraging off-policy experiences of trial-and-errors is essential to making IRL sample-efficient enough for real robots. Here, “off-policy experiences” refer to the *cumulative* experiences that the *robot* has collected thus far during *training*. In contrast, “on-policy experiences” are the most recent experiences that the robot has collected using its *current* policy. Humans leverage lifelong, cumulative experiences to learn quickly at present. We envision robots to acquire new skills more quickly by learning from off-policy (i.e., cumulative) experiences. 
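The distinction between off-policy (cumulative) and on-policy (most recent) experience can be made concrete with a minimal replay-buffer sketch. This is a toy illustration of the general idea, not the paper's implementation; all names are hypothetical:

```python
import random

class ReplayBuffer:
    """Stores all cumulative (off-policy) robot experience."""

    def __init__(self):
        self.transitions = []  # (state, action, reward, next_state) tuples

    def add_rollout(self, rollout):
        self.transitions.extend(rollout)

    def sample_off_policy(self, batch_size):
        # Off-policy: draw uniformly from *all* experience collected so far,
        # regardless of which (possibly outdated) policy produced it.
        return random.sample(self.transitions, min(batch_size, len(self.transitions)))

buffer = ReplayBuffer()
buffer.add_rollout([("s0", "a0", 0.0, "s1"), ("s1", "a1", 1.0, "s2")])
latest_rollout = [("s2", "a2", 0.0, "s3")]
buffer.add_rollout(latest_rollout)

on_policy_batch = latest_rollout               # only the newest rollout
off_policy_batch = buffer.sample_off_policy(3) # may mix old and new data
print(len(off_policy_batch))  # 3
```

An on-policy learner would discard the buffer after each update and train only on `latest_rollout`; an off-policy learner keeps re-sampling everything, which is what makes the limited real-robot trials go further.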
Finally, many real-world tasks are related and share structures and knowledge that can be exploited to solve a new but similar task later. For example, humans can quickly learn to pick and place a new object after learning to pick and place many known objects. Meta-learning, explicitly utilizing this property, aims to learn a new but related task quickly if it has already learned several similar tasks in the past. With these motivations, we introduce SQUIRL, a meta-IRL algorithm that learns long-horizon tasks quickly and robustly by learning from 1) video demonstrations, 2) off-policy robot experiences, and 3) a set of related tasks. Fig.\[fig:intro\] explains this algorithm using the example of a set of long-horizon pick-pour-place tasks, using the UR5-Seed[^2] robot setup shown in Fig.\[fig:hardware\]. In this task, we have the containers (green, yellow, orange, and red), a cylindrical bottle (grey), and an iron pebble inside the bottle. The robot needs to *first approach* and pick-up the grey bottle, pour the iron pebble inside the bottle into a specific container on the table, and then finally place the bottle back on the table, as shown in each row of images in Fig.\[fig:intro\]. At the beginning of each task, the bottle is *not yet* in hand, but the iron pebble is already in the bottle. At training time, the robot is given a single video demonstration for each of the 117 pick-pour-place tasks, as shown in the first two rows of images in Fig.\[fig:intro\]. Every new combination of container positions represents a different pick-pour-place task. Furthermore, the robot only needs to pour into *one* of the containers in a single task. Therefore, pouring into different containers also represents different tasks. After learning from these 117 demonstrations, the robot also practices 90 trial-and-errors on these tasks *in total*. From such a combination of expert and robot trajectories, the robot learns the general skills of pick-pour-place robustly. 
In all 117 training tasks, only *two* of the four containers appear on the table: the green and yellow containers, as shown in the first two rows of images in Fig.\[fig:intro\]. The orange and red containers are excluded during training and only appear at test time, as shown in the last row of images in Fig.\[fig:intro\]. We do so to evaluate our algorithm’s generalizability to unseen containers *at test time*. As shown in the last row of images in Fig.\[fig:intro\], the robot successfully pours into a *new* container (red) at test time, at a *new* position never seen before during training, without the need for any trials or practices. To achieve such fast generalization to new tasks, our algorithm learns a task encoder network and a task-conditioned policy. The task encoder generates a 32-dimensional task embedding vector that encodes task-specific information. The policy network then learns to generalize to new tasks by accepting this task embedding vector as input, thus becoming “task-conditioned”. During training, our algorithm first bootstraps learning by training both the task encoder and the policy jointly via the BC loss. The robot then collects 10 trials across 10 tasks using the warmed-up policy and the task encoder. Next, using the combined experiences of the expert and the robot, our algorithm bypasses reward learning by directly learning a task-conditioned Q-function. Using this Q-function, our algorithm then reuses and re-evaluates all cumulative experiences of the robot to improve the policy quickly. This cycle repeats until the $90^{th}$ trial. Finally, at test time, the task encoder generates a new task embedding from a *single* video demonstration of a new task. This embedding is then inputted into the task-conditioned policy to solve the new task without any trial-and-errors and yet in a high-performance manner. In summary, our contributions are: 1. 
A *robust* meta-IRL algorithm that outperforms ($90\%$+ success) its behavioral cloning counterpart in real-robot and simulated vision-based long-horizon manipulation; 2. A *novel* Q-functioned IRL formulation that circumvents reward learning and improves IRL sample efficiency; 3. An *efficient* method that leverages off-policy robot experiences for training and requires no trials at test time; 4. A *general* approach that tackles various long-horizon robotic manipulation tasks and works with both vision and non-vision observations and different action spaces. Related Work ============ Inverse Reinforcement Learning (IRL) and Meta-IRL ------------------------------------------------- Inverse reinforcement learning (IRL) models another agent’s (typically the expert’s) reward function, given its policy or observed behavior. Previous works have approached the IRL problem with maximum margin methods [@abbeel2004irl][@ratliff2006mmp] and maximum entropy methods [@ziebart2010entropyirl][@ziebart2008maximum][@boularias11a2011entropyirl]. In particular, maximum entropy methods recover a distribution of trajectories that have maximum entropy among all distributions and match the demonstrated policy’s behaviors. While these methods have shown promising results in continuous control problems, they suffer from low sample efficiency due to the need for evaluating the robot’s policy, which can be alleviated by meta-learning (i.e., meta-IRL). SMILe [@smile] and PEMIRL [@yu2019meta] are two meta-IRL algorithms based on AIRL [@fu2018learning] that leverage a distribution of tasks to learn a continuous task-embedding space to encode task information and achieve fast adaptation to a new but similar task. Our work differs from [@smile][@yu2019meta] in four crucial ways. First, our meta-IRL algorithm works with real robots and image observations. 
Second, instead of a reward function, we directly model a Q-function that the policy can optimize, in order to increase IRL sample efficiency. Third, we train the task encoder with the behavioral cloning (BC) gradient as opposed to the IRL gradient for stabler and more efficient learning. Lastly, we bootstrap policy and task encoder learning using BC before training via meta-IRL. Real-robot Learning from Demonstrations (LfD) --------------------------------------------- Our work is related to real-robot LfD [@argall-survey-robot-learning-2009], such as [@xu2018neural][@huang2019neural][@huang2019motion]. In particular, [@finn2016guided] developed IRL on real robots without learning from raw pixels. Other works (e.g., [@zhang2017imitationmanipulation][@Kober2010RAM][@paster2009motorskills][@sermanet2018time]) used BC for real-robot LfD. Another work [@lynch2019latentplan] developed goal-conditioned BC on a simulated robot to learn long-horizon tasks by playing with objects in the scene. While enjoying efficient learning by casting imitation learning into a supervised learning problem, BC suffers from the covariate shift between the train and test data. In comparison, IRL achieves robust performance by modeling the state-action joint distribution instead of the conditional action distribution in BC [@divergence2019]. Different from previous works, our meta-IRL algorithm works on real-robot *vision-based* tasks, and its Q-functioned IRL policy gradient can be directly combined with the BC gradient signal to approach both the sample efficiency of BC and the robustness of IRL. One-shot Meta-imitation Learning on Real Robots ----------------------------------------------- Our algorithm attempts to enable robots to quickly and robustly imitate a single unseen video demonstration by learning from a distribution of tasks with shared structure, i.e., one-shot robot meta-imitation learning. 
For example, [@finn2017one] combines gradient-based meta-learning and BC on a real robot to learn quickly from video demonstrations. [@yu2018one] then extends [@finn2017one] to enable robots to learn from human-arm demonstrations directly. [@yu2019one] then improves [@yu2018one] to meta-imitation-learn multi-stage real-robot visuomotor tasks in a hierarchical manner. However, constrained by the covariate shift problem of BC, these works show limited task performance (e.g., around $50\%$ success rate for the training tasks). In contrast, our algorithm learns a vision-based manipulation task robustly ($90\%+$ success rates) and efficiently (117 videos, 90 trials) by utilizing the generalization ability of task embeddings [@rakelly2019efficient] and a novel Q-functioned IRL formulation. Preliminaries ============= Off-policy Reinforcement Learning via Soft Actor-Critic ------------------------------------------------------- Standard RL models a task $\mathcal{M}$ as an MDP defined by a state space $\mathcal{S}$, an initial state distribution $\rho_0 \in \Pi(\mathcal{S})$, an action space $\mathcal{A}$, a reward function $\mathcal{R}: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, a dynamics model $\mathcal{T}: \mathcal{S} \times \mathcal{A} \to \Pi(\mathcal{S})$, a discount factor $\gamma \in [0, 1)$, and a horizon $H$. Here, $\Pi(\cdot)$ defines a probability distribution over a set. The robot acts according to stochastic policy $\pi: \mathcal{S} \to \Pi(\mathcal{A})$, which specifies action probabilities for each $s$. Each policy $\pi$ has a corresponding $Q^\pi: \mathcal{S}\times\mathcal{A} \to \mathbb{R}$ function that defines the expected discounted cumulative reward for taking an action $a$ from $s$ and following $\pi$ onward. Off-policy RL, particularly Soft Actor-Critic (SAC) [@haarnoja2018soft], reuses historical experiences to improve learning sample efficiency by recovering a “soft” Q-function estimator $Q_{\theta}$. 
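SAC’s soft Q-function and the softmax policy it induces can be illustrated with a minimal numpy sketch. Discrete actions and the toy Q-values are assumptions made purely for clarity; the tasks in this paper use continuous actions and neural function approximators.

```python
import numpy as np

def soft_value(q, alpha=1.0):
    # "Soft" state value: V(s) = alpha * log sum_a exp(Q(s, a) / alpha)
    return alpha * np.log(np.sum(np.exp(q / alpha)))

def soft_optimal_policy(q, alpha=1.0):
    # The KL objective pi* = argmin KL(pi || exp(Q)/Z) is attained by softmax(Q).
    z = np.exp(q / alpha)
    return z / z.sum()

def kl_to_exp_q(pi, q, alpha=1.0):
    # D_KL(pi(.|s) || exp(Q(s,.)/alpha) / Z(s)) for a discrete action set
    target = soft_optimal_policy(q, alpha)
    return float(np.sum(pi * (np.log(pi) - np.log(target))))

q = np.array([1.0, 2.0, 0.5])            # toy Q-values for 3 discrete actions
pi_star = soft_optimal_policy(q)
assert abs(pi_star.sum() - 1.0) < 1e-9   # valid distribution
assert kl_to_exp_q(pi_star, q) < 1e-9    # softmax(Q) attains the KL minimum
assert np.allclose(pi_star, np.exp(q - soft_value(q)))  # pi* = exp(Q - V)
```

The last assertion restates the familiar identity $\pi^*(a \mid s) = \exp(Q(s,a) - V(s))$, i.e., the partition function $Z(s)$ equals $\exp(V(s))$.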
A policy can then be learned by minimizing the KL divergence between the policy distribution and the exponential-Q distribution: $\pi^* = \operatorname*{arg\,min}_{\pi \in \Pi} D_{KL} {\infdivx}{\pi(a \mid s)}{\frac{\exp(Q^{\pi_{old}}_{\theta}(s, a))}{Z(s)}}$ Timestep-centric IRL as Adversarial Imitation Learning {#sec:Timestep-centric} ------------------------------------------------------ The purpose of IRL is to learn the energy function $f_\theta$ implicit in the provided expert demonstrations and use $f_\theta$ to learn a policy that robustly matches the expert performance. In particular, timestep-centric IRL aims to recover an energy function $f_\theta(s, a)$ that rationalizes and matches the demonstrated expert’s conditional action distribution: $p_{\theta}(a \mid s) = \frac{\exp(f_\theta(s, a))}{Z_\theta} \propto \exp(f_\theta(s, a)) = \overline{p_{\theta}}(a \mid s)$, where $Z_\theta$ is the partition function, an integral over all possible actions given state $s$. In other words, IRL minimizes the KL divergence between the actual and predicted expert conditional action distributions: $\pi_E(a \mid s)$ and $p_{\theta}(a \mid s)$. Adversarial IRL [@fu2018learning][@ho2016generative] provides a sampling-based approximation to MaxEntIRL [@ziebart2008maximum] in an adversarial manner. Specifically, AIRL [@fu2018learning] learns a generative policy $\pi_\psi$ and a binary discriminator $D_\theta$ derived from energy function $f_\theta$: $$\begin{aligned} \label{eq:dtheta} &D_\theta(s, a) = P((s, a) \text{ is generated by expert})\nonumber\\ &= \frac{\overline{p_{\theta}}(a \mid s) }{\overline{p_{\theta}}(a \mid s) + \pi_\psi(a \mid s)} = \frac{\exp (f_\theta(s, a))}{\exp (f_\theta(s, a)) + \pi_\psi(a \mid s)}\end{aligned}$$ and $\theta$ is trained to distinguish state-action pairs sampled from the expert vs.
the policy, using binary cross entropy loss: $$\begin{aligned} \label{eq:discriminatorloss} \mathcal{L}^{IRL} = -\mathbb{E}&_{(s, a) \sim \pi_\psi, \pi_E}[y(s, a) \log(D_\theta(s, a)) \nonumber\\ &+ (1-y(s, a))\log(1-D_\theta(s, a))]\end{aligned}$$ where $y(s, a) = \mathds{1}\{(s, a) \text{ is generated by expert }\pi_E\}$. Meanwhile, the policy is trained to maximize the MaxEntIRL Objective [@ziebart2008maximum], or equivalently, to match the expert’s state-action joint distribution via reverse KL divergence [@divergence2019]. One-shot Meta-imitation Learning from A Single Video ---------------------------------------------------- In one-shot meta-imitation learning, the robot is trained to solve a large number of tasks drawn from a task distribution $p(\mathcal{M})$. The total number of tasks in this task distribution can be finite or infinite. Each imitation task $\mathcal{M}_{train}^i$ consists of a single video demonstration $\mathcal{D}^i_{\pi_E}$. During training, the robot can also generate limited practice trajectories (e.g., 90). For example, in the Pick-Pour-Place experiment in Fig.\[fig:intro\], the robot receives a single video demonstration for each of the 117 tasks. Each task is characterized by a different combination of container positions, or pouring into the green vs. the yellow container. At test time, the robot receives a single video of a new task $\mathcal{M}_{test}^i$ drawn from $p(\mathcal{M})$. For example, a new Pick-Pour-Place test task can be a *new* combination of container positions or pouring into a *new* container (e.g., the red or orange container). The robot then needs to solve this task the first time *without trial-and-error*. Embedding-based Meta-learning ----------------------------- Embedding-based meta-learning [@yu2019meta][@rakelly2019efficient] learns a task-specific embedding vector $z$ that contains task-level abstraction to adapt to a new but related task quickly. 
This method aims to learn a task-conditioned policy $\pi(a | s, z)$ that maximizes task-conditioned expected returns: $\max_\pi \mathbb{E}_{(s_t, a_t) \sim \pi, \rho_0} [\sum_{t=1}^T r(s_t, a_t | z) + \alpha \mathcal{H}(\pi(a_t|s_t, z))]$, by learning an embedding space $Z$ that maximizes the mutual information between $z$ and task context $c$. The goal is to make this learned embedding space generalizable to new tasks so that at test time, the policy can quickly adapt to unseen tasks with few or no practice trials. A key advantage of embedding-based meta-learning is the ability to learn from off-policy experiences. However, current methods have mostly, if not exclusively, been demonstrated on non-vision tasks in simulation. Mathematical Formulation for SQUIRL =================================== SQUIRL: Timestep-centric IRL as Soft Q-Learning {#sec:q} ----------------------------------------------- Previous works in timestep-centric IRL such as [@smile][@yu2019meta][@fu2018learning] have interpreted the energy function $f_\theta$ in Eq.\[eq:dtheta\] as a reward function $r_\theta$ and then recovered a Q or advantage function based on the reward $r_\theta$ for policy improvement. To improve IRL sample efficiency, we propose to *bypass* this reward learning and directly interpret $f_\theta(s,a)$ as the soft Q-function [@haarnoja2018soft] $Q^{\pi_{mix}}_\theta(s,a)$. This soft Q-function models the expert’s behavior as maximizing both the Q-value and its *entropy* (i.e., randomness) simultaneously. It also encourages the robot to explore the real world to imitate the expert more robustly.
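This Q-functioned discriminator (Eq.\[eq:dtheta\] with $f_\theta$ read as $Q^{\pi_{mix}}_\theta$) and its binary cross-entropy loss (Eq.\[eq:discriminatorloss\]) can be sketched in a few lines of numpy. The toy Q-values and policy likelihoods below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def discriminator(q_sa, pi_a_given_s):
    # Eq. (dtheta) with f_theta interpreted as the soft Q-function:
    # D(s, a) = exp(Q(s, a)) / (exp(Q(s, a)) + pi_psi(a | s))
    eq = np.exp(q_sa)
    return eq / (eq + pi_a_given_s)

def discriminator_bce_loss(q_sa, pi_a_given_s, is_expert):
    # Binary cross-entropy of Eq. (discriminatorloss); y = 1 for expert pairs.
    d = discriminator(q_sa, pi_a_given_s)
    y = is_expert.astype(float)
    return float(-np.mean(y * np.log(d) + (1 - y) * np.log(1 - d)))

# Toy batch: the discriminator is rewarded for assigning expert pairs
# higher Q-values than robot pairs.
q_sa = np.array([2.0, 1.5, -1.0, -0.5])   # assumed Q-values per (s, a) pair
pi = np.array([0.2, 0.3, 0.4, 0.1])       # assumed policy likelihoods pi_psi(a|s)
labels = np.array([1, 1, 0, 0])           # first two pairs come from the expert
loss = discriminator_bce_loss(q_sa, pi, labels)
assert loss > 0.0
```

Note that the discriminator has no free parameters beyond $Q_\theta$ itself: pushing the loss down directly shapes the Q-function.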
Under this formulation, approximating the expert’s conditional action distribution is equivalent to recovering a soft Q-function under which the expert is soft Q-optimal: $$\begin{aligned} &\operatorname*{arg\,min}_\theta D_{KL}{\infdivx}{\pi_E(a \mid s)}{p_{\theta}(a \mid s)} \nonumber\\ = &\operatorname*{arg\,max}_\theta \mathbb{E}_{a \sim \pi_E(a \mid s)} [Q^{\pi_{mix}}_\theta(s, a)] - \log Z_\theta \label{eq:softqtheta}\end{aligned}$$ Eq.\[eq:softqtheta\] rationalizes the expert behavior intuitively because the expert should be optimal with respect to the *cumulative* reward [@ziebart2010entropyirl], not the immediate reward. Here, $Q^{\pi_{mix}}_\theta$ is under a mixture policy $\pi_{mix}$ between the robot and expert’s policies. SQUIRL as Expert Imitation and Adversarial Learning {#sec:match} --------------------------------------------------- Under SQUIRL, the policy learning objective (Eq.\[eq:rl\]) is also equivalent (derivations on website) to matching: 1) the exponential-Q distribution of the discriminator $\theta$ (Eq.\[eq:match\]), 2) the generator’s objective in Generative Adversarial Networks (GANs) [@goodfellow2014generative] (Eq.\[eq:gan\]), and 3) the joint state-action distribution of expert [@divergence2019] (Eq.\[eq:joint\]): $\pi^* = \operatorname*{arg\,min}_{\pi \in \Pi} \mathcal{L}^{RL}(\pi)$, where $$\begin{aligned} \label{eq:rl}&\mathcal{L}^{RL}(\pi) =D_{KL} {\infdivx}{\pi_\psi(a \mid s)}{\frac{\exp{Q^{\pi_{mix}}_{\theta}(s, a)}}{Z(s)}} \\ \label{eq:match}&= D_{KL} {\infdivx}{\pi_\psi(a \mid s)}{p_\theta(a \mid s)}\\ \label{eq:gan}&= \mathbb{E}_{(s,a) \sim \pi_{mix}}[\log(1-D_\theta(s,a))-\log(D_\theta(s,a))]\\ \label{eq:joint}&= D_{KL} {\infdivx}{\rho_{\pi_\psi}(s, a)}{\rho_{\pi_E}(s, a)}\end{aligned}$$ Meanwhile, the discriminator $\theta$ is matching its Q-function to the log-distribution of the expert’s conditional action distribution (Section \[sec:Timestep-centric\]). 
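The equivalence between Eq.\[eq:rl\] and Eq.\[eq:gan\] rests on a per-sample identity: with $D_\theta$ defined as above, $\log(1-D_\theta(s,a)) - \log D_\theta(s,a) = \log \pi_\psi(a \mid s) - Q^{\pi_{mix}}_\theta(s,a)$, which differs from the KL integrand in Eq.\[eq:rl\] only by the action-independent $\log Z(s)$. A quick numerical check with assumed toy values:

```python
import numpy as np

q = 1.3      # assumed Q^{pi_mix}_theta(s, a)
pi = 0.25    # assumed pi_psi(a | s)

# Discriminator from Eq. (dtheta): D = exp(Q) / (exp(Q) + pi)
d = np.exp(q) / (np.exp(q) + pi)

lhs = np.log(1 - d) - np.log(d)   # generator-style objective inside Eq. (gan)
rhs = np.log(pi) - q              # KL integrand of Eq. (rl), minus log Z(s)
assert abs(lhs - rhs) < 1e-9      # identical up to floating-point error
```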
Therefore, when this Q-function is optimal: $Q^{\pi_{mix}}_\theta = Q^{\pi_{mix}}_{\theta^*}$, the robot’s policy objective (Eq.\[eq:rl\]) is also matching the expert’s conditional action distribution: $$\label{eq:indirect} \psi^* = \operatorname*{arg\,min}_\psi E_{\rho_{\pi_{mix}}(s)} [D_{KL} {\infdivx}{\pi_\psi(a \mid s)}{\pi_E(a \mid s)}]$$ Comparison to the Behavioral Cloning (BC) Objective {#sec:bc} --------------------------------------------------- While BC attempts to learn a policy that also matches the expert’s conditional action distribution [@divergence2019], the *fundamental* difference is that the KL-divergence in BC’s case is computed under the *expert’s* narrow state distribution $\rho_{\pi_E}(s)$: $\psi_{BC}^* = \operatorname*{arg\,min}_\psi E_{\rho_{\pi_{E}}(s)} [D_{KL} {\infdivx}{\pi_E(a \mid s)}{\pi_\psi(a \mid s)}]$. In contrast, ours (Eq.\[eq:indirect\]) is computed under $\rho_{\pi_{mix}}(s)$: the state distribution of the *combined* *cumulative* experience of the robot and the expert, which is a much wider distribution than the expert distribution. We hypothesize that this, along with matching the joint state-action distribution of the expert (Eq.\[eq:joint\]), makes our algorithm less susceptible to compounding errors than BC, as experimentally tested in Section \[sec:exp\]. ![image](figures/architecture.png){width="\linewidth"} SQUIRL: Soft Q-functioned Meta-IRL ================================== Shown in Fig.\[fig:architecture\], our algorithm learns three neural networks jointly – a task encoder (yellow), a task-conditioned policy (orange), and a task-conditioned soft Q-function (green): 1. $\Psi_{\phi} (c)$: a **task encoder** that encodes a sampled batch of $C=64$ expert state-action pairs $c = \{s^i_{1:C}, a^i_{1:C}\}$ from a task $i$ into a single 32-dim embedding vector $z^i \in\mathbb{R}^{32}$ (by computing the mean vector across 64 embeddings) that enables generalization to new tasks. 
This batch of expert state-action pairs is randomly sampled and thus does not encode time information. Both the policy and the Q-function accept this embedding vector as input. 2. $\pi_\psi(s, z^i)$: a **task-conditioned policy** the robot uses to perform a task $i$ given state $s$ and the task embedding vector $z^i \in \mathbb{R}^{32}$ outputted by the task encoder $\Psi_{\phi}(c)$. 3. $Q_\theta(s,a,z^i)$: a **task-conditioned soft Q-function** used to train the policy $\pi_\psi(s, z^i)$ to more robustly mimic the expert’s behavior for the robotic manipulation task $i$. To begin, the robot is given an expert trajectory of state-action pairs $\mathcal{D}_{\pi_E}$ for each of the 117 training tasks. The robot first uses these expert trajectories to bootstrap training for both its policy $\pi_\psi$, and the task encoder $\Psi_\phi$ via behavioral cloning (Eq.\[eq:bc\]). This way, the robot can distinguish the train tasks better and learn more quickly in the real world. Next, the robot generates 10 trials (state-action pairs) $\overline{\mathcal{D}}_{\pi_\psi}$ *in the physical world* (not simulation) using its warmed-up policy and task encoder. Then, the robot uses both the expert’s and its state-action pairs to train a discriminator $\theta$. This discriminator classifies which state-action pairs come from the expert $\pi_E$ vs. the robot $\pi_\psi$. At first, the robot is distinctively worse than the expert at performing the tasks. This makes it easy for the discriminator to classify. By doing so, the discriminator learns a Q-function $Q^{\pi_{mix}}_\theta$ using Eq.\[eq:softqtheta\]. Using the learned Q-function $Q^{\pi_{mix}}_\theta$, the robot trains its policy $\pi_\psi$ via Eq.\[eq:rl\]. Meanwhile, the robot also has the option to continue updating its task-conditioned policy and task encoder via behavioral cloning (Eq.\[eq:bc\]). 
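The task encoder’s mean-pooling over the sampled batch makes the embedding invariant to the order of the 64 expert pairs, consistent with the batch carrying no time information. A toy sketch below: the single linear-plus-tanh map with random weights is a stand-in assumption for the real per-timestep network.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_encoder(states, actions, w):
    # Embed each expert (s, a) pair (a linear map stands in for an MLP),
    # then mean-pool across the C pairs into one embedding vector z.
    pairs = np.concatenate([states, actions], axis=1)   # (C, s_dim + a_dim)
    per_step = np.tanh(pairs @ w)                       # (C, 32)
    return per_step.mean(axis=0)                        # (32,)

C, s_dim, a_dim, z_dim = 64, 10, 4, 32                  # dims assumed for the sketch
w = rng.normal(size=(s_dim + a_dim, z_dim))
s = rng.normal(size=(C, s_dim))
a = rng.normal(size=(C, a_dim))
z = task_encoder(s, a, w)
assert z.shape == (z_dim,)

# Shuffling the 64 pairs leaves z unchanged: the embedding is permutation-invariant.
perm = rng.permutation(C)
assert np.allclose(z, task_encoder(s[perm], a[perm], w))
```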
Since training the policy via Eq.\[eq:rl\] is equivalent to indirectly imitating the expert (Eq.\[eq:joint\] and \[eq:indirect\]), as derived in Section \[sec:match\], the trajectories generated by the policy gradually become more similar to the expert. This makes the state-action pairs more difficult for the discriminator to classify. This difficulty, in turn, forces the discriminator to learn a more precise Q-function, which then encourages the policy to mimic the expert even more closely. This cycle repeats until convergence (90 trials *in total*), at which point: 1) the policy matches the expert performance, 2) the task encoder learns to generalize to new tasks, and 3) the discriminator continues to struggle to distinguish state-action pairs correctly despite having learned an accurate Q-function. Rationale for Bypassing Reward Learning via SQUIRL -------------------------------------------------- SQUIRL learns a Q-function without rewards because 1) the policy is ultimately trained by the Q-function, not rewards, thus bypassing reward learning improves IRL sample efficiency, and 2) circumventing reward learning avoids off-policy Q-learning from a *constantly changing* reward function and makes training easier and more stable empirically. Architectures for Policy, Task Encoder, and Q-function ------------------------------------------------------ For all non-vision tasks, we parameterize $\pi_\psi, \Psi_\phi, Q_\theta$ with five fully-connected (FC) layers. For vision tasks, we use a 5-layer CNN followed by a spatial-softmax activation layer for the RGB image. This activation vector is then concatenated with the non-vision input vector and together passed through five FC layers. Our algorithm is *general* and works with many other network architectures, state, and action spaces. 
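The spatial-softmax activation mentioned above converts each CNN feature map into an expected 2D image coordinate, a common low-dimensional summary for visuomotor control. A minimal numpy sketch (the feature values are assumed for illustration):

```python
import numpy as np

def spatial_softmax(features):
    # features: (C, H, W) CNN activation maps -> (C, 2) expected (x, y) keypoints
    c, h, w = features.shape
    flat = features.reshape(c, -1)                       # row-major: index = y*w + x
    probs = np.exp(flat - flat.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)            # per-channel softmax over pixels
    xs, ys = np.meshgrid(np.linspace(-1, 1, w), np.linspace(-1, 1, h))
    ex = probs @ xs.ravel()                              # expected x per channel
    ey = probs @ ys.ravel()                              # expected y per channel
    return np.stack([ex, ey], axis=1)

feat = np.zeros((1, 5, 5))
feat[0, 2, 2] = 10.0                      # one sharp activation at the image center
kp = spatial_softmax(feat)
assert np.allclose(kp, [[0.0, 0.0]], atol=1e-3)  # keypoint lands at the center
```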
Incorporating BC to Bootstrap and Accelerate Learning ----------------------------------------------------- Since our algorithm’s IRL objective (Eq.\[eq:indirect\]) is compatible with BC, as explained in Section \[sec:bc\], our algorithm can be jointly trained with BC to stabilize and accelerate learning without conflicting gradient issues (line 16 in Algorithm \[algo:irl\]): $$\mathcal{L}^{BC} = \mathbb{E}_{(s, a) \sim \pi_E} [\left\lVert\pi_\psi (s, \Psi_\phi(c)) - a\right\rVert^2] \label{eq:bc}$$ This, combined with the off-policy nature of our algorithm, also allows the robot to bootstrap learning by first “pre-training” via BC (Eq.\[eq:bc\]) using the expert demonstrations, before improving performance further via meta-IRL training.

**Algorithm \[algo:irl\] (SQUIRL: training).** **Input:** One expert video demonstration trajectory of state-action pairs $\mathcal{D}^i_{\pi_E}=\{s^i_{1:H}, a^i_{1:H}\}$ for each of the $n$ training tasks $i = 1:n$, where $H$ is the horizon of the task (e.g., $n=117, H=100$)

1. Initialize soft Q-function $Q_\theta$, policy $\pi_\psi$, task encoder $\Psi_{\phi}$, and an empty buffer of off-policy robot trajectories $\mathcal{D}^i_{\pi_\psi} \leftarrow \{\}$ for each training task $i = 1:n$
2. Warm up the policy and task encoder via $\mathcal{L}^{BC}$ (Eq.\[eq:bc\])
3. Sample a batch of $m$ task indices $\{i^{1:m}\}$ from all training tasks $i=1:n$ (e.g., $m=10$)
4. Infer task embedding $z^i \in \mathbb{R}^\mathcal{Z} \leftarrow \Psi_\phi(c)$, where $c = \{s^i_{1:C}, a^i_{1:C}\} \sim \mathcal{D}^i_{\pi_E}$ (e.g., $\mathcal{Z} = 32, C=64$)
5. Generate a robot trajectory of state-action pairs $\overline{\mathcal{D}}^i_{\pi_\psi} = \{s^i_{1:H}, a^i_{1:H}\}$ from task $i$ using $\pi_\psi, z^i$; then $\mathcal{D}^i_{\pi_\psi} \leftarrow \mathcal{D}^i_{\pi_\psi} \cup \overline{\mathcal{D}}^i_{\pi_\psi}$
6. Sample another batch of $m$ task indices $\{i^{1:m}\}$; update $\theta \leftarrow \theta - \nabla_\theta \mathcal{L}^{IRL}$ (Eq.\[eq:discriminatorloss\]) using a combined batch of $\mathcal{B}=128$ robot and expert timesteps $\overline{\mathcal{D}}^i_{\pi_\psi} \cup \overline{\mathcal{D}}^i_{\pi_E}$ and $z^i$, where $\overline{\mathcal{D}}^i_{\pi_\psi} \sim \mathcal{D}^i_{\pi_\psi}$, $\overline{\mathcal{D}}^i_{\pi_E} \sim \mathcal{D}^i_{\pi_E}$, $i=\{i^{1:m}\}$
7. (Optional joint BC training) Sample another batch of $m$ task indices $\{i^{1:m}\}$; update $\{\psi, \phi\} \leftarrow \{\psi, \phi\} - \nabla_{\psi, \phi} \mathcal{L}^{BC}$ (Eq.\[eq:bc\]) using a batch of $\mathcal{B}$ expert timesteps $\overline{\mathcal{D}}^i_{\pi_E} \sim \mathcal{D}^i_{\pi_E}$, $z^i$, $i=\{i^{1:m}\}$
8. Update $\psi \leftarrow \psi - \nabla_\psi \mathcal{L}^{RL}$ (Eq.\[eq:rl\]) using a combined batch of $\mathcal{B}$ robot and expert timesteps $\overline{\mathcal{D}}^i_{\pi_\psi} \cup \overline{\mathcal{D}}^i_{\pi_E}$ and $z^i$, sampled as in step 6
9. Repeat steps 3–8 until convergence; **return** soft Q-function $Q_\theta$, policy $\pi_\psi$, and task encoder $\Psi_{\phi}$ \[algo:irl\]

**Algorithm \[algo:irltest\] (SQUIRL: testing).** **Input:** $\pi_\psi$, $\Psi_\phi$, $Q_\theta$, and a single expert video demonstration of state-action pairs $\mathcal{D}^i_{\pi_E}=\{s^i_{1:H}$, $a^i_{1:H}\}$ from a *new* task $i$ unseen during training

1. Infer the task embedding vector $z^i \in \mathbb{R}^\mathcal{Z} \leftarrow \Psi_\phi(c)$, where $c = \{s^i_{1:C}, a^i_{1:C}\} \sim \mathcal{D}^i_{\pi_E}$ (e.g., $\mathcal{Z} = 32, C=64$)
2. Roll out the robot trajectory in the real world using $\pi_\psi$, $z^i$ \[algo:irltest\]

Using Expert Demonstrations as Both the Input Task Context Variables and the Training Signal for the Task Encoder ------------------------------------------------------------------------------------------------------------ Learning robust task embeddings enables robots to generalize to new tasks quickly [@rakelly2019efficient]. To this end, our algorithm uses 64 expert timesteps as the input task context variable $c$ for the task encoder, as opposed to 64 robot timesteps.
This is because context variables should explore the task and environment sufficiently well to expose the key information of the task, and expert demonstration timesteps are an ideal candidate compared to the timesteps from the robot’s suboptimal policy. As a result, the context variable $c$ input into the task encoder only includes the states and actions of the expert, but not the rewards or the next states. In addition, we choose the BC loss $\mathcal{L}^{BC}$ in Eq.\[eq:bc\] as the training loss for learning the task encoder $\Psi_\phi$. This BC loss is stable since the expert timesteps are fixed. In contrast, the IRL loss $\mathcal{L}^{IRL}$ (Eq.\[eq:discriminatorloss\]) and the policy loss $\mathcal{L}^{RL}$ (Eq.\[eq:rl\]) are less stable because the training data distribution for both losses is non-stationary. This design choice also allows us to learn robust task embeddings first via BC pre-training before performing meta-IRL training via SQUIRL. We empirically observe that such pre-training can improve the training stability and the sample efficiency of SQUIRL, but the final policy performance is similar with or without BC pre-training. In summary, our algorithm is detailed in Algorithm \[algo:irl\] (train) and Algorithm \[algo:irltest\] (test), with hyperparameters detailed here[^3]. Experiments and Results Analysis {#sec:exp} ================================ We evaluate the generality and robustness of our algorithm across long-horizon vision and non-vision tasks with continuous state and action spaces in both simulation (Pick-Carry-Drop, a horizon of 1024 timesteps, 30 train tasks) and the real world (Pick-Pour-Place, a horizon of 100 timesteps, 117 train tasks). There is only a *single* expert demonstration for *each* of the train or test tasks. We compare with the PEARL-BC baseline, which is the behavioral cloning version of PEARL [@rakelly2019efficient].
**Evaluation:** We evaluate the real-robot and simulation experiments over **50** and **500** trials, respectively, across **50** seen and unseen tasks. We report means and standard deviations (“stdev” hereafter). We consider a performance difference between two experiments statistically significant if the difference in **means** is at least as large as **either** standard deviation. Experimental videos are available at <http://crlab.cs.columbia.edu/squirl>. Simulation Experiment: Pick-Carry-Drop -------------------------------------- **Description.** We modify the planar Stacker task [@dmcontrol] to create “Pick-Carry-Drop”. As shown in Fig.\[8PP-decompose\], a robot is tasked to approach, pick, carry, and drop the black box into the stack marked in green. The task is successful if the box is dropped into the stack within 1024 timesteps, and failed otherwise. **State Space.** We evaluate our algorithm on both the vision and the non-vision version of the task, to demonstrate that SQUIRL is *general* across different state space modalities. The state space for the vision version includes 1) the joint angles and velocities for its 5 DOFs, 2) a one-hot vector indicating the current stage of the task, and 3) an RGB image shown in Fig.\[8PP-decompose\]. The non-vision version’s state space replaces the RGB image with the *position of the black box*. **Action Space**. The robot controls its 5-DOF *joint torques*. **Task Definition**. There are a total of 30 training tasks in this experiment, each corresponding to a different *drop location*: $x \in \{-0.15, -0.14, \ldots , 0.14\}$. During test time, we randomly sample a new, *real-valued* drop location from the maximum valid range: $x \in [-0.25, 0.25]$. The green drop location is *invisible* in both the vision and the non-vision version of the task. Therefore, the robot needs to infer the green drop location (i.e., task information) solely from the provided expert video demonstration.
On the other hand, the starting pose of the robot and the location of the black box are all initialized randomly at the beginning of each task. **Robot Trials**. The robot uses 150 training trials *in total*. **Expert Demonstration**. We trained an expert policy from scratch via RL to provide expert demonstrations. The reward function used to train the expert policy comprises six stages, each with a reward of 10. Designing this reward function took significant human effort, which highlights the value of learning directly from video demonstrations.

  Tasks                 Vision Seen        Vision Unseen      Non-vision Seen    Non-vision Unseen
  --------------------- ------------------ ------------------ ------------------ -------------------
  SQUIRL (BC + IRL)     **95.8$\pm$1.7**   **95.0$\pm$1.5**   **97.3$\pm$3.0**   **96.9$\pm$2.0**
  Baseline (PEARL-BC)   77.8$\pm$1.6       76.5$\pm$0.7       90.8$\pm$2.5       89.5$\pm$1.6
  SQUIRL (IRL Only)     93.8$\pm$1.8       93.2$\pm$1.6       94.7$\pm$1.7       93.9$\pm$1.4

  : Pick-Carry-Drop Results (% Success$\pm$Stdev)[]{data-label="tab:sim"}

**Simulation Results and Analysis.** As shown in Table \[tab:sim\], our algorithm, “SQUIRL (BC + IRL)”, pre-trains via BC and then trains the policy using both the BC loss (Eq.\[eq:bc\]) and the IRL policy gradient loss (Eq.\[eq:rl\]). It outperforms the PEARL-BC baseline by a statistically significant margin in both the vision (95.8%$\pm$1.7 vs. 77.8%$\pm$1.6) and non-vision (97.3%$\pm$3.0 vs. 90.8%$\pm$2.5) versions of the task for seen tasks. For unseen tasks, we observed similar outperformance (95.0%$\pm$1.5 vs. 76.5%$\pm$0.7 in the vision case and 96.9%$\pm$2.0 vs. 89.5%$\pm$1.6 in the non-vision case). Qualitatively, in PEARL-BC’s case, the robot sometimes misses the drop location as it attempts to drop the box, or fails to pick up the box when the box gets stuck by the walls of the stack (kindly see website). The performance drop of the baseline from the non-vision version (90.8%$\pm$2.5 and 89.5%$\pm$1.6 for seen and unseen tasks) to the vision version (77.8%$\pm$1.6 and 76.5%$\pm$0.7 for seen and unseen tasks) is mainly because vision-based manipulation tends to suffer from larger compounding errors.
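The seen-vs-unseen and algorithm-vs-baseline comparisons throughout this section apply the criterion from the Evaluation paragraph: a gap is significant only if the difference in means is at least as large as either standard deviation. A one-function sketch of that rule:

```python
def significantly_different(mean_a, std_a, mean_b, std_b):
    # The paper's criterion: the gap in means must be at least either stdev.
    gap = abs(mean_a - mean_b)
    return gap >= std_a or gap >= std_b

# Vision seen tasks: SQUIRL 95.8 +/- 1.7 vs. PEARL-BC 77.8 +/- 1.6 -> significant
assert significantly_different(95.8, 1.7, 77.8, 1.6)
# SQUIRL seen vs. unseen vision tasks: 95.8 +/- 1.7 vs. 95.0 +/- 1.5 -> similar
assert not significantly_different(95.8, 1.7, 95.0, 1.5)
```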
Nevertheless, as evident in the statistical similarities between seen and unseen tasks for SQUIRL (95.8%$\pm$1.7 vs. 95.0%$\pm$1.5 for vision) and PEARL-BC (77.8%$\pm$1.6 vs. 76.5%$\pm$0.7 for vision), both algorithms can generalize to unseen tasks, due to the generalizability of task embeddings. **Ablation: IRL Gradient Only**. To compare the performance contribution of SQUIRL’s meta-IRL core training procedure directly against PEARL-BC, we created “SQUIRL (IRL only)”, which trains the policy using only the policy gradient loss in Eq.\[eq:rl\] (no BC joint training or pre-training). This *ablated* version still outperforms the PEARL-BC baseline (93.8%$\pm$1.8 vs. 77.8%$\pm$1.6 for seen vision tasks, 93.2%$\pm$1.6 vs. 76.5%$\pm$0.7 for unseen vision tasks). Nevertheless, by combining BC and IRL gradients, “SQUIRL (BC + IRL)” improves performance slightly further (95.8%$\pm$1.7 and 95.0%$\pm$1.5). Intuitively, while BC only matches the expert’s conditional action distribution under the *expert’s* state distribution, BC’s supervised learning signal is stabler than IRL. Joint training with BC and IRL gradients can be interpreted as combining the stability of BC and the robustness of Q-functioned IRL, by matching the conditional action distribution of the expert under the broader state distribution of the expert-robot mixture experience (Eq.\[eq:indirect\]), in addition to matching the expert’s joint state-action distribution (Eq.\[eq:joint\]). Real-Robot Experiment: Pick-Pour-Place -------------------------------------- **Description.** We evaluated our algorithm on the UR5-Seed robot (Fig.\[fig:hardware\]) to perform a set of long-horizon pick-pour-place tasks. As shown in Fig.\[fig:hardware\], in each task, there is a grey cylindrical bottle, an iron pebble that is already in the bottle, and more than one container on the table. 
The robot is tasked to approach and pick up the grey bottle, pour the iron pebble into a specific container, and place the bottle back on the table. The task is a success only if the pebble is poured into the *correct* container and the bottle is placed upright on the table within $H=100$ timesteps, and a failure otherwise. **State Space.** The state space contains a top-down or $45\degree$ camera’s RGB image (Fig.\[ppp\]), and 2 binary indicators for whether the robot has poured or closed the hand, respectively. **Action Space.** The action space includes the Cartesian unit directional vector for the end-effector movement. During each timestep, the robot can adjust the end-effector by 2 cm along any 3D direction. The action space also includes a binary indicator to control the arm vs. the hand and a ternary indicator to close, open, or rotate the hand for pouring. **Orthogonality to State and Action Representations**. While Pick-Pour-Place can be tackled by first localizing the correct container via object detection (an alternative state space) and then executing motion-planning trajectories to pour (an alternative action space), our algorithm is *general* across, and orthogonal to, alternative state and action spaces. **Task Definition.** As shown in each row of images in Fig.\[fig:intro\], each task is defined by the positions and colors of the containers, and by the correct container to pour into. The 117 train tasks *always* contain *only* the green and yellow containers. 25 of the 50 test tasks have the green and yellow containers at *new* positions. The remaining 25 test tasks *add* the *unseen* red and orange containers, or one of them. Since there is always more than one container in the RGB image, the robot will not know which container to pour into *without* the expert demonstration. Therefore, the robot needs to depend solely on the task encoder’s ability to extract the correct task information from the expert demonstration. **Robot Trials**.
The robot collects 90 training trials *in total*. **Expert Demonstration.** We collect demonstrations via teleoperation using a Flock of Birds sensor[^4]. Using the human wrist pose detected by the sensor in real time, we move, open, close, or rotate the robot hand for pouring. We collected $117$ video demonstrations, one for each of the 117 training tasks. It takes 1–2 minutes to collect one demonstration.

  Tasks                    RGB Image                    Seen               Unseen
  ------------------------ ---------------------------- ------------------ ------------------
  SQUIRL (BC + IRL)        Top-down                     **92.0$\pm$4.5**   **90.0$\pm$7.1**
  Baseline (PEARL-BC)      Top-down                     70.0$\pm$7.1       68.0$\pm$11.0
  Baseline (Standard-BC)   Top-down                     60.0$\pm$10.0      56.0$\pm$11.4
  SQUIRL (BC + IRL)        $45\degree$ (**Ablation**)   90.0$\pm$7.1       88.0$\pm$8.4

  : Pick-Pour-Place Results (% Pour Success$\pm$Stdev)[]{data-label="tab:real"}

**Real-robot Results and Analysis.** As shown in Table \[tab:real\], our algorithm outperforms the PEARL-BC baseline by a statistically significant margin in both seen tasks (92.0%$\pm$4.5 vs. 70.0%$\pm$7.1) and unseen tasks (90.0%$\pm$7.1 vs. 68.0%$\pm$11.0). This observed outperformance mainly originates from our soft Q-functioned IRL formulation, which forces the robot to imitate the expert under a much wider state distribution provided by the expert-robot mixture trajectories, instead of the narrow state distribution of the expert demonstrations. This helps reduce compounding errors during task execution. The low performance of the PEARL-BC baseline is mainly due to additional compounding errors induced by real-world sensory noise such as unstable lighting conditions and small perturbations to camera positions. Qualitatively, the PEARL-BC baseline sometimes pours into the wrong container, misses the target container by a few centimeters, or moves past the target container while failing to pour in time (kindly see website for examples). Nevertheless, from the statistical similarity between seen and unseen tasks for both our algorithm (92.0%$\pm$4.5 vs.
90.0%$\pm$7.1) and PEARL-BC (70.0%$\pm$7.1 vs. 68.0%$\pm$11.0), we see that the learned task encoder is still effectively generalizing to a new, related task. **Comparison to the “Standard-BC” Baseline**. We also compared to “Standard-BC” (60.0%$\pm$10.0 and 56.0%$\pm$11.4 for seen and unseen tasks), which performs no meta-learning and learns every train or test task *independently* from scratch via BC. As a result, the neural network *overfits* to the single demonstration and fails to generalize to real-world sensory (camera) noise at test time. Note that Standard-BC’s unseen-task performance is slightly lower than on seen tasks since the unseen tasks are more challenging, with at most 4 containers on the table, compared to only 2 containers in seen tasks. **Ablation: Non-top-down Camera**. We also tested our algorithm with a $45\degree$ RGB image (90.0%$\pm$7.1 and 88.0%$\pm$8.4 for seen and unseen tasks) against a top-down RGB image (92.0%$\pm$4.5 and 90.0%$\pm$7.1 for seen and unseen tasks). The statistical similarity between the two shows that SQUIRL is *general* and can accept a non-top-down RGB input image. Conclusion ========== We introduced SQUIRL, a robust, efficient, and general Soft Q-functioned meta-IRL algorithm, towards enabling robots to learn from limited expert (one demonstration per task) and robot (90 in total) trajectories. This algorithm is statistically significantly more robust than behavioral cloning and requires no trial-and-error at test time. Finally, this general algorithm has been tested to work with various long-horizon manipulation tasks, across both vision and non-vision observations and different action spaces. In the future, we will extend this algorithm to learn from direct human-arm demonstrations instead of teleoperation. This will lower the cost of collecting real-world expert demonstrations further. We also aim to incorporate hierarchical learning into SQUIRL to solve much longer-horizon manipulation tasks by reusing low-level subpolicies.
[^1]: This work is supported by NSF Grant CMMI-1734557. Authors are with the Columbia University Robotics Group, New York, NY 10027, USA.

[^2]: Site: [www.seedrobotics.com/rh8d-dexterous-hand.html](www.seedrobotics.com/rh8d-dexterous-hand.html)

[^3]: Hyperparameters in Algorithms \[algo:irl\] and \[algo:irltest\]. Policy gradient batch size $\mathcal{B}$: 1024 (non-vision), 128 (vision); task embedding batch size $C$: 64; all learning rates: $3\times10^{-4}$; starting SAC alpha: $1\times10^{-5}$; SAC target entropy: $-300$; IRL updates per epoch $J$: 400; policy updates per epoch $K$: 2000; task embedding size $\mathcal{Z}$: 32; meta-batch size $m$: 10; discount rate $\gamma$: 0.99.

[^4]: Flock of Birds is a 6D pose tracker from Ascension Technologies Corp.
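For readers re-implementing the method, the hyperparameters listed in footnote 3 can be gathered into one configuration object. The sketch below is illustrative only: the values come from the footnote, but the dict layout, key names, and helper function are our assumptions, not part of any released SQUIRL code.

```python
# Hyperparameters reported in footnote 3. Only the numeric values come
# from the paper; key names and this helper are illustrative assumptions.
SQUIRL_HPARAMS = {
    "policy_batch_size": {"non_vision": 1024, "vision": 128},  # B
    "task_embedding_batch_size": 64,   # C
    "learning_rate": 3e-4,             # all learning rates
    "sac_alpha_init": 1e-5,            # starting SAC alpha
    "sac_target_entropy": -300,
    "irl_updates_per_epoch": 400,      # J
    "policy_updates_per_epoch": 2000,  # K
    "task_embedding_size": 32,         # Z
    "meta_batch_size": 10,             # m
    "discount": 0.99,                  # gamma
}


def get_hparam(name, variant=None):
    """Look up a hyperparameter, resolving the vision/non-vision variants."""
    value = SQUIRL_HPARAMS[name]
    if isinstance(value, dict):
        if variant is None:
            raise ValueError(f"{name!r} requires a variant: {sorted(value)}")
        return value[variant]
    return value
```

A caller would then write, e.g., `get_hparam("policy_batch_size", "vision")` to pick the batch size matching its observation space, keeping the vision and non-vision pipelines on one config.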
Story highlights
- White House complains of investigations, but its reluctant responses contribute to the GOP charge
- Republicans have undermined their claims of oversight that Democrats call a witch hunt
- Possible presidential candidates' involvement makes 2016 jockeying a factor
- Former CIA director says there is still a lack of clarity on what the biggest security lapses were

Nearly a year later, Benghazi remains a flashpoint in Washington for two very different reasons: indefensible pre-attack policy decisions and irresistible post-attack politics. The Obama White House, from the president on down, complains of "phony" Republican-led congressional investigations. Yet the administration's own reluctant, and at times inaccurate, responses to congressional inquiries have contributed to the GOP charge that the administration, at a minimum, has been less than transparent. "We need to get to the bottom of what happened that terrible night, why it happened, and how we can prevent similar tragedies in the future," House Speaker John Boehner said last week in serving notice that the House Benghazi investigations would continue into the fall and include new subpoenas for documents and testimony if necessary. There are legitimate questions about why repeated and specific warnings about the Benghazi security situation were undervalued or ignored. Both lawmakers and intelligence professionals point to this weekend's unprecedented wave of Middle East and Africa embassy closings as, at least in part, a lesson learned from the September 11, 2012, attack that killed Ambassador Chris Stevens and three other Americans. Republicans, however, have at times undermined their own claim to be interested only in legitimate congressional oversight, not what Democrats often label a partisan witch hunt.
Boehner, for example, has at times privately urged lawmakers leading the investigations to focus on what the evidence shows -- and not let their partisan instincts allow their public rhetoric to get out ahead of the facts.

[Photo gallery: Attack on U.S. mission in Benghazi. Attackers set the U.S. mission in Benghazi, Libya, on fire on September 11, 2012. The U.S. ambassador to Libya, Christopher Stevens, and three other U.S. nationals were killed during the attack. The Obama administration initially thought the attack was carried out by an angry mob responding to a video, made in the United States, that mocked Islam and the Prophet Mohammed, but the storming of the mission was later determined to have been a terrorist attack.]
Also undermining the GOP effort: fund-raising e-mails and videos from Republican and conservative groups asserting a Benghazi cover-up orchestrated by President Barack Obama and then-Secretary of State Hillary Clinton. "You've got a very valid point," Rep. Darrell Issa, chairman of the committee leading the House Benghazi review, told CNN in an interview for "The Truth About Benghazi," a one-hour program looking at the attack and at the lingering policy and political questions. "I would prefer that fund-raising by outside groups stay away from the hard work of Congress. "But it isn't going to happen." Early 2016 jockeying also factors into the politics. Kentucky GOP Sen.
Rand Paul says Clinton is responsible for the sub-par security in Benghazi even if, as she says, the stream of warnings and requests for more personnel never reached her desk. "That's precisely her culpability," Paul told CNN. "When you lead the State Department, decisions in one of the most dangerous countries in the world should rise to your level. That's your job to make sure those messages get to you." GOP strategist Kristen Soltis Anderson understands the GOP effort to use Benghazi to tarnish Clinton's image, but describes her as the most "formidable" Democratic 2016 hope and adds, "I don't know that Benghazi alone would sink a Clinton candidacy for president." And if the attacks now are somehow designed to discourage Clinton from running, veteran Democratic strategist and Clinton ally Paul Begala suggests an opposite reaction. "She's never been a person to back down to a bully," Begala said in an interview. "Hillary is the type of person to be motivated by that. To stand up and fight back." Benghazi politics often get more attention because of the personalities involved. But there is also a policy divide. Republicans say they do not yet have a full picture of three critical issues:

• Why the warnings didn't reach the point where the State Department either sent more security help or ordered the Benghazi mission closed.
• Why, especially given the weeks of threat warnings, there was no viable military option to assist the State Department personnel at the Benghazi mission and the predominantly CIA-run annex that came under attack later that same evening and where two of the four Americans were killed.

• And why, nearly a year later, no one has been brought to justice.

New FBI Director James Comey is being pressed to update Congress on the investigation within a month of his officially beginning work next month. On the military response question, the Pentagon now says it is making significant improvements in its contingency planning for responding to embassies in danger zones. But former Pentagon brass told Congress there was no viable option that night, with then-Defense Secretary Leon Panetta saying the military "should not be a 911 service capable of being on the scene within minutes." The top Democrat on the House Armed Services Committee, Rep. Adam Smith of Washington, calls Benghazi a tragedy but labels continued GOP questioning a witch hunt. "The bottom line is there was not a force available that could get there in time," Smith told CNN. "So I don't think these questions need to be asked again." GOP Rep. Jason Chaffetz, however, says that his classified briefings have included information that convinced him more could have been done. "How is it that we have a man down, in this case four men down and an injured person, and the cavalry never comes over the hill?" Chaffetz said. On the first question, the administration's official review, by what is known as the Accountability Review Board, found a series of State Department bureaucratic failures at several levels, but found no negligence. "A whitewash" is how Issa describes that analysis. That process was led by veteran diplomat Thomas Pickering, who has served in both Democratic and Republican administrations. Pickering defended the integrity of the review in the weeks after its release.
Several times, however, he canceled scheduled interviews with CNN and ultimately said his attorney had advised him not to speak to the network. Now, House investigators are pursuing documents the review board relied on, and this is likely to be one fall focus. Former CIA Director Michael Hayden, the retired Air Force general and a veteran of many Washington dramas, said GOP talk of a cover-up is "a loaded word." But he says there is, nearly a year later, a lack of full clarity in determining where the biggest security-related mistakes were made. "This is team ball," Hayden said. "There are a lot of people who now look back and say, 'I should have done that. I could have done that. Maybe I could have prevented this.'"
630 P.2d 217 (1981) In the Matter of A.J.S. Youth In Need of Care. No. 80-483. Supreme Court of Montana. Submitted on Briefs April 8, 1981. Decided June 17, 1981. Rehearing Denied July 17, 1981. *218 Jones Law Firm, Billings, for appellant. Harold F. Hanser, County Atty., Olsen, Christensen & Gannett, Billings, for respondent. SHEEHY, Justice. DS, mother of AJS, appeals from an order of the Thirteenth Judicial District Court, Yellowstone County, declaring AJS an abused and neglected child and awarding permanent custody of AJS to the Department of Social and Rehabilitation Services (SRS). We affirm. *219 Appellant raises these issues: 1. Was the evidence sufficient to support the finding that AJS is a youth in need of care? 2. Is the testimony of a psychologist subsequent to a court-ordered psychological evaluation violative of the psychologist-client privilege? 3. Did the admission of psychologists' testimony resulting from a court-ordered psychological evaluation violate DS's constitutional right of privacy? 4. Does the delay in the adjudication of this matter necessitate reversal? AJS was born on December 16, 1963, mentally retarded, possibly autistic and with epilepsy, characterized by both grand and petit mal seizures. DS and OS are the natural parents of the youth. OS abandoned the family in 1967 and has had no contact with the family since. DS was subsequently married to a man whom she divorced after discovering the husband sexually abusing AJS. At the time AJS was removed from the family home, DS was living with a man 11 years her junior, in whose care AJS frequently was entrusted. AJS first began attendance in special education classes at Garfield School in 1972, and has attended continuously since that time. Despite her years of education, AJS is not toilet trained, has very little speech and is considered nonverbal. She functions at approximately a two-year-old developmental level. 
DS raised and cared for AJS without interference by the authorities until late 1976. At that time, DS placed herself in a six-week drug rehabilitation program to overcome her twelve-year dependence on Darvon and Librium and left AJS in the care of a foster home. During this period, AJS's appearance and cleanliness improved dramatically, her reported seizure activity subsided and her classroom attitude and aptitude improved. All these conditions deteriorated when AJS returned to her mother's household. School officials and SRS personnel had from the outset been concerned about the squalid condition of DS's household. The home was consistently filthy, cluttered, frequently had animal excretions scattered about, and had an odor which nauseated visitors to the point that it was difficult for the unaccustomed to remain in the home. DS allowed the house to be used as a flop-house by friends of her other children. These conditions prevailed both before and after the 1976 drug rehabilitation. While in the care of her mother, the state of AJS's cleanliness and personal hygiene had been distressing to school and SRS personnel from the time her schooling commenced. She frequently came to school with body odor so intensive she was difficult to approach closely, her hair was often greasy and matted with food, and she frequently displayed brown phlegm (apparently a side-effect of Dilantin, her anticonvulsant) hanging from her teeth. AJS occasionally arrived at school unfed, and often slept through large portions of the school day. DS sometimes varied AJS's Dilantin dosages according to the phases of the moon. Beginning in 1976, school officials also began observing an unusual number of bruises on AJS. Her teacher and the school nurse each noticed, on numerous occasions, fingermarks on the inner aspect of AJS's upper thighs and bruises on both shoulders. 
When queried about the various bruises, DS typically dismissed them as resulting from falls during seizures (although teachers had observed that AJS knew when a seizure was at hand and would protect herself by lying on a bed prior to onset of the seizure). In 1979, bruising and injuries to AJS grew drastically more pronounced and frequent. On January 24, AJS arrived at school with a large bruise extending from her right shoulder to her elbow, with a long scratch down the center. On January 26, AJS had several large, dark, streak-type bruises — believed by the nurse to be fingermarks — on the inner portions of each thigh. On February 14, AJS displayed small bruises on her cheeks and nose, and a raised bright red rash over the entire upper portion of her back, with small abrasions in the *220 center of the rash. On February 19, she had six large, deep scratches, each about three inches long, on her left cheek. On February 26, a social worker visiting the home noticed two black eyes on AJS. Finally, on February 28, the social worker and school nurse visited the home and observed a second-to-third-degree burn approximately ten centimeters long on her left shoulder. She also had a bruise around her left eye across the bridge of her nose, small bruises on her midchest, a small scratch on her upper abdomen, and a small bruise on her right front groin area. AJS was removed from the home the following day, March 1, 1979. Following her removal, AJS was placed in a foster home for one month, then transferred to a Special Training for Exceptional People (STEP) group home. During the period following her removal, her appearance and personal hygiene again improved, her school attendance improved markedly, and she was more alert while at school. There was also testimony that her reported seizure activity subsided and her performance in school improved. SRS filed its petition alleging AJS was a youth in need of care on June 29, 1979.
The cause was heard by the District Court at multiple hearings held on September 6, 1979; December 6, 1979; April 3, 1980 and June 3, 1980. The District Court entered its findings, conclusions and order on October 31, 1980. SRS presented as witnesses, school officials, nurses and SRS personnel who testified to substantially the facts found by the Court and related above. Additional testimony was given by Dr. Monty Gustafson, a clinical psychologist, who conducted a court-ordered psychological examination of DS. Dr. Gustafson performed an extensive psychological interview of DS and administered several detailed tests, from which he concluded DS has some organic brain damage as well as a personality disorder termed "inadequate personality." He suggested these conditions greatly interfere with DS's parenting ability, and expressed his opinion that DS is unable to deal adequately with and care for AJS over the long term. Our function in reviewing dependency and neglect cases has been well defined in a number of previous decisions. Matter of LFG (1979), Mont., 598 P.2d 1125, 36 St.Rep. 1547; In Re Gore (1977), 174 Mont. 321, 570 P.2d 1110. In Gore, we stated: "This Court is mindful that the primary duty of deciding the proper custody of children is the task of the district court. As a result, all reasonable presumptions as to the correctness of the determination by the district court will be made. [Citation omitted.] Due to this presumption of correctness, the district court's findings will not be disturbed on appeal unless there is a mistake of law or a finding of fact not supported by credible evidence that would amount to a clear abuse of discretion. (Citation omitted.)" 174 Mont. at 325, 570 P.2d at 1112. We have subsequently held in Matter of JLB (1979), Mont., 594 P.2d 1127, 36 St.Rep. 896, that the court's findings must be supported by clear and convincing evidence. That burden has been sustained here. 
DS attacks the sufficiency of the evidence on a number of bases. However, we find clear and convincing evidence of unexplained physical injuries and inadequate concern for the cleanliness and hygiene of AJS to support the court's findings. We therefore address only those areas. A number of school officials testified of the absolute squalor of DS's household, as related above. The same witnesses provided vivid documentation of the various injuries sustained by AJS, and of her chronic hygienic problems while in the care of her mother. These same people noticed a dramatic turnabout of AJS's cleanliness and physical well-being after she had been removed from the home. DS, on the other hand, testified that her home was adequately maintained, and that AJS was kept as scrubbed as her condition would allow. DS attributed her daughter's poor school attendance and frequent exhaustion while at school to seizure activity. *221 Falls accompanying seizures were credited by DS for all the various bruises; and the burn resulted from a bath tub accident involving nearly impossible physical contortions by AJS. Where testimony is directly conflicting, we presume that the judge's findings are correct because he was present when the testimony was given and had the opportunity to observe the demeanor of the witnesses. Matter of TER (1979), Mont., 590 P.2d 1117, 36 St.Rep. 276. Here, the court chose to believe that the home was not properly maintained despite repeated efforts of SRS to provide homekeeping assistance, and that the injuries to AJS were neither adequately nor credibly explained. The court did not abuse its discretion in so finding. DS submits that since there was no direct evidence that she deliberately inflicted the injuries upon AJS, the finding of abuse and neglect was improper. Section 41-3-102, MCA, defines an abused or neglected child as "... 
a child whose normal physical health or welfare is harmed or threatened with harm by the acts or omissions of his parent or other person responsible for his welfare." Regardless of any actual proof that a parent intentionally inflicted injuries upon his or her child, the occurrence of serious and frequent, yet unexplained, physical injuries to the child is sufficient to properly bring the child within the statutory definition. Additionally, the statute is broad enough to include extreme and prolonged uncleanliness under the definition of neglect. JLB, supra, 594 P.2d at 1135, 36 St.Rep. at 907. DS moved in limine, relying on section 26-1-807, MCA, the psychologist-client privilege, to exclude Dr. Gustafson's testimony. The motion was denied and the evidence subsequently received. DS argues that she trusted Dr. Gustafson, expected their communications to remain confidential; and insists her expectation should be honored. We reject this argument. We instead find that there was in fact no psychologist-client relationship between Dr. Gustafson and DS. Section 26-1-807, MCA, places the relationship of psychologist and client on the same status as attorney and client. In that regard, a party is entitled to the protection accorded to privileged communication if the communications have been made to an attorney acting, for the time being, in the character of legal advisor for the purpose of securing professional advice or aid upon the subject of the client's rights and liabilities. Bernardi v. Community Hospital Association (1968), 166 Colo. 280, 443 P.2d 708, 716. Here DS did not seek out and retain Dr. Gustafson for professional help, but was ordered by the Court to undergo an evaluation. Nor were the communications between the two directed toward securing professional assistance for DS. The privilege clearly did not attach in this instance. This issue is somewhat complicated by a previous contact between DS and Dr. Gustafson in an unrelated matter. 
However, even assuming, arguendo, that the previous contacts did establish a psychologist-client relationship, it was yet within the discretion of the District Court to consider the testimony. In proceedings of this type, the child's best interest and welfare, not those of the natural mother, are the paramount considerations. In re Bad Yellow Hair (1973), 162 Mont. 107, 509 P.2d 9. The District Court must balance the rights of the mother and the child; and while the mother's rights are important, they are not absolute. Matter of CMS (1979), Mont., 609 P.2d 240, 36 St.Rep. 2004. In some instances, the best interests of the child require some degree of flexibility in procedure to insure that all evidence pertaining to the best interests of the child may be considered. TER, supra. In applying these rules, we find this language persuasive: "in the exercise of the court's inherent power to do what is best to protect the welfare of the infant, the right of [the mother] to invoke the patient-physician privilege must yield to the paramount rights of the infant." People v. Fitzgerald (1963), 40 Misc.2d 966, 244 N.Y.S.2d 441, 442. DS next argues the admission of Dr. Gustafson's testimony violated her right to *222 individual privacy under 1972 Mont.Const., Art. II, § 10. The record indicates this argument is raised for the first time here on appeal. The District Court was presented with and decided only the question of privileged communications. DS may not now raise the issue of infringement of her right to privacy. It is well settled that a party may not change a theory to this Court from that advanced in the trial court. Velte v. Allstate Ins. Co. (1979), Mont., 593 P.2d 454, 36 St.Rep. 724. See also, Johnson v. Doran (1975), 167 Mont. 501, 540 P.2d 306. DS finally submits, without citation of authority, that the District Court should be reversed for failure to handle this cause expeditiously.
We agree that the interval here of 20 months between the time of removal from the home until the final order was long; and we exhort District Courts to give preference to custody cases. Section 41-3-401(2), MCA. We believe, however, that reversal here would be an ill-advised and improper sanction. We reiterate that our paramount concern is for the best interest of the child. Bad Yellow Hair, supra. Here, the court did act somewhat slowly in permanently removing AJS from an abusive environment. However, were we to replace the child in that abusive environment due to the District Court's deliberate pace, we would be negating our expressed concerns for the child's best interests. The delay does not necessitate reversal. Affirmed. HASWELL, C.J., and DALY, SHEA and HARRISON, JJ., concur.
Cooler Vents at Tehran Dama

The Tehran Dama company supplies every type of cooler (evaporative-cooler) air vent. Cooler vents are used mostly inside homes, offices, and factories to deliver the air in the cooler duct into the room. Tehran Dama offers vents in a range of materials and sizes. The company opened its vent shop in 1364 (1985-86) to help bring comfortable temperatures to indoor spaces. Tehran Dama manufactures vents in aluminum, iron, and wood; its other products include linear, ceiling, dampered, and damperless vents, as well as access vents. The company provides after-sales service and keeps vent prices very reasonable, and its installation technicians fit customers' vents for them. Tehran Dama has also launched online ordering so customers can obtain the products they need easily and in the shortest time. The company builds vents in a variety of designs, colors, and materials, designing and fabricating them to the customer's requirements; drawing on its specialists' new ideas, it brings new vent models in iron, aluminum, and wood to market.

Types of Cooler Vents

Tehran Dama's duct shop fabricates and sells every type of cooler vent, carrying all of today's current designs, including mesh vents, wall vents, ceiling vents, and linear vents, made from a variety of sheet materials. The Tehran Dama online store sells vents in galvanized steel, aluminum, plastic, and wood; the right choice depends on the space and its intended use. The company also produces and sells several kinds of wooden vents: fixed linear, dampered linear, covered wall vents, and under-fan-coil wooden vents.
One of Tehran Dama's newest products is the covered (lidded) vent, which has become especially popular with customers. Each vent type has its own advantages, described in the sections below to help Tehran Dama's customers choose the right product; in addition to the text, clear, high-quality photographs of each vent are available.

Buying Vents Online

In the past, buyers had to visit a physical store to purchase a vent. Today, thanks to advances in communications and the internet, they can buy vents online with ease. Tehran Dama has set up an online ordering system for its customers. Online purchase speeds up buying, and customers can receive their products at home or at work. With a single phone call and advice from Tehran Dama's vent specialists, the customer supplies the product dimensions; after the necessary checks, the company's technicians fabricate the requested vent.

New Vent Designs

Tehran Dama's specialists and engineers design new vent models, producing them to customer request and need. Some customers want a fresh design for their evaporative-cooler vents, so they place a custom order, and Tehran Dama produces it in the shortest possible time, to the best quality, in a variety of designs and colors. The newest products include 30-degree dampered linear vents, 90-degree linear vents, and covered wooden vents. Besides being up to date, the new vents are attractive and distinctive.

New Evaporative-Cooler Vents

The cooler duct industry brings new evaporative-cooler vents to market every year.
With the growth of the construction industry, production of new evaporative-cooler vents keeps rising. A vent's specifications should meet current standards; observing standard practice during manufacture gives the product excellent performance in residential and industrial spaces. The new vents are easy to adjust: users can direct the airflow left, right, up, or down, and in this way even out the temperature across the whole space.

Vent Sheet Thickness

The sheet thickness of Tehran Dama's vents meets the standard. To raise product quality and extend the vents' service life, Tehran Dama uses sheet of suitable thickness: "thickness 50" (0.5 mm) sheet, which is strong and high quality. Because sheet thickness has a marked effect on vent prices, 0.5 mm sheet is the best and most suitable choice for both quality and cost.

The Best Way to Measure Vent Dimensions

Here is the best way to measure vent dimensions. First, remove the previously installed vents from the cooler duct. Then use a tape measure to measure the duct opening's internal (inside-to-inside) dimensions. Finally, write the dimensions on a piece of paper and give them to the vent fabricator. Measured this way, your vents will be built to standard and will fit easily into the duct opening, with no extra cost for an installer's measuring visit.

Duct Vents

Duct vents have been produced since evaporative coolers and ductwork first appeared; their purpose is to put control of the ducted airflow within people's reach. Once the evaporative cooler existed, specialists in the field set about designing cooler ductwork and vents; after expert study they concluded that a cooler duct had to be designed and built to move the air.
Finally, they produced the duct vent to control the direction of the airflow. Because it solved this problem and improved cooler performance, the vent quickly earned a solid place in the duct-making industry.

Wall-Mounted Evaporative-Cooler Vents
Wall-mounted evaporative-cooler vents are produced in various sizes and types and are chosen to suit the conditions of a given space. Tehran Dama produces them in different colors, and this type of air-adjustment vent looks very attractive. Wall-mounted vents are widely used in buildings; their blades are designed so that people can easily change the airflow direction. A damper is fitted to these vents to keep cold air out in winter. A key advantage of the wall-mounted vent is the ability to redirect the cooler's airflow, which has made it popular with customers.

Ceiling Vents
Ceiling vents are mostly installed in large spaces: grand banquet halls, hotels, conference halls, and many other large venues. As the name suggests, this product is mounted in the ceiling, and its dampers are fixed. Ceiling vents come in several types: fixed linear vents, grille (honeycomb) vents, four-way vents, and three-way vents are all designed for ceiling installation.

Wall Vents
Wall vents are very widely used in the duct-making industry, and they allow the greatest variety in manufacture. They are made in galvanized steel, aluminum, and wood. Linear vents, sword-blade vents, grille vents, and standard vents are all wall types, and all are made with or without dampers. One of this product's strong points is easy installation. Customers choose among these vents based on their own criteria and the space where they will be fitted. Tehran Dama produces and supplies all types of wall vents with the best quality in a very short time.
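The duct-measuring procedure described earlier (remove the old vent, measure the inner opening, note the length and width for the manufacturer) is simple enough to capture in a few lines. This sketch is illustrative only; the function name and the sanity check are my own assumptions, not part of Tehran Dama's ordering process:

```python
def record_duct_opening(length_cm: float, width_cm: float) -> str:
    """Record the inner (inside-to-inside) duct opening for a vent order.

    Returns the note you would hand to the vent manufacturer.
    """
    if length_cm <= 0 or width_cm <= 0:
        raise ValueError("duct dimensions must be positive")
    return f"duct opening (inner): {length_cm} x {width_cm} cm"

print(record_duct_opening(60, 30))  # duct opening (inner): 60 x 30 cm
```

Measuring the inner opening rather than the old vent's outer frame is what makes the new vent drop into the duct without a second site visit.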
Iron Cooler Vents
Iron cooler vents are very reasonably priced. They are made from iron sheet available on the market and are the most popular product among customers, produced with industrial machinery and skilled labor. At the dawn of the cooler duct-making industry, the first vent the specialists produced was made of galvanized sheet. This type of air-adjustment vent suits locations without humidity; in some Iranian cities with humid air, other available materials must be used instead.

Aluminum Cooler Vents
Aluminum cooler vents last a very long time because they are made of aluminum sheet. They are also lighter than other air-adjustment vents, since aluminum is lighter than other metals. Tehran Dama's customers choose from the company's varied vents according to their needs.

Wooden Cooler Vents
The wooden cooler vent is a remarkable product; its beauty and appeal are exemplary. Manufacturers of this type of vent use wood as the material, and wooden vents are welcomed in homes for their eye-catching looks.

Plastic Cooler Vents
Plastic cooler vents have particular advantages: they are high quality and widely used. One advantage is their lower price compared with other vents, and they are long-lasting. Plastic vents come in several models; here are two common ones. The plastic access vent is often used to cover utilities and electricity meters, and the round plastic vent is used to ventilate bathrooms and toilets.

Round Cooler Vents
Round vents have a special role: they bring fresh air from outside into a space. They are used to freshen the air in toilets, bathrooms, and stairwells.
Thanks to its attractive appearance, this product has caught customers' attention.

The Cooler Vent Marketplace in Tehran
Tehran Dama is the hub for cooler vent sales in Tehran, because the company produces and markets every type of vent in a variety of designs and colors. A cooler vent must be manufactured to a high standard, and Tehran Dama guarantees its customers that its vents are made from the best materials. The company also provides after-sales service for its vents.

Materials Used in Vent Production
The materials used in vent production are various sheet metals: galvanized and aluminum sheet are used for duct vents, along with various industrial paints for finishing. Some vents are made of wood, using the wood available on the market.

Vent Quality
Vent quality is directly related to how the vent is made. At Tehran Dama, skilled professionals handle production, which is why the quality of Tehran Dama vents is so high. Another factor that affects the quality of a duct vent is the material used to make it.

Stylish Cooler Vents
Tehran Dama produces stylish, made-to-order vents. Customers can request a vent in any color they need. Because quality materials are used in production, Tehran Dama's vents are especially attractive, and the use of appealing, pleasant colors adds to their style.

Vent Production Methods
This section covers the production methods used in Tehran Dama's workshops. Drawing on up-to-date methods, Tehran Dama designs new, modern vents, then purchases the machinery for the vent production line.
Finally, using skilled workers and modern, advanced machinery, the air-adjustment vents are produced and made ready for consumers.

Vent Installation
Installing a cooler vent requires a specialist. Tehran Dama employs skilled installers for homes, offices, factories, and more. Installing an air-adjustment vent requires an electric or cordless drill, screws, and a wooden frame. Tehran Dama's technicians are dispatched promptly to the installation site and mount the duct vents very quickly.

How to Install a Cooler Vent
Here is the installation procedure. First, record the duct opening's length and width. Then give the measurements to the vent manufacturer so the product can be made. Next, line the perimeter of the duct opening with a wooden frame. Finally, screw the vent onto the wooden frame. One important point during installation is keeping the vent flush with the ceiling surface.

Where Should I Buy a Cooler Vent?
The answer is simple. Today, with communications so widespread, the internet plays a major role in everyone's life, so buying a vent online is the best way to purchase and receive this product. The best route is a reputable online store, and Tehran Dama runs an online vent store for its customers' convenience.

Cooler Vent Prices
A vent's price is calculated from its material, and prices are quoted per inch. Galvanized vents are cheaper than aluminum air-adjustment vents. The product's dimensions (length and width) also affect the price, as do the factory workers' wages. Tehran Dama offers its products at the best quality and very reasonable prices.

Vent-Making Skill
Vent factories and workshops need workers skilled in vent manufacture.
During production, skilled specialists must build the vents with the help of the manufacturing machinery. Several points matter when making a vent: the duct vent's length and width, the vent type, its color, and the damper all demand the specialists' close attention.

Damper-Equipped Vents
The damper-equipped vent is one of the most widely used air-adjustment vents. The damper is built into the lower part of the vent, and with it the vent's airflow can be regulated. When the damper is pushed down, no air enters the space at all — a feature that is especially useful when the weather turns cold.

Access Vents
Access vents are used to cover certain points in buildings. Wherever there are shut-off valves or electricity meters, access vents conceal them. Access vents are designed in a variety of styles and are produced to suit the application.

Using the Vent Damper
Using the vent damper has many benefits. The damper can aim the cooler's airflow up or down, and closing it completely stops outside air from entering the space — which is mostly useful in the cold seasons. People can open and close the vent damper very easily.

Oven-Cured Painted Vents
A vent with an oven-cured finish looks more attractive than other products: the curing oven applies the color evenly across the duct vent, and the finish is higher quality than ordinary painting. Tehran Dama paints all of its vents in a curing oven.

Vents for Air Adjustment
Cooler vents are made to adjust the air. They are used to create a comfortable temperature in homes and other physical spaces, and people can easily point the air-adjustment vent in any direction they want. One of the strong points of the new vents is the built-in damper.
The damper makes it possible to close the vent's blades simply by opening and shutting it.

30-Degree Linear Vents
The 30-degree linear vent is made of aluminum and is very attractive; it is mostly used in homes. As the name implies, it is built angled 30 degrees downward. For customers who need a stylish, durable air-adjustment vent, the 30-degree linear vent is recommended.

Utility Vents
The utility vent is used to cover building utilities; its other name is the access vent. It is used in most buildings, mainly to keep people away from the shut-off valves on utility piping.

Capped Wooden Vents
The capped wooden vent is a new, practical product, made with or without a damper and in a variety of colors. It stands out from other vents because it includes a cap. This type of air-adjustment vent prevents indoor heating energy from escaping and keeps cold winter air from entering.

MDF Wooden Vents
MDF wooden vents can make homes more luxurious and attractive. MDF is very durable, which is why MDF vents have become the most popular of the wooden vents.

Grille Vents
The grille vent looks like a honeycomb. It is used to extract air from a space: round or square grille vents remove stale air from toilets and bathrooms. Grille vents are produced and marketed with and without dampers.

Fancy Vents
Tehran Dama produces and sells fancy vents, made and distributed according to customer demand, in various sizes and colors.
These vents look very attractive and distinctive.

Cooler Vents Across Iran
Tehran Dama ships vents to every part of Iran. One of the most frequent questions customers ask us concerns shipping to other provinces: Isfahan, Tabriz, Mashhad, and every other city of greater Iran are all covered by Tehran Dama. Shipping costs to other cities are very low: sending at least five air-adjustment vents by freight costs roughly twenty thousand tomans.

The Best Vent for Winter
The best vents for winter are Tehran Dama's capped vents, which are designed so that in the winter months they can definitively block cold air from entering the space. They are produced in two types, plastic and wood. For customers who care about the interior decor of their homes, Tehran Dama recommends its special capped vent.

Custom and Special-Order Vents
Tehran Dama makes custom, special-order vents based on its customers' suggestions and requests. These products match the company's other vents in quality and type; the biggest difference is their color. Many customers ask us for vents in a color of their choosing, which is why we decided to produce custom, special-order vents.

Four-Way Vents
The four-way ceiling vent is mounted in the ceiling. It is the best and most practical product for ducts whose openings are in the ceiling. As the name implies, it distributes air equally in four directions, spreading conditioned air to every part of the space.

Three-Way Vents
The three-way vent has its own use; it too is usually mounted on the ceiling. It looks much like the four-way vent, but this model has three equal sections.
The remaining section is much smaller than the others.

Uses of Cooler Vents
This section covers the uses of cooler vents. The product is used in many kinds of places and spaces. Its main job is delivering warm or cool air to homes and offices, but the air-adjustment vent has other uses as well: access vents, for example, cover utility compartments, electrical fuse boxes, and so on. Another use of the air vent is concealing ducted split units, air handlers, and ceiling-mounted fan coils; for these, under-fan-coil vents or latch-type access vents with a Siemens lock are usually used.

Lighting Vents
The lighting vent is used for lighting effects in a space. Look at the ceilings of some homes, offices, or banquet halls and you will see this type of vent. It is designed with an open area so that lamp light can pass through and lend the space a special beauty. After the lighting vent is installed, glass or colored film is placed above it to tint the light. If you want to create an attractive, appealing space, the lighting vent is the best choice.

Under-Fan-Coil Vents with Grille
Here you can learn more about the under-fan-coil vent with grille. It is installed where fan-coil or ducted split units are mounted. It is just like other under-fan-coil vents, with one difference: a grille along its edge or in its center. The grille carries the air from the fan-coil or ducted split unit; the vent can also extract indoor air out of the building.

Vents with a Siemens Lock
One of Tehran Dama's most popular products is the vent with a Siemens lock. The Siemens lock is fitted to access vents, under-fan-coil access vents, and utility access vents.
When a Siemens lock is installed on a vent, only the person holding the key can reach inside. Tehran Dama supplies the key along with every product it ships. Another plus worth noting is how stylish the Siemens-lock vents look.

The Best Type of Evaporative-Cooler Vent
The best type of evaporative-cooler vent depends on personal taste. For some customers a standard damper-equipped wall vent is the best option; for others it is a 30-degree linear evaporative-cooler vent with damper. In other words, each person chooses their vent based on their own criteria, taste, and the space where it will be used. Because Tehran Dama offers a wide range of products, customers who buy their evaporative-cooler vents from Tehran Dama have plenty of choice.

Aluminum Evaporative-Cooler Vents
One of the market's best-selling products is the aluminum evaporative-cooler vent. Thanks to the quality of its material, it has earned a unique standing among customers and lasts a very long time. Another distinguishing feature of this vent is that it is washable: customers often want to wash their air-adjustment vents, and the aluminum vent makes that possible.

How Vents Are Shipped to Other Cities
Here is how vents are shipped to other cities. First, Tehran Dama records the customer's order from another city. After production, the vents are wrapped in special plastic packaging so they are not damaged during loading and transport. The logistics team then delivers the products to a reputable freight carrier, and the vents reach customers within two working days.

What Is a Sword-Blade Vent?
The sword-blade vent is comparable to the company's other products.
This type of vent is made in two models, with and without a damper, and users can direct the airflow left, right, up, or down. Its only difference from the other products is its sword-shaped blades; the blades are narrow, which doubles the vent's beauty.

Attractive Decorative Air Vents
Tehran Dama produces attractive decorative air vents to order. The company has brought a new, beautiful product to market that adds eye-catching style to homes and offices. The product described here is Tehran Dama's MDF wooden vent, which is produced in a range of colors; besides its beauty, its quality and long life are worth noting.

Decorative and Ornamental Vents
Tehran Dama produces and sells decorative, ornamental vents. In its workshop, using specialists' designs and ideas, attractive decorative vents are made. Besides adjusting the temperature and airflow direction, each decorative vent also beautifies the space and the interior decor. Tehran Dama makes this product in whatever sizes its customers want.

Vent Production Time
The time needed to produce a vent varies with several factors: type, model, and color all affect the build time. For example, metal vents are produced in 4 to 5 working days, while wooden vents take longer, at 7 working days. Colored metal vents need 2 days more than the other products.

Galvanized vs. Aluminum Vents
Here we compare galvanized and aluminum vents. Galvanized vents are made from galvanized sheet, aluminum vents from aluminum sheet. Galvanized vents weigh more than aluminum ones. Aluminum vents are washable and highly rust-resistant. Galvanized vents are cheaper than aluminum vents.
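The pricing rules scattered through this article — prices quoted per inch, dependent on material and on the vent's length and width, with galvanized cheaper than aluminum and wooden vents the most expensive — can be combined into a toy estimator. The per-inch rates and the perimeter convention below are invented for illustration; the article does not give Tehran Dama's actual figures:

```python
# Hypothetical per-inch rates (currency units per inch of perimeter).
# The article states only that pricing is per inch, that galvanized is
# cheaper than aluminum, and that wooden vents cost more than the rest.
RATES_PER_INCH = {"galvanized": 1.0, "aluminum": 1.5, "wood": 2.5}

def estimate_price(material: str, length_in: float, width_in: float,
                   labor: float = 0.0) -> float:
    """Toy estimate: perimeter in inches times the material rate, plus labor."""
    perimeter = 2 * (length_in + width_in)
    return RATES_PER_INCH[material] * perimeter + labor

# Same size, different materials: the ordering matches the article.
size = (24, 12)
assert (estimate_price("galvanized", *size)
        < estimate_price("aluminum", *size)
        < estimate_price("wood", *size))
```

The labor term stands in for the workers' wages the article mentions as a pricing factor; a real quote would come from the sales consultants.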
Keeping Insects Out of the Cooler Vent
Insects can easily enter homes and offices through cooler vents. Tehran Dama's specialists have designed a good way to keep them out. First, obtain some fabric mesh. Cut the mesh to the inner length and width of the cooler duct. Before installing the vent, glue the mesh inside the duct with a suitable adhesive. Once the mesh has fully bonded, install the vent in place. After that, no insect will get into your living or working space.

When to Replace a Vent
When to replace a vent depends on several factors. Some people redecorate their homes and workplaces every few years and replace their vents while changing the furniture or repainting the walls. On average, though, it is best to replace vents every 10 years. Tehran Dama offers its customers very good deals on replacing and refurbishing the air vents of homes, offices, and other buildings.

Choosing the Best Vent
Tehran Dama's sales consultants will guide you in choosing the best vent for your space. Tehran Dama customers can call the company's consultants and easily receive free advice; they can describe all their expectations, tastes, and other requirements to the vent specialists. The sales managers then review the request and recommend the type of vent that best meets the customer's needs.

Extraction Vents
The extraction vent is designed to remove stale air from buildings. It is used in many settings, including restaurants, banquet halls, conference halls, theaters and cinemas, and in some cases homes. Grille vents, four-way ceiling vents, and under-fan-coil vents with grille are all installed to extract stale air from a space.
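The insect-screen fix above needs only one measurement: the mesh is cut to match the duct's inner opening. The small glue margin added on each side here is my own assumption — the article simply says to cut the mesh to the inner length and width:

```python
def mesh_cut_size(inner_length_cm: float, inner_width_cm: float,
                  margin_cm: float = 2.0) -> tuple:
    """Size to cut the fabric mesh: the inner duct opening plus a glue
    margin on each side (the margin is an assumption, not from the article)."""
    return (inner_length_cm + 2 * margin_cm, inner_width_cm + 2 * margin_cm)

print(mesh_cut_size(60, 30))  # (64.0, 34.0)
```

With no margin (`margin_cm=0`) this reduces to the article's literal instruction of cutting the mesh to the inner dimensions.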
Supply Vents
Supply (blowing) vents direct the conditioned air from HVAC units into a space; in effect, this product delivers comfortable air and temperature into various residential and commercial spaces. Several vents can serve as supply vents: the wall-mounted evaporative-cooler vent, the four-way ceiling vent, the three-way ceiling vent, the 30-degree linear vent, the 90-degree linear vent, and the sword-blade vent can all be used as supply air-adjustment vents.

Color Range of Tehran Dama Vents
Tehran Dama's vents come in a very wide range of colors. Many customers have asked the company for vents in colors of their own choosing. You can send a photo of your preferred color to the company's sales consultants; after the necessary review on the production line, the staff apply the requested color code to the customer's vents.

What Are the Vent's Screw Holes For?
What are the vent's screw holes for? They are designed for use during installation: the screws hold the vent onto the duct opening. The screws pass through the air vent's screw holes and into the frame. Wall-mounted evaporative-cooler vents have 2 screw holes; large vents, such as under-fan-coil vents, have more.

Vent Frame Size
The frame size differs by material and model. The frame of a standard metal vent is 4 centimeters. Some vent models have large frames, others small. The screw holes and the damper are mounted on these frames.

Motorized Vents
The motorized vent has very few enthusiasts. This model of air-adjustment vent is operated by remote control: buttons on the motorized vent open and close the dampers and steer them left and right. The product runs on batteries. As noted earlier in this text, the motorized vent has few customers compared with metal and wooden vents.
Templates in Vent Making
Using templates in vent making is essential. In practice, templates are used for mass-producing a variety of products; both the duct-making industry and vent factories use many kinds of templates to speed up the production line. With this tool, skilled workers no longer need to measure each product before cutting its sheet.

Tehran Dama's Vent Packaging
Tehran Dama's vent packaging is proper and well designed. After research and customer feedback, the company's managing director decided to deliver Tehran Dama's vents to customers in tidy packaging. Packaging the vents prevents damage to the products during freight and shipping. Once production is complete, the air vents are wrapped in special plastic.

Wood-Finish Vent Prices
The price of a wood-finish vent is calculated from the vent's dimensions (length and width). The price of the purchased MDF sheet affects the cost of the wooden vents, and because of the fine detailing involved in making a wooden vent, this product costs more than the others. Wood-finish air vents are very attractive, and Tehran Dama produces them in any color customers want.

Linear Iron Vents
Tehran Dama also supplies linear iron vents. The company previously made its linear vents from aluminum, but over time, responding to market demand, it began producing vents from iron (galvanized) sheet. Tehran Dama offers the linear iron vent in a wide range of colors.

Under-Fan-Coil Access Vents
The best way to conceal heating and cooling units is the under-fan-coil access vent. This style of vent opens and closes like a cabinet, so whenever units such as chillers, fan coils, or ducted splits need repair or servicing, the under-fan-coil access vent is the most practical product.
90-Degree Linear Vents
The 90-degree linear vent directs air straight ahead. It is used where ducted split and under-fan-coil units are installed. As you may know, the 90-degree linear vent is fixed: its blades are not adjustable. The product is attractive and durable, which is why people are drawn to it.

Blocking Cold Air from the Vent
Tehran Dama offers a method for blocking cold air from entering a space through the vent. First, close the damper on the vent completely; this stops about 80 percent of the incoming air. To block the remaining 20 percent, cover the vents with cardboard or a purpose-made cap. After that, no air will get into the space at all.

Placing a Vent Order
To place a vent order, first call Tehran Dama's sales consultants. Once you have chosen the dimensions and type of your vent and passed them to the sales team, a pro forma invoice is sent to you. Finally, after confirming the vent order, you transfer the payment online to the designated account.

Vent Delivery by Snapp
Tehran Dama delivers vents to customers by Snapp. This scheme has cut customers' costs: when vents are delivered by Snapp, customers no longer need to visit Tehran Dama's store in person, which saves both time and money when buying a vent.

Retail and Wholesale Vent Supply
Tehran Dama Arka supplies its vents both individually and wholesale. Some customers need to replace the vents in their homes, and Tehran Dama is ready to build and supply custom vents to meet their needs. The company also supplies its vents wholesale to contractors and major national developers.

Vent Exports to Iraq
Tehran Dama has put vent exports to Iraq into practice.
Responding to demand, Tehran Dama produces and exports a range of air-adjustment vents to neighboring countries. The goal of this plan is to supply products needed in nearby countries and, of course, to grow and develop the company. Last year the managing director of Tehran Dama exported various plastic vents to Iraq.

Buying Vents Online
Buying vents online makes purchasing faster and more pleasant. With this method, there is no longer any cost for a site visit and assessment. While buying a vent online, customers can get free advice from Tehran Dama's consultants; after the consultation, the sales team takes the dimensions from the customer and registers the vent order.

Vent Installations at Prominent Companies
In recent years Tehran Dama has installed vents at prominent companies, fitting various types of vents in well-known factories and firms. One of the places supplied with its products is Kalleh Dairy. Tehran Dama's management always strives to deliver quality service to its customers.

What Is the Slot Vent For?
What is the slot vent for? This product has attracted many people. Thanks to its fixed blades and the wide spacing between them, the slot vent delivers air into a space smoothly and evenly, so it is used wherever uniform airflow is needed.

One-Piece Under-Fan-Coil Vents
The one-piece under-fan-coil vent is produced and installed in specific sizes. A one-piece (or, colloquially, single-door) under-fan-coil vent is generally feasible only in small sizes; for larger sizes, the under-fan-coil vent must be fitted with two doors, in which case it is called a two-piece or double-door under-fan-coil vent.

Round Exhaust Vents
The round exhaust vent is mostly used in buildings.
This type of vent serves as an extraction vent in buildings: round vents remove stale air from stairwells, bathrooms, and toilets. The round exhaust vent is usually made as a grille, and special springs are used to install this type of air-adjustment vent.

Round Speaker-Style Vents
The round speaker-style vent can be used for both supply and extraction. This model has many enthusiasts: because of its distinctive shape and attractive looks, most customers choose this type of round vent. As the name suggests, it looks like a loudspeaker. It is installed using special springs mounted on the back of the vent.

Features of Wooden Vents
In these lines we cover the features of wooden vents. Wooden vents are made from wood — specifically, MDF is used to build this product. The wooden vent comes in two models, adjustable with damper and damperless. One of its outstanding features is its color variety: it is produced in burnt brown, cream-brown, white, black, brick, and chocolate.

Colored Wooden Vents to Order
Tehran Dama accepts orders for colored wooden vents. Taking customers' tastes into account, Tehran Dama Arka produces and distributes wooden vents in both dark and light colors — 10 different colors in all, in matte and glossy finishes, in the customer's choice of color.

Ceiling Lighting Vents
The ceiling lighting vent belongs to the family of decorative vents. It is used for lighting in residential spaces and is produced in two types, plain and plus-shaped. In buildings, after the lighting vent is installed, a sheet of film or glass is placed behind it, and artificial lights in the ceiling cavity illuminate the space. Using a lighting vent makes the light in a room more even.
Using Slot Vents
Using slot vents in the ceiling is very common; this product can be used for both extraction and supply. Because of its shape, the slot vent delivers air into the space in one even stream, and uniform air is one of the key factors in creating a comfortable temperature. Slot vents look similar to 90-degree linear vents.

Square Vents
The square vent is produced in two types, ceiling and wall, with the ceiling type the most common. Most four-way ceiling vents and grille vents come square. This product is typically used in banquet halls, restaurants, hotels, cinemas, conference halls and theaters, and office spaces; in 90 percent of the venues just listed, a four-way ceiling vent is usually installed.

Capped Plastic Vents
The capped plastic vent has many enthusiasts across Iran. Thanks to its design, it keeps cold air from entering the space in winter. The plastic vent is made in two types, with and without a damper. In the warm season the plastic vent's caps can be removed easily, and in the cold season they can be fitted back onto the frame. Because of its special advantages, the product is also very popular in neighboring countries.

Linear Air-Adjustment Vents
The linear air-adjustment vent has many uses. It is produced in 3 types — 30-degree, 90-degree, and 45-degree — and in models with and without dampers, and brought to the consumer market. In 99 percent of buildings where under-fan-coil or ducted split units are installed, linear vents handle the air supply.

Vent Packaging
Vent packaging matters a great deal, and packaging the products has many benefits: with suitable, high-quality packaging, damage to the vents — chips, faded paint, bending, and so on — can be prevented.
شرکت تهران دما آرکا با توجه به ارسال محصولات خود به تمامی نقاط کشور ایران، در نظر گرفته است، بسته بندی محصولات سایر شهرها را به صورت ویژه و ضخیم اعمال نماید. به کمک این کار دریچه کولرهای ارسالی به شهرها را در برابر خطرات ایمن می کنیم. دریچه کولر دایره شکل دریچه کولر دایره شکل اغلب جهت تخلیه ی هوای نا مطلوب مورد استفاده قرار می گیرد. از دریچه کولر دایره ای در جاهایی نظیر راه پله ها، سرویس های بهداشتی، حمام، آشپزخانه، تراس و... استفاده می نمایند. در اکثریت جاهایی که محصول دریچه کولر دایره شکل مورد استفاده قرار می گیرد، محیط های مسکونی و اداری می باشند. در برخی از کارخانه جات نیز این نوع دریچه کولر اجرا می گردد، نام این نوع دریچه ی سانتیریفیوژ می باشد. 353
Introduction
============

Microalgae are a promising source of biomass due to advantageous features such as their phototrophic nature, high growth rate, lack of competition with food crops for arable land, and abundance of nutritious components such as protein, pigments, and trace elements ([@ref-14]; [@ref-37]). They have therefore been used as feedstock for food, feed, functional foods, biofuels, and chemicals integrated into novel biorefinery concepts ([@ref-43]; [@ref-36]). Unlike those of terrestrial plants, the biologically active compounds extracted from microalgae have shown unique properties, such as antibacterial, antiviral, antifungal, antioxidative, anti-inflammatory, and anti-tumor activities ([@ref-7]; [@ref-13]; [@ref-15]; [@ref-16]; [@ref-6]; [@ref-8]). From an economic point of view, polysaccharides are promising products due to their abundance in algae ([@ref-22]). Polysaccharides can be extracted from algae by several "green" extraction techniques, such as microwave-assisted extraction ([@ref-30]) and enzyme-assisted extraction ([@ref-21]). The characteristics of different polysaccharides from microalgae, including their composition and structure, have been discussed ([@ref-7]). It was reported that the polysaccharides of *G. impudicum* and *C. vulgaris* are homopolymers of galactose ([@ref-40]) and glucose ([@ref-27]), respectively, whereas the other microalgal polysaccharides are heteropolymers of galactose, xylose, glucose, rhamnose, fucose, and fructose ([@ref-24]; [@ref-34]; [@ref-28]). [@ref-11] found that the polysaccharide from *Phaeodactylum tricornutum* is a ramified sulfated glucuronomannan with a backbone composed of β-(1,3)-linked mannose. Many studies have shown that polysaccharides from microalgae possess antibacterial, antitumor, and antiviral properties ([@ref-26]). A diatom, *Phaeodactylum tricornutum* is found in great abundance in coastal and oceanic waters ([@ref-4]).
It contains approximately 36.4% crude protein, 26.1% carbohydrate, 18.0% lipid, 15.9% ash, and 0.25% neutral detergent fiber on a dry weight (dw) basis ([@ref-29]). In addition, it can accumulate valuable products such as fucoxanthin, triacylglycerols, and omega-3 long-chain polyunsaturated fatty acids such as eicosapentaenoic acid (EPA; C20:5) ([@ref-18]; [@ref-31]; [@ref-41]; [@ref-25]). Currently, it is commercialized for its lipids, especially EPA, and several studies have sought to increase the production yields of EPA and biomass ([@ref-12]; [@ref-2]; [@ref-25]). In recent years, owing to its many therapeutic activities, fucoxanthin has also been produced commercially from this alga. However, there is little research on the polysaccharides of *Phaeodactylum tricornutum*. Therefore, to make full use of the alga, in this paper we extracted its polysaccharides, characterized their chemical structure, and studied their anticancer activity.

Materials and methods
=====================

*Phaeodactylum tricornutum* samples and reagents
------------------------------------------------

Dried *Phaeodactylum tricornutum* powder was supplied by the Institute of Oceanology, Chinese Academy of Sciences. All reagents used were of analytical grade and commercially available unless otherwise stated.

Extraction of polysaccharides from *Phaeodactylum tricornutum* (PTP)
--------------------------------------------------------------------

The extraction process is shown in [Fig. 1](#fig-1){ref-type="fig"}.

![Extraction process.\ The extraction diagram of PTP.](peerj-07-6409-g001){#fig-1}

The dried algal powder was first extracted by the Soxhlet method with ethanol to remove pigments and lipids. The residue was then dried in an oven at 50 °C, and polysaccharides were extracted with hot distilled water assisted by ultrasonic treatment.
The optimal temperature, number of ultrasonic cycles, and extraction time were determined (shown in [Supplemental Information](#supplemental-information){ref-type="supplementary-material"}). Under the optimal conditions, the residual algal powder was sonicated for 20 cycles (10 s on, 10 s off) at 380 W and then extracted at 80 °C for 2 h with stirring. The filtrate was concentrated by rotary evaporation and dialyzed to remove salts. The retained solution was concentrated again and freeze-dried to obtain the purified sulfated polysaccharide, designated PTP.

Chemical characterization
-------------------------

The Mw of PTP was measured by HPLC with a TSK gel G4000PWxl column using 0.05 mol/L Na~2~SO~4~ as the mobile phase on an Agilent 1260 HPLC system equipped with a refractive index detector. The column temperature was 35 °C, and the flow rate of the mobile phase was 0.5 mL/min. Dextran standards with Mw of 1, 5, 12, 50, 80, 270, and 670 kDa (Sigma, Mendota Heights, MN, USA) were used to calibrate the column. Total sugars were analyzed by the phenol-sulfuric acid method ([@ref-10]) using galactose as the standard. Sulfate content was determined by the barium chloride-gelatin method ([@ref-17]). The molar ratios of the monosaccharide composition were determined according to [@ref-33], using 1-phenyl-3-methyl-5-pyrazolone (PMP) pre-column derivatization HPLC. Briefly, 10 mg of polysaccharide sample was dissolved in one mL of distilled water, hydrolyzed in 4 mol/L trifluoroacetic acid, and neutralized with sodium hydroxide. Each monosaccharide was then determined by HPLC on a YMC Pack ODS AQ column (4.6 mm × 250 mm). Mannose, rhamnose, fucose, galactose, xylose, glucose, and glucuronic acid from Sigma-Aldrich were used as standards.
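The dextran calibration described above maps retention time to molecular weight through a log-linear fit, which is the usual way a size-exclusion (GPC) column is calibrated. The sketch below illustrates the idea in plain Python; the retention times are hypothetical placeholders for illustration only, not values from this study.

```python
import math

# Hypothetical retention times (min) for the dextran standards listed above;
# actual values depend on the column and run conditions.
standards = [  # (Mw in kDa, retention time in min)
    (670, 6.2), (270, 6.9), (80, 7.8), (50, 8.2),
    (12, 9.3), (5, 10.0), (1, 11.2),
]

def fit_calibration(points):
    """Least-squares fit of log10(Mw) = a * t + b (classic GPC calibration)."""
    xs = [t for _, t in points]
    ys = [math.log10(mw) for mw, _ in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    a = num / den
    b = my - a * mx
    return a, b

def estimate_mw(a, b, t):
    """Invert the calibration line to estimate Mw (kDa) at retention time t."""
    return 10 ** (a * t + b)

a, b = fit_calibration(standards)
# Earlier elution means larger Mw, so the fitted slope is negative; a peak
# eluting at the earliest standard's time recovers roughly 670 kDa.
mw_largest = estimate_mw(a, b, 6.2)
```

A peak eluting near or before the largest standard, as with PTP's reported 4,810 kDa, lies at or beyond the calibrated range, so such an estimate is an extrapolation.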
FT-IR spectra of PTP were recorded on a Nicolet-360 FT-IR spectrometer between 400 and 4,000 cm^−1^.

Evaluation of inhibiting HepG2 growth activity in vitro
-------------------------------------------------------

### Cell culture

HepG2 cells, purchased from the Kunming Cell Bank, Chinese Academy of Sciences, were cultured in DMEM supplemented with 10% fetal bovine serum, 100 U/mL penicillin, and 100 mg/mL streptomycin at 37 °C in a humidified atmosphere containing 5% CO~2~.

### Evaluation of inhibiting HepG2 growth activity in vitro

The growth-inhibitory activity of PTP at different concentrations (50, 100, 150, 200, and 250 μg/mL) was assessed by MTT assay. The cells were seeded in a 96-well plate at a concentration of 1 × 10^4^ cells/mL and incubated with various concentrations of PTP for 48 h. Then, 200 μL of 0.5 mg/mL MTT solution was added to each well. After 4 h of incubation, the plates were centrifuged for 10 min at 8,000 rpm, the MTT solution was removed, and 200 μL of DMSO was added to each well. The absorbance at 570 nm was determined.

### Apoptosis assessment

The apoptotic state of HepG2 cells was determined with an Annexin V-FITC/PI apoptosis kit. Cells were collected and washed twice with ice-cold PBS, then resuspended and diluted to 1 × 10^6^ cells/mL with binding buffer. The suspended cells were stained with 10 μL of Annexin V-FITC for 30 min at room temperature and then with five μL of propidium iodide (PI) for 5 min. After incubation, apoptosis was measured by flow cytometry with a Guava® easyCyte 6-2L (Millipore, Billerica, MA, USA).

### Analysis of the cell cycle

A cell cycle analysis kit (Beyotime, Haimen, Jiangsu, China) was used to analyze the cell cycle according to the manufacturer's instructions. Briefly, cells were plated in DMEM with different concentrations of sample for 24 h.
Then, both the suspended and the adherent cells were collected into flow cytometry tubes and centrifuged at 1,500 rpm for 5 min to obtain cell pellets. The pellets were washed with precooled PBS and fixed in ice-cold 70% ethanol overnight at 4 °C. Fixed cells were rewashed with PBS and incubated with PI staining solution (0.5 mL of staining buffer, 25 μL of PI staining solution, and 10 μL of RNase A) for 30 min at 37 °C in the dark. Cell cycle analysis was carried out with a Guava® easyCyte 6-2L (Millipore, Billerica, MA, USA) using 10,000 counts per sample. The percentages of cells in the G0/G1, S, and G2/M phases were recorded and analyzed.

Statistical analysis
--------------------

All data are shown as means ± SD (standard deviation) of three independent experiments to ensure the reproducibility of the results. Statistical analysis was performed using SPSS, and differences among groups were analyzed by one-way ANOVA.

Results
=======

Chemical characterization
-------------------------

PTP was extracted and purified from *Phaeodactylum tricornutum* with a yield of 1.5% (dw) and was further characterized with regard to Mw, total sugars, sulfate content, and monosaccharide composition ([Table 1](#table-1){ref-type="table"}).

10.7717/peerj.6409/table-1

###### Chemical composition.

![](peerj-07-6409-g006)

  Sample   Total sugar/%   Sulfate/%   Mw/kDa   Man    Rha    Glc A   Gal    Glc    Xyl    Fuc
  -------- --------------- ----------- -------- ------ ------ ------- ------ ------ ------ ------
  PTP      29.94           20.36       4,810    0.00   0.25   0.68    0.53   0.56   1.00   0.75

**Notes:** Chemical composition of PTP (% w/w dry weight); monosaccharide composition is given as molar ratios. Man, mannose; Rha, rhamnose; Glc A, glucuronic acid; Gal, galactose; Glc, glucose; Xyl, xylose; Fuc, fucose.

According to [Table 1](#table-1){ref-type="table"}, the total sugar and sulfate contents were 29.94% and 20.36%, respectively, indicating that PTP is a sulfated polysaccharide.
The Mw of PTP was high (4,810 kDa). The monosaccharide composition showed that the most abundant monosaccharide of PTP was xylose, followed by fucose, glucose, and galactose, with a small amount of rhamnose. The glucuronic acid content (molar ratio 0.68) was also high. These results indicate that PTP is a heterogeneous, acidic polysaccharide. To further characterize the chemical structure of PTP, the corresponding FT-IR spectrum was examined ([Fig. 2](#fig-2){ref-type="fig"}). The O--H stretching vibration appeared at 3,272 cm^−1^ and the C--H stretching vibration at 2,926 cm^−1^. The absorptions at 1,632 and 1,408 cm^−1^ represent the asymmetric and symmetric stretching vibrations of C=O, respectively, and those at 1,226 and 1,038 cm^−1^ correspond to the S=O stretching vibration and the C--O--H deformation vibration, respectively. These results further indicate that PTP is an acidic, sulfated polysaccharide capable of chelating positive ions.

![FTIR.\ FT-IR spectra of PTP.](peerj-07-6409-g002){#fig-2}

Evaluation of inhibiting HepG2 growth activity in vitro
-------------------------------------------------------

[Figure 3](#fig-3){ref-type="fig"} shows the inhibitory effect of different concentrations of PTP on HepG2 tumor cells. PTP had a dose-dependent antiproliferative effect on HepG2 cells: the inhibition rate increased with concentration, reaching 60.37% at 250 μg/mL. However, the mechanism by which PTP inhibited HepG2 growth was unclear. To investigate the main cause, we assessed cell apoptosis and the cell cycle by flow cytometry.
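The MTT readout above reduces to a simple absorbance ratio: inhibition is the fractional loss of formazan signal at 570 nm relative to untreated control wells. A minimal sketch follows, with hypothetical absorbances chosen only to illustrate the dose-dependent trend (the study's raw A570 values are not given).

```python
def inhibition_rate(a_treated, a_control, a_blank=0.0):
    """Percent growth inhibition from MTT absorbances at 570 nm:
    inhibition = (1 - (A_treated - A_blank) / (A_control - A_blank)) * 100
    """
    return (1.0 - (a_treated - a_blank) / (a_control - a_blank)) * 100.0

# Hypothetical absorbances for illustration only (not measured values):
a_control = 1.20                             # untreated wells
doses = [50, 100, 150, 200, 250]             # PTP, μg/mL
a_treated = [1.05, 0.92, 0.78, 0.62, 0.48]   # dose-dependent signal loss

rates = [inhibition_rate(a, a_control) for a in a_treated]
# The top dose gives 60.0% inhibition here, comparable in magnitude to the
# 60.37% reported at 250 μg/mL.
```

A blank correction (wells with medium and MTT but no cells) is supported because raw A570 typically contains background absorbance.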
![Inhibition rate of HepG2 by MTT assay.\ The effect of different concentrations of PTP on the inhibition rate of HepG2 by MTT assay for 48 h.](peerj-07-6409-g003){#fig-3}

Induction of apoptosis according to cell cycle analysis
-------------------------------------------------------

[Figure 4](#fig-4){ref-type="fig"} shows the flow cytometry results for HepG2 cells treated with different concentrations of PTP, from which the apoptosis rates were deduced. The apoptosis rate increased in a dose-dependent manner, although it decreased slightly at 200 μg/mL. At 250 μg/mL PTP, 30% of cells were induced into apoptosis, while double-negative (Annexin V/PI) cells accounted for about 63%. These results were consistent with the MTT assay and indicated that PTP can significantly induce cell apoptosis. We then determined the HepG2 cell cycle distribution at three concentrations (50, 150, and 250 μg/mL) of PTP, as shown in [Fig. 5](#fig-5){ref-type="fig"}. Treatment with different concentrations of PTP did not influence the HepG2 cell cycle distribution, which suggests that PTP's anticancer effect occurs mainly through induction of apoptosis without affecting the mitosis of HepG2 cells.

![Cell apoptosis rate.\ The cell apoptosis rate under different concentrations of PTP.](peerj-07-6409-g004){#fig-4}

![Cell cycle.\ The cell cycle rate under different concentrations of PTP.](peerj-07-6409-g005){#fig-5}

Discussion
==========

Cancer is a leading threat to the world population and a leading cause of death worldwide, and current cancer treatments often cause side effects ([@ref-32]; [@ref-5]). Recently, owing to their favorable properties, polysaccharides from microalgae have received increased attention.
Polysaccharides from *Spirulina platensis* have been shown to have antitumor activity against human HT-29 cells ([@ref-38]), MB-231 cells ([@ref-39]), HeLa cells ([@ref-42]), and HepG2 cells ([@ref-9]). Polysaccharides from *Platymonas subcordiformis* inhibited melanoma ([@ref-23]). [@ref-35] showed that polysaccharides from the dinoflagellate *Gymnodinium* sp. exhibited significant cytotoxicity against a variety of cancer cells, suggesting that these polysaccharides might be potential anticancer chemotherapeutic agents. For *Phaeodactylum tricornutum*, antioxidant ([@ref-1]), anti-obesity ([@ref-19]), anti-inflammatory, and immunomodulatory activities ([@ref-20]; [@ref-13]) have been reported, and a novel fatty alcohol ester isolated from the alga showed apoptotic anticancer activity ([@ref-32]). Few studies, however, have addressed its polysaccharides. [@ref-1] extracted an endo-exopolysaccharide and determined its antioxidant activity against DPPH; its composition included xylose, glucose, and galactose. Similarly, the monosaccharide composition of PTP mainly included xylose, fucose, glucose, and galactose. Fucose was not reported by [@ref-1], which may be due to the different origins of the algae. In this paper, we determined not only the monosaccharide composition but also the total sugar content, sulfate content, and Mw (29.94%, 20.36%, and 4,810 kDa, respectively) and found that PTP is a complex sulfated polysaccharide. A lipopolysaccharide extracted from *Phaeodactylum tricornutum* exhibited anti-inflammatory activity by blocking the activation of nuclear factor-κB and the phosphorylation of p38 mitogen-activated protein kinases, extracellular signal-regulated kinases 1 and 2, and c-Jun N-terminal kinase ([@ref-20]); however, no further structural information about that lipopolysaccharide was reported. To our knowledge, no anticancer activity has previously been reported for PTP. In this paper, we determined the anticancer activity of PTP against HepG2 cells.
PTP showed significant anticancer activity (up to 60.37% inhibition at 250 μg/mL) in MTT assays, much better than polysaccharides isolated from *Spirulina platensis* ([@ref-38], [@ref-39]). In addition, several studies reported that polysaccharides isolated from *Spirulina platensis* exhibited anticancer activity by arresting cancer cells in the G0/G1 phase, blocking mitosis and leading to apoptosis ([@ref-38], [@ref-39]; [@ref-42]; [@ref-9]). In this paper, however, although the apoptosis rate of HepG2 cells increased, cell cycle analysis indicated that PTP's anticancer effect occurred mainly through induction of apoptosis without affecting the cell cycle and mitosis of HepG2 cells. This difference might arise from the distinct chemical components and structure of the polysaccharides and needs further investigation. In addition, some references reported that microalgal polysaccharides can modulate the immune system and thereby display anticancer activity in vivo ([@ref-3]). In this study, only in vitro cell experiments were carried out, and it will be necessary to explore the anticancer activity in vivo; further research will address this issue.

Conclusion
==========

In this paper, a sulfated polysaccharide (PTP) with a high Mw (4,810 kDa) was extracted from *Phaeodactylum tricornutum*. The monosaccharide composition of PTP was mainly xylose, fucose, glucose, and galactose. MTT assays showed that PTP has significant anticancer activity (up to 60.37% inhibition at 250 μg/mL). Furthermore, the anticancer effect occurred mainly through induction of apoptosis without affecting the cell cycle and mitosis of HepG2 cells. Thus, PTP may be a potential anticancer drug candidate.

Supplemental Information
========================

10.7717/peerj.6409/supp-1

###### Anticancer activity of PTP.

###### Click here for additional data file.

10.7717/peerj.6409/supp-2

###### Apoptosis and cycle analysis of PTP.

###### Click here for additional data file.

10.7717/peerj.6409/supp-3

###### Extraction.
Selecting the optimal extraction conditions ###### Click here for additional data file. Additional Information and Declarations ======================================= The authors declare that they have no competing interests. [Shengfeng Yang](#author-1){ref-type="contrib"} conceived and designed the experiments, performed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft. [Haitao Wan](#author-2){ref-type="contrib"} performed the experiments. [Rui Wang](#author-3){ref-type="contrib"} analyzed the data. [Daijun Hao](#author-4){ref-type="contrib"} analyzed the data, contributed reagents/materials/analysis tools. The following information was supplied regarding data availability: The raw measurements are available in the [Supplementary Files](#supplemental-information){ref-type="supplementary-material"}.
Introduction
============

Accurate knowledge of the reference values of red blood cell (RBC) variables in children and adolescents is profoundly important for proper interpretation of the results of the complete blood count. Reference values for RBC variables are lower in children than in adults ([@B1]). Several studies have investigated hematologic parameters in different populations and in racial, ethnic, and gender subgroups, and even in different seasons ([@B2]--[@B5]). In most of these studies, age, ethnic, and sex differences were significant, and the need to establish normal reference values for different populations was therefore stressed. RBC variables are fairly stable throughout adult life, but significant differences exist in the pediatric population: the newborn infant, the older child, and the adult show profound differences ([@B6]). Because hemoglobin level and red cell indices vary with age, it is crucial to use reference standards that change with each period of life, from fetal life to adolescence. Adult values are reached gradually during the second part of childhood, around 15 yr of age ([@B7]). To ensure that hematology results in children are interpreted appropriately, the laboratory has to have established age-specific reference ranges ([@B8]). The sex differences in hemoglobin level in adults are well documented, and the underlying mechanism is probably a direct effect of sex hormones, both estrogens and androgens, on erythropoiesis ([@B9]). "In pre-pubertal humans no major differences can be found between the sexes in red blood cell count or hemoglobin and serum ferritin concentrations" ([@B10]). "The difference in hematological variables between sexes emerges after onset of menstruations and persistent until 10 yr after the menopause" ([@B9], [@B10]). Menstruation and nutritional intake are the principal reasons for the lower values of hemoglobin and iron in women compared with men ([@B11]).
The total amount of hemoglobin increases more in boys than in girls during puberty ([@B12]). Among children 6--14 yr old, the values increase from about 12 to about 14 g per 100 ml of blood. In girls between 14 and 20 yr of age, hemoglobin values decrease slightly, reaching 13 g/100 ml, whereas in boys of corresponding ages there is an increase to about 15 g/100 ml. In both sexes, these values are attained at about 20 yr of age and remain characteristic of the third decade of life ([@B13]). Few comparative studies have been conducted on children in the pre-adolescent and adolescent years, and the lack of studies and information on hematological parameters for this population is obvious. Assessment of RBC variables in the young population and determination of normal values are necessary for the identification of anemia. The aim of this paper was to determine the values of RBC variables in a young population of both sexes, aged 8 to 18 yr, and to assess possible differences within groups by age and between groups by sex.

Methods
=======

Subjects
--------

Study participants consisted of 300 healthy young individuals (aged 8 to 18 yr) who participated regularly in various kinds of sports activities and underwent routine medical pre-participation check-ups in 2016. The male group comprised 240 participants and the female group 80 participants. Both groups were divided into subgroups at two-year intervals: under 10 (U10), under 12 (U12), under 14 (U14), under 16 (U16), and under 18 (U18).

Blood collection
----------------

The hematological testing was part of the complete medical check-up for sports pre-participation screening, performed during morning hours (8:00 to 12:00) in a controlled laboratory with constant temperature (between 20 °C and 24 °C) and humidity.
To determine the blood count, blood samples were collected from a capillary vessel into sterile plastic containers with the anticoagulant EDTA K3 incorporated in their walls. An experienced evaluator was in charge of the collection procedures. Analysis was performed with an automated hematology analyzer ABX Micros 60-OT (ABX hematology, Montpellier, France). The intra-rater technical error of measurement was below 1%. Reagents, calibrators, and controls were obtained from the instrument manufacturer. Samples were analyzed immediately after blood drawing. The testing was conducted at the Institute of Physiology, Medical Faculty Skopje, Republic of Macedonia.

Definitions of analyzed hematological parameters ([@B14], [@B15])
-----------------------------------------------------------------

The erythrocyte or red blood cell count ("RBC") is the number of RBCs per unit volume of whole blood: male 4.7--6.1 × 10^6^ cells/mm^3^; female 4.2--5.4 × 10^6^ cells/mm^3^.

Hematocrit (Hct) is the percentage of blood volume represented by the red blood cells. Normal ranges for hematocrit depend strongly on age and are well described from newborns to adulthood; optimal values for adult males are between 42% and 54%, and for females 38% to 46%.

Hemoglobin level (Hb) is expressed as the amount of hemoglobin in grams per deciliter of whole blood. Adult males should have 14 to 18 g/dl, adult women 12 to 16 g/dl.

Mean corpuscular volume (MCV) is the mean volume of all the red blood cells in the sample, i.e., the average size of the red blood cell. It can be calculated by dividing the hematocrit (the volume of all RBCs) by the RBC count. The value is expressed in volume units, femtoliters (fL = 10^−15^ L); the normal range is 80--94 fL.

Mean corpuscular hemoglobin (MCH) represents the mean mass of hemoglobin in one red blood cell and is expressed in mass units, picograms (pg = 10^−12^ g).
It is calculated by dividing the total mass of hemoglobin by the number of red blood cells; the normal range is 27--31 pg.

Mean corpuscular hemoglobin concentration (MCHC) is the mean concentration of hemoglobin in the red cell, i.e., the average concentration of hemoglobin in one liter of red blood cells. It is calculated by dividing the hemoglobin by the hematocrit. MCHC complements MCH by taking the size of the cell into account; the normal range is 31.5--35 g/dl.

RDW, or red cell distribution width, is a parameter that measures the variation in red blood cell size (volume). The adult reference range for RDW is 11.6%--14.6%.

Ethics
------

Institutional ethical approval was received from the Ethics Committee of the Medical Faculty, Ss Cyril and Methodius University, Skopje, Republic of Macedonia (No=03-1197/5). Informed consents were obtained from the parents.

Statistical Analysis
--------------------

Statistical analysis was performed using the computer software SPSS for Windows version 14.0 (SPSS Inc., Chicago, USA). Factorial analysis of variance and post hoc multiple comparisons were used to evaluate the significance of the differences. Differences in proportions were analyzed using the Chi-square test or Fisher's exact test when appropriate. All data are presented as mean (±SD). Results were considered statistically significant when the *P*-value was less than 0.05 (*P*\<0.05).

Results
=======

Hematologic parameters in males
-------------------------------

The mean values and standard deviations for general features (age, height, and weight) and hematologic parameters (RBC, red blood cells; Hb, hemoglobin; Hct, hematocrit; and the hematological indices MCV, MCH, MCHC, and RDW) for the group of male participants (N=240) are presented in [Table 1](#T1){ref-type="table"}. All parameters are shown for the five age subgroups. A highly statistically significant difference was found for all general features of the participants: age, height, and weight.
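The three indices defined above are simple ratios of the measured RBC count, hemoglobin, and hematocrit. The helper below encodes those definitions with explicit unit conversions; it is a sketch, and the inputs in the example are illustrative adult-range values rather than data from this study.

```python
def red_cell_indices(rbc_e12_per_l, hb_g_dl, hct_percent):
    """Compute the red cell indices from the definitions above.

    rbc_e12_per_l : RBC count in 10^12 cells per litre
    hb_g_dl       : hemoglobin in g/dL
    hct_percent   : hematocrit in percent

    Returns (MCV in fL, MCH in pg, MCHC in g/dL).
    """
    mcv = hct_percent * 10.0 / rbc_e12_per_l   # Hct/RBC, scaled to femtoliters
    mch = hb_g_dl * 10.0 / rbc_e12_per_l       # Hb mass per cell, in picograms
    mchc = hb_g_dl * 100.0 / hct_percent       # Hb per liter of packed cells
    return mcv, mch, mchc

# Illustrative adult-range inputs: RBC 5.0 x 10^12/L, Hb 15 g/dL, Hct 45%.
mcv, mch, mchc = red_cell_indices(5.0, 15.0, 45.0)
# Gives MCV 90.0 fL, MCH 30.0 pg, MCHC ~33.3 g/dL, each inside the
# normal ranges quoted in the definitions above.
```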
###### General characteristics and hematologic parameters of the male participants (8--18 yr, N=240) by age subgroup

  ***MALE***       ***U10***       ***U12***       ***U14***      ***U16***       ***U18***       ***P-value***
  ---------------- --------------- --------------- -------------- --------------- --------------- ---------------
  Age (yr)         9.24 (0.32)     11.08 (0.58)    13.28 (0.28)   14.93 (0.54)    16.78 (0.35)    0.001
  Height (cm)      131.78 (22.9)   148.69 (22.9)   168.08 (8.6)   177.06 (6.87)   182.29 (6.88)   0.001
  Weight (kg)      33.00 (6.9)     41.93 (6.99)    58.65 (12.3)   65.68 (17.02)   76.13 (9.45)    0.001
  RBC (10^12^/L)   4.79 (0.38)     4.84 (0.39)     5.08 (0.37)    5.27 (0.36)     5.22 (0.35)     0.001
  Hb (g/dl)        12.95 (0.89)    13.31 (0.9)     14.35 (1.15)   14.96 (0.9)     15.25 (0.98)    0.001
  Hct (%)          40.48 (2.67)    40.87 (2.69)    44.02 (3.5)    46.25 (2.83)    46.79 (2.81)    0.001
  MCV (μm^3^)      84.00 (3.2)     84.45 (3.2)     86.75 (3.4)    87.88 (3.5)     89.7 (2.37)     0.001
  MCH (pg)         27.07 (1.36)    27.62 (28.3)    28.28 (1.5)    28.46 (1.89)    29.24 (1.45)    0.001
  MCHC (g/dl)      32.21 (0.95)    32.65 (9.5)     32.61 (1.5)    32.39 (1.37)    32.62 (1.37)    0.423
  RDW (%)          9.9 (0.56)      9.67 (0.6)      9.76 (0.48)    9.78 (0.45)     9.63 (0.39)     0.174

Values are mean (SD): RBC, red blood cell count; Hct, packed cell volume; Hb, hemoglobin concentration; MCV, mean corpuscular volume; MCH, mean corpuscular hemoglobin; MCHC, mean corpuscular hemoglobin concentration; RDW, red cell distribution width.

The ANOVA test and multivariate tests (Pillai's trace, Wilks' lambda, Hotelling's trace, and Roy's largest root) showed a highly statistically significant difference between the age subgroups (*P*=0.001) for all studied parameters except MCHC (*P*=0.423) and RDW (*P*=0.174). Post hoc multiple comparisons of the hematologic parameters between age subgroups showed that subjects from the U10 and U12 groups have similar values to each other and significantly lower values than all other groups for all parameters except MCHC and RDW. The situation is similar within the U16 and U18 groups.
They have results that do not differ significantly from each other (for RBC, Hb, Hct, MCV, and MCH) and significantly higher mean values for these parameters than the younger groups. The subjects from the U14 group showed statistically higher means for the hematological parameters than the U10 and U12 groups, but statistically lower means than the U16 and U18 groups.

Hematologic parameters in girls
-------------------------------

The mean values and standard deviations for general features and hematologic parameters for the group of female participants are presented in [Table 2](#T2){ref-type="table"}. All parameters are shown for the five age subgroups. The ANOVA test and multivariate tests showed no significant difference in any hematological parameter between the age subgroups (*P*\>0.05). Multiple comparisons of the hematologic parameters between age subgroups showed that only subjects from the U10 and U18 groups differed significantly, and only for two parameters, RBC (*P*=0.05) and MCH (*P*=0.28). The youngest group showed a significantly higher mean RBC than the oldest group.
###### General characteristics and hematologic parameters of the female participants (8--18 yr, N=80) by age subgroup

  ***Variable***   ***U10***      ***U12***       ***U14***      ***U16***      ***U18***      ***ANOVA, P***
  ---------------- -------------- --------------- -------------- -------------- -------------- ----------------
  Age (yr)         9.11 (0.42)    10.98 (0.56)    13.12 (0.25)   14.73 (0.51)   16.81 (0.45)   0.001
  Height (cm)      133.87 (9.5)   149.34 (10.8)   162.46 (6.3)   164.71 (4.3)   170.25 (7.1)   0.001
  Weight (kg)      32.0 (8.29)    44.78 (10.3)    51.85 (7.86)   58.18 (9.3)    62.94 (13.4)   0.001
  RBC (10^12^/L)   4.99 (0.39)    4.71 (0.22)     4.71 (0.46)    4.68 (0.55)    4.59 (0.26)    0.349
  Hb (g/dl)        12.98 (1.26)   13.33 (1.0)     13.08 (1.38)   12.92 (1.1)    13.49 (1.5)    0.800
  Hct (%)          40.61 (3.53)   40.50 (3.2)     40.72 (3.75)   40.6 (3.4)     41.27 (3.33)   0.990
  MCV (μm^3^)      81.75 (6.86)   86.25 (34.3)    82.43 (17.5)   87.0 (7.0)     89.75 (3.85)   0.362
  MCH (pg)         26.15 (2.74)   27.98 (2.1)     28.25 (3.43)   27.62 (3.16)   29.32 (2.08)   0.253
  MCHC (g/dl)      31.94 (0.97)   32.69 (1.3)     30.59 (6.2)    30.18 (6.38)   32.61 (1.12)   0.490
  RDW (%)          9.94 (0.58)    9.89 (0.65)     9.82 (0.88)    10.15 (0.78)   10.16 (0.61)   0.712

Comparison of red blood cell parameters by sex
----------------------------------------------

The comparison of the hematologic parameters for the total male and female groups is presented in [Table 3](#T3){ref-type="table"}. All studied parameters except MCV and MCH showed sex-related differences. Factorial analysis of variance applied to the whole male and female groups showed that male participants have a significantly higher red blood cell count (*P*\<0.001), hemoglobin content (*P*\<0.001), and hematocrit (*P*\<0.001). No differences were found for mean corpuscular volume (MCV) or mean corpuscular hemoglobin (MCH) (*P*=0.292; *P*=0.563).
MCHC, the mean corpuscular hemoglobin concentration in 1 L of RBCs, was significantly higher in boys (*P*=0.002), and RDW, the width of the red blood cell size distribution, was significantly greater in girls (*P*=0.004).

###### Comparison of hematologic parameters of physically active boys (N=240) and girls (N=80)

  ***Variable***   ***Group***   ***Mean***   ***SD***   ***SE***   ***95% CI, lower***   ***95% CI, upper***   ***F***   ***P***
  ---------------- ------------- ------------ ---------- ---------- --------------------- --------------------- --------- ---------
  RBC (10^12^/L)   boys          5.02         0.42       0.274      4.97                  5.08                  24.450    0.001
                   girls         4.72         0.41       0.526      4.62                  4.83
  Hb (gr/dl)       boys          14.08        1.29       0.084      13.92                 14.25                 25.617    0.001
                   girls         13.15        1.19       0.155      12.84                 13.46
  Hct (%)          boys          43.37        3.85       0.249      42.88                 43.86                 23.622    0.001
                   girls         40.69        3.33       0.437      39.82                 41.57
  MCV (μm^3^)      boys          86.27        4.03       0.261      85.75                 86.78                 1.115     0.292
                   girls         85.40        9.87       1.274      82.85                 87.95
  MCH (pg)         boys          28.07        1.83       0.118      27.84                 28.31                 0.335     0.563
                   girls         27.90        2.84       0.374      27.15                 28.65
  MCHC (g/dl)      boys          32.53        1.23       0.079      32.38                 32.69                 9.978     0.002
                   girls         31.51        4.37       0.574      30.36                 32.66
  RDW (%)          boys          9.75         0.49       0.319      9.68                  9.81                  8.496     0.004
                   girls         9.98         0.72       0.943      9.79                  10.17

Values are mean, SD (standard deviation), SE (standard error).

The frequency of hemoglobin concentrations below the lower reference limit of 12 g/dl was also analyzed. Among the boys, 4.6% showed subnormal values, while the remaining 95.4% had normal values. The frequency of suboptimal Hb values in girls (13.3%) was significantly higher than in boys (*P*=0.013) ([Table 4](#T4){ref-type="table"}).
###### Frequency of normal and low hemoglobin concentration in boys and girls

| ***Variable*** | ***Hb lower than 12 g/dl*** | ***Hb normal values*** | ***Total*** | ***Chi-square test (Pearson), P*** |
|----------------|-----------------------------|------------------------|-------------|------------------------------------|
| Male, count (%) | 11 (4.6%) | 229 (95.4%) | 240 (100%) | 0.013 |
| Female, count (%) | 8 (13.3%) | 52 (86.7%) | 60 (100%) | |
| Total, count (%) | 19 (6.4%) | 281 (93.6%) | 300 (100%) | |

Discussion
==========

In the Republic of Macedonia, there are no elaborate studies that can serve as local reference ranges for basic RBC parameters and hematological indices in the young population. The goal of this study was to help physicians compare laboratory test results with locally generated values for RBC variables. The results of the present study support the findings, reported by a number of authors, that red blood cell variables undergo age-related changes in adolescents and show sex-related differences between boys and girls.

Dependence of the hematologic parameters on age
-----------------------------------------------

Children's reference ranges for routine hematological testing are usually stratified as reference values for newborns at birth, at 2 wk, 4 wk, 2--6 months, 6 months to 1 yr, 1 to 6 yr and 6--12 yr, for both sexes. Reference values for children older than 12 yr are different for male and female subjects ([@B16]). Some authors suggest different reference values for hematologic parameters for girls and boys after 13 yr of age ([@B17]). In this paper we divided the examinees, aged 8 to 18 yr, into age groups at 2-year intervals. The mean values for RBC, Hct, Hb, MCV and MCH in the male group showed a tendency to increase with age. These parameters in groups U10 and U12 show significantly lower values than the other (older) groups, and U16 and U18 show significantly higher values than the other (younger) groups.
Therefore, the U14 group has significantly higher values for most of the hematologic indices than U10 and U12, and significantly lower values than the two older groups, U16 and U18. These data indicate that boys older than 12 but younger than 14 yr of age are in an intermediate period with regard to the hematological parameters. As far as the hematological indices in the male participants are concerned, the average size of the erythrocyte (MCV) grows with age. The average content (mass) of hemoglobin in one erythrocyte (MCH) also grows gradually, with the significantly highest values in U18; there is also a significant difference between boys younger than 12 (U10 and U12) and boys older than 12 (U14, U16 and U18). The average concentration of hemoglobin in one erythrocyte (MCHC) does not show intergroup differences because the size of the cell is taken into consideration. The explanation is simple: with age, both the size of the cell and the average content of hemoglobin in it grow, but their ratio, i.e. the concentration of hemoglobin in the cell, remains approximately the same. Another parameter, RDW, which describes the span of the sizes of different erythrocytes expressed as a percentage, shows no intergroup difference, which indicates that erythrocyte size is similar in all age groups. The analysis of the hematological parameters in the girls showed no statistically significant differences among the examined hematological parameters. Even though there were significant differences in age, weight and height, no significant difference was found among the hematological parameters, except for RBC and MCH between the youngest and the oldest group. When the age subgroups were compared, the only significant difference was found in the number of erythrocytes (RBC) between the youngest (U10) and the oldest (U18) groups, and it was in favor of the younger examinees.
The amount of hemoglobin is similar in all groups (it increased insignificantly with age), while the number of erythrocytes decreases insignificantly with age and is considerably larger in U10 than in U18. A limitation of this study is that a fairly small number of girls were included, owing to the low proportion of girls among the patients in our laboratory. A similar cross-sectional study of hematologic parameters was conducted in Spain on adolescents aged 13 to 18.5 yr. Younger male subjects presented lower RBC, Hb, Hct and MCV mean values than their older counterparts. As in our study, these differences were not found in female subjects. As expected, and as we found in this study, RBC, Hb and Hct mean values were significantly higher in males than in females ([@B18]). An evaluation of hematologic indices in a healthy Ugandan population (aged 1 to 94 yr) showed that erythrocytes, hemoglobin, hematocrit levels and mean corpuscular volume all significantly increased with age (*P*\<0.001) and were independent of age until the age of 13 yr (*P*\<0.001) ([@B19]). An investigation of hematologic parameters in youth national soccer teams found no difference between the U14, U15 and U16 groups, except for the RBC variable ([@B20]). Hemoglobin content and RBCs gradually rise to adult levels by the age of puberty ([@B21]). An investigation of hematological parameters in a population 1 to 14 yr of age in Bangladesh showed differences between age groups, while no difference was found between the two sex groups ([@B22]).

Dependence of hematologic parameters on sex
-------------------------------------------

Men and women have different mean hemoglobin levels in health in venous blood --- women have mean levels approximately 12% lower than men.
Since no difference is noticed in the level of erythropoietin between the sexes, the difference in the intensity of erythropoiesis comes from physiological changes in the kidneys, not in the bone marrow ([@B9]). There is no evidence of reduced cellular mechanisms for haem synthesis in women, and there is no difference in iron absorption between women and men ([@B23]). The established reference ranges for women are influenced by the large proportion of women with iron deficiency ([@B11]). A sex-related difference in hemoglobin concentration has not been found in infants and preschool children ([@B24]), but it has been shown in teenagers and adolescents ([@B25]). In our research, we compared the hematological parameters of the male and female examinees as whole groups, regardless of age. The male examinees showed significantly higher values of RBC (5.02 ×10^12^/l vs 4.72 ×10^12^/l; *P*\<0.001), Hb (14.08 g/dl vs 13.15 g/dl; *P*\<0.001) and hematocrit (43.37% vs 40.69%; *P*\<0.001). The average size of the red blood cells (MCV) and the mean content of Hb in them (MCH) were insignificantly higher in the boys. Owing to the similar size of the cells and the higher total amount of hemoglobin in the males, MCHC, the concentration of Hb in the erythrocyte, was higher in the boys (*P*=0.002). The size span of the erythrocytes was wider in the girls, which indicates greater variability of erythrocyte size. For the clinical reference values of RBC, Hb and Hct, no sex differences were observed below the age of 12. The values for males were significantly higher than for females in the age range 13--79 ([@B26]). Unusual results have been reported regarding the hematologic indices in male and female children younger than 12 yr: the mean Hb, Hct, MCV and MCH of school-aged boys were significantly lower than those of girls ([@B27]).
In a survey of haemoglobin levels in different age groups of men and women in an Indian population, in the group aged 12 to 19 yr, males showed a mean Hb concentration of 11.76 g/dl while females showed a higher mean value of 12.31 g/dl ([@B28]). In research on hematological indices in Kuwaiti children aged 7--12 yr, values were RBC=4.78±0.42 and Hb=127.3±9.4 g/l for boys, and RBC=4.7±0.4 and Hb=126.9±9.8 g/l for girls. The same parameters for older children, 13--17 yr, were RBC=5.18±0.48 and Hb=145±14.4 g/l for boys, and RBC=4.68±0.43 and Hb=129.6±9.8 g/l for girls ([@B29]). As we can see, these results are concordant with ours regarding the sex differences (in favor of boys) and regarding the substantial age differences in the male group with no age differences in the female group. Normal hemoglobin levels according to the WHO are above 11.5 g/dl for children aged 5--12 yr and equal to or above 12 g/dl for teenagers aged 12--15 yr. Above 15 yr, adolescents are referred to as adults, and the normal Hb level for adult males is 13.8--17.2 g/dl and for adult females 12.1--15.1 g/dl ([@B30]). The primary aim of analyzing red blood cell variables is to discover and diagnose the type of anemia in case it is present. Anemia was defined, according to the WHO, as a hemoglobin concentration \<11 g/dl for children aged between 6 and 59 months, \<11.5 g/dl for children aged 5 to 11 yr and \<12 g/dl for children older than 12 yr ([@B30]). In our laboratory, the value of 12 g/dl is considered the lower boundary of nominal values for hemoglobin concentration, and that value is the same for both children and adults. Only 4.6% of our male examinees had low values of hemoglobin, against considerably more, 13.3%, in the female group.

Conclusion
==========

Among the young male examinees there exists a significant difference between age groups, with the special emphasis that the hematological variables in boys aged 12 to 14 yr have intermediate values between those of pre-pubertal boys (\<12 yr) and those of adolescent boys (\>14 yr).
RBC variables in girls from the different age subgroups did not show significant differences. A significant difference was found only in red blood cell counts between the oldest (U18) and youngest (U10) groups, in favor of the younger girls. RBC variables, regardless of age, differ greatly between male and female examinees, in favor of the male examinees. Hematological indices were insignificantly higher in males.

Ethical considerations
======================

Ethical issues (including plagiarism, informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, redundancy, etc.) have been completely observed by the authors. The researchers want to thank the Institute of Physiology, Medical Faculty, UKIM, Skopje, and all the people who were involved in this study.

**Conflict of interest** The authors declare that there is no conflict of interests.
3 Cal.3d 398 (1970) 475 P.2d 880 90 Cal. Rptr. 608 JEFFERSON INSURANCE COMPANY OF NEW YORK et al., Petitioners, v. THE SUPERIOR COURT OF ALAMEDA COUNTY, Respondent; FONG HONG MAY, Real Party in Interest. Docket No. S.F. 22752. Supreme Court of California. In Bank. October 29, 1970. *400 COUNSEL Long & Levit, Bert W. Levit, Victor B. Levit, John B. Hook and Stephen H. Silver for Petitioners. No appearance for Respondent. Stark, Simon & Sparrowe, Stark, Stewart, Simon & Sparrowe, John F. Wells and V. James Jackl for Real Party in Interest. OPINION McCOMB, J. By this petition for a writ of mandate, petitioners (hereinafter referred to as "the insurers") seek to compel respondent court to set aside an order vacating an appraisal award. The matter is before us on an alternative writ issued by the Court of Appeal. Real party in interest (hereinafter referred to as "the insured") is the owner of a hotel building, which has a fair market value, excluding the value of the land, of $65,000. Prior to the fire loss which resulted in this litigation, the insured had acquired from the insurers fire insurance policies written in the California standard form prescribed by section 2071 of the Insurance Code. The policies contained an "average clause," providing for a proportionate reduction of any loss unless the building was insured to 70 percent of its "actual cash value."[1] The policies were written in the total amount of $45,000, which is approximately 70 percent of the fair market value of the building. *401 The parties agreed that the amount of the loss was $24,102.05 ($25,702.05, the cost of repairs, less $1,600 betterment). The insurers, however, refused to pay that amount, contending that the property was substantially underinsured according to the average clause. Their theory was that "actual cash value," as used in the policy, does not mean fair market value, but means the replacement cost of the building less depreciation. 
The replacement cost less a reasonable depreciation factor is approximately $170,000. The insured contended that the building was sufficiently insured, asserting that the "actual cash value" referred to in the policy means fair market value. Upon demand by the insurers, appraisers were appointed, pursuant to the statutory appraisal clause contained in the policy, for the purpose of having them determine the actual cash value of the building.[2] The appraisers, after some disagreement among themselves, accepted the insurers' contention that the term "actual cash value" means replacement cost less depreciation of the building, and determined on that basis that the actual cash value of the building was $169,547. One of the appraisers independently determined that the fair market value of the building was $65,000. From the appraisers' determination that the actual cash value was $169,547, the insurers offered to pay $10,154 as their proportion of the $24,102.05 loss sustained.[3] The insured rejected the offer and petitioned respondent court under section 1285 of the Code of Civil Procedure to vacate the appraisal award.[4] The evidence before respondent court established conclusively that the appraisers had determined as a matter of law that the issue before them was the "replacement cost less depreciation" of the building, and that in arriving at the value listed in their award as "cash value," they refused to consider income, location, or any other relevant factor tending to show *402 the fair market value of the property, despite the fact that such evidence was made available for their use. 
Based upon this showing, respondent court ordered that the award be vacated pursuant to section 1286.2, subdivisions (d) and (e), of the Code of Civil Procedure, thus finding by implication (1) that the appraisers had exceeded their powers by erroneously deciding a question of law (the meaning of "actual cash value"), which they had not been authorized to decide, and (2) that the insured had been substantially prejudiced by the refusal of the appraisers to consider material evidence. Respondent court, in ordering a second appraisal, directed that new appraisers "employ the standard definition of fair market value, which is synonymous with the `actual cash value' in said insurance policy, namely, the price that a willing buyer would pay a willing seller, neither being under any compulsion to sell or buy." (1) Questions: First. Did the appraisers, in determining the "actual cash value" of the insured's building, properly use "replacement cost less depreciation"? No. "Actual cash value," as used in section 2071 of the Insurance Code, is synonymous with "fair market value." (See Martin v. State Farm Mut. Auto. Ins. Co., 200 Cal. App.2d 459, 470 [19 Cal. Rptr. 364]; Hughes v. Potomac Ins. Co., 199 Cal. App.2d 239, 252-253 [18 Cal. Rptr. 650].) Thus, in Martin, the Court of Appeal, in construing the section, said: "The loss payable on an insurance policy is not the cost of the car to plaintiffs but its fair market value just prior to its destruction." (P. 470.) (2) It is clear that the Legislature did not intend the term "actual cash value" in the standard policy form, set forth in section 2071 of the Insurance Code, to mean replacement cost less depreciation. The term appears not only in the average clause, hereinabove referred to, but also in the insuring clause and must be given the same meaning in both. The latter clause insures "to the extent of the actual cash value of the property at the time of loss, but not exceeding the ... 
cost to repair or replace the property...." Since replacement cost less depreciation can never exceed replacement cost, it would not be logical to interpret this clause to mean "to the extent of the replacement cost less depreciation, but not exceeding the ... cost to repair or replace the property." (Italics added.) If "actual cash value" had been intended to mean replacement cost less depreciation, the Legislature would not have used "the cost to ... replace the property" as a limiting factor, and would have specified as a limiting factor only the cost to repair the property. *403 (3a) Second. Did respondent court act properly in vacating the appraisal award because the appraisers based the award on a misconception of the law? Yes. Although arbitrators are frequently, by the terms of the agreement providing for arbitration, particularly in construction contracts, given broad powers (see, e.g., Olivera v. Modiano-Schneider, Inc., 205 Cal. App.2d 9, 11 [23 Cal. Rptr. 30], where the contract provided that any controversy or claims arising out of the contract were to be settled by arbitration), appraisers generally have more limited powers. (4) As stated in Hughes v. Potomac Ins. Co., supra, 199 Cal. App.2d 239, 253 [9]: "The function of appraisers is to determine the amount of damage resulting to various items submitted for their consideration. It is certainly not their function to resolve questions of coverage and interpret provisions of the policy." (3b) Thus, in the present case the appraisers were authorized to determine only a question of fact, namely, the actual cash value of the insured building. (5) Since the evidence shows that the appraisers misinterpreted the meaning of "actual cash value" and therefore failed to decide the factual issue submitted to them, the insured properly invoked the jurisdiction of respondent court to vacate the award and order a rehearing. (Cf. Allen v. Interinsurance Exchange, 275 Cal. App.2d 636, 642, 644 [80 Cal. Rptr. 247].) 
As stated in Meat Cutters Local No. 439 v. Olson Bros., Inc., 186 Cal. App.2d 200, 204 [6] [8 Cal. Rptr. 789]: "... it is in the determination of whether a decided issue was properly before the arbitrator or an issue before him was not decided, that the agreement or order of submission falls under the scrutiny of the court." (Italics added.) (6) Where an appraisal award is based upon a misconception of the law, this fact may be proved to the court by extrinsic evidence, including a declaration of one of the appraisers. The declaration of an appraiser is properly received to show what the appraisers considered the issue to be, for the purpose of determining whether they exceeded their powers by making an error of law. (See Sapp v. Barenfeld, 34 Cal.2d 515, 523 [212 P.2d 233]; Allen v. Interinsurance Exchange, supra, 275 Cal. App.2d 636, 642-643.)

The alternative writ is discharged, and the petition for a peremptory writ is denied.

Wright, C.J., Peters, J., Tobriner, J., Mosk, J., Burke, J., and Sullivan, J., concurred.

NOTES

[1] The "average clause" limits the liability of the insurers, as follows: "[T]his company shall be liable for no greater proportion of such loss than the amount of insurance specified in such item bears to the percentage specified in the first page of this policy [70%] of the actual cash value of the property...." (Italics added.)

[2] The appraisal clause in the policies is in the required statutory language of section 2071 of the Insurance Code, as follows: "In case the insured and this company shall fail to agree as to the actual cash value or the amount of loss, then, on the written demand of either, each shall select a competent and disinterested appraiser and notify the other of the appraiser selected within 20 days of such demand. The appraisers shall first select a competent and disinterested umpire....
The appraisers shall then appraise the loss, stating separately actual cash value and loss to each item; and, failing to agree, shall submit their differences, only, to the umpire. An award in writing, so itemized, of any two when filed with this company shall determine the amount of actual cash value and loss...." [3] The figure of $10,154 is arrived at by taking the ratio of the $45,000 policy limit to 70 percent of the actual cash value, and applying that fraction to the amount of the loss. [4] By section 1280 of the Code of Civil Procedure, enforcement procedures respecting arbitration have been made applicable to appraisals.
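The proration described in footnote 3 can be sketched as follows. This is an illustrative computation of the average clause's arithmetic under the figures recited in the opinion ($45,000 limit, 70 percent clause, $24,102.05 agreed loss); the function name and the cap at 100 percent are our assumptions, not part of the record, and the exact dollar figures in the record may reflect further adjustments.

```python
def average_clause_payment(loss, policy_limit, actual_cash_value,
                           coinsurance_pct=0.70):
    """Proportionate-reduction ('average clause') payment: the insurer
    pays no greater proportion of the loss than the policy limit bears
    to the stated percentage of actual cash value (capped at 100%)."""
    ratio = min(policy_limit / (coinsurance_pct * actual_cash_value), 1.0)
    return round(ratio * loss, 2)

loss, limit = 24102.05, 45000.0

# Insured's theory: actual cash value = fair market value ($65,000),
# so the building is insured to roughly 70% and nearly the whole
# agreed loss is payable.
market = average_clause_payment(loss, limit, 65000)

# Insurers' theory: actual cash value = replacement cost less
# depreciation (~$169,547), so the payment is sharply prorated.
replacement = average_clause_payment(loss, limit, 169547)

assert market > replacement
```

The contrast between the two valuations is the entire stake of the dispute: the same clause, applied to the same loss, yields a nearly full recovery under fair market value and a fraction of it under replacement cost less depreciation.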
Background
==========

Hepatocellular carcinoma (HCC) is the most common type of liver cancer \[[@B1]\] and various therapeutic options have been developed by focusing on the specific tumour stage and hepatic functional reserve \[[@B2]-[@B9]\]. A variety of transarterial treatments have been provided to cases at relatively advanced stages \[[@B3]\]; these treatments are roughly divided into the following three groups, based on the likelihood of deteriorating hepatic reserve: transarterial chemoembolization (TACE), transarterial oily chemoembolization (TOCE) and transarterial chemotherapy (TAC). TACE involves hepatic arterial injections of chemotherapeutic agents combined with embolizing materials. TOCE is solely an arterial administration of a combination of chemotherapeutic agents and the oily contrast medium lipiodol ultra fluid (Laboratory Guerbet, Aulnay-sous-Bois, France), while in TAC, chemotherapeutic agents alone are infused through the hepatic artery. Although TACE is the only transarterial procedure for which therapeutic efficacy has been proved in randomised prospective controlled studies, the deterioration of hepatic reserve is estimated at 20%--58%, mainly because of ischaemic damage to the nontumourous background liver \[[@B10],[@B11]\], implying a higher risk of unfavourable reduction in hepatic reserve function in cases with poor hepatic reserve. Therefore, to develop a safe and efficient transarterial therapeutic procedure for such cases, other approaches (TOCE, TAC, and TOCE + TAC) have been tested \[[@B5],[@B12]-[@B15]\]. TACE and TOCE were recently compared in a randomised phase III trial using zinostatin stimalamer dissolved in lipiodol \[[@B12]\], with subsequent arterial embolization (TACE) or without embolization (TOCE). Interestingly, the results showed no improvement in survival rates from performing embolization, and TOCE proved to be a therapeutic option for HCC patients with low hepatic reserve.
However, two major concerns with TOCE are: 1) the method of combining water-based chemotherapeutic agents with oily lipiodol in a stable formulation; and 2) that TOCE is unable to target a wide area of the liver, because it reduces hepatic arterial flow, albeit temporarily, which may result in hepatic failure. Regarding the first concern, miriplatin, a third-generation platinum derivative with a lipophilic moiety that forms a suspension with lipiodol, was recently developed and approved for clinical use in Japan as a novel chemotherapeutic agent for HCC \[[@B16]-[@B21]\], with promising results \[[@B22]-[@B24]\]. Regarding the second concern, because TAC requires no embolization, it can be applied to a wide area of the liver; its anti-tumour effect has been reported in several studies \[[@B5],[@B13]-[@B15]\], including a multicentre phase II study in patients with unresectable HCC using cisplatin (CDDP), a first-generation platinum agent, in which the response rate was 33.8% \[[@B13]\]. TAC may therefore be effective for treating a wide area of the liver in patients with poor hepatic reserve function. In addition, the first-pass kinetics \[[@B25]\] of CDDP given by TAC contribute to the anti-tumour effect and decrease adverse systemic events \[[@B5]\]. Since highly concentrated CDDP powder for TAC (DDP-H, IA-call^®^; Nippon Kayaku Co., Ltd) is available in Japan, TAC is now widely used in Japan to treat multiple small tumours or patients with poor hepatic reserve \[[@B5],[@B13],[@B26]\]. Based on these results and the advances in the development of new chemotherapeutics, it is reasonable to consider combination therapy of CDDP-TAC with miriplatin-TOCE to treat advanced-stage HCC with poor hepatic reserve function safely and effectively. Therefore, in this study we conducted a phase I dose-escalation study of DDP-H-TAC followed by miriplatin-TOCE to determine the maximum tolerated dose (MTD) and dose-limiting toxicity (DLT) in unresectable HCC.
The safety of combining two platinum-based chemotherapeutic agents will be discussed with reference to the pharmacokinetics of platinum.

Methods
=======

Patient selection
-----------------

Patients with HCC were considered eligible for the study if they fulfilled the following criteria: 20--80 years of age; at least one measurable tumour blush on angiography; histologically and/or clinically diagnosed HCC; no other therapeutic treatment found to be effective or appropriate for their condition, according to the Japanese guidelines for HCC treatment; an Eastern Cooperative Oncology Group performance status of 0--2; adequate hepatic function (Child--Pugh score, ≤7; total bilirubin, ≤3.0 mg/dl; albumin, ≥3.0 g/dl); adequate haematological function (neutrophils, ≥1,500/mm^3^; platelets, ≥50,000/mm^3^; haemoglobin, ≥8.0 g/dl); adequate renal function (creatinine clearance, ≥50 ml/min adjusted for 1.73 m^2^ of body surface area); serum amylase ≤324 IU/dl; and an interval of 4 weeks or more since previous therapy. All nodules were radiologically diagnosed as HCC when they satisfied at least one of the following criteria based on CT or MRI: typical haemodynamics of classical HCC (substantial enhancement during the arterial phase followed by a washout with 'corona-like' peripheral enhancement in the equilibrium phase) or characteristics similar to those of coexisting nodules that had already been diagnosed as HCC. All eligible HCC cases were recurrent, with a history of CDDP administration in eight patients. Patients with the following characteristics were considered ineligible: massive pleural effusion and/or ascites refractory to treatment; active cancer other than HCC; active infectious disease; active haemorrhagic state; severe mental disorder; hepatic encephalopathy; history of allergic reaction to iodinated contrast media and/or platinum agents; ongoing interferon therapy; and difficulty with oral food intake.
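The numeric entry thresholds listed above can be collected into a single screening check. The sketch below is a hypothetical illustration (function and field names are ours, not from the protocol), covering only the quantitative laboratory criteria:

```python
# Hypothetical eligibility screen implementing the numeric entry
# criteria quoted above; field names are illustrative only.
def meets_lab_criteria(p):
    return (20 <= p["age"] <= 80
            and p["child_pugh"] <= 7
            and p["bilirubin_mg_dl"] <= 3.0
            and p["albumin_g_dl"] >= 3.0
            and p["neutrophils_per_mm3"] >= 1500
            and p["platelets_per_mm3"] >= 50000
            and p["hemoglobin_g_dl"] >= 8.0
            and p["ccr_ml_min"] >= 50        # adjusted for 1.73 m2 BSA
            and p["amylase_iu_dl"] <= 324)

# Example candidate with values inside every threshold
candidate = dict(age=78, child_pugh=6, bilirubin_mg_dl=1.1,
                 albumin_g_dl=3.4, neutrophils_per_mm3=2400,
                 platelets_per_mm3=81000, hemoglobin_g_dl=11.2,
                 ccr_ml_min=89, amylase_iu_dl=120)
print(meets_lab_criteria(candidate))  # True
```

Qualitative exclusion criteria (refractory ascites, encephalopathy, allergy history, and so on) would of course still require clinical review and are not captured by such a check.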
This study was approved by the institutional review board of Niigata University Hospital and was registered with the University Hospital Medical Information Network Clinical Trials Registry (UMIN-CTR 000003541). Written informed consent was obtained from all patients and the study protocol conformed to the ethical guidance of the 1975 Declaration of Helsinki.

Method of administration
------------------------

CDDP powder, DDP-H (Nippon Kayaku Co., Ltd., Tokyo, Japan), was solubilised in saline at a concentration of 100 mg/70 ml immediately before use and infused into the entire liver through the proper hepatic artery at a rate of 126 ml/h, providing 35 mg/m^2^ in total. This was followed by TOCE with miriplatin, prepared according to the instructions, through the nutrient vessels of the target tumour, using the maximal dose at which the corresponding draining portal veins were shown, up to a volume of 6 ml. If no DLT was recorded, the same regimen was carried out with DDP-H increased by 15 mg/m^2^, based on the modified Fibonacci method. DLT was defined as an adverse event of grade ≥3 for nonhaematological or grade ≥4 for haematological toxicity, according to NCI-CTCAE version 4.0. If any one of the three patients showed DLT, three more patients were enrolled. MTD was judged to have been exceeded when two patients showed DLT, and was defined as the maximum dose at which no more than one of the six patients experienced DLT. If two or more cases had already developed DLT at the initial dose of 35 mg/m^2^, this dose was to be reduced in steps of 10 mg/m^2^, down to 15 mg/m^2^.

Evaluation of anti-tumour effects
---------------------------------

Anti-tumour response was evaluated from CT images obtained before and 3 months after treatment. Evaluation was performed in accordance with the modified Response Evaluation Criteria in Solid Tumors (mRECIST) guideline \[[@B27]\].
The tumour markers AFP and DCP were followed at appropriate time points for each patient.

Platinum pharmacokinetics
-------------------------

Total plasma platinum concentration was measured and pharmacokinetic evaluation performed for all patients. Plasma samples were collected in heparinised tubes at 24 h and 7 days following the administration of DDP-H and miriplatin. As a reference, 50 mg/m^2^ (80 mg/body) of CDDP in liquid form was administered through the proper hepatic artery for the entire liver at a rate of 1 mg/min, and the concentration was quantified before administration and at 0.5, 1.0, 1.5, 2, 4, 12 and 24 h after administration. Plasma platinum concentration was measured by atomic absorption spectrometry (Nac Co., Ltd., Tokyo, Japan).

Results
=======

Patient characteristics
-----------------------

A total of nine eligible patients were enrolled in this study from July to October 2010 and divided into three groups; none of the three patients in any group developed DLT at DDP-H dose levels of 35 (level 1), 50 (level 2) or 65 (level 3) mg/m^2^. Patient characteristics before treatment are summarised in Table [1](#T1){ref-type="table"}. Performance status was 0 in eight patients and 1 in one patient (case 1). The aetiology of liver cirrhosis was HBV infection (*n* = 1), HCV infection (*n* = 4), alcohol abuse (*n* = 3) and autoimmune hepatitis (*n* = 1). Residual liver function was relatively good, with a median Child--Pugh score of 6 (eight patients in grade A and one in grade B), and no marked renal dysfunction was observed. All patients had a history of HCC treatment; eight patients, all except case 3, had a history of DDP-H-TAC followed by epirubicin-TOCE.
###### Patient characteristics

| **Group** | Level 1 | Level 1 | Level 1 | Level 2 | Level 2 | Level 2 | Level 3 | Level 3 | Level 3 |
|---|---|---|---|---|---|---|---|---|---|
| **Case** | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| **Age (years)** | 80 | 62 | 80 | 78 | 61 | 80 | 63 | 79 | 80 |
| **Gender (M, Male/F, Female)** | M | M | M | M | M | M | M | F | F |
| **Performance status** | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| **HBV infection** | \- | \- | \- | \- | \- | \- | \+ | \- | \- |
| **HCV infection** | \+ | \+ | \- | \- | \- | \+ | \- | \+ | \- |
| **Alcohol** | \- | \- | \+ | \+ | \+ | \- | \- | \- | \- |
| **Autoimmune hepatitis** | \- | \- | \- | \- | \- | \- | \- | \- | \+ |
| **Child-Pugh Score** | 6 | 6 | 5 | 6 | 6 | 7 | 5 | 6 | 6 |
| **Recurrence (Y, Yes/N, No)** | Y | Y | Y | Y | Y | Y | Y | Y | Y |
| **Interval to previous therapy (M)** | 6 | 8 | 6 | 23 | 10 | 21 | 13 | 19 | 3 |
| **Previous therapy** | TACE | TAC | TACE | TACE | TAC | TACE | TAC | TACE | TACE |
| **History of CDDP administration** | Y | Y | N | Y | Y | Y | Y | Y | Y |
| **Number of tumors** | 3 | 2 | 1 | \>5 | \>5 | 4 | 4 | \>5 | \>5 |
| **Maximum tumor size (mm)** | 15 | 15 | 14 | 20 | 10 | 34 | 24 | 10 | 30 |
| **Vascular invasion (Y, Yes/N, No)** | N | N | N | N | N | N | N | N | N |
| **Metastasis (Y, Yes/N, No)** | N | N | N | N | N | N | N | N | N |
| **Stage (UICC)** | II | II | I | II | II | II | II | II | II |
| **Tumor location (PAMLC)** | PA | ML | ML | AM | ML | M | P | A | PA |
| **BSA (m^2^)** | 1.486 | 1.6 | 1.457 | 1.5 | 1.72 | 1.68 | 1.415 | 1.538 | 1.538 |
| **Ccr (ml/min)** | 68 | 118 | 75 | 89 | 121 | 92 | 83 | 95 | 85 |
| **CDDP (mg/body)** | 52 | 56 | 51 | 75 | 86 | 84 | 92 | 100 | 100 |
| **Miriplatin (mg/body)** | 86 | 18 | 80 | 120 | 60 | 60 | 74 | 100 | 120 |

TACE, transarterial chemoembolization; TAC, transarterial chemotherapy. Tumour location: P, posterior segment; A, anterior segment; M, medial segment; L, lateral segment; C, caudal segment. BSA, body surface area; Ccr, creatinine clearance.

The total dose of DDP-H administered was 51, 52 and 56 mg/body at level 1; 75, 84 and 86 mg/body at level 2; and 92, 100 and 100 mg/body at level 3. The total dose of miriplatin administered was 18, 80 and 86 mg at level 1; 60, 60 and 120 mg at level 2; and 74, 100 and 120 mg at level 3. All nine patients were assessed for the toxicity of CDDP combined with miriplatin and for the pharmacokinetics of plasma platinum concentration.
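The per-body CDDP doses in Table 1 follow directly from the dose level multiplied by body surface area. A minimal sketch, assuming rounding to the nearest milligram (the rounding rule is our assumption, not stated in the protocol):

```python
def cddp_dose_mg(level_mg_per_m2, bsa_m2):
    """Per-body DDP-H dose: dose level (mg/m2) x BSA (m2),
    rounded to the nearest mg (rounding rule assumed)."""
    return round(level_mg_per_m2 * bsa_m2)

# Spot checks against Table 1
print(cddp_dose_mg(35, 1.486))  # 52  (level 1, case 1)
print(cddp_dose_mg(50, 1.72))   # 86  (level 2, case 5)
print(cddp_dose_mg(65, 1.415))  # 92  (level 3, case 7)
```

The same arithmetic reproduces every CDDP (mg/body) entry in the table, which is a useful consistency check on the transcribed BSA values.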
One patient underwent radiofrequency ablation (RFA) before response evaluation, and thus eight patients were assessed for anti-tumour response.

Toxicity
--------

Haematological and nonhaematological toxicity in all patients was evaluated using NCI-CTCAE (National Cancer Institute Common Terminology Criteria for Adverse Events) version 4.0 and is summarised in Table [2](#T2){ref-type="table"}. No grade ≥3 nonhaematological or grade ≥4 haematological toxicity was observed. One patient (case 4 in the level 2 group) developed grade 3 neutropenia (reduced from 3000/mm^3^ to 1710/mm^3^, 6 weeks after injection) and subsequently recovered over 2 weeks. All three groups showed grade 2 increases in aspartate aminotransferase and alanine aminotransferase (cases 3, 5, 6 and 9) and grade 1--2 hypoalbuminaemia (cases 1, 2, 5, 7 and 9). No marked increase was noted in creatinine, except in case 7, which showed a transient increase to 1.13 times the baseline level 4 days after the administration of 65 mg/m^2^ CDDP combined with 74 mg/body miriplatin. The most frequent adverse event was grade 1 monophasic fever, observed in cases 1, 4, 8 and 9, who received 86, 120, 100 and 120 mg/body of miriplatin, respectively. Therefore, in this clinical study, the MTD of CDDP in combination with miriplatin was determined as 65 mg/m^2^, which is the maximum dose for DDP-H-TAC monotherapy.
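The cohort decision rule described in Methods follows a conventional 3+3 design, which can be sketched as follows. This is a simplified illustration of the scheme, not the trial's actual decision software:

```python
def three_plus_three(dlt_first_three, dlt_next_three=None):
    """Cohort decision for one dose level in a 3+3 design:
    0/3 DLT -> escalate; >=2 DLT -> MTD exceeded; 1/3 -> expand the
    cohort to six; at most 1/6 overall -> escalate."""
    if dlt_first_three == 0:
        return "escalate"
    if dlt_first_three >= 2:
        return "MTD exceeded"
    if dlt_next_three is None:
        return "enroll three more"
    total = dlt_first_three + dlt_next_three
    return "escalate" if total <= 1 else "MTD exceeded"

# In this study 0/3 DLT was observed at 35, 50 and 65 mg/m2, so each
# cohort allowed escalation; 65 mg/m2 was the highest level tested.
print(three_plus_three(0))     # escalate
print(three_plus_three(1))     # enroll three more
print(three_plus_three(1, 1))  # MTD exceeded
```

Because no cohort recorded a DLT, the expansion branch was never exercised in practice, and the MTD defaulted to the highest planned dose level.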
###### Haematological and nonhaematological toxicity

Counts are numbers of patients with each toxicity grade (G1/G2/G3/G4) per dose level.

| Toxicity | Level 1 (G1/G2/G3/G4) | Level 2 (G1/G2/G3/G4) | Level 3 (G1/G2/G3/G4) |
| --- | --- | --- | --- |
| **Haematological** | | | |
| White blood cell decreased | 0/1/0/0 | 0/0/1/0 | 0/0/0/0 |
| Neutrophil count decreased | 0/1/0/0 | 0/0/1/0 | 0/0/0/0 |
| Platelet count decreased | 0/1/0/0 | 0/0/0/0 | 2/0/0/0 |
| Anemia | 0/1/0/0 | 0/0/0/0 | 0/0/0/0 |
| **Nonhaematological** | | | |
| AST increased | 0/1/0/0 | 1/1/0/0 | 0/1/0/0 |
| ALT increased | 0/1/0/0 | 1/1/0/0 | 0/1/0/0 |
| Blood bilirubin increased | 0/0/0/0 | 1/0/0/0 | 1/0/0/0 |
| INR increased | 0/0/0/0 | 0/0/0/0 | 0/0/0/0 |
| Hypoalbuminemia | 0/2/0/0 | 0/1/0/0 | 1/0/0/0 |
| Creatinine increased | 0/0/0/0 | 0/0/0/0 | 1/0/0/0 |
| Anorexia | 0/0/0/0 | 0/0/0/0 | 1/0/0/0 |
| Nausea | 0/0/0/0 | 0/0/0/0 | 0/0/0/0 |
| Vomiting | 0/0/0/0 | 0/0/0/0 | 0/0/0/0 |
| Fever | 1/0/0/0 | 1/0/0/0 | 2/0/0/0 |
| Diarrhea | 0/0/0/0 | 0/0/0/0 | 0/0/0/0 |
| Fatigue | 0/0/0/0 | 0/0/0/0 | 1/0/0/0 |
| Alopecia | 0/0/0/0 | 0/0/0/0 | 0/0/0/0 |
| Urticaria | 0/0/0/0 | 0/0/0/0 | 0/0/0/0 |
| Abdominal pain | 0/0/0/0 | 0/0/0/0 | 0/0/0/0 |

National Cancer Institute Common Terminology Criteria for Adverse Events version 4.0 was applied to evaluate toxicity.

Pharmacokinetics of platinum
----------------------------

To examine whether additional miriplatin following DDP-H administration further increases the plasma platinum concentration, plasma samples for pharmacokinetic studies were collected from all nine patients at appropriate time points after the administration of these agents. 
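For intuition on why sampling extends out to 7 days, the Discussion cites triphasic plasma clearance of intravenously infused cisplatin (half-lives of 13 min, 43 min and 5.4 days). A minimal sketch of such a tri-exponential decay; the phase fractions are illustrative assumptions, not values fitted to this study's data:

```python
import math

# Tri-exponential plasma decay: C(t)/C0 = sum_i A_i * 2**(-t / t_half_i).
# Half-lives are the IV-cisplatin values cited in the Discussion; the
# fractions A_i (summing to 1) are assumed purely for illustration.
HALF_LIVES_MIN = (13.0, 43.0, 5.4 * 24 * 60)  # distribution, elimination, terminal
FRACTIONS = (0.6, 0.3, 0.1)

def plasma_fraction(t_min: float) -> float:
    """Fraction of the initial plasma platinum concentration at time t (minutes)."""
    return sum(a * 2.0 ** (-t_min / th) for a, th in zip(FRACTIONS, HALF_LIVES_MIN))

# The fast distribution phase dominates the early decline; after a day only
# the slow terminal phase remains.
print(round(plasma_fraction(0), 6))        # 1.0
print(round(plasma_fraction(24 * 60), 3))  # only the terminal phase survives
```

Under any such parameters, the distribution phase is gone within an hour, which is consistent with the paper's argument that intra-arterial delivery outruns tissue distribution and saturates the liver on a concentration basis.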
In a control case given 50 mg/m^2^ of CDDP through the hepatic artery, the total platinum concentration in peripheral plasma during and after TAC peaked at the end of TAC and gradually decreased over the following 2 days (Figure [1](#F1){ref-type="fig"}). The plasma platinum concentration was therefore evaluated at the end of DDP-H-TAC and of miriplatin-TOCE, and at 24 h and 7 days after the initiation of DDP-H-TAC. At the end of DDP-H-TAC, the median Cmax for the level 1, 2 and 3 groups was 2000, 2933 and 4233 ng/ml, respectively. No further increase was detected following the administration of miriplatin: the plasma platinum concentration gradually decreased over 7 days to 310, 456 and 580 ng/ml in the level 1, 2 and 3 groups, respectively. These results indicate that, as expected, the plasma platinum concentration showed no substantial increase when miriplatin was added to CDDP administration. ![**Platinum pharmacokinetics.** Platinum concentration was measured in all patients at three levels. Level 1, white circle; level 2, grey circle and level 3, black circle. Plasma platinum concentration was also measured in a 63-year-old male patient during and after administration of CDDP (50 mg/m^2^ or 80 mg/body weight) for HCCs through the proper hepatic artery at a concentration of 0.5 mg/ml and at a flow rate of 1 mg/min (black triangle with broken line)](1471-230X-12-127-1){#F1}

Anti-tumour effects
-------------------

Relatively good tumour control was recorded overall. One patient (case 3 in the level 1 group) underwent RFA before response evaluation; anti-tumour response was therefore assessed in the remaining eight patients using computed tomography (CT) and tumour markers. Changes in HCC diameter and in the levels of α-fetoprotein (AFP) and des-γ-carboxy prothrombin (DCP) following treatment are summarised in Table [3](#T3){ref-type="table"} and Figure [2](#F2){ref-type="fig"}. 
With median follow-up periods of 120, 87 and 83 days for the level 1, 2 and 3 groups, respectively, case 9 in the level 3 group showed a partial response (PR) to therapy. Cases 1, 4, 6 and 8 showed stable disease (SD), while cases 2, 5 and 7 showed progressive disease (PD) (Table [3](#T3){ref-type="table"}). These changes were consistent with the changes in the tumour markers (Figure [2](#F2){ref-type="fig"}). One patient (case 9) with multiple HCCs in both lobes (Figure [3](#F3){ref-type="fig"}a--d), who had shown resistance to previous treatment with DDP-H-TAC and epirubicin-TACE, achieved a PR following combination therapy with 65 mg/m^2^ of DDP-H-TAC and miriplatin-TOCE (Figure [3](#F3){ref-type="fig"}e--h). A significant reduction in HCC size in the right lobe was seen on right hepatic angiography (Figure [3](#F3){ref-type="fig"}a, e). Before treatment, a representative tumour in S6 showed no enhancement on CT during arterial portography (white arrow in Figure [3](#F3){ref-type="fig"}b), and significant enhancement in the early phase of CT hepatic arteriography followed by 'corona-like' staining, a typical enhancement pattern of classical HCC (white arrowheads in Figure [3](#F3){ref-type="fig"}c, d). Two months after treatment, the remaining lipiodol (black arrow in Figure [3](#F3){ref-type="fig"}f) and a marked decrease in tumour enhancement in the area were seen (Figure [3](#F3){ref-type="fig"}g, h). 
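The disease control rates reported in Table 3 follow directly from the per-level response counts, with patients not evaluable for response excluded. A small sketch of that arithmetic:

```python
# Disease control rate (DCR) as used in Table 3:
# DCR (%) = (CR + PR + SD) / evaluable patients * 100,
# where patients not evaluable for response are excluded from the denominator.

def disease_control_rate(cr: int, pr: int, sd: int, pd: int) -> float:
    evaluable = cr + pr + sd + pd
    return 100.0 * (cr + pr + sd) / evaluable

# Level 1 (case 3 not evaluable): 1 SD, 1 PD among 2 evaluable patients
print(round(disease_control_rate(0, 0, 1, 1), 1))  # 50.0
# Level 2: 2 SD, 1 PD among 3 evaluable patients
print(round(disease_control_rate(0, 0, 2, 1), 1))  # 66.7
# Level 3: 1 PR, 1 SD, 1 PD among 3 evaluable patients
print(round(disease_control_rate(0, 1, 1, 1), 1))  # 66.7
```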
###### Anti-tumour effects: clinical efficacy

| Antitumour response | Level 1 | Level 2 | Level 3 |
| --- | --- | --- | --- |
| CR | 0 | 0 | 0 |
| PR | 0 | 0 | 1 (case 9) |
| SD | 1 (case 1) | 2 (cases 4, 6) | 1 (case 8) |
| PD | 1 (case 2) | 1 (case 5) | 1 (case 7) |
| Not evaluable | 1 (case 3) | 0 | 0 |
| DCR (%) | 50 | 66.7 | 66.7 |
| Follow-up, median (range), days | 120 (50--213) | 87 (24--140) | 83 (54--84) |

Modified Response Evaluation Criteria In Solid Tumors guidelines were followed to evaluate anti-tumour effects. CR, complete response; PR, partial response; SD, stable disease; PD, progressive disease; DCR, disease control rate.

![**Anti-tumour effects: levels of tumour markers.** Time-dependent levels of α-fetoprotein (**a**--**c**) and des-γ-carboxy prothrombin (**d**--**f**) after combination therapy of IA-call^®^ and miriplatin at levels 1 (**a**, **d**), 2 (**b**, **e**) and 3 (**c**, **f**). Tumour markers are represented as white circles in cases 1, 4 and 7; grey circles in cases 2, 5 and 8 and black circles in cases 3, 6 and 9. PR, partial response; SD, stable disease; PD, progressive disease; N/A, not applicable for the response evaluation](1471-230X-12-127-2){#F2} ![**Representative images of tumour from case 9 showing partial response after administration of DDP-H TAC and miriplatin-TOCE.** Before treatment: **a**, right hepatic angiography; **b**, computed tomography during arterial portography (CTAP). White arrow indicates tumour defect on CTAP; **c**, early phase of CT hepatic arteriography (CTHA); **d**, delayed phase of CTHA. White arrowheads indicate staining of tumour. Two months after treatment: **e**, right hepatic angiography; **f**, plain CT image; **g**, early phase of dynamic CT; **h**, delayed phase of dynamic CT. 
Black arrow indicates remaining lipiodol.](1471-230X-12-127-3){#F3}

Discussion
==========

Treatment for HCC is determined according to tumour stage and hepatic functional reserve, and only about 30% of HCC cases are candidates for curative therapies such as surgical resection and RFA \[[@B2],[@B6]\]. TACE and sorafenib have recently been reported to confer a definite survival advantage in advanced cases \[[@B3],[@B4],[@B6]-[@B8],[@B28]\]. Unfortunately, however, the application of TACE or sorafenib is strictly restricted by other factors, mainly hepatic functional reserve. TACE requires a Child--Pugh score of 5--9 (grade A--B) because it involves arterial embolization, and it may not be feasible in a patient with major arterioportal shunts or portal vein tumour thrombosis. Sorafenib is indicated only in patients with a Child--Pugh score of 5--6 (grade A) and is contraindicated in patients with brain metastases \[[@B2]\]. In contrast, TOCE and TAC can be offered to a broad range of cases, as they are performed without arterial embolization, and their efficacy has been reported \[[@B5],[@B13]-[@B15],[@B26]\]. Among chemotherapeutic agents such as epirubicin \[[@B15]\] and mitomycin C \[[@B5]\], which carry a 15%--20% response rate, platinum agents appear to be the most promising, as CDDP-TAC achieved a response rate of 33.8% in a multicentre phase II study enrolling unresectable HCC cases \[[@B13]\]. To identify a highly effective and less toxic combination of TOCE and TAC, this study focused on the safety issues associated with the concomitant use of two platinum agents. Miriplatin is a third-generation platinum agent with amphipathic properties that forms a stable suspension with lipiodol and gradually releases active derivatives *in situ*, which circumvents systemic release and toxicity \[[@B18]\]. Few HCC cases have shown cross-resistance between different generations of platinum agents \[[@B16],[@B21],[@B29]\]. 
In a rat model, miriplatin exhibited higher anti-tumour activity and lower hepatic toxicity than CDDP-lipiodol \[[@B16]\], and promising results have been reported in HCC patients \[[@B22]-[@B24]\]. On the other hand, the clearance of platinum compounds following short-term intravenous infusion of cisplatin has been reported to be triphasic (distribution half-life, 13 min; elimination half-life, 43 min; terminal half-life, 5.4 days). The short distribution half-life suggests that TAC easily exceeds the speed of tissue distribution and saturates the target liver on the basis of concentration rather than the total amount of drug administered. Accordingly, DDP-H is currently the most suitable platinum formulation for TAC, as it provides the highest available concentration. The combination of DDP-H-TAC and miriplatin-TOCE supports the hypothesis that a higher free platinum concentration in the target liver, less systemic spillover and more sustained delivery by a less cross-resistant agent lead to a marked tumour response with less systemic and hepatic toxicity, and thereby to improved survival.

Conclusions
===========

In conclusion, no DLT was recorded in this study following the combined administration of DDP-H and miriplatin at maximum doses of 65 mg/m^2^ and 120 mg/body, respectively. These are the maximum doses recommended for each monotherapy individually, indicating that the MTD of DDP-H and miriplatin in combination therapy is the maximum monotherapy dose of each agent. As expected, no evidence of systemic platinum release from miriplatin-TOCE was recorded. Reflecting the possibly higher disease control rate and the PR observed, a phase II randomised prospective study is now ongoing to investigate the efficacy of this combined therapy in a larger cohort.

Competing interests
===================

The authors declare that they do not have a current financial arrangement or affiliation with any organisation that may have a direct interest in their work. 
Authors' contributions
======================

KK wrote the manuscript and performed the research. TS designed the research and wrote the manuscript. YT, MT, MI, HK and SY performed the research, including the angiography. TY analysed the data. MN and YA designed the study and analysed all data. All authors read and approved the final manuscript.

Pre-publication history
=======================

The pre-publication history for this paper can be accessed here: <http://www.biomedcentral.com/1471-230X/12/127/prepub>

Acknowledgements
================

This study was supported by a grant from the Niigata University Medical and Dental Hospital (Clinical research support project/2012) to T.S.
699 F.Supp. 508 (1988) L.J., An Infant, By and Through His Next Friend, Lydia Kaye DARR, et al., Plaintiffs, v. Ruth MASSINGA, etc., et al., Defendants. Civ. No. JH-84-4409. United States District Court, D. Maryland. September 27, 1988. *509 William L. Grimm, Ethel Zelenske, and the Legal Aid Bureau, Inc., Baltimore, Md., Carol R. Golubock, and the Children's Defense Fund, Washington, D.C., Nevett Steele, Jr., Ward B. Coe, III, and Whiteford, Taylor & Preston, Baltimore, Md., for plaintiffs. J. Joseph Curran, Jr., Atty. Gen. of Maryland, Catherine M. Schultz and Mark J. Davis, Asst. Attys. Gen., Baltimore, Md., for defendants. MEMORANDUM JOSEPH C. HOWARD, District Judge. Pending before the court is this civil rights class action brought by foster children in the care and custody of the Baltimore City Department of Social Services ("BCDSS"). Named as defendants are Ruth Massinga, Secretary of Maryland's Department of Human Resources, BCDSS, and various foster-care officials. These children allege that the defendants' administration of the foster care system in Baltimore City violates their rights under federal statutory law, Titles IV-E and IV-B of the Social Security Act, and the Fourteenth Amendment to the United States Constitution. The class seeks equitable relief in the form of an affirmative injunction that would require reforms of the foster care system. In addition to these equitable claims, some named class representatives seek monetary damages for harms allegedly suffered while in the defendants' care. 
The immediate matter under consideration is whether a consent decree proposed by the parties as settlement of the equitable claims is fair and adequate and thereby merits the court's approval.[1] After the proposed decree was submitted on April 26, 1988, the court met with the parties, directed that notice be provided the class members and interested persons, held a hearing at which those provided notice were invited to present objections or comments, and met with foster care workers to learn their views of the decree. After completion of these measures and careful study of the decree, the court approves the decree for the reasons provided below. *510 I. The history of this action is long and arduous. Since the complaint was filed in December, 1984, the court has issued over seventy orders and held a dozen status conferences with the parties. The docket, now seventeen pages long, lists over two hundred entries. On January 2, 1987, the court granted a motion to intervene that had been filed the previous November by two additional proposed class representatives. That same day the court certified a class composed of all children who are, have been, or will be placed in foster homes by the BCDSS and are or will be placed in the custody of the BCDSS through voluntary placement or court order. On February 6, after conducting extensive discovery, including a random sampling of BCDSS foster care case records, the plaintiffs filed a motion for a preliminary injunction. A hearing on the motion was held over a period of two weeks commencing on April 2, 1987. Some 91 separate items of evidence were introduced, and the court heard from 12 witnesses. Among the items of evidence were the preliminary results of plaintiffs' random sampling of case records, contained in several thick looseleaf binders. 
The witnesses included an expert on the research methodology used in conducting the plaintiffs' study.[2] The court also heard the testimony of relatives and experts regarding the cases of sixteen children who had been severely neglected and abused while in defendants' care and custody. The court found overwhelming evidence of serious systematic deficiencies in Baltimore's foster care program such that foster children would suffer irreparable harm if immediate injunctive relief were not granted and, in a Memorandum and Order issued July 27, 1987, granted plaintiffs' motion for a preliminary injunction.[3] Specifically, among its findings, the court determined that there was a lack of satisfactory foster homes; that the defendants failed to remove children from homes where physical and emotional abuse and neglect were threatened; that homes were licensed where foster parents were unable to care properly for the children; that "exceptions" were granted allowing clearly inadequate homes to remain open; that the system for providing medical care to foster children was inadequate to ensure continuous and informed treatment; and that the defendants had substantially failed to undertake the improvements recommended by an internal study produced by the "Harris Task Force." As preliminary injunctive relief, the defendants were ordered to (1) review the status of each foster home where there had been a report of maltreatment; (2) visit each child in a BCDSS foster home on a monthly basis; (3) visit each child who had been the subject of a report of maltreatment on a weekly basis; (4) assign sufficient staff and resources to ensure appropriate medical care was rendered and medical histories were obtained and provided to those rendering medical care to each child; and (5) provide a written copy of any complaint of maltreatment of a foster child to the juvenile court and the child's attorney. 
On February 1, 1988, the Fourth Circuit affirmed this court's decision to grant plaintiffs a preliminary injunction. See L.J. By and Through Darr v. Massinga, *511 838 F.2d 118 (4th Cir.1988).[4] Thereafter, the parties engaged in extensive settlement negotiations. On April 26, 1988, approximately two and a half months prior to trial, the parties submitted the proposed settlement of plaintiffs' equitable claims now before the court. The consent decree that embodies the settlement retains substantially those measures ordered by the court as preliminary injunctive relief. It also seeks to make substantial improvements in several aspects of the foster care system including placing limits on the number of cases a worker may be responsible for, improving the system for providing medical treatment to foster children, providing assistance to natural parents that would allow children to remain with them thereby avoiding foster care where possible, and providing for a continuum of appropriate foster care placements including the recruitment of new foster homes. Different improvements are to be implemented at different times; however, all improvements are to be made within two years. After preliminary study of the decree and meeting with the parties, the court determined that the decree was within the range of reasonableness and approved a "Notice of Proposed Settlement of Class Action" on May 19, 1988. II. Under Fed.R.Civ.P. 23(e), notice of settlement of a class action "shall be given to all members of the class in such manner as the court directs." The court directed that the approved notice of settlement, which contained a detailed summary of the proposed decree, be sent to all foster parents, all relatives with whom children had been placed by BCDSS, and all biological parents of children who had been placed in foster homes or with relatives on or before June 8, 1988. 
The Court also ordered that the notice be posted at any BCDSS office frequented by foster parents or by the natural parents of foster children. The full notice of settlement also was mailed to the heads of organizations known to represent foster children or known to have an interest in foster care issues.[5] In addition to the mailing and posting of the full notice, a court-approved abbreviated notice was published five times in four daily newspapers.[6] The notices informed interested parties that they could object to the decree at a hearing held on July 18, 1988. Those interested in testifying at the hearing were told to submit written statements to the court by July 8; however, at the hearing all were invited to testify regardless of whether that requirement had been met. At the hearing, a total of ten people testified. These included foster parents, natural parents, a spokesman for a union which represents some foster care workers, a former foster child, and the husband of a foster care worker. None of those who testified objected to the decree. The foster parents expressed concern about the system for providing medical services to foster children. The former foster child told the court that she had been abused and molested while she was in the defendants' care, and she asked the court to implement the decree as soon as possible. The union representative expressed concern with some provisions and omissions of the decree; however, he said that the union's foster care worker members generally supported the decree. Both the union representative and the foster care worker's husband urged the court to meet privately with the foster care *512 workers, who did not wish to express any criticisms publicly. So that the court could hear the views of the people who would implement the decree on a day-to-day basis, an off the record meeting with foster care workers was held on August 3, 1988, with counsel present. During that meeting, the workers expressed several concerns. 
In particular, the foster care workers stated that they often travel hundreds of miles per month and asked that transportation aides be employed by BCDSS to assist them. They also said that a pool of temporary foster care workers should be available to assist when a worker is ill or on vacation. The foster care workers also asked that they be assured a role in the implementation and monitoring of the decree. III. The court's approval of a proposed settlement is required in order to protect the interest of absent class members. Piambino v. Bailey, 610 F.2d 1306, 1327 (5th Cir.), cert. denied, 449 U.S. 1011, 101 S.Ct. 568, 66 L.Ed.2d 469 (1980); Grunin v. International House of Pancakes, 513 F.2d 114, 123 (8th Cir.), cert. denied, 423 U.S. 864, 96 S.Ct. 124, 46 L.Ed.2d 93 (1975). Accordingly, the Fourth Circuit has admonished that the district court is not "to give the settlement `mere boilerplate approval'" that is "`unsupported by evaluation of the facts or analysis of the law.'" Flinn v. FMC Corporation, 528 F.2d 1169, 1173 (4th Cir.1975), cert. denied, 424 U.S. 967, 96 S.Ct. 1462, 47 L.Ed.2d 734 (1976) (quoting, Protective Committee For Independent Stockholders of TMT Ferry, Inc. v. Anderson, 390 U.S. 414, 434, 88 S.Ct. 1157, 1168, 20 L.Ed.2d 1, reh. denied, 391 U.S. 909, 88 S.Ct. 1649, 20 L.Ed.2d 425 (1968)). The court "must independently and objectively analyze the evidence and circumstances before it in order to determine whether the settlement is in the best interest of those whose claims will be extinguished." 2 H. Newberg, Newberg on Class Actions, § 11.40 at 451 (2nd ed. 1985). Approval will be given only where a proposed settlement is determined to be "fair, reasonable and adequate." In re Mid-Atlantic Toyota Antitrust Litigation, 605 F.Supp. 440, 442 (D.Md.1984) (quoting, Manual on Complex Litigation, § 1.46 at 56-57 (5th ed. 1982)); Washington v. Keller, 479 F.Supp. 569, 572 (D.Md. 1979). 
In making that determination, this court has followed the bifurcated analysis set forth by Judge C. Stanley Blair in In re Montgomery County Real Estate Antitrust Litigation, 83 F.R.D. 305, 315-317 (D.Md.1979). See also, In re Mid-Atlantic Toyota Antitrust Litigation, supra, 605 F.Supp. at 442-43. "That analysis includes separate inquiries on the `fairness' and the `adequacy' of the proposed settlement." Id. at 443. Regarding fairness, Judge Blair stated: The factors tending to reveal the `fairness' of a settlement are those which indicate the presence or absence of collusion among the parties. Because of the danger of counsel's compromising a suit for an inadequate amount for the sake of insuring a fee, the court is obliged to ascertain that the settlement was reached as a result of good-faith bargaining at arm's length. The good faith of the parties is reflected in such factors as the posture of the case at the time settlement is proposed, the extent of discovery that has been conducted, the circumstances surrounding the negotiations and the experience of counsel. (citations omitted). In re Montgomery County Real Estate Antitrust Litigation, supra, 83 F.R.D. at 315. When inquiring into adequacy, "the court must weigh the likelihood of the plaintiffs' recovery on the merits against the amount offered in settlement." Id. at 315-316. Specifically, Judge Blair noted that: [C]ourts should weigh the amount tendered to the plaintiffs against such factors as (1) the relative strength of the plaintiffs' case on the merits; (2) the existence of any difficulties of proof or strong defenses the plaintiffs are likely to encounter if the case goes to trial; (3) the anticipated duration and expense of *513 additional litigation; (4) the solvency of the defendants and the likelihood of recovering on a litigated judgment; and (5) the degree of opposition to the settlement. (citations omitted). Id. at 316. In Flinn v. 
FMC Corporation, supra, the Fourth Circuit further noted that "[t]he fact that all discovery has been completed and the cause is ready for trial is important, since it ordinarily assures sufficient development of the facts to permit a reasonable judgment on the possible merits of the case." (footnote omitted). 528 F.2d at 1173. IV. This case represents perhaps the most hotly and thoroughly contested litigation the undersigned has experienced in twenty years as a judge. Exhaustive discovery efforts were undertaken by both sides. As described earlier, the court has entered over seventy orders in this case and there are over two hundred entries on the docket. The court concludes that the settlement reached in this action was the result of good faith bargaining at arm's length. Serious settlement negotiations commenced only after plaintiffs had completed a substantial random sampling of defendants' case files as the major item of their discovery; after the court had granted a preliminary injunction following an evidentiary hearing that lasted twelve days; and after the Fourth Circuit had affirmed the injunction. See, L.J. by and through Darr v. Massinga, supra, 838 F.2d at 122. Discovery as to plaintiffs' equitable claims is now complete. Settlement negotiations took place over a period of six weeks and included several half-day and full-day sessions. During these negotiations and throughout this litigation, plaintiffs have been represented by a dedicated, highly skilled, and very experienced team of attorneys. Two members of this team, including William L. Grimm, Esquire, who served as lead counsel, came from the Baltimore office of the Legal Aid Bureau, which represents the great majority of Baltimore's foster children in the juvenile courts. Carol R. Golubock, Esquire, of the Children's Defense Fund, has extensive experience in class litigation concerning child welfare law in federal courts. In addition, Nevett Steele, Jr., Esquire, and Ward B. 
Coe, III, Esquire, partners in the firm of Whiteford, Taylor and Preston, participated in representing the plaintiffs. Both have excellent qualifications and extensive experience in federal litigation, including class action litigation, before this court. Finally, the question of attorneys' fees was addressed separately from the negotiations concerning the terms of the decree. The discussion of fees was undertaken by a different group of lawyers and concluded well after the submission of the proposed consent decree. Under these circumstances, the court concludes that the settlement was reached in an appropriate manner and is the product of arm's-length bargaining. V. Had this action gone to trial, it is very likely that the plaintiffs would have succeeded. For the reasons stated in its Memorandum and Order of July 27, 1987, this court already has determined that success by the plaintiffs would be the likely outcome of a trial on the merits. No viable defenses to plaintiffs' claims for equitable relief are apparent. In deciding whether the proposed consent decree is adequate, the court must weigh this likelihood of plaintiffs' success on the merits against the quality of the relief afforded by the decree. In re Montgomery County Real Estate Antitrust Litigation, supra, 83 F.R.D. at 316. Any settlement of this action must afford the plaintiffs relief that is at least comparable to what they could have received following trial on the merits. The court's ability to make an independent assessment of the adequacy of the settlement in this case rests on substantial knowledge of the problems facing Baltimore's foster care system. This knowledge was acquired through study of the pleadings, meetings with the parties, conducting the settlement hearing, meeting with case workers, and, primarily, through *514 the twelve-day-long preliminary injunction hearing at which hundreds of pages of documents were entered into evidence. 
Evidence presented during the preliminary injunction hearing revealed that many current foster homes are inadequate, and that there is a severe shortage of foster parents. As a result of the shortage of foster homes, defendants have been willing to grant exceptions allowing homes that should have been closed to remain open; have allowed some people to become or remain foster parents who should not have been; and have appeared reluctant to remove children from homes even when there should have been concern for their safety. Accordingly, had judgment on the merits been rendered and the court been charged with fashioning appropriate relief, it would have insisted that a diligent effort be made to recruit new homes. Specific numbers of new foster homes might have been ordered opened by specific dates. Paragraph 11 of the proposed consent decree addresses recruitment of new foster homes. It does not state specifically what efforts will be made nor estimate how many homes will be opened. Paragraph 11 provides:

Defendants shall maintain a foster home recruitment unit in Baltimore City Department of Social Services. The unit shall develop and implement a sustained recruitment plan, and shall issue periodic reports on the status of its recruitment efforts.

The court's concerns regarding paragraph 11 were heightened by published news accounts of the decree in which defendant Ruth Massinga, Secretary of the Department of Human Resources, was quoted as suggesting that, under the terms of the decree, children could be left in the homes of their natural parents if space in the foster care system was not available. Naturally, the criteria for deciding when a child is to be removed from the home should focus on the well-being of the child. If the safety of the child requires that a child be removed from the natural parents, space must be available in foster care. 
Any settlement that provides otherwise is simply inadequate to protect these children and unworthy of the court's approval. Read in its entirety the proposed decree does appear to provide that foster care placements will be made available for all children who need them. Indeed, paragraph 9 provides in part, that "defendants shall establish and maintain a continuum of foster care placements reasonably calculated to ensure that there are appropriate foster care placements for all children who come into foster care." During two conferences and in a lengthy letter, the court sought clarification and interpretation of the decree from the parties as to these issues. Defendants responded in their memorandum in support of the decree submitted on July 11, 1988 and at the settlement hearing held on July 18, 1988. In their memorandum and at the hearing, defendants stressed that the decree represents a balance between efforts toward family preservation (aimed at keeping children with their natural parents where possible) and efforts to provide additional foster care placements. Specifically, defendants' memorandum declares that: ... [F]ederal law mandates equivalent efforts in family preservation and foster care initiatives and defendants believe that these programs complement each other. Thus, the decree contains provisions with respect to each of these complementary programs. Foster home recruitment and services will be enhanced significantly under the decree. Specifically, recruitment efforts have been and continue to be extensive. Similarly, significant funding has been obtained to provide intensive family services to prevent children from coming into foster care. Reunification services are also recognized under the decree. 
In sum, the Consent Decree adequately addresses the need to provide for foster care placements along a continuum of appropriate placements, including the recruitment of regular foster homes, and simultaneously addresses the need to keep, where appropriate, children from entering the foster care system.... *515 Defendants' Memorandum in Support of defendants' Motion for Approval of the Consent Decree, pages 9 and 10 (citations omitted). See also transcripts of settlement hearing held July 18, 1988, page 88. Defendants' memorandum relies on the accompanying affidavits of Philip C. Holmes, Director of the Office of Child Welfare Services of the Social Services Administration of the Maryland Department of Human Resources, and Regina M. Bernard, Director of the Office of Family and Child Development of the Social Services Administration of the Maryland Department of Human Resources. Mr. Holmes avers that the Department of Human Resources "will continue to intensify efforts to recruit foster parents," and that the Department of Human Resources and the Baltimore City Department of Social Services "both have aggressive campaigns to solicit applications from new families." Efforts to recruit new foster homes include increases in the board rates paid to foster parents and an aggressive public relations campaign.[7] With these clarifications in mind, it is the opinion of the court that, if properly implemented, the consent decree will result in substantial and needed improvements in Baltimore's foster care system, and is adequate to protect the interests of these plaintiffs.[8] Indeed, the decree appears to represent an innovative approach aimed at keeping children with their parents, where possible, coupled with efforts to provide additional and varied placement where placement in foster care is required. 
Under the "Intensive Family Services" program provided for by the decree a social worker is made available during a period of ninety days for as many hours as necessary to alleviate a family crisis threatening removal of a child from the parents' home. See Affidavit of Regina M. Bernard; Consent Decree, par. 15. During this period, a variety of services are made available to the parents that will help them to better care for the child. Consent Decree, par. 15. A similar program is also to be initiated to facilitate quick reunification in some cases where the child is removed *516 from the home. Id. par. 17. Among the variety of placements that will be provided, in addition to the recruitment of new regular foster homes, are emergency shelter care placements and specialized foster care placements for children with specialized needs. Id. par. 9. In its order of July 27, 1987 granting preliminary injunctive relief, the court included various remedial measures intended to provide increased protection to foster children until a full hearing on the merits could be held. The court's confidence in the settlement is strengthened by the inclusion of several of these measures as part of the decree. These include requirements that each foster home be visited once a month; that, if an abuse or neglect complaint is received regarding a home, visits be made once a week, id., par. 22-24; copies of the abuse or neglect report be provided to the child's attorney, id., par. 30; and that some major improvements be made in the system for providing health care to foster children. Id., par. 21A-F. Most important in assessing the adequacy of the settlement proposed in this action is the great degree to which the decree provides plaintiffs with substantially all the equitable relief they requested from the court in their complaint. 
The relief provided under the terms of the decree is comprehensive in scope and includes provisions that strengthen requirements for education of foster children, id., par. 19; require certain information about foster children to be provided to their foster parents, id., par. 14; increase foster care stipends, id., par. 10; and provide for training of foster parents and foster care workers, id., par. 6, 7, and 13. Importantly, the decree requires substantial decreases in the work load of foster care workers by providing low maximum case loads for workers, id., par. 5. The preliminary injunction hearing revealed serious deficiencies in the system for providing health care to foster children. Specifically, the court found that incomplete medical histories were provided to medical care professionals and that treatment rendered to foster children was episodic rather than continuous. Accordingly, as preliminary injunctive relief, the court required defendants to assign sufficient staff and resources to ensure that proper medical histories are obtained and that appropriate medical care is provided foster children. The decree amplifies and expands on the court's preliminary injunctive relief, id., par. 21. It requires that an initial health care screening take place within twenty-four hours of the child's placement in foster care; that a comprehensive health assessment be completed within sixty days of placement; and that complete medical histories containing specific information be obtained and provided to physicians. Defendants are responsible for ensuring that treatment for any diagnosed problems is promptly provided. Foster children placed in the homes of relatives are not expressly mentioned in the plaintiffs' prayers for relief. At the time plaintiffs' amended complaint was filed with the court, plaintiffs' counsel were unaware that this group of foster children was treated far differently from other foster children. 
Indeed, at the time of the preliminary injunction hearing, it appeared that foster children placed with relatives were neither considered nor included as part of the foster care system. Children placed with relatives were not counted in the foster care system inventory and their caretakers did not receive foster care benefits. According to defendants, approximately 1,100 children are placed with their relatives.[9] At this time, most of the provisions of the decree will not be applied to them; instead a study by an impartial consultant will be undertaken in which the status of each child placed with a relative will be assessed. Id., par. 27A. The plaintiffs may request additional relief for these children when the impartial assessment is completed. Id., par. 28. The decree, however, does provide for the immediate implementation of certain basic protections for children placed with relatives, including the development of case plans; six and eighteen-month *517 reviews by persons outside the BCDSS; and bimonthly home visits to ensure compliance with health and safety standards. Id., par. 25-28. In addition, relatives providing care to foster children will be encouraged to apply for licensure as foster parents. Id., par. 25D. Care being provided to these children also will be evaluated by means of contacts with their teachers and medical care providers. Id., par. 27B. Although the court required a thorough notice to the class, there were no outright objections lodged against the decree. Both the Foster Care Review Board and foster care workers, however, did express some reservations. In a letter to the court dated July 14, 1988, the chairperson of the State Foster Care Review Board[10] expressed particular concern about the lack of foster homes. Joan L. Graham wrote that the lack of homes "keeps the placement of children at a crisis level, results in inappropriate, short term placements and multiple placements for some children." 
As noted earlier, the court shares the Board's concern that more be done to recruit adequate foster homes. In deciding that the decree adequately protects this class of children, the court relies on defendants' assurances and interpretation of the decree as requiring vigorous efforts to recruit new homes.[11] At the settlement hearing addressing the adequacy of the consent decree and later during a conference with the court, foster care workers expressed strong reservations about whether the terms of the decree could be implemented from a practical standpoint. They emphasized that they will be charged with the responsibility of implementing the decree on a day-to-day basis, and, without the benefit of additional resources, they doubted they could carry out the decree's terms. Specifically, the foster care workers noted that they are often required to travel hundreds of miles in a month in order to meet their obligations. They also must take on additional obligations when a co-worker is sick or on vacation. It is the apparent consensus among foster care workers that they will be unable to make the additional visits to foster homes and foster children required by the decree without additional resources. Accordingly, the workers asked the court to amend the decree to require their superiors to provide transportation aides and a pool of temporary substitute workers. The transportation aides could assist workers in meeting requirements to visit foster homes and also assist in transporting children and foster parents to medical and other appointments. A pool of temporary or substitute case workers could take over cases assigned to workers who are unable to be at work without the necessity of over-burdening other regular workers. 
Noting that the suggestions of the foster care workers had substantial practical merit, the court wrote the parties and asked if transportation aides and a pool of temporary workers might be agreed upon as a means of properly implementing the decree. In response, both parties informed the court that these measures had been a subject of the negotiations that produced the decree. It was determined in those negotiations, however, that the specific measures adopted in order to achieve the requirements of the decree were to be left to the judgment of the defendants, at least at this early stage. Furthermore, during the meeting with foster care workers, counsel for the plaintiffs emphasized that, if the requirements *518 of the decree could not be met by case workers, the defendants would be required to reduce the case load ratios of children to foster care workers below the maximum ratios allowed by the decree. In this way, the work load would be lessened to allow the foster care workers to better meet their obligations under the decree. The foster care workers also asked that they be allowed to participate in the implementation and monitoring of the decree. Specifically, they requested to receive the reports required every six months from the BCDSS and the Department of Human Resources that set forth the steps taken to achieve compliance with the decree, and requested that they be given an opportunity to be heard. During its August 3rd meeting with foster care workers, the court was impressed by their commitment to foster children and their strong desire that the foster care system be improved. Moreover, since these workers will implement the decree on a day-to-day basis, their views may be worth hearing in the future. Accordingly, as part of its enforcement powers, the court will order that defendants deliver the six month reports to the foster care workers. 
Should they wish to be heard after receiving a report, the court would seriously consider such a request at that time. VI For the reasons stated above, the court finds that the consent decree submitted by the parties on April 26, 1988 is fair, reasonable, adequate and deserving of approval. The court closes with a personal note and word of caution. I have now been a judge for twenty years. During this time much human tragedy has passed before me; however, none has so deeply touched me as the plight of these children. I believe that vigorous enforcement of this decree is essential, and I will do all within my power to see that its provisions are fully implemented. ADDENDUM A: CONSENT DECREE CONSENT DECREE This Decree is made and entered into by and between all of the named plaintiffs, L.J., O.S., M.S., C.S., P.G., R.K., and S.J., and the certified class of persons whom plaintiffs represent as set forth in the January 16, 1987 Order of this Court (hereafter described in Attachment A and collectively referred to as "plaintiffs") and all defendants. WHEREAS, on or about December 5, 1984, plaintiffs commenced an action in the United States District Court for the District of Maryland (hereinafter "the Court" or "this Court") and thereafter filed a first amended complaint, and plaintiffs R.K. and S.J. 
filed a motion to intervene, which was granted herein on or about January 21, 1987; WHEREAS, plaintiffs' complaint, amended complaint, and complaint in intervention make certain allegations and seek certain relief with respect to the foster family care program administered by the State of Maryland, particularly as that program is administered by the Baltimore City Department of Social Services; WHEREAS, defendants deny all of the allegations of the complaint, amended complaint, and complaint in intervention, particularly all legal contentions that any defendant has ever violated any State or federal law in the conduct of the family foster care program; WHEREAS, plaintiffs allege that children who are committed by the juvenile court to the defendants' care and custody and who are placed with their relatives are entitled to the same protections as children placed with non-relatives, and defendants dispute that the same protections apply to these children; WHEREAS, defendants have taken and continue to take substantial positive actions to improve the quality of care and services provided to foster care children; and WHEREAS, in an effort to avoid further litigation, plaintiffs and defendants believe that settlement of this matter and entry of this Consent Decree is in the public interest, without any admission of liability by any defendant for any purpose, to settle *519 and resolve all claims for declaratory relief and equitable relief, including injunctive relief, raised in the complaint, amended complaint, and complaint in intervention, and all matters addressed in this Decree. NOW, THEREFORE, it is hereby ORDERED, ADJUDGED, and DECREED as follows: JURISDICTION 1. This Court has jurisdiction of the subject matter of this Consent Decree. In the event of subsequent litigation relating to the matters in this litigation other than in an action to enforce this Decree, defendants retain and have the right to contest jurisdiction, venue, and/or assert any other defenses. 
PARTIES 2. The provisions of this Consent Decree shall apply to and be binding upon the parties to this civil action, and upon their employees, heirs, successors-in-interest, and assigns. 3. The undersigned representatives of the plaintiffs and defendants certify that they are fully authorized subject to the Federal Rules of Civil Procedure to enter into and to execute the terms and conditions of this Consent Decree and to legally bind the parties, including all members of the certified plaintiff class. 4. The parties agree that the defendants' obligation to give notice of this Consent Decree to the plaintiff class is restricted to giving notice to their undersigned counsel by their signing and receipt of this Decree, receipt of which is hereby acknowledged. In addition, defendants will send out notice of this Consent Decree to all foster parents, to all relatives with whom DSS has placed children, to all parents known to defendants as having children in foster care or placed with relatives and to the organizations listed in Attachment B. ASSIGNMENT OF CASEWORKERS AND CASES 5. 
Within two years of the date of the entry of this Decree: (a) continuing care caseworkers in the Baltimore City Department of Social Services (hereinafter "DSS") who are responsible for children in foster care, other than those aftercare workers responsible for children for whom a rescission order has been requested, shall have average caseloads of no more than 20 children and their biological families; (b) intake caseworkers in DSS who are responsible for a caseload of children in foster care shall have average caseloads of no more than 14 children and their biological families; (c) DSS caseworkers who are responsible for the supervision of foster family homes shall have an average caseload of no more than 40 foster families; (d) immediate supervisors of DSS foster family care workers shall have an average of no more than six caseworkers under their supervision; and (e) the standard with respect to the transfer of cases when a worker leaves DSS or transfers to another unit shall be as follows: When a worker leaves or transfers to another unit, the supervisor shall reassign cases, except for priority cases, to other workers within five working days. The supervisor may, based on the needs of the unit, retain a priority case or reassign it. Priority cases will include those in which a child requires a new placement; a child has medical needs or imminent appointments; a child has impending juvenile court or administrative review; or a child is the subject of a report of maltreatment. There shall be a conference between the supervisor and the new worker within 10 working days of reassignment. If possible, the former worker shall attend the conference. The topics to be discussed at this conference shall include, among other things, a discussion of any immediate unmet needs of the child, therapy and evaluations in progress, and existing service agreements. CREDENTIALS AND TRAINING OF CASEWORKERS 6. 
Defendants shall continue their current policy that no DSS caseworker without *520 at least a B.S. or a B.A. degree shall have responsibility for supervising the continuing care of children in foster family homes. 7. A. Within two years of the date of entry of this Decree, all caseworkers shall receive at least four days of orientation and training relating to the substantive aspects of the caseworker's responsibilities within 60 days of beginning employment as a DSS caseworker. Such training will take into account the level of prior child welfare experience and the need for additional training for those with limited or no prior training. Such training will include casework skills; interviewing; developing service agreements and case plans; working with families; and the structure and law governing child welfare. B. Within two years of the date of entry of this Decree, all caseworkers shall receive annually 20 hours of training relating to the substantive aspects of the caseworker's responsibilities. This training shall begin for each caseworker during his or her second year of employment. SPECIALIZED SUPPORT UNIT 8. Within six months of the entry of this Decree, defendants shall establish within DSS a specialized unit to assist caseworkers and supervisors to manage effectively cases that require specialized experience and/or knowledge in areas such as assisting children or parents who need services for drug and alcohol abuse; special educational needs; developmental disabilities; mental health or other specialized health care needs; or the development of independent living skills. This unit shall assist workers in identifying, locating and obtaining resources or services for drug and alcohol abuse; special educational needs; developmental disabilities; mental health or other specialized health care needs; or the development of independent living skills. The responsibilities of this unit do not include direct case responsibility or the providing of direct services. 
FOSTER PLACEMENT RESOURCES 9. Within two years of the entry of this Decree and to the extent within their control, defendants shall establish and maintain a continuum of foster care placements reasonably calculated to ensure that there are appropriate foster care placements for all children who come into care. The continuum shall include regular foster homes, specialized homes, emergency shelter homes, emergency shelters, group homes and therapeutic foster homes as defined in COMAR. (Therapeutic foster homes are homes in which foster parents receive a salary and other services in addition to the foster care board rate.) In addition, defendants shall seek annually sufficient funds through their budget requests or elsewhere (i) to purchase special services for children in foster care needed to prevent their institutionalization, and (ii) to assure stipends to emergency shelter care homes even in months in which no children are being cared for. 10. Defendants shall continue their past practice of seeking through the budget process increases in the rate of reimbursement paid to foster families by including such increases in their budget requests and advocating for their appropriation with the goal of reaching by State Fiscal Year 1991 a rate of no less than the amount determined by the United States Department of Agriculture as necessary to care adequately for children in urban areas of the southern region of the country. 11. Defendants shall maintain a foster home recruitment unit in DSS. The unit shall develop and implement a sustained recruitment plan, and shall issue periodic reports on the status of its recruitment efforts. 12. Within one year of entry of this Decree, defendants shall require as a condition of licensure that all new foster parents complete a course of pre-service training of at least 12 hours. 
The training shall cover an appropriate curriculum, including applicable DSS regulations; the role of the foster parents and the child's caseworkers; the special needs of foster children; the need to work with natural parents; appropriate disciplining methods and alternatives to corporal punishment; the importance of utilizing medical, dental, educational, and *521 other community services; and the legal rights of foster parents, children and natural parents. 13. Defendants shall require foster parents to participate in at least six hours of foster parent training a year. One year after entry of this Decree, no foster parent's license may be renewed unless one of the foster parents in the home has received the required training. Defendants shall seek through the budget process and advocate for their appropriation funds to pay foster parents a reasonable sum in consideration of their attendance at required training including reasonable transportation and child care expenses. INFORMATION ON FOSTER CHILDREN 14. Before a child is placed in a foster home, DSS shall provide the foster parents necessary information about the child including the reason for the child's coming into care initially and, if applicable, the reason for the current placement; medical, psychological or behavioral problems that the child may have of which the agency has knowledge and any on-going treatment the child is receiving for any such problems of which the agency has knowledge. In addition, DSS shall make reasonable efforts to provide foster parents with the child's recent grade and attendance record in school. If an emergency placement is necessary, defendants shall provide the information to the foster parent within ten working days of placement. PERMANENCY AND INTENSIVE FAMILY SERVICES 15. A. 
Except in emergency situations where a child faces a substantial risk of harm and where services cannot prevent the removal of the child, reasonable efforts will be made by the appropriate DSS personnel prior to placement of a child in foster care to prevent or eliminate the need for removal of the child from his or her home. Such reasonable efforts to prevent or eliminate the need for placement or to reunify a child who has been placed shall include, where appropriate in the worker's professional judgment, the provision or securing of family counseling services, drug and alcohol abuse services, day care, parenting education services and assistance provided under the federal Emergency Assistance to Families with Children program to the extent allowed by law. Services and assistance shall be provided in a duration and intensity reasonably assured of meeting their goal. B. Defendants shall seek through the budget process and advocate for their appropriation sufficient funds to provide a program of intensive family services, the goal of which shall be to reduce the number of children who need to be removed from their biological homes. C. A case plan for each child in foster care shall set forth the services and assistance that have been provided to prevent or eliminate the need for removal from the home and the reasons those efforts did not succeed. 16. In all cases in which the goal is to return a foster child to his or her biological home, defendants shall make reasonable efforts to facilitate weekly visits between the parent and child, unless the juvenile court orders otherwise, or DSS finds that such visits are not in the child's best interest. Before permanent reunification, overnight and weekend visits should be provided if appropriate. 17. A. 
In each case in which the case plan is the child's return home, DSS shall enter into a service agreement with the biological parent of the child within 60 days of the child's placement unless the parent is unavailable or unwilling to agree. The agreement shall set forth the current barriers to the child's return home, the steps the parent must take in order to have the child returned to him or her, the timelines for completion of these steps, the services, if any, the caseworker and DSS will provide to the parent (for example, referral to alcohol abuse counseling) and the timelines within which any services will be provided. B. Defendants shall continue to follow the guidelines for workers on when a permanency plan shall be changed from return home. Such guidelines require that the *522 case plan goal be changed promptly when the parent fails continuously to fulfill terms agreed to in the service agreement and/or when the parent has not maintained regular visitation or other contact with the child. 18. A petition for termination of parental rights shall be filed on behalf of each child for whom the goal is adoption within 120 days of the DSS establishing such a goal. EDUCATION 19. A. Within five working days of being placed in nonemergent foster care, a child of school age shall be attending school (if school is in session), unless school attendance within five working days is unattainable for reasons outside the control of DSS. In such cases, DSS will make all reasonable efforts to obtain school attendance as soon as practicable. B. If a child's caseworker has reason to believe that a foster child may be educationally handicapped and is not receiving special educational services, the worker shall promptly notify the local educational agency and request a screening for that child in writing. 
The child's caseworker shall be responsible for: (1) providing, when requested, all evaluations of the child contained in DSS files; (2) attending meetings on behalf of the foster child relating to identification, evaluation and placement of the child in a special educational program, where possible; (3) providing the address of the biological parents to the local education agency if contained in DSS files; and (4) facilitating appointments for evaluation of the child relating to the special educational decision-making process. C. Within two years of the entry of the Decree, all caseworkers shall receive training with respect to the special education screening, evaluation, assessment and individualized education plan process. Thereafter, the worker shall notify the child's attorney if these services are not provided in a timely fashion. D. If DSS holds guardianship with the right to consent to adoption or long-term care short of adoption of a child and that child is educationally handicapped or is suspected of being educationally handicapped, the child's caseworker shall provide the local education agency with appropriate documentation of the child's legal status so that the school can apply for the appointment of a parent surrogate. EXPLANATION OF RIGHTS 20. Within six months of entry of this Decree, defendants shall prepare a handbook describing the rights and responsibilities of foster children, biological parents and foster parents. Defendants shall provide a draft of the handbook to plaintiffs' counsel. Defendants shall consider, but need not adopt, any suggestions plaintiffs' counsel report to defendants within 30 days of receipt of the draft handbook. Thereafter, the defendants shall cause the handbook to be reproduced and distributed to all current foster children, where age appropriate, their biological parents and all current foster parents. 
The handbook shall be provided to all new foster children, where age appropriate, their biological parents, and all new foster parents. HEALTH CARE 21. A. Defendants shall develop and maintain a medical care system reasonably calculated to provide comprehensive health care services to foster care children in a continual and coordinated manner in accordance with their needs. B. All foster children shall have an initial health care screening if possible before placement in an out-of-home care setting, but in any event, no later than 24 hours following placement. C. All foster children shall be referred for a comprehensive health assessment within 30 days of entering placement. The assessment shall be completed within 60 days of entering placement. This assessment shall address the child's medical, emotional and developmental needs. The results of this assessment will be made available to the child's health care provider(s). *523 The provider(s) selected by DSS to provide health care for the child shall be reasonably calculated to meet the child's specific needs identified by the assessment. D. All foster children shall have periodic medical, dental and developmental examinations in accordance with the schedules or protocols of the EPSDT. If needs are identified at the periodic examinations that were not identified previously, the provider(s) selected by DSS shall be reasonably calculated to meet these additional needs. E. For each child in foster care the defendants shall develop and use an abbreviated health care record (e.g., medical passport), which shall accompany the child through the out-of-home care system and upon his or her return home, adoption or emancipation. 
An abbreviated health care record shall require the following information: the medical facilities where the child usually receives care, the child's condition at placement as documented by his or her physician, and the child's immunization record, allergies/adverse reactions, chronic health problems and present medications. The foster parents of the child shall be provided with the health passport completed to the extent possible at the time of a child's replacement or, if an initial placement, within 5 days of placement. Copies of the forms contained in the passport shall be included in the child's case record and shall be reviewed by a supervisor at least every 6 months. F. Within two years of entry of this Decree, defendants shall establish and maintain a health services management unit within DSS. This unit shall be staffed by one or more health professionals who are trained and experienced in child health care. CASEWORKER VISITS WITH FOSTER CHILDREN 22. Each child in a foster family home shall be visited by his or her assigned caseworker or the caseworker's substitute at least once every month. The purpose of the visit is to assess the quality of care being provided to the child and the child's adjustment to the foster home, foster parents, other persons present in the home, and school. The interview shall be of sufficient duration and privacy to evaluate the child's adjustment to placement in the foster home. The caseworker shall indicate the date and summarize the results of each visit in the child's case record. Where indicated, the caseworker, based on his or her professional judgment, shall visit or contact the child more frequently. During the first three months a child is placed or replaced, the caseworker shall visit or contact the child more frequently when in his or her professional judgment such is appropriate. 23. 
If an abuse or neglect complaint is filed pertaining to a foster family home, the assigned caseworker(s) shall visit the home at least once a week until the complaint is ruled out.

24. If an abuse or neglect complaint is not ruled out, the caseworkers shall visit the home at least once a week until the children are removed from the home or until the juvenile court orders otherwise or the child's attorney and DSS agree otherwise.

PLACEMENT WITH RELATIVES

25. A. A child committed by the juvenile court to DSS may be placed with his or her relative(s).

B. Such a child shall be provided a case plan and 6-month administrative and 18-month juvenile court reviews of his or her placement. DSS shall request that the Foster Care Review Board conduct the 6-month administrative reviews.

C. Within six months of the date of entry of this Decree, each child placed with a relative shall be visited by a caseworker no less frequently than once every two months.

D. A relative with whom a child committed to DSS has been placed may apply for a license as a foster family home. DSS shall inform the relative of the benefits of and requirements for licensure.

26. A. Within one year of the date of the entry of this Decree, DSS shall complete an inventory of each relative placement to determine whether each home *524 meets basic health and sanitary standards such as the existence of adequate heat, light, water, cooking and refrigeration facilities, toilet facilities and smoke detectors, and the absence of exposed wiring, rodent or insect infestation, broken windows, doors or steps, and holes in walls or ceilings. If the DSS employee or agent conducting the inventory observes evidence of any threat to the child's health or safety, the DSS employee or agent, if other than the child's worker, shall report that evidence to the child's worker. The results of the inventory shall be made available to plaintiffs' attorneys upon the issuance of a protective order.

B.
In addition, defendants will seek the necessary statutory authority to conduct criminal background investigations for relative caretakers and others known to be in the household. After such approval is obtained, DSS shall conduct such investigations for existing and prospective caretakers and others known to be in the household.

C. Within six months of the entry of this Decree, DSS shall determine if a home meets basic health and sanitary standards within 30 days of placement.

27. A. Within one year of the entry of this Decree, an assessment shall be made of the health and educational status of each child placed with a relative. The assessment shall be completed by an impartial consultant selected through the State procurement process. The selection of the consultant shall be made by an evaluation committee or review panel. One member of the committee or panel shall be mutually acceptable to the parties.

B. The consultant shall oversee the gathering of data for the assessment. The assessment shall include contacts with the child's education provider and medical provider. The consultant shall determine generally the child's educational and medical status and the existence, if any, of unmet needs of the child. The child's caseworker shall make reasonable efforts to facilitate the child's obtaining educational and medical services sufficient to address the identified unmet needs. A report of the assessment results in regard to each child shall be made available on a quarterly basis to plaintiffs' attorneys upon the issuance of a protective order.

28. Within 30 days of receipt of the final consultant's report, plaintiffs may file objections pursuant to ¶ 35 of this Decree, including a statement of why children placed with relatives are entitled to additional protections.

REPORTS OF ABUSE AND NEGLECT

29.
Whenever a DSS employee has reason to suspect that the abuse or neglect of a child in foster care or a child placed with a relative has occurred, the DSS employee shall notify the protective service unit of DSS. Children who are the subject of an abuse report shall be visited within 24 hours of the receipt of a complaint by either a protective services worker or staff of the police department. Children who are the subject of a neglect report shall be visited within five days.

30. Whenever there is a report of abuse or neglect of a child in a foster family home or a child placed with a relative, DSS shall notify the attorney for the child in a foster family home and, within six months of the entry of this Decree, the attorney for the child placed with a relative, if it knows of any, the child's biological parents unless psychologically contraindicated or their whereabouts or identity is unknown, and such other persons as are required to be notified by State law. Notification to the child's attorney and/or biological parents shall be within five working days of receipt of a report. A copy of the report shall be provided to the child's attorney. The completed disposition of the complaint shall be submitted to the child's attorney within five working days of its completion.

SCOPE AND APPLICATION OF DECREE

31. This Decree shall apply only to those children certified as members of the plaintiff class. This Decree creates no rights in favor of any other person and creates no obligations or duties on the part of defendants with respect to any programs *525 other than the DSS foster family care program and the DSS services to extended families with children program. A violation of this Consent Decree shall not create a new, independent private cause of action for damages for anyone. Nothing set forth in this paragraph shall bar the Court's contempt power for violation of the Decree.

REPORTING, MONITORING AND ENFORCEMENT

32.
If the Court ever finds that any defendant, or any successor of any defendant, has failed to satisfy his, her or its obligation under this Decree, the Court shall not order any extraordinary relief (including the imposition of a fine or imprisonment) against or respecting that defendant or against any defendant (either to punish a defendant for alleged non-compliance or to stimulate future compliance) unless the Court first finds by a preponderance of the evidence that the defendant(s) failed to meet his, her, their or its obligations due to some fault or lack of good faith on the part of the defendant(s).

33. Beginning six months following the entry of this Decree and at six-month intervals thereafter, defendants shall file with the Court a report setting forth the steps they have taken to achieve compliance with this Decree. A copy of the report shall be served on plaintiffs' attorneys of record. The report shall include the following data from a six month period ending no earlier than two months before the date of the report:

a. the number of DSS foster care, continuing care and intake caseworkers; the number of immediate supervisors of such caseworkers; and the number of average cases for continuing care workers and for intake workers;

b. the number of DSS foster home caseworkers, the number of immediate supervisors of such caseworkers; and the number of average cases;

c. the number of restricted and general foster homes approved;

d. the number of children's and home caseworkers who have been hired;

e. schedule of the rates of reimbursement available to foster parents;

f. the number of emergency foster homes and the number of children who can be served by each home;

g. effective July 1, 1988, the number of current foster parents who have completed the requisite pre-service and/or continuing training;

h.
the number of foster children receiving aftercare services who are placed with a relative, the number of foster children who are placed with a relative in a restricted foster home, and the number of children who are committed by the juvenile court to DSS and who are placed in a relative home, which home is not a licensed foster care home;

i. the number of complaints of abuse and/or neglect of children in foster homes received and the disposition of such complaints;

j. commencing with the second semiannual report, the number of complaints of abuse and/or neglect of children placed with relatives received and the disposition of such complaints;

k. the number of children entering foster care and the date of his or her first medical assessment in regard to each such child;

l. the number of children for whom a goal of return home has been established; the number for whom a plan of adoption has been established; the number for whom a petition to terminate parental rights has been filed; and the number for whom such petitions have been granted;

m. a report on expenditures for support services and reunification funds as of the most recent end of fiscal year or mid-fiscal year;

n. the number of foster homes reassessed;

o. a summary of the quality assurance forms used by DSS as described in a letter dated April 5, 1988 from Mark J. Davis to *526 William L. Grimm attached hereto as Attachment C; and

p. the number of workers who have attended training and the nature of the training provided.

34. A. Any time after the expiration of two years following the entry of this Decree, defendants may file a final report showing implementation of and compliance with this Decree.

B. Until the defendants file their final report, defendants shall file a semiannual report in the format set forth in paragraph 33. Defendants' obligation to report to the Court shall conclude once the final report has been filed with the Court.

35.
Plaintiffs may file any objections to defendants' reports within 30 days of the filing of the report, after which the Court may decide to hold a hearing on the matter, assuming strict compliance with the terms of ¶ 36, infra.

RESOLUTION OF DISPUTES

36. A. Before any party may bring any matter before the Court with respect to any problem arising under this Decree, including any alleged non-compliance, the parties must confer and attempt to resolve the problem. If plaintiffs' attorneys present a dispute arising under this Decree involving an individual class member, plaintiffs' attorneys may inspect the file of that child, the child's parents, and the child's foster parent(s) upon obtaining a protective order. The parties agree to cooperate in obtaining the necessary protective order. Nothing set forth in this paragraph shall limit the rights of discovery of an attorney appointed for a child by the juvenile court in that proceeding.

B. The Court shall not entertain any alleged dispute in which the movant does not certify that good faith efforts have been made to attempt to resolve the dispute. This certificate shall include the date, place, time and participants in any conference to resolve the matter.

CLAIMS OF INDIVIDUAL PLAINTIFFS

37. The claims of plaintiff R.R. are hereby dismissed with full prejudice.

38. With respect to the individual damage claims of the other individual plaintiffs, with the exception of plaintiffs-intervenors R.K. and S.J. for whom no individual damage claims have been made, this Decree does not resolve these individual damage claims.

ATTORNEYS' FEES AND COSTS

39. The parties agree to continue to negotiate in good faith the settlement of plaintiffs' claims for attorneys' fees and costs until July 30, 1988. If settlement is not reached by that date, the plaintiffs may file a petition for an award of attorneys' fees and costs with the Court for its consideration or for referral to a magistrate.
Plaintiffs agree not to file any such petition during the negotiations up to and including July 30, 1988.

CONTINUING JURISDICTION

40. The parties agree that the Court shall retain jurisdiction over this case until the terms of this Consent Decree are fully implemented for the purposes of (i) assuring implementation and (ii) allowing any party to apply at any time for an order seeking interpretation, implementation, enforcement, or modification of this Decree.

THE PLAINTIFFS, BY THEIR COUNSEL, AND THE DEFENDANTS BY SECRETARY MASSINGA AND THEIR COUNSEL ENTER INTO THIS CONSENT DECREE AND SUBMIT IT TO THE COURT THAT IT MAY BE APPROVED AND ENTERED AS AN ORDER OF COURT. *527

For the plaintiffs:

(s) William L. Grimm
Legal Aid Bureau, Inc.
Candler Building
714 East Pratt Street
Baltimore, Maryland 21202
(301) 539-5340

(s) Carol R. Golubock
Children's Defense Fund
122 C Street, N.W.
Washington, D.C. 20001
(202) 628-8787

For the Defendants:

(s) Ruth Massinga
Secretary, Department of Human Resources

J. Joseph Curran, Jr.
Attorney General of Maryland

(s) Catherine M. Shultz
Assistant Attorney General
The Munsey Building, 2nd floor
7 North Calvert Street
Baltimore, Maryland 21202
(301) 576-6317

(s) Mark J. Davis
Assistant Attorney General
311 West Saratoga Street
Baltimore, Maryland 21201
(301) 333-0019

(s) Nevett Steele, Jr.
Whiteford, Taylor & Preston
7 St. Paul Street, Suite 1400
Baltimore, Maryland 21202
(301) 347-8700

Counsel for defendants.

APPROVED AND ENTERED on this 27th day of September, 1988.

(s) Joseph C. Howard
United States District Judge

ATTACHMENT A

L.J. v.
Massinga Class Members

All children who are, have been and may possibly again, or will be placed in foster homes by the Baltimore City Department of Social Services and are or will be placed in the custody of the Baltimore City Department of Social Services pursuant to:

(a) an authorization or order of emergency shelter care granted to the Baltimore City Department of Social Services by an intake officer or by the Circuit Court for Baltimore City, Division of Juvenile Causes, under the provisions of Md.Cts. & Jud.Proc.Code Ann. § 3-815, or

(b) an order of commitment, care, or custody granted to the Baltimore City Department of Social Services by the Circuit Court for Baltimore City, Division for Juvenile Causes, under Md.Cts. & Jud.Proc.Code Ann. § 3-820, or

(c) an order of guardianship with the right to consent to adoption or long-term care short of adoption granted to the Baltimore City Department of Social Services by the Circuit Court for Baltimore City under Md.Fam.Law Code Ann. § 5-301 et seq., or former Md.Ann.Code Art. 16, §§ 67 et seq., or

(d) a voluntary foster care agreement between their natural parents or legal guardians and the Baltimore City Department of Social Services.

ATTACHMENT B

L.J. v. Massinga Consent Decree
List of Organizations to Receive Notice

Clinton Bamberger, Esq.
University of Maryland School of Law
Clinical Law Office
510 West Baltimore Street
Baltimore, Maryland XXXXX-XXXX

Stephen Ney, Esq.
Maryland Disability Law Center
2510 St. Paul Street
Baltimore, Maryland 21218

Sheila K. Sachs, President
Bar Association of Baltimore City
111 North Calvert Street
Room 627, Courthouse East *528
Baltimore, Maryland 21202

James Wiggins, President
Monumental City Bar Association
Clarence M. Mitchell Jr. Courthouse
Room 401
Baltimore, Maryland 21202

Pamela Anne Bresnahan, President
Women's Bar Association of Maryland
28th Floor
401 East Pratt Street
Baltimore, Maryland 21202

Anne Pecora, Esq.
University of Baltimore School of Law
Clinical Law Office
Suite 101
1420 North Charles Street
Baltimore, Maryland 21201

John Michener
Maryland Volunteer Lawyer Service
520 West Fayette Street
Suite 130
Baltimore, Maryland 21201

ATTACHMENT C

THE ATTORNEY GENERAL
Saratoga State Center
Suite 1015
311 W. Saratoga Street
Baltimore, Maryland 21201
(301) 333-0019

April 5, 1988

William L. Grimm, Esq.
Legal Aid Bureau, Inc.
7th Floor
714 E. Pratt Street
Baltimore, Maryland XXXXX-XXXX

Re: L.J. v. Massinga
Quality Assurance Report Summaries

Dear Bill:

This letter supersedes my letter to you of March 29, 1988 on the contents of Quality Assurance Report Summaries to be provided to plaintiffs in accordance with paragraph 32(o) of the Consent Decree. DSS continues to use forms D-885 and D-887 to review a child's case record and a foster home record, respectively. Monthly summaries of the information gathered from the files will be provided to plaintiffs from these forms or forms reasonably in accordance with them. DSS has yet to modify the form to reflect the Health Care provisions of the Consent Decree. However, it expects to do so and will track compliance with the following requirements:

1. That foster children have an initial screening no later than 24 hours following a placement;

2. That foster children be referred for a comprehensive health assessment within 30 days of entering placement and that the assessment be completed within 60 days;

3. That foster parents be provided with a child's health passport within five days of initial placement or at the time of a child's placement;

4. That copies of forms contained in the passport be included in the child's case records and be reviewed by a supervisor every six months; and

5. That foster children have periodic medical, dental and developmental examinations in accordance with the schedules or protocols of the EPSDT.

Very truly yours,
/s/ Mark Davis
Mark J. Davis
Assistant Attorney General

MJD089:jas
cc: Carol R. Golubock, Esq.
Jeanne D. Hitchcock, Esq.
Catherine M. Shultz, Esq.
Nevett Steele, Jr., Esq.
Ethel Zelenske, Esq.

ADDENDUM B: MEMORANDUM AND ORDER GRANTING PLAINTIFFS' MOTION FOR A PRELIMINARY INJUNCTION DATED JULY 27, 1987

MEMORANDUM AND ORDER

This is a class action by foster care children who allege that defendants' administration *529 of the foster care system in the City of Baltimore violates plaintiffs' rights under federal statutory law, Titles IV-E and IV-B of the Social Security Act, and the Fourteenth Amendment of the United States Constitution. The class representatives also seek monetary damages for the harms allegedly suffered while in the care and custody of the defendants.

Pending before the Court are plaintiffs' motions for (1) a preliminary injunction, (2) sanctions based upon defendants' failure to respond factually to plaintiffs' motion for preliminary injunction and (3) a default judgment. A hearing on the motions was held from April 2 to April 15, 1987; the parties submitted post-hearing briefs by May 22, 1987. Some 91 separate items of evidence were introduced and the Court heard from 12 witnesses. Among the items of evidence were seven looseleaf binders including scores of documents. For the reasons which follow, the motions will be granted.

I. MOTION FOR PRELIMINARY INJUNCTION

In their motion for a preliminary injunction and accompanying memorandum plaintiffs allege, inter alia, that, before and after the filing of this action in December 1984, "some [foster] children continue to receive brutal treatment in foster homes in which they are placed by the defendants," and "substantial numbers of children do not receive basic medical care and treatment for disease and disabilities." Plaintiffs contend that the continued acts and omissions by the defendants violate plaintiffs' rights under the Constitution and federal foster care law.
This Court is asked to order the defendants to administer the Baltimore foster care system in compliance with federal statutory and constitutional law by enjoining defendants from allowing inadequate homes to remain in the foster care system; failing to provide proper medical care; and by requiring prompt reporting of complaints of abuse and neglect to the appropriate authorities.

In defendants' response to plaintiffs' motion, defendants contend that they have acted vigorously to make substantial improvements in the Baltimore foster care system; that plaintiffs cannot demonstrate that defendants acted with deliberate indifference to plaintiffs' rights; and that plaintiffs' proof reveals, at most, isolated incidents of past exposure to harm. Defendants support their opposition with thirteen documents purporting to demonstrate improvements they have made or are attempting to make in the foster care program.

The contentions advanced by plaintiffs concerning irreparable harm to foster children can be grouped into four categories: (a) instances of neglect and abuse revealed through random case sampling; (b) inadequate medical care; (c) absence of protection afforded Code 517 children; and (d) failure to undertake adequate and effective measures to address deficiencies in the Baltimore City Foster Care Program revealed by the Harris Task Force in September 1984. The Court will discuss plaintiffs' motion within the above categories.

(A) Instances of Neglect and Abuse Revealed Through Random Case Sampling

Plaintiffs base their allegations of widespread, systematic omissions and failures by defendants, in part, on a study undertaken at their request by Dr. Trudy Festinger. Dr.
Festinger chairs the Department of Research of the School of Social Work at New York University and has studied how caseworkers assess adoptive applicants, foster care agreements between departments of social services and foster parents nationally, and the effectiveness of court supervision of children in foster care. Dr. Festinger has published ten studies and lectured about foster care issues and research methodology. She serves on the New York State Board of Social Welfare. *530

Plaintiffs' study was accomplished through the review of individual foster care case records maintained by the defendants. Dr. Festinger reviewed files and the foster care policies of the Baltimore City Department of Social Services (BCDSS) to formulate the study. She also selected and trained casereaders and on-site supervisors and monitored the case reading. The case reading focused on children who had been placed in foster care from January 1, 1983 to April 30, 1986. Two criteria determined which files were read: (1) the child could not have been placed in care out-of-state nor in a purchase care facility during the relevant time period; and (2) a child had to have been placed in a foster home for at least 60 days.

The four thousand (4,000) children represented by the Legal Aid Bureau in Child in Need of Assistance (CINA) proceedings were the universe from which the files to be read were selected. Eight hundred and ninety-seven (897) names were randomly selected from the universe. Through information from the Baltimore City Juvenile Circuit Court, the Attorney General, the Baltimore City Foster Care Review Board, BCDSS and the Legal Aid Bureau, each randomly drawn name was checked against the study criteria; approximately one-quarter met the criteria. Plaintiffs' casereaders read 149 of the 224 cases that met the sampling criteria (¼ of 897). Dr. Festinger concluded that her opinion could reasonably be based on that number.
Casereaders extracted only information found in the child's record. They recorded any documented concern, suspicion, or complaint of child maltreatment; casereader judgments were not recorded. The readers used a 69-page questionnaire containing 77 questions to be answered for each case. The casereading instruments provided uniform summaries of the readers' observations.

Criticisms of Plaintiffs' Study

Defendants' witness Roger White, Ph.D., Associate Professor at Johns Hopkins School of Public Health in the Department of Maternal and Child Health, was qualified as an expert in statistics and the use of random sampling techniques in social sciences and child welfare studies. Dr. White's criticisms of the plaintiffs' sampling methodology included: (1) uncertainty as to the sampling frame; (2) the lack of an explanation why only 149 of the 230 "total end sample" were the basis for Dr. Festinger's conclusions; (3) the lack of operational definitions for such terms as "emotional abuse" and "sexual abuse"; (4) alleged inaccuracies in computation of the random sample (i.e., the sample should have been about 500 names, not 230); and (5) concern about the reliability of the completed case reading instruments.

After considering these criticisms, the Court finds that the plaintiffs' study is sufficiently sound for the purposes of this motion. The sampling frame was foster children represented by the Legal Aid Bureau in CINA proceedings. The 149 cases used as a basis for Dr. Festinger's opinions had been analyzed in time to be utilized at the hearing. There is no evidence of significant statistical bias in the sample. Approximately 220 cases were to be examined because that is the number of cases that met the sample criteria. With respect to "operational definitions", plaintiffs' casereaders merely recorded expressions of concern found in defendants' own records about each child.
As to the reliability of the completed instruments, the Court observes that during training every case was double-read and, after training, every tenth case was double-read. The Court's examination of the compiled instruments indicates that they were completed accurately. Moreover, defendants' expert recognizes Dr. Festinger as an expert in the field of social research and has acknowledged that she is "a very appropriately-acclaimed individual;" he has also cited Dr. Festinger in his own work.[1] *531 In light of the soundness of plaintiffs' methodology, * * * the Court concludes that plaintiffs' study is a reliable evidentiary basis for the Court's findings.

Results of Sampling

Of the 149 cases read, 42 indicated maltreatment in the foster home. From this finding, Dr. Festinger projected that 282 children per thousand might be maltreated. Dr. Festinger further commented that if only 14 of the 149 children had actually been maltreated, then 94 children per thousand were likely to have been maltreated during the study period.

As of the hearing, plaintiffs had conducted an intensive review of 18 of the 42 cases in which maltreatment was indicated during the most recent sampling period of May 1, 1985 through April 30, 1986. Plaintiffs believe that the most probative indicators of the current status of the foster care program are these 18 cases. The 18 cases depict a pattern of physical, sexual and emotional abuses inflicted upon children in the custody of BCDSS. The additional 24 cases are less recent and plaintiffs had not examined them in detail at the time of the hearing. For these additional 24 cases, plaintiffs introduced portions of the casereading instruments where maltreatment was recorded. The Court's review of the submitted documentation on these additional 24 children indicates that maltreatment is likely to have occurred in at least two-thirds of the cases.
Instances of Maltreatment Not Developed By Random Sampling

Plaintiffs also offered evidence of maltreatment or neglect of 16 other children who came to the attention of the Legal Aid Bureau immediately before the filing of the motion for preliminary injunction. Largely uncontroverted testimony from treating physicians, some parents, and documentation submitted to the Court revealed children who had suffered continuous sexual and physical abuse or neglect in foster homes; children who had been placed in homes which defendants knew were inadequate; and cases where reports of abuse were not promptly or adequately investigated to prevent further placements of other children in those homes. The tragic consequences of these deficiencies include sexual abuse of young girls by their foster fathers and a child who contracted gonorrhea of the throat after sexual abuse by the adult son of a foster parent in an unlicensed foster home.[2] In eight cases, defendants failed to assure that medical treatment prescribed by physicians and basic education were provided.

Findings of Fact

The Court makes the following enumerated findings of fact as to plaintiffs' sampling and the 16 additional cases not developed through sampling:

1. Dr. Festinger is an expert in the field of social research methodology, foster care systems and child welfare policy.

2. The Court's review of the casereading instruments indicates that the casereading instruments were completed accurately.

3. Plaintiffs' random sampling study of children in the custody and care of defendants is a sufficient basis for determining whether systemic problems exist in the Baltimore foster care program. *532

4. From the Court's review of the case records of the cited 18 children, it is evident that the expressed concerns about the conditions in the foster home and treatment of each child were well-founded in 15 cases. Indeed, in some cases the state of the foster home and treatment of the child were cause for grave concern.

5.
The concerns expressed in the additional 24 cases were consistent with those of the 18 cases of most recent origin.

6. When the 18 cases of most recent origin are considered together with the additional 24 cases, the number of cases where expressed concerns were well founded very likely exceeds 30.

7. Where the concerns expressed are all well-founded and verified, the children were at risk of harm to their emotional and physical well being.

8. The number of children at risk out of the sample of 149 is sufficient to indicate that deficiencies exist throughout the foster care system as administered by the BCDSS.

9. The sample reveals that existing deficiencies include the failure to remove children from homes where physical and emotional abuse and neglect are threatened; the licensing of foster homes where foster parents are unable to care properly for the children; the granting of "exceptions" that allow homes to remain open when they are clearly inadequate or a risk to the children in them; the lack of appropriate numbers of satisfactory homes; and the over-reliance on physical evidence of abuse and questioning of children when abuse reports are investigated.

10. There is a great likelihood that many children in the foster care administered by BCDSS are at risk of suffering irreparable harm.

11. The additional 16 cases of abuse and neglect that were not developed through random sampling cannot be dismissed as "anecdotal." Indeed, while the Court does not rely on these cases in concluding that systemic deficiencies exist, these cases corroborate the deficiencies identified through plaintiffs' study.

12. In most of these 16 cases, the situations placing these children at risk were not resolved or confirmed until two months before or after the filing of plaintiffs' motion for a preliminary injunction.
(B) Inadequate Medical Care

To support their allegations that foster care children receive inadequate health care, plaintiffs offered the testimony of two medical experts. Dr. Archie Golden is an expert in pediatrics and the administration of health programs for children; he has been the medical director of the Chesapeake Health Plan[3] ("CHP") for over six years. Dr. Charles Shubin is an expert in the field of pediatrics and the diagnosis of child abuse and neglect. Drs. Golden and Shubin have extensive experience treating foster care children in the custody of BCDSS.

The testimony of Drs. Shubin and Golden established that foster children were often abused or neglected in their natural homes and are generally in greater need of health care than other children. Foster children have numerous mental health or psycho-social problems and typically suffer from chronic illnesses; approximately 29% have eye problems; and some 28% of foster children do not have up-to-date immunizations. Foster children often receive treatment for one episode of an illness in one facility and visit another for a subsequent episode of the same illness; thus, there is no continuity of information and records about the child's health treatment and a lower quality of care results.

Findings of Fact as to Medical Care

The Court makes the following enumerated findings of fact about medical care provided foster care children: *533

1. Of the 2,600 to 2,800 children in BCDSS foster care today, 600 are enrolled in CHP. While most of the remaining children do not qualify for CHP enrollment, up to 400 children eligible to participate in CHP are not enrolled.

2. Although many children do not remain in foster care longer than 30 days, and there are no reliable statistics about the number of children who do not appear for medical appointments, it is apparent from the testimony of Drs.
Shubin and Golden that a major problem in rendering health care to long-term foster children is their failure to appear at medical appointments. CHP has articulated this concern to BCDSS, but BCDSS has failed to respond adequately. 3. Physicians treating foster children are often provided incomplete medical histories and must rely on the child being treated or an older sibling for such information. 4. The lack of an adequate medical history impairs effective medical treatment and exposes a foster child to such risks as redundant vaccinations or immunization with vaccines to which they may be allergic. 5. Sometimes natural parent hostility toward BCDSS impedes the physician's ability to obtain a complete medical history, and physicians also encounter difficulties obtaining medical histories of children for the period since they entered foster care. 6. Defendants' present system of providing necessary medical care to foster children is inadequate to ensure continuous and informed treatment for those children. (C) Absence of Protection Afforded Code 517 Children In 1983, defendants created the "Code 517" category of foster children. Although defendants are legally responsible for these children, they are placed in unlicensed homes. The caretakers in these homes, often relatives, are either unwilling to assume approved foster parent status or unable to meet foster home standards. Code 517 children are not provided services by the Division of Foster Care Services; their cases are not reviewed by the Foster Care Review Board; they are not subject to the review required by Federal law as are other foster children; there are no foster care payments for these children; and they are not considered by defendants to be in foster care. Suspected abuse or neglect in "Code 517" homes is not reported as foster home abuse. As of the commencement of the hearing, there were 312 children who were committed by the Courts to BCDSS and placed in the Code 517 category. 
No evidence of neglect or abuse was presented as to this class of children. While plaintiffs' suggestion of likely harm to these children from nonsupervision is credible, there is no basis for a finding of systemic abuse or neglect. (D) The Harris Task Force and Purported Improvements Sometime before filing suit in 1984, plaintiffs supplied defendants with a copy of the original complaint in this case. In response, defendants established the "Harris Task Force" to conduct a review of the foster care system in Baltimore City. The Task Force conducted interviews of several senior staff members at BCDSS. The Task Force also conducted a random review of 15 foster care cases including ten involving reports of suspected abuse. Based on these interviews and this random sample, the Task Force identified a number of "systematic problems" and suggested corrective action.[4] Defendants purport to have undertaken a number of improvements in the Baltimore foster care system based, in part, on the findings of the Harris Task Force. 
The Harris Task Force found the following "major systematic problems" in the *534 BCDSS foster care program: (1) the purpose of family care was not well-defined, leading, for example, to the placement in foster homes of children whose needs could not be met within a private home setting; (2) payments to foster families were unrealistically low; (3) there were not enough homes, and there was "no concerted effort to recruit foster homes"; (4) licensing of foster homes was inadequate: a lower standard is applied to restricted homes than regular homes, and licensing is based on inadequate information; (5) "serious gaps" in the training provided to foster families, BCDSS case workers and their supervisors; (6) BCDSS files contained inadequate information about medical histories, foster parents and education; (7) the "agency's organizational structure is conducive to chaos"; (8) some caseworkers and supervisors lacked necessary training; (9) more strict enforcement of policies requiring investigation of abuse and neglect complaints was necessary; (10) the Department of Human Resources (DHR) needed to improve monitoring of BCDSS to ensure the adequacy of services; (11) substantial increases in staff size were necessary to reduce ratios of cases handled by foster care workers; (12) poor morale among BCDSS staff; (13) need for a pre-placement diagnostic facility to place children on an emergency basis and identify their problems; (14) need for an automated system to monitor foster care cases; (15) a lack of coordination between BCDSS and agencies outside the city when children were placed outside the city; (16) a policy classification was required for nonlegally responsible custodians who requested a foster care license or payments; and (17) poor relationships among BCDSS caseworkers, BCDSS Legal Services, and the Juvenile Courts with respect to child placement decisions. The findings of the report are uncontroverted. 
However, defendants contend that they have attempted to improve foster care in Baltimore. Among the major efforts defendants have undertaken are (1) review of all BCDSS foster homes; (2) training for foster care workers; (3) recruitment of new foster parents; (4) increases in foster care board rates; (5) training of foster parents; (6) additional foster care staff to reduce the ratios of caseworker to children to 1 to 20; (7) clarification of lines of authority within the DSS; (8) recruitment of better qualified social workers; (9) medical screening of all children before their placement; (10) development and distribution of a foster care manual and revised policies; and (11) initiation of an intensive family services program. Findings of Fact as to Purported Improvements Undertaken Pursuant to the Harris Task Force The Court makes the following enumerated findings of fact as to the significance and effectiveness of defendants' purported improvements undertaken pursuant to the Harris Task Force: 1. Many of the most essential of defendants' efforts to improve foster care in Baltimore have been incomplete and ineffective. 2. As recommended by the Harris Task Force, defendants undertook a review of all BCDSS foster homes. This review, defendants contend, "was initiated to determine whether foster children were at risk in their placements." As a result of that review, 444 homes were closed in 1985, and 189 were closed in 1986. Defendants contend that as a result of the closings, the pool of BCDSS foster homes is safer now than at the time of the Harris Task Force report. However, during cross-examination of Secretary Massinga, it was established that the homes closed were primarily those of elderly foster parents who no longer wanted to be registered as foster homes and no longer had foster children in their homes. 3. During 1985 and 1986, only 53 additional regular or unrestricted foster homes were recruited and opened by defendants. 
Furthermore, when the hearing commenced, defendants had only one person working solely on recruitment of foster homes. 4. The "Intensive Family Services Unit" at BCDSS, set forth as a recent improvement in foster care, was disbanded in 1986. 5. The Harris Task Force recommended that foster care workers receive training. *535 Defendants have proffered a document entitled "Training Sessions for BCDSS Foster Care Workers: 1984-1986." During cross-examination it was revealed that this document was not what it purported to be, and defendants were unable to offer any reliable evidence of training provided to foster care workers. 6. The Harris Task Force recommended that DHR "improve its foster care quality control review system." DSS has devised a system for case record reviews to evaluate the quality of services provided to foster care children. However, from January until October, 1986, no quality assurance reviews of foster care records took place. 7. At the hearing defendants highlighted several policies implemented to protect children in foster care; the plaintiffs presented evidence that those policies are violated. Violated were the policies requiring that an investigation of an abuse report be commenced within 24 hours of its receipt; that no further placements be made in a foster home that is the subject of a report of suspected abuse until completion of an investigation; and that all children be removed from a home where a finding of abuse is "indicated." 8. Defendants have yet to reduce the ratio of workers to children to 1 to 20. 9. Given defendants' incomplete and ineffective responses to the systemic problems identified by the Harris Task Force in 1984, it is likely that these problems still exist and that significant numbers of foster children are at risk of irreparable physical and emotional harm. 10. 
Given defendants' ineffective and incomplete responses to the problems identified by the Harris Task Force and the ineffective implementation of their own child protection policies, it cannot be assumed that the recent increased funding of some programs will resolve systemic foster care problems. Discussion of the Legal Standard for Determining the Propriety of Preliminary Injunctive Relief The standard for determining whether a party is entitled to preliminary injunctive relief is the "balance-of-hardship" test of Blackwelder Furniture Co. v. Seilig, 550 F.2d 189, 196 (4th Cir.1977). "This test requires a `flexible interplay' among four factors: the likelihood of irreparable harm to the plaintiff if the preliminary injunction is denied; the likelihood of harm to the defendant if the requested relief is granted; the likelihood that the plaintiff will succeed on the merits; and the public interest." Federal Leasing v. Underwriters at Lloyd's, 650 F.2d 495, 499 (4th Cir.1981). Under Blackwelder the first consideration is the "likelihood of irreparable harm to the plaintiff, as balanced against the likelihood of harm to the defendant." Federal Leasing, 650 F.2d at 499 (citing Blackwelder, 550 F.2d at 196). "If that balance is struck in favor of plaintiff, it is enough that grave or serious questions are presented; and plaintiff need not show likelihood of success." Blackwelder, 550 F.2d at 196. See also Merrill Lynch, Pierce, Fenner & Smith v. Bradley, 756 F.2d 1048, 1054-1055 (4th Cir.1985); Federal Leasing, 650 F.2d at 499; Johnson v. Bergland, 586 F.2d 993, 995 (4th Cir.1978). The public interest should always be considered. Blackwelder, 550 F.2d at 196. The violation of a constitutional right constitutes per se irreparable injury. Johnson v. Bergland, supra, 586 F.2d at 995. Accordingly, "if plaintiffs are able to demonstrate a loss of constitutional rights, they will have met the irreparable injury requirement." Greater Baltimore Bd. of Realtors v. 
Hughes, 596 F.Supp. 906, 924 (D.Md.1984). See also 11 C. Wright and A. Miller, Federal Practice and Procedure, § 2948 (1973). In Lynch v. King, 550 F.Supp. 325 (D.Mass.1982), aff'd sub nom. Lynch v. Dukakis, 719 F.2d 504 (1st Cir.1983), plaintiffs, foster care children, brought a class action alleging that Massachusetts' foster care system violated the due process clause of the Fourteenth Amendment, the Social Security Act, 42 U.S.C. §§ 601 et seq., and regulations promulgated by the Secretary of the Department of Human Resources. *536 Although plaintiffs brought their action in 1978, they did not move for a preliminary injunction until 1981. The evidence in Lynch addressed defendants' compliance with § 608(f) of Title IV-A of the Social Security Act, § 671(a) of Title IV-E and § 627(a) of Title IV-B, which require the development and implementation of a case plan for each child to assure appropriate care and a periodic review of the status of each child to determine the appropriateness of placements. The Court found the evidence sufficient to establish noncompliance; however, it acknowledged "certain weaknesses" in plaintiffs' proof. 550 F.Supp. at 337. In granting plaintiffs a preliminary injunction, the Court stated that "[t]hese flaws are not fatal to plaintiffs' motion for preliminary injunction," because the defendants, "as the parties having greater access to and control of relevant evidence, have offered little proof to rebut the powerful inference that case plans and periodic review are not being provided in significant numbers of cases involving children in foster care." Id. The Court also found that defendants' apparent failure to comply with the Social Security Act was likely to cause plaintiffs irreparable harm and observed "[t]he physical and emotional damage threatening these children, should it occur, could never be undone." 550 F.Supp. at 338. 
In balancing this harm against that likely to be suffered by the defendants should an injunction be granted, the Court quoted defendants' characterization of their hardship as follows: The defendants' interest consists in freedom from a burdensome judicial order that will disrupt the management of [DSS], including delivery of the very services plaintiffs seek. For a court to intrude in the present case is to risk demoralizing agency personnel and engendering cynicism in an improving administration; to substitute judicial judgment for that of trained professionals and a legalistic atmosphere for a therapeutic one; to risk a confrontation with the state legislature; to risk stripping funds from crucial programs in order to pay for others receiving judicial attention; to risk forcing the state to give up badly needed federal funds, rather than comply with a far more costly judicial order. Id. at 339. In weighing those concerns, the Court agreed that "it is essential for federal courts to be ever sensitive to these considerations" and that "[e]very federal judge must be concerned about the prospect of issuing relief that unduly hampers the day-to-day administration of a state agency." Id. Nevertheless, "concerns of federalism ... are present in any case in which a class of plaintiffs seeks the aid of a federal court in securing state compliance with federal law." Id. In concluding that concerns of federalism were outweighed by the prospect of harm to the foster children, the court held that the need for judicial sensitivity to these concerns does not justify abdication of judicial responsibility. Id. Here, Congress—and not any court—created requirements it thought essential to protect the welfare of foster children. The Commonwealth voluntarily undertook to fulfill those requirements as a condition of receiving federal money. Plaintiffs filed suit to enforce those requirements because they believed it would serve their best interests to do so.... 
In granting preliminary relief to plaintiffs, this court does not substitute its judgment for that of state officials. It instead gives realization to the will of Congress and protection requested by those Congress intended to protect. Indeed, if the court chose to deny relief on the grounds urged by defendants, that denial would reflect a judgment that the wisdom of Congress and desires of plaintiffs should go unheeded because the Commonwealth knows better than any of them how to serve plaintiffs' interests. This court is not free to make such a judgment. 550 F.Supp. at 339-340. Concerning the impact of a preliminary injunction on the public interest the District Judge added: Congress imposed these requirements in the belief that they were essential to assure the proper care of children in the foster care system. The evidence confirms *537 that failure to satisfy the Congressional conditions may result in grave harm to foster children. Guided by the Congressional determination of the public interest in this context, I conclude that the public interest will be furthered by awarding a remedy calculated to ensure that Massachusetts' foster care system conforms to the dictates of the Social Security Act. 550 F.Supp. at 340. As preliminary injunctive relief, the Court ordered that the caseload ratio of foster care workers to children be reduced to 1:20; that each child's case receive a periodic review every six months; that a written case plan be formulated for each child; and that a foster care worker be assigned to each case within 24 hours of its receipt. Id. at 355-357. Failure to comply with the order would result in a termination of federal funds. In the instant case, the defendants contend that their interests will be impaired by a preliminary injunction; however, none of these interests outweighs the harm likely to befall plaintiffs should no injunction be issued. 
Defendants argue that the interests underlying principles of federalism preclude the imposition of an injunction. Bloodgood v. Garraghty, 783 F.2d 470, 475 (4th Cir.1986), is cited for the proposition that it is an abuse of discretion for a federal court to grant injunctive relief against state officials who have not been found to have violated the law or to have shown any intention to violate it. In addition, defendants argue that the interests of the Maryland state courts require this Court to abstain from deciding the motion for preliminary injunction and cite Pennzoil Co. v. Texaco, 481 U.S. 1, 107 S.Ct. 1519, 95 L.Ed. 2d 1 (1987); Moore v. Sims, 442 U.S. 415, 99 S.Ct. 2371, 60 L.Ed.2d 994 (1979); Younger v. Harris, 401 U.S. 37, 91 S.Ct. 746, 27 L.Ed.2d 669 (1971); and Cox v. Planning Dist. I Community Mental, Etc., 669 F.2d 940 (4th Cir.1982). This Court agrees that sensitivity to the concerns of federalism is required when a federal court considers enjoining state officials. To this end, any relief granted should be tailored to correct ongoing harm in a way that is not overly intrusive. In granting injunctive relief this Court does not substitute its will for that of state officials. Rather, the Court seeks to enforce the will of Congress as expressed in the foster care provisions of the Social Security Act and to protect plaintiffs' constitutional rights. Plaintiffs have made a showing of threatened irreparable injury. Therefore, the Fourth Circuit's holding in Bloodgood does not bar an injunction in the instant case; neither is relief barred by the abstention doctrine. Abstention to accommodate adjudication by the state courts "is the exception, not the rule." Colorado River Water Conservation District v. United States, 424 U.S. 800, 813, 96 S.Ct. 1236, 1244, 47 L.Ed.2d 483 (1976). Abstention is appropriate when a question of federal constitutional law may be mooted by a state court determination of state law, Railroad Commission of Texas v. 
Pullman Co., 312 U.S. 496, 61 S.Ct. 643, 85 L.Ed. 971 (1941); where unsettled questions of state law affecting important state policy concerns are presented and federal jurisdiction would impair the establishment of a consistent policy, Burford v. Sun Oil Co., 319 U.S. 315, 63 S.Ct. 1098, 87 L.Ed. 1424 (1943); and where federal jurisdiction is sought to restrain a state court proceeding. Younger v. Harris, supra. Here the Court seeks to enforce a federal statute embodying a federal policy to which defendants committed themselves through their acceptance of federal funds. Also at issue is the right to protection guaranteed plaintiffs by the federal constitution. There are no unsettled questions of state law before the Court, nor is any pending state court proceeding to be restrained. Moreover, it is federal policy which is not being served. This case is, therefore, distinguishable from cases where abstention is appropriate. See, e.g., Pennzoil Co. v. Texaco, supra (federal injunction directly interfered with the execution of a state judgment and challenged the process by which the judgment was obtained in state court); Moore v. Sims, supra (federal court should have abstained in light of a *538 pending state court proceeding on the matter); Younger v. Harris, supra (federal court improperly enjoined a state court proceeding); Cox, supra (abstention appropriate where unsettled question of state law was at issue). Although the injunction granted herein might not be extremely long in duration, the harm threatening foster children and the express will of Congress that proper care be extended to these children indicate that the public interest would be served by the granting of a preliminary injunction.[5] Plaintiffs have offered sufficient evidence to establish the existence of serious systemic deficiencies in the Baltimore foster care system. 
These deficiencies include the failure to implement policies to protect children in foster care; the lack of an effective effort to recruit new foster homes; the licensing of questionable homes; the granting of exceptions allowing homes that should be closed to remain open; and the incomplete medical histories of children in foster care. As a result of these deficiencies, foster children are threatened with and are likely to suffer severe physical and emotional injury. Furthermore, plaintiffs' constitutional right to protection while in defendants' custody is in jeopardy.[6] Accordingly, plaintiffs are entitled to a preliminary injunction. Blackwelder, supra, at 196. Likelihood of Prevailing on the Merits Although the Court is required to inquire no further, the issues before the Court are of such magnitude and public importance that the Court will address the plaintiffs' likelihood of success on the merits. Plaintiffs have stated claims under Title IV-B and IV-E of the Social Security Act, 42 U.S.C. §§ 620 et seq. and 670 et seq. The defendants have accepted funds under these programs and do not dispute that they are obligated to adhere to funding requirements. The federal foster care and adoption assistance program, Title IV-E, requires that defendants "be responsible for establishing and maintaining standards for foster family homes ... which are reasonably in accord with recommended standards of national organizations concerned with standards for such ... homes." 42 U.S.C. § 671(a)(10). These should include "standards relat[ing] to admission policies, safety, sanitation, and protection of civil rights...." Id. Title IV-E and IV-B require defendants to provide for the development of a case plan for each child for the purpose of "assuring that the child receives proper care" and "that services are provided to the parents, child, and foster parents in order to improve the conditions in the parents' home...." 42 U.S.C. § 675(1). 
The term "proper care" as used in the Social Security Act has been interpreted to include necessary medical and educational services. See Gary W. v. Louisiana, 437 F.Supp. 1209 (E.D.La.1976). Titles IV-E and IV-B *539 also require a case review system to determine the "appropriateness of the placement." 42 U.S.C. § 675(5)(B). Systemic problems in the Baltimore foster care program have been revealed by the findings of plaintiffs' study and the Harris Task Force. Given the magnitude of these problems as revealed by the evidence received during the two-week hearing, it appears unlikely that defendants will be able to prove they are in compliance with Titles IV-E and IV-B. Plaintiffs also claim that their Fourteenth Amendment right to protection is being violated by defendants. In Jensen v. Conrad, 747 F.2d 185 (4th Cir.1984), cert. denied, 470 U.S. 1052, 105 S.Ct. 1754, 84 L.Ed.2d 818 (1985), the Court provided guidance for determining whether an individual has a "special relationship" with the state such that a constitutional duty of protection exists. Specifically, the Court enumerated three considerations: "(1) Whether the victim or the perpetrator was in legal custody at the time of the incident, or had been in legal custody prior to the incident ... (2) Whether the state has expressly stated its desire to provide affirmative protection to a particular class or specific individuals ... [and] (3) Whether the state knew of the claimants' plight...." 747 F.2d at 194-195 n. 11. Applying these factors, the Court finds plaintiffs have demonstrated the existence of a "special relationship" with defendants such that plaintiffs are owed an affirmative duty of protection by defendants. Here defendants undertook to provide plaintiffs with proper care and defendants have known, or had reason to know, of systemic deficiencies since the Harris Task Force report. Most importantly, plaintiffs are vulnerable children in the custody of defendants. 
Children in similar circumstances have been held to have a right to protection. In Estate of Bailey by Oare v. County of York, 768 F.2d 503 (3d Cir.1985), a civil rights complaint was brought against a county welfare agency by the father of a five-year-old girl who was beaten to death by her mother and mother's boyfriend. The welfare agency had previously determined that the child had been abused and agreed to return the child to the mother only upon the condition that the boyfriend be denied access to the child. The complaint alleged that the agency returned the child to the mother without conducting an independent investigation to determine whether the child's mother and the mother's boyfriend were living together. 768 F.2d at 505. Following the Jensen analysis, the Court determined that a "special relationship" had existed between the child and the agency sufficient to state a claim based on a duty to protect. Id. at 510-511. See also Doe v. New York City Department of Social Services, 649 F.2d 134 (2nd Cir.1981), cert. denied, 464 U.S. 864, 104 S.Ct. 195, 78 L.Ed.2d 171 (1983) (Court held that an agency that placed a child in foster care could be liable for the child's sexual abuse by her foster parent if the agency had failed to supervise adequately the child's placement.). Having determined that plaintiffs have a "special relationship" with defendants such that an affirmative duty to protect exists, this Court must determine whether plaintiffs are required to show that defendants have acted with "deliberate indifference" to that right. Regardless of whether "deliberate indifference" must be proven, the evidence in this case shows that, at least since the Harris Task Force report, defendants have been aware of serious deficiencies in the system and their tragic consequences. For example, the evidence shows that one major problem with the system is the lack of satisfactory foster homes. 
Yet as of the hearing, only one person worked solely on recruiting homes for BCDSS. Moreover, only 53 new nonrestrictive homes were opened in 1985 and 1986. In order to compensate for the lack of homes, defendants have placed children in homes that are unsatisfactory and are reluctant to close homes where maltreatment is either suspected or confirmed. The defendants' duty to protect and the systemic nature of the BCDSS failure to *540 perform that duty create a clear likelihood of plaintiffs' success on the merits. * * * * * * Therefore, it is this 27th day of July, 1987, by the United States District Court for the District of Maryland, ORDERED: As to plaintiffs' motion for a preliminary injunction, 1. That plaintiffs' motion for a preliminary injunction BE, and the same hereby IS, GRANTED. 2. That defendants shall submit to the Court, within 20 days, a plan for a review of each foster home in which a report of maltreatment has been made and in which foster children continue to reside to ensure that such home meets licensing standards reasonably in accord with those recommended by nationally recognized professional organizations. 3. That defendants shall monitor each child in a DSS foster family home by at least monthly visits to the child to ensure that the child is receiving proper care and the foster home continues to meet licensing standards. Where there has been a report of maltreatment of the child and the child remains in the home, the child shall be visited at least weekly. 4. That defendants shall assign sufficient staff and resources to ensure that available medical histories are obtained and provided to children's medical and other service providers, including foster parents, to ensure that appropriate medical preventive care, services, treatment and diagnoses and other care are promptly and appropriately provided in accord with approved medical standards. 5. 
That defendants shall provide a written copy of any complaint of maltreatment of a foster child to the juvenile court and the child's attorney within five days of its receipt and shall provide to the juvenile court and the child's attorney a written report of any action taken on the complaint within five days of its disposition by the agency. (s) Joseph C. Howard United States District Judge DATED: July 27, 1987 NOTES [1] The full decree is attached as Addendum A to this Memorandum. [2] For a detailed review of the methodology used in plaintiffs' random sampling see the court's Memorandum and Order dated July 27, 1987, attached to this opinion as Addendum B. [3] The court also granted plaintiffs' motions for sanctions due to certain conduct of defendants' attorneys. Specifically, pursuant to Fed.R.Civ.P. 37(b)(2)(A) and 16(f), the court ordered it taken as established that defendants "fail to protect effectively children in foster homes where there is reason to know that such children are at risk of harm to their physical and emotional well-being." Having deemed these facts admitted, the court found plaintiffs also entitled to a preliminary injunction on this alternative basis. The court's Memorandum and Order dated July 27, 1987 has been attached to this memorandum as Addendum B. That memorandum has been edited to eliminate the court's detailed discussion of its basis for imposing sanctions because those facts do not serve as part of the basis for the court's determination of whether the decree is fair and adequate. [4] In the same opinion, the Fourth Circuit also affirmed this court's ruling that the defendants were not entitled to qualified immunity as to plaintiffs' claims for damages. 838 F.2d at 123-124. On that issue, defendants have petitioned the Supreme Court for a writ of certiorari. 
[5] The Legal Aid Bureau of Maryland, whose lawyers serve as lead counsel to the class plaintiffs in this action, provides legal services to and represents the great majority of Baltimore's foster children in the juvenile court. The notice also was mailed to the Office of the Superintendent of the Baltimore City Public Schools, the State's Attorney for Baltimore City, the Baltimore City Juvenile Court judge and masters and to organizations that provide medical care to foster children. [6] The Baltimore Sun, The Baltimore Evening Sun, The Afro-American and The Daily Record. [7] Specifically, in this regard, Mr. Holmes states in his affidavit that: I am aware of the Court's special concerns about foster home recruitment. It must be remembered that family foster care is not the only, and often not even most appropriate, out-of-home placement for children, particularly those increasing numbers with severe emotional and behavioral problems. DHR has and will continue to intensify efforts to recruit foster parents. Providing child care for working foster parents is an effective recruitment tool. DHR and BCDSS both have aggressive campaigns to solicit applications from new families. DHR has contracted with Vanita Enterprises, Inc., a media consulting firm, to devise and implement a recruitment campaign, which began April 15, 1988, and includes: regular and frequent public service announcements on 12 television and 32 radio stations with Tim and Daphne Reid, Brooks Robinson, John Minor, Rev. Sidney Daniels and Alex Williams; two foster/adoptive care olympic events scheduled in August, 1988; direct mail to Maryland teachers and ministers; and corporate sponsorship of paid network spots. Preliminary results include 195 inquiries from parents interested in becoming foster or adoptive parents. BCDSS' own efforts have resulted in 43 new foster homes from January 1 through May 31, 1988 out of a total of 168 applications. 
Recruitment activities have included: paid ads on WBGR-AM, public service announcements on the major television stations, recruitment booths at city fairs, hospitals, the Social Security Administration and the General Motors plant, subway posters, articles in selected employee newsletters and a speakers' bureau to community groups and churches. [8] The defendants' memorandum furnished in support of the decree, the affidavits of Mr. Holmes and Ms. Bernard, and the presentation of defendants' counsel made during the settlement hearing of July 18, 1988, provide valuable details as to what measures defendants will undertake in order to meet the requirements set forth in the decree. The court has not asked that the decree be amended to recite specific efforts that will be made by defendants to meet the requirements of the decree. It was the intent of the parties to allow the defendants flexibility in implementation of the decree's provisions. Nevertheless, in evaluating the decree, the court relies on the parties' representations as to specific measures that will be undertaken and may later utilize those representations as a standard through which good faith in carrying out the terms of the decree will be measured. Accordingly, the court fully expects the defendants to undertake those specific measures revealed to the court or to undertake measures comparable to them. The court is confident that defendants will make every effort to do so. [9] In a letter to the court dated July 14, 1988, the Foster Care Review Board estimated that as many as 2,000 children are placed with relatives. [10] There are 24 citizen Foster Care Review Boards in Baltimore City with seven members each. The Boards provide independent citizen input as to whether BCDSS plans for each child in foster care are appropriate. 
[11] The Board also expressed concern about proper training of foster care workers; that provisions be made for children placed with relatives; and that visits to foster homes be meaningful. The court believes the decree's provisions for training of foster care workers are adequate. The provisions implemented immediately for children placed with relatives are also adequate pending the earlier described independent assessment of the status of those children. Lastly, if the visits to homes cannot be carried out with the maximum ratios of children-to-workers provided by the decree, the defendants will be required to reduce the workers' case loads below the maximum ratios. [1] Defendants' expert undertook a study of the population of children in foster care in Baltimore which was similar in many ways to that conducted by Dr. Festinger. In that study defendants' expert examined the health status of foster children. The methodology used was a random sampling of foster care records from which were excluded the files for children who were placed in group or institutional care; children who remained in foster care for less than 30 days; and children whose case records could not be located. From those case records that met his sample criteria, defendants' expert concluded that scant information is kept as to children's medical history; that a concerted effort needed to be made by BCDSS to insure that such information is available for use; that BCDSS needed to improve the adequacy of attention to health needs; and that there had to be sufficient personnel and budgetary resources necessary to attend to the health needs of children. [2] See Appendix 2 for additional summary of these 16 cases. [Editor's Note: Appendix 2 was omitted from publication.] [3] The CHP was established as a private health maintenance organization in 1976 to provide for the health needs of foster children.
A health maintenance organization for foster children was desirable because they usually have special health needs and a history of noncontinuous or episodic care. [4] Although defendants attack the validity of the methodology used by plaintiffs in their random sampling, defendants do not take issue with the Harris Task Force's findings of systemic deficiencies based on a review of fifteen foster care cases. [5] At the hearing, defendants also contended that a preliminary injunction was inappropriate because trial of this matter is scheduled for November. The trial date is, however, tentative at best. Neither defendants nor plaintiffs were able to complete discovery within the time allowed by the current Scheduling Order in this case. In requesting an extension of the discovery deadline, defendants contend that they are unable to commence discovery until plaintiffs' discovery is completed. Following the completion of discovery, a period for filing and ruling on additional motions is anticipated. In the unlikely event that trial is able to go forward in November, it would last all month. The need for post-trial briefs on an issue of this gravity might consume December, and, because of the voluminous amounts of documents likely to be introduced into evidence, the Court would probably require until late April or early May to issue a ruling. Defendants suggest that a preliminary injunction is unnecessary because policies and programs which they are implementing should satisfy plaintiffs' concerns. However, this argument carries little weight in light of defendants' failure to effectively implement current programs and policies. [6] Plaintiffs have demonstrated systemic deficiencies in the foster care system with direct personal injuries traceable to defendants' conduct. Accordingly, the Court need not address defendants' reliance on Allen v. Wright, 468 U.S. 737, 104 S.Ct. 
3315, 82 L.Ed.2d 556 (1984) for the proposition that no real controversy exists, and that plaintiffs here prosecute a mere "generalized grievance."
David Peter Lafayette Hunter

David Peter Lafayette Hunter MC (24 November 1919 – 5 September 2001) was a Royal Marines officer who was held as a prisoner of war in Colditz Castle during the Second World War. He later served as the commanding officer of 40 Commando, and was a recipient of the Military Cross.

Early life

David Peter Lafayette Hunter was born at Minnis Hall, Stelling Minnis, Kent on 24 November 1919. He was the third son of Major Edgar Lafayette Hunter MC and Dorothy Thompson. He was educated at Shrewsbury.

Military career

Hunter joined the Royal Marines in 1937 and passed out at Deal, Kent, just before the outbreak of World War II. On 2 February 1940 he was made a probationary lieutenant. He was posted to the heavy cruiser Norfolk, patrolling waters around Iceland. The Norfolk was bombed whilst at Scapa Flow on 16 March 1940 and sent to the Clyde for repair. Hunter was redeployed to Chatham, where he was selected for the Calais force as part of the BEF.

Calais

Hunter was part of Captain Darby Courtice's company of 85 Royal Marines which landed at Calais shortly after midnight on 25 May 1940. With one other officer, Lt Hugh Bruce, they were charged with helping French marines to defend the ancient citadel at the centre of the town. There they were attacked by the full might of XIX Panzer Corps and, by early evening, were surrounded and out of ammunition. Hunter was later mentioned in dispatches for his "courage and devotion to duty" in racing up and down the beach to keep his unit's machine gun supplied with ammunition. They had fought with such vigour that the official German record read, "The enemy gives the impression of being fresh, and seems to have received reinforcements after two days of heavy fighting." Despite their efforts, within two days Calais had been surrendered to the Germans, and the British troops, including Hunter, taken prisoner.

Prisoner of war

The captured troops were marched through northern France, the Ardennes and Trier to Mainz.
From there, they were moved on to Laufen camp in Bavaria, then transferred to Tittmoning. The Royal Marines officers were moved to Marlag und Milag Nord, part of Stalag X-B at Sandbostel, where they soon started planning their escape. Bruce, Hunter's fellow Marine officer, was imprisoned with him and, over the winter of 1941–42, the two men became firm friends. With a number of colleagues they conceived, designed and built by hand a masterpiece of British engineering – a 251-yard-long tunnel, complete with rest bay, electric lighting and air flow system, as well as a signalling device to warn of the approach of sentries. Over 100 tons of soil was excavated and concealed under a hut. On 7 April 1942 Hunter, Bruce and 10 other officers made their escape. After 12 days on the run, Bruce and Hunter were captured near Flensburg, within a few hundred yards of the Danish border. After a brief spell back at Sandbostel, the pair escaped again, this time by jumping aboard a prison lorry, but were recaptured at Hamburg railway station by the German police. They were transferred to Stalag VIII-B in Lamsdorf, Silesia, a prison camp for "other ranks". Their stay lasted only a few months. Hunter was found dangling from a window within inches of a snarling guard dog, and two of Hunter's colleagues were also caught escaping. The miscreants were summarily banished to Colditz Castle.

Colditz

In early August 1942 Bruce and Hunter arrived at Colditz Castle (then prisoner of war camp Oflag IV-C), where fellow persistent escapees were busily engaged in planning more escapes, and Hunter was soon involved in the various projects. The three Royal Marine officers (Capt Courtice, their company commander at Calais, was also at Colditz) had a reputation for bravery and good humour, and Hunter was noted as being particularly outspoken, a persistent nuisance to his captors and equally amusing to his colleagues.
He once stole the cap of the German officer who was expounding on the merits of Wagner during a musical evening. Another incident even made the Germans laugh when, late for a roll-call, he called languidly from a castle window to the parade below, "I'll come down and join you all in a minute". In October 1943 Mike Sinclair was caught during the daring Franz Josef escape. Although Sinclair had surrendered, he was shot at close range by a German officer. Hunter, along with many other witnesses, believed his friend to be dead and shouted "German murderers!". He was subsequently sentenced at a court martial to two months in Graudenz military prison. Forty years later, some 30 officers and their wives made a return visit to Colditz, and Hunter was seen by millions of television viewers standing in the courtyard and taking off the Commandant's "Call to Appell" at the top of his voice. Despite the many notable escape attempts from Colditz, Hunter remained there until his release on 16 April 1945.

Post war

Following release he underwent a brief re-training period. He was appointed temporary captain on 25 February 1946. Hunter was appointed officer commanding Royal Marines in Berlin. This was not a sensitive posting, and Hunter was soon returned to Britain. He was next posted to an aircraft carrier. Detecting a poor level of morale aboard, he and Donald Douglas, a former prisoner of the Japanese, determined to confront the ship's captain and insist on reasonable treatment. On entering the captain's cabin, Hunter declared, "Look, Sir, we're here to tell you that we've both been b******d about as PoWs and we're not having any of it in peacetime!". Douglas was aghast, but to his surprise the captain replied, "All right, I hear you. Dismiss!". The ship's captain later confided to them, "Lucky for you on the first day we met that I was reading a book on how to deal with ex-PoWs, or your fate might have been different."
Subsequent postings took him to Egypt, Aqaba, Hong Kong and, in 1950, Malaya. He was made Officer in Charge Cameron Highlands Jungle Operation, protecting planters from Communist guerrillas during the Malayan Emergency. Not long after arriving, he was asked to take a Mr Justice Brown on a jungle patrol with 45 Commando. Whilst advancing up a hill at Ringlet they encountered six bandits, one of whom threw a grenade at the soldiers whilst they made their escape. In an act he later described as a "mental aberration", Hunter calmly covered the grenade with his hat and held it while his comrades ran to safety. Fortunately the grenade failed to detonate. Later, to Hunter's astonishment, he was awarded the Military Cross for his "vigour, determination and outstanding skill" in conducting operations against the bandits. He was promoted to major on 14 January 1955. In 1956 he became Amphibious Staff Officer, 3 Commando Brigade, at Suez. There followed postings to the RN Staff College, Greenwich, and to Amphibious Warfare HQ, London, followed by a six-month Joint Training Course with the US Marines in San Diego. In 1961 his promotion to lieutenant colonel was confirmed, and he took command of 40 Commando until 1963. Based in Singapore, he was frequently employed in Borneo during the confrontation with the Indonesians following the Brunei Revolt of 1962. After a series of staff appointments, Hunter retired from the Royal Marines on 3 March 1967.

Civilian life

Hunter married WAAF officer Barbara Lewis in Brentford late in 1945. They had two sons. Following his retirement from the Marines, Hunter and his family emigrated to Freeport, Bahamas. In 1967 Hunter joined the real estate company of McPherson & Brown. Barbara died in 1971, and in 1974 Hunter remarried, to Suzanne Twiston-Davies, a journalist with the BBC. In 1981, he and his colleague Hilary Jones bought McPherson & Brown, changing its name to Churchill & Jones.
In 1997 Hunter led Churchill & Jones in obtaining the Northern Bahamas franchise of RE/MAX, the international real estate conglomerate. David Peter Lafayette Hunter died on 5 September 2001.

Notes

Sources

Extracted from the obituary of Lt-Col David Hunter, The Daily Telegraph, 7 September 2001

Category:1919 births Category:People educated at Shrewsbury School Category:Royal Marines officers Category:Royal Marines personnel of World War II Category:World War II prisoners of war held by Germany Category:Prisoners of war held at Colditz Castle Category:Recipients of the Military Cross Category:2001 deaths
IN THE COURT OF CRIMINAL APPEALS OF TENNESSEE AT NASHVILLE April 20, 2010 Session STATE OF TENNESSEE v. JERRY LEN ANGUS Direct Appeal from the Criminal Court for Davidson County No. 2007-C-2624 Mark J. Fishburn, Judge No. M2009-01151-CCA-R3-CD - Filed December 1, 2010 Defendant, Jerry Len Angus, was indicted in a seventeen-count indictment by the Davidson County Grand Jury for three counts of official misconduct in violation of Tenn. Code Ann. § 39-16-402, nine counts of sexual battery by an authority figure in violation of Tenn. Code Ann. § 39-13-527, four counts of statutory rape in violation of Tenn. Code Ann. § 39-13-506, and one count of rape in violation of Tenn. Code Ann. § 39-13-503. Defendant was convicted by a jury of three counts of official misconduct, one count of attempt to commit sexual battery, a lesser-included offense of the charged offense of sexual battery, one count of sexual battery, and two counts of attempt to commit statutory rape, a lesser-included offense of statutory rape. The jury did not consider eight counts of the indictment as the trial court granted judgments of acquittal at the close of the State’s proof, and Defendant was acquitted by the jury of the remaining two counts. Defendant filed a motion for new trial, and following a hearing, the trial court vacated his conviction for official misconduct in Count 1 of the indictment for insufficiency of the evidence. The court granted a mistrial as to Defendant’s conviction for attempted sexual battery in Count 4, his conviction for official misconduct in Count 8, and his conviction for sexual battery in Count 11. In an amended order, the trial court also vacated Defendant’s conviction for official misconduct in Count 3 of the indictment. On appeal, Defendant asserts that the trial court’s polling of the jury was improper and that he is entitled to a new trial. Finding no error, we affirm the judgments of the trial court. Tenn. R. App. P.
3 Appeal as of Right; Judgments of the Criminal Court Affirmed THOMAS T. WOODALL, J., delivered the opinion of the Court, in which DAVID H. WELLES and JOHN EVERETT WILLIAMS, JJ., joined. John S. Colley, III, Columbia, Tennessee, for the appellant, Jerry Len Angus. Robert E. Cooper, Jr., Attorney General and Reporter; Brent C. Cherry, Assistant Attorney General; Victor S. (Torry) Johnson, III, District Attorney General; J.W. Hupp, Assistant District Attorney General; and Brian Holmgren, Assistant District Attorney General, for the appellee, the State of Tennessee. OPINION The sole issue raised in this appeal is whether the two counts of attempt to commit statutory rape for which Defendant stands convicted should be vacated for lack of unanimity in the jury’s verdict. Defendant alleges that his due process rights were violated by the procedure employed by the trial court in polling the jury after the verdict was announced. A summary of the facts leading to the convictions is not necessary to address the issue raised on appeal. The record shows that following deliberations, the jury returned to the courtroom, and the trial court read the jury’s verdict and sua sponte polled the jury. Defense counsel then requested individual polling because, according to him, “[Juror] Ms. Davis several times did not hold her hand up or only did it at the Court’s prompting.” During the individual polling by the trial court, it became apparent that another juror, Ms. Febo, did not agree, in part, with the verdict. Juror Febo stated as to Count 4 of the indictment, “My vote was different, but I vote guilty.” The trial court sent the jury out and discussed the jury polling with counsel for Defendant and the State. The jury was brought back into the courtroom, and the trial court addressed the jury as follows: First of all, I’m going to start count four individual polling of the jurors over again. 
On an individual poll, each of you have to announce in open court for the record when I call your name whether or not the verdict, which I will read to you, is the verdict – is your verdict, is your individual verdict for each count. I need to hear that individually and independently by each of you. If any of you have confusion about what I’m asking, communicate with the Court. I’ve been – it’s been represented that a verdict has been reached, I have taken the verdict based on the show of hands, but the law allows for individual polling. As of right now, your discussions and deliberations are over with. It is my responsibility to ensure both sides that a unanimous verdict has in fact been reached. I know you all have been under a lot of stress over a span of three different days working on an extremely difficult situation and I know that probably all of you are tired and nerves frazzled and everything, so if that’s not your verdict, I just need to know whatever it is so that I can then address that particular issue. The court then resumed individual polling, and as to Count 4, charging Defendant with attempt to commit sexual battery by an authority figure, juror Davis indicated that she had voted guilty, but denied that guilty was her individual verdict. As to Count 5, charging Defendant with sexual battery, to which the jury had returned a verdict of not guilty, juror Davis again indicated that this was not her individual verdict. As to Count 8, all jurors affirmed the guilty verdict. As to Count 11, charging sexual battery, juror Davis again denied that her individual verdict was guilty. As to Counts 12 and 13, charging Defendant with statutory rape, all jurors affirmed the verdicts of guilty to the lesser-included offense of attempt to commit statutory rape. As to Count 17, charging rape, all of the jurors affirmed the not guilty verdict. During the individual polling, juror Febo affirmed her individual verdict as to all of the above counts.
After the trial court completed the individual polling, however, juror Febo indicated that she disagreed with some of the jury’s verdicts. She stated, “What I wanted to say was, in some of them, I didn’t agree, I didn’t – my vote was not guilty, but because nobody felt the same way that I did, I had to vote guilty. There was no other way.” Specifically, juror Febo told the trial court that as to Count 4, she did not agree with the jury’s guilty verdict. The trial court again sent the jury out and addressed the issue with counsel. Counsel for Defendant made a motion for mistrial, which the trial court denied. When the jury was brought back in, the trial court once again individually polled jurors Febo and Davis only as to Counts 4, 5, 8, 11, 12, 13, and 17, and both jurors affirmed the verdicts of the jury. Analysis Defendant contends, as to the two counts of attempted statutory rape for which he stands convicted, that “he did not receive a fair trial on these counts, nor was the verdict unanimous.” Defendant asserts that he is entitled to a new trial on Counts 12 and 13. The State contends that because Defendant failed to object at trial or raise the issue of the impropriety of the trial court’s polling procedure in his motion for new trial, the issue is waived. The State further asserts that because Defendant failed to provide a transcript or other record of the hearing on Defendant’s motion for new trial, this Court must presume the trial court’s ruling is correct. Generally, where a party fails to include an issue in its motion for new trial, the issue is waived. Tenn. R. App. P. 3(e); see State v. Walker, 910 S.W.2d 381, 386 (Tenn. 1995). Also, it is the duty of the accused to provide a record which conveys a fair, accurate and complete account of what transpired with regard to the issues which form the basis of the appeal. Tenn. R. App. P. 24(b); see State v. Taylor, 992 S.W.2d 941, 944 (Tenn. 1999).
The record before us does not contain a transcript of the hearing on Defendant’s motion for new trial, and the transcript from the reading of the jury verdict and subsequent polling shows that defense counsel failed to object specifically to the polling procedure employed by the trial court. However, defense counsel moved the trial court to declare a mistrial after the court sent the jury out for the second time. Furthermore, in his motion for new trial, Defendant contends that the jury polling procedure employed by the trial court was improper. The motion states as grounds for a new trial, “[o]nce jurors announce the delivered verdict was not theirs, the only two options available to the Court are (1) a mistrial on those counts or (2) instructing the jury to continue its deliberations to a unanimous verdict. This Court did neither.” Tenn. R. App. P. 24(b) permits an appellant to file less than a complete transcript if the determination of the issues it intends to raise will not require the appellate court to review the entire transcript. We conclude that a transcript of the trial court’s hearing on Defendant’s motion for new trial is not necessary to determine the issue raised in this appeal. The record before us does contain a transcript of the trial proceedings as well as Defendant’s motion for new trial and the trial court’s order and amended order. We will, therefore, address this issue on the merits. Tennessee Rule of Criminal Procedure 31(e) provides “After a verdict is returned but before the verdict is recorded, the court shall – on a party’s request or on the court’s own initiative – poll the jurors individually. If the poll indicates that there is not unanimous concurrence in the verdict, the court may discharge the jury or direct the jury to retire for further deliberations.” Following the federal rule for guidance, a panel of this Court addressed the issue of jury polling, as an issue of first impression, in State v. Clayton, 131 S.W.3d 475 (Tenn. 
Crim. App. 2003): An examination of the application of Federal Rule of Criminal Procedure 31(d) reveals that a trial court’s method of polling the jury is subject to an abuse of discretion standard. Additionally, federal courts have also noted that Rule 31(d) “invests that trial judge with a measure of discretion in assessing the impact of a dissenting vote during a jury poll, and the reasonable exercise of this discretion should be accorded proper deference by a reviewing court.” **** Additionally, . . . ., it rests within the trial court’s discretion to determine the manner of polling the jury. . . . Thus, it stands to reason that the trial court’s determination of whether a juror’s answer to the jury poll is equivocal is within the trial court’s discretion. Id. at 478-79 (citations omitted). In that case, the trial court observed one juror’s hesitation “for about five seconds” before her affirmation of the jury verdict. This Court concluded that the trial court did not interpret the juror’s hesitation as her disagreement with the jury verdict. This Court further concluded, “[w]e discern no reason why the trial court, once satisfied with the unanimity of the verdict, should have conducted further inquiries.” Id. at 479. In this case, two members of the jury did more than hesitate. As the trial court observed in its order, “Ms. Febo and Ms. Davis voiced their disagreement with the verdict on several occasions before ultimately agreeing to the verdicts announced.” The court further explained, The court had the opportunity to observe both jurors during the lengthy verdict process. The court observed the comments made towards Ms. Febo by another juror when she dissented from the verdict. The court also noted both jurors were shaking their heads when the original verdict was read and during individual polling. It is also not lost on the court that staff had to intervene with the jury when they were sent out the first time because of angry name-calling.
Finally, the court recalls the general defeated demeanor of both jurors of having surrendered their individual convictions when they ultimately concurred with the announced verdict (heads down, somber voice and suddenly reticent attitude). The court concluded, as to Counts 4, 5, 8, and 11, that both the repetitive polling and limiting the third polling to the two jurors in question could have compromised the integrity of the verdict process and “could have given the unintended effect of the court expressing its dissatisfaction with their dissent and giving them the opportunity to correct their misapprehension with the verdict.” Therefore, the court ruled that the jury’s verdicts as to Counts 4, 5, 8, and 11 were not unanimous and, accordingly, declared a mistrial as to those counts. The court also vacated Defendant’s conviction in Count 3, as it was dependent upon a guilty verdict in Count 4. The trial court granted a new trial as to all of those counts. Defendant contends that because of the lack of unanimity in the jury’s verdicts as to the above counts, his convictions should be vacated in Counts 12 and 13 as well. The State argues that the record does not support a finding that the jury’s verdict as to Counts 12 and 13 was anything less than unanimous. We agree. The record shows that when individually polled, jurors Febo and Davis both affirmed the jury’s verdict of guilty as to these two counts. There is nothing in the record to suggest that any of the jurors exhibited any disagreement with the verdict in Counts 12 and 13. Furthermore, the trial court, in its order, did not find any of the above indicators of juror dissent as to those counts. Accordingly, we conclude that the trial court did not err by declining to grant a new trial as to Counts 12 and 13. CONCLUSION Based on our review of the record, the judgments of the trial court are affirmed. _________________________________ THOMAS T. WOODALL, JUDGE
Ever woken up in the morning and the first thought that comes to your mind is, “I just don’t want to get out of bed….groan….. 😥 ?” Or this, “Arrrgh, I have to face the same old things today…..groan….,” and you hide yourself further under the covers and refuse to budge? Despite the fact that you may be late for work or late in getting your kids to school? 🙄

At this point, it is very hard to try and motivate ourselves because we are already stuck in the rut. So, what I am going to point out here are some simple ways that you can apply while you ARE NOT IN THE RUT YET because they are meant to be preventive measures. For me, it is imperative that I do this because every day, day in and day out, it has been more or less the same type of tasks that I have faced for the last 8 years that I have been a full-time homemaker. Sometimes, the monotony of it all can really get to me…

Now, on the term “homemaker” – I am going to loosely define that as someone (mostly ladies; well, men, too, are included if they are involved in this and I applaud them indeed! 😀 ) who manages and cares for the household, turning it into a cosy home and sanctuary where her family can take refuge. This will include full-time stay-at-home moms like me, working wives, single moms, work-from-home moms and new wives, okay?

Try and adopt some of the ways mentioned below :-

1) when you go to bed at night, you should be in a positive frame of mind, with a good thought. If you fall asleep with the last thought being negative like, “Oh boy, tomorrow is Monday, it’s going to be the blues for me…,” you are going to wake up the next day definitely feeling depressed and you are going to be stuck in bed for sure!
2) similarly, when you wake up, try to have a positive first thought, like, “Today is going to be a wonderful and beautiful day!” Do this even if you are going to be facing an onslaught of heavy workload that day, and I can bet that with this start in the morning, you are in a better condition to face your new day. 😀

3) you can do positive affirmations like, “I am happy, I am healthy, I am prosperous,” in the morning and throughout the day because they will help to “prompt” your mood for the day.

4) remember that it doesn’t pay for you to be in a bad mood once you wake up. Have you noticed how things can go even more wrong just when you are in a bad mood? Especially when you are angry? So, say no to bad moods and crankiness!

5) try to plaster a smile on your face! Now, I said “plaster” because when you are in a lousy mood, there is no way you can even bring yourself to feel like smiling. So, what you have to do is to force your mouth to smile, even if you have to bring up your two forefingers to the edges of your lips and lift them up into a semblance of a smile! 😆 Hahaha! Isn’t this fun? It may feel ridiculous to you doing that, but trust me, it works after a few minutes, cos somehow, your mood is not as down as before and that smile of yours may just turn genuine! Or you will be uplifted by your own laughter, laughing at your silly antics! By the way, keep smiling as much as you can during the day. 😀

6) promise yourself a reward for the end of the day if you have tried your best to have a good day. I have different little rewards just for myself every day…..going to the mamak stall for a nice cup of lime tea (teh O ice limau), or just putting up my tired legs and watching a good movie, or just having some quiet time to myself “vegetating” my brain, i.e. just not doing any thinking.

7) it helps a lot to laugh during the day. I always believe that “Laughter is the BEST medicine” because when we laugh, our brain produces endorphins, a feel-good hormone.
Watch a sit-com, or read some jokes, or better still, tell some jokes to someone and make them laugh instead! 😆

8) our body has auras which can accumulate negative energies during the day and, over a period, the auras can become dull and heavy, thus making us feel lethargic, dull and sluggish. I like to call this condition “being in the Twilight Zone” because that is precisely how I would feel. The remedy is easy and all you need to do is take a salt bath to clean your auras.

There are other ways that I can think of to help energise our life but I will stop here at 8, because I am hungry and I need to go have my dinner now – my brain is screaming for food and refuses to think anymore! 😆 🙄

Late one night, I had a strong craving for cakes and I decided to bake a carrot cake at midnight! 🙄 Well, it was a Saturday night and my husband and kids were all staying up late anyway….and so, this was a real treat for them, a slice of carrot cake for supper! I had some leftover cream cheese in my refrigerator and I decided to use all of it and somehow, I messed up with the measurements for making the cheese topping, and I ended up with a frosting that was a tad too watery…but…. it sure tasted great with the cake anyway. 😆

This recipe is really easy to follow and it only took me 20 minutes to prepare the cake batter from scratch all by myself and without any little assistants (hehe, my kids) helping out in the kitchen. I’m not very good with baking cakes and so, I was very happy with the results here. 😀

First, the oven is preheated to 180 degrees Celsius or 350 degrees Fahrenheit. While the oven is heating, grease and flour an 8-inch or 10-inch round or square cake pan.
Then, prepare the batter as follows –

Wholemeal Carrot Cake With Cream Cheese Frosting

Ingredients –
1 1/2 cups oil
2 cups brown sugar
2 tsps cinnamon powder
4 eggs, beaten lightly
1 tsp vanilla
2 cups wholemeal flour
1 tsp salt
3 cups finely grated carrots
1 tsp baking powder
1 tsp baking soda or soda bicarbonate

Method –
Mix the first five ingredients thoroughly in a large bowl. Then add in the rest of the ingredients one by one until they are all incorporated. Pour into the prepared cake pan and bake in the preheated oven for approximately 50 minutes, or until the top of the cake is golden in colour and a toothpick inserted into the centre of the cake comes out dry (without any batter sticking to it).

While the cake is baking, prepare the Cream Cheese Frosting …

Cream Cheese Frosting –

Ingredients –
1/4 cup unsalted butter, at room temperature
230 gm or 8 oz of soft Philadelphia Cream Cheese, at room temperature
2 cups or 230 gms of icing/Confectioners’ sugar, sifted
zest of 1 lemon
1 tsp vanilla extract

Method –
Beat cream cheese and butter with a hand mixer until smooth and well incorporated. Add in the icing sugar bit by bit, then the lemon zest and vanilla – make sure to mix thoroughly. Frost this onto a completely cooled cake. Bon Appetit! 😀

Note – Because I was using wholemeal flour, the cake’s texture is a bit heavy, but if you prefer a softer cake, use plain flour, and for a “more moist” cake, add in 1/2 cup of well-drained and crushed canned pineapple with the wet ingredients for the cake and bake a few minutes longer. You can also add 1/2 cup of chopped walnuts or raisins, too, if you like. I didn’t put any in this recipe here because my family doesn’t really like them in their cakes.

I was reading The Star newspaper some time back and came across a very interesting article on some tips to help us develop the habit of staying positive. Contrary to what we think, optimism is not an inborn trait.
Optimism, being a state of mind, can be worked on and improved. Optimism is not only essential when striving to realise our goals and ambitions; it also improves our performance. It is powerful and contagious, because when we are optimistic we carry a positive and confident energy that can be felt by the people around us, be it at home or at work. 😀 The picture above was printed in The Saturday Post, Pakistan, and was painted by an artist named Hassan, who drew it to show the value of optimism in life. He said, “People have lots of bad things going on in life, which are shown by the rough patched building. It is a rough building which you can say is totally dead, as you can feel by looking at the texture on the walls. The best part of life is that there is always a ray of hope, and that is shown by the candle. So my concept is that you have to think positively and extract positive elements of life to live beyond the bad things going on in your surroundings. In a nutshell, optimism in life is the key to success for sure!” How true! Sir Winston Churchill said in 1954, “I am an optimist! It does not seem to be much use being anything else.” According to research, optimistic people smile 38 percent more than pessimists, so try to see the bright side of life’s little mishaps. Of course, there are grave occasions where seeing the funny side is clearly inappropriate. Here are some of the recommended tips to be an optimist:

* Say something positive and good every day. It can be anything, like praising your children, your pets, or the lady selling noodles at the coffee shop.
* Be courteous. Always say “please” and “thank you”, for integrity is a highly valued trait that should underpin optimism, as should being trustworthy, kind, respectful and grateful.
* Be realistic and frank about our mistakes. Apologise to people we may have treated poorly.
* When things go wrong, take it on the chin and move on.
Identify and implement the key learning points from the failure, but do not dwell on the situation. Sir Winston Churchill summed up the qualities of the optimistic leader eloquently when he said, “An optimist is someone who sees opportunity in every disaster. A pessimist is someone who sees disaster at every opportunity.” What better way to unwind from a busy schedule, or to destress, than listening to our favourite songs, especially retro songs that bring back fond memories… just let go and tune out whatever is happening out there… relax and enjoy the music. This week, Malaysians got hit by two pieces of bad news – a 41% immediate increase in the price of petrol and an 18% hike in electricity tariffs starting on the 1st of next month. Needless to say, these will trigger a chain reaction that soon makes everything more expensive. In fact, the goods transportation companies are already asking for a 40% to 60% increase in their charges. Just do a simple math calculation and we get a pretty grim picture of things to come. 😯 🙄 As a homemaker who oversees the household, utilities, family and food expenses, this means I will have to be more frugal to adjust to the higher cost of living, while making sure that my family can still have a happy, comfortable and healthy lifestyle. Sigh… I am not happy this week, and neither are most Malaysians… never mind, it’s time for me to switch off these problems just for a short while, relax, listen to some of my favourite retro songs, and things will surely seem brighter tomorrow… 😆 A mountain of pork chops on a bed of fried potatoes, topped with sauteed onions, garnished with sliced cucumbers. Good evening and Happy Friday, dear friends 😀 As promised earlier, here is the recipe for my family’s favourite pork chops, served with fried potatoes and sauteed onions. My children, like most kids, just love potatoes cooked any way, and because of that, I usually serve pork chops in this style.
The pork chop recipe is very easy to follow and requires just a few ingredients, but it is very delicious, and although there is no sauce, the pork chops go very well with white rice, too. I usually make a large batch of these, and the leftovers are eaten with bread as sandwiches or “pork burgers” (with sliced cucumbers and tomatoes) the next day. I first had a taste of these pork chops 30 years ago at my then boyfriend’s (now my husband 😀 ) house when his mother cooked them for dinner. Hence, this is another yummy recipe learned from my late mother-in-law.

Easy Yummy Chinese Pork Chops

Ingredients –
1 kg boneless pork chops or pork tenderloin
1 kg potatoes
1/2 kg big onions

Marinade, to be mixed together in a large bowl –
6 tbsps soy sauce
4 tbsps oyster sauce
3 tbsps dark or thick soy sauce
1 tbsp sesame oil
3 tbsps sugar
3 tbsps rice wine, or any wine
1 tbsp salt
1 tsp pepper
4 tbsps cornstarch

Method –
1) Slice the pork into 1 cm thickness and use a mallet to tenderise both sides of the slices.
2) Put the pork slices into the marinade and rub them together thoroughly with your fingers. Set them aside (the longer the better).
3) Cook the potatoes first – clean, peel and cut them into slices or wedges, then deep fry until golden brown. Lay the cooked potatoes on the bottom of a large platter.
4) In a non-stick pan (recommended because the pork chops caramelise, which may stick to the bottom of a normal wok/pan and burn, not to mention make washing up harder), heat up some oil, enough to cover the bottom.
5) Fry the pork chops over medium-high heat for 3 minutes on the first side and 2 minutes on the other side. ( *** Warning – there is likely to be some oil spatter; use a pair of long chopsticks or tongs, and cover your pan/wok while cooking to minimise spatter around your stove 💡 )
6) Set the cooked pork chops on top of the potatoes.
7) Next, saute the sliced big onions in the leftover oil from cooking the pork chops until they are soft and slightly golden brown. Adding a tsp of salt will enhance the taste and caramelisation of the onions.
8) Top the pork chops with the sauteed onions and garnish with some sliced cucumbers. The cucumbers act as a cooling food to balance the heatiness of the fried pork chops and onions.
namespace egret3d {
    function _filterEmptyLine(string: string) {
        return string !== "";
    }
    /**
     * Global render state component.
     */
    @paper.singleton
    export class RenderState extends paper.BaseComponent {
        /**
         * @internal
         */
        public readonly onGammaInputChanged: signals.Signal = new signals.Signal();

        public version: string;
        public standardDerivativesEnabled: boolean;
        public textureFloatEnabled: boolean;
        public fragDepthEnabled: boolean;
        public textureFilterAnisotropic: EXT_texture_filter_anisotropic | null;
        public shaderTextureLOD: any;

        public maxTextures: uint;
        public maxVertexTextures: uint;
        public maxTextureSize: uint;
        public maxCubemapSize: uint;
        public maxRenderBufferize: uint;
        public maxVertexUniformVectors: uint;
        public maxAnisotropy: uint;
        public maxBoneCount: uint = 24;
        public maxPrecision: string = "";

        public commonExtensions: string = "";
        public vertexExtensions: string = "";
        public fragmentExtensions: string = "";
        public commonDefines: string = "";
        public vertexDefines: string = "";
        public fragmentDefines: string = "";

        public readonly clearColor: Color = Color.create();
        public readonly viewport: Rectangle = Rectangle.create();
        public readonly defines: Defines = new Defines();
        public readonly defaultCustomShaderChunks: Readonly<{ [key: string]: string }> = {
            custom_vertex: "",
            custom_begin_vertex: "",
            custom_end_vertex: "",
            custom_fragment: "",
            custom_begin_fragment: "",
            custom_end_fragment: "",
        };
        /**
         *
         */
        public readonly caches = {
            useLightMap: false,
            castShadows: false,
            receiveShadows: false,
            cullingMask: paper.Layer.Nothing,
            attributeCount: 0,
            boneCount: 0,
            egret2DOrderCount: 0,
            clockBuffer: new Float32Array(4),
            skyBoxTexture: null as (BaseTexture | null),
        };
        public renderTarget: RenderTexture | null = null;
        public customShaderChunks: { [key: string]: string } | null = null;
        /**
         *
         */
        public render: (camera: Camera, material?: Material, renderTarget?: RenderTexture) => void = null!;
        /**
         *
         */
        // Developers generally do not call this manually; it is usually invoked during post-process rendering.
        public draw: (drawCall: DrawCall, material?: Material | null) => void = null!;

        private _logarithmicDepthBuffer: boolean = false;
        private _gammaInput: boolean = true;
        private _gammaOutput: boolean = true;
        private _gammaFactor: float = 1.0;
        private _toneMapping: ToneMapping = ToneMapping.None;
        // TODO move to caches
        protected readonly _stateEnables: ReadonlyArray<gltf.EnableState> = [gltf.EnableState.Blend, gltf.EnableState.CullFace, gltf.EnableState.DepthTest];
        protected readonly _cacheStateEnable: { [key: string]: boolean | undefined } = {};

        protected _getCommonExtensions() {
            let extensions = ""; // fragmentExtensions.

            if (this.standardDerivativesEnabled) {
                extensions += "#extension GL_OES_standard_derivatives : enable \n";
            }

            if (this.fragDepthEnabled) {
                extensions += "#extension GL_EXT_frag_depth : enable \n";
            }

            // Note: float textures (OES_texture_float) are enabled at the WebGL API level
            // and need no #extension directive in GLSL ES, so textureFloatEnabled
            // contributes nothing here.

            this.fragmentExtensions = extensions;
        }

        protected _getCommonDefines() {
            let defines = ""; // commonDefines.
            defines += "precision " + this.maxPrecision + " float; \n";
            defines += "precision " + this.maxPrecision + " int; \n";
            this.commonDefines = defines;

            defines = ""; // vertexDefines
            this.vertexDefines = defines;

            defines = ""; // fragmentDefines
            defines += ShaderChunk.encodings_pars_fragment + " \n";
            this.fragmentDefines = defines;
        }

        protected _getEncodingComponents(encoding: TextureEncoding) {
            switch (encoding) {
                case TextureEncoding.LinearEncoding:
                    return ['Linear', '( value )'];
                case TextureEncoding.sRGBEncoding:
                    return ['sRGB', '( value )'];
                case TextureEncoding.RGBEEncoding:
                    return ['RGBE', '( value )'];
                case TextureEncoding.RGBM7Encoding:
                    return ['RGBM', '( value, 7.0 )'];
                case TextureEncoding.RGBM16Encoding:
                    return ['RGBM', '( value, 16.0 )'];
                case TextureEncoding.RGBDEncoding:
                    return ['RGBD', '( value, 256.0 )'];
                case TextureEncoding.GammaEncoding:
                    return ['Gamma', '( value, float( GAMMA_FACTOR ) )'];
                default:
                    throw new Error('unsupported encoding: ' + encoding);
            }
        }

        protected _getToneMappingFunction(toneMapping: ToneMapping) {
            let toneMappingName = "";

            switch (toneMapping) {
                case ToneMapping.LinearToneMapping:
                    toneMappingName = 'Linear';
                    break;
                case ToneMapping.ReinhardToneMapping:
                    toneMappingName = 'Reinhard';
                    break;
                case ToneMapping.Uncharted2ToneMapping:
                    toneMappingName = 'Uncharted2';
                    break;
                case ToneMapping.CineonToneMapping:
                    toneMappingName = 'OptimizedCineon';
                    break;
                default:
                    throw new Error('Unsupported toneMapping: ' + toneMapping);
            }

            return `vec3 toneMapping( vec3 color ) { return ${toneMappingName}ToneMapping( color ); } \n`;
        }

        protected _getTexelEncodingFunction(functionName: string, encoding: TextureEncoding) {
            const components = this._getEncodingComponents(encoding);
            return 'vec4 ' + functionName + '( vec4 value ) { return LinearTo' + components[0] + components[1] + '; }';
        }

        protected _getTexelDecodingFunction(functionName: string, encoding: TextureEncoding) {
            const finalEncoding = (this._gammaInput && encoding === TextureEncoding.LinearEncoding) ? TextureEncoding.GammaEncoding : encoding;
            const components = this._getEncodingComponents(finalEncoding);
            return 'vec4 ' + functionName + '( vec4 value ) { return ' + components[0] + 'ToLinear' + components[1] + '; }';
        }
        /**
         * @internal
         */
        public _updateDrawDefines(renderer: paper.BaseRenderer | null) {
            let useLightMap = false;
            let receiveShadows = false;
            let boneCount = 0;
            const defines = this.defines;
            const caches = this.caches;

            if (renderer) {
                useLightMap = renderer.constructor === MeshRenderer && (renderer as MeshRenderer).lightmapIndex >= 0;
                receiveShadows = caches.castShadows && renderer.receiveShadows;
                boneCount = renderer.constructor === SkinnedMeshRenderer ? Math.min(this.maxBoneCount, (renderer as SkinnedMeshRenderer).boneCount) : 0;
            }

            if (caches.useLightMap !== useLightMap) {
                if (useLightMap) {
                    defines.addDefine(ShaderDefine.USE_LIGHTMAP);
                }
                else {
                    defines.removeDefine(ShaderDefine.USE_LIGHTMAP);
                }

                caches.useLightMap = useLightMap;
            }

            if (caches.boneCount !== boneCount) {
                if (boneCount > 0) {
                    defines.addDefine(ShaderDefine.USE_SKINNING);

                    if (this.textureFloatEnabled) {
                        defines.addDefine(ShaderDefine.BONE_TEXTURE);
                    }
                    else {
                        defines.addDefine(ShaderDefine.MAX_BONES, boneCount);
                    }
                }
                else {
                    defines.removeDefine(ShaderDefine.USE_SKINNING);

                    if (this.textureFloatEnabled) {
                        defines.addDefine(ShaderDefine.BONE_TEXTURE);
                    }
                    else {
                        defines.removeDefine(ShaderDefine.MAX_BONES);
                    }
                }

                caches.boneCount = boneCount;
            }

            if (caches.receiveShadows !== receiveShadows) {
                if (receiveShadows) {
                    defines.addDefine(ShaderDefine.USE_SHADOWMAP);
                    defines.addDefine(ShaderDefine.SHADOWMAP_TYPE_PCF);
                }
                else {
                    defines.removeDefine(ShaderDefine.USE_SHADOWMAP);
                    defines.removeDefine(ShaderDefine.SHADOWMAP_TYPE_PCF);
                }

                caches.receiveShadows = receiveShadows;
            }
        }
        /**
         * @internal
         */
        public _updateTextureDefines(mapName: string, texture: BaseTexture | null, defines: Defines | null = null) {
            defines = defines || this.defines;
            //
            const mapNameDefine = (egret3d as any).ShaderTextureDefine[mapName]; // TODO

            if (mapNameDefine) {
                if (texture) {
                    defines.addDefine(mapNameDefine);

                    if (texture instanceof RenderTexture) {
                        defines.addDefine(ShaderDefine.FLIP_V);
                    }
                    else {
                        defines.removeDefine(ShaderDefine.FLIP_V);
                    }
                }
                else {
                    defines.removeDefine(mapNameDefine);
                    defines.removeDefine(ShaderDefine.FLIP_V);
                }
            }
            //
            const decodingFunName = (egret3d as any).TextureDecodingFunction[mapName]; // TODO

            if (decodingFunName) {
                if (texture) {
                    const decodingCode = this._getTexelDecodingFunction(decodingFunName, texture.gltfTexture.extensions.paper.encoding || TextureEncoding.LinearEncoding);
                    const define = defines.addDefine(decodingFunName, decodingCode, ShaderDefineOrder.DecodingFun);

                    if (define) {
                        define.isCode = true;
                        define.type = DefineLocation.Fragment;
                    }
                }
                else {
                    defines.removeDefine(decodingFunName, true);
                }
            }
            //
            if (mapName === ShaderUniformName.EnvMap) {
                const nameA = "envMapA";
                const nameB = "envMapB";

                if (texture) {
                    const { mapping } = texture.gltfTexture.extensions.paper;
                    let typeDefine = ShaderDefine.ENVMAP_TYPE_CUBE;
                    const blendDefine = ShaderDefine.ENVMAP_BLENDING_MULTIPLY; // TODO
                    let define: Define | null;

                    switch (mapping) {
                        case TextureUVMapping.Cube:
                        default:
                            typeDefine = ShaderDefine.ENVMAP_TYPE_CUBE;
                            break;
                        case TextureUVMapping.CubeUV:
                            typeDefine = ShaderDefine.ENVMAP_TYPE_CUBE_UV;
                            break;
                        case TextureUVMapping.Equirectangular:
                            typeDefine = ShaderDefine.ENVMAP_TYPE_EQUIREC;
                            break;
                        case TextureUVMapping.Spherical:
                            typeDefine = ShaderDefine.ENVMAP_TYPE_SPHERE;
                            break;
                    }

                    define = defines.addDefine(nameA, typeDefine);
                    if (define) {
                        define.type = DefineLocation.Fragment;
                    }

                    define = defines.addDefine(nameB, blendDefine);
                    if (define) {
                        define.type = DefineLocation.Fragment;
                    }
                }
                else {
                    defines.removeDefine(nameA, true);
                    defines.removeDefine(nameB, true);
                }
            }
        }
        /**
         * @internal
         */
        public getPrefixVertex(defines: string) {
            const prefixContext = [
                this.commonExtensions,
                this.vertexExtensions,
                this.commonDefines,
                this.vertexDefines,
                defines,
                ShaderChunk.common_vert_def,
                "\n"
            ].filter(_filterEmptyLine).join("\n");

            return prefixContext;
        }
        /**
         * @internal
         */
        public getPrefixFragment(defines: string) {
            const prefixContext = [
                this.commonExtensions,
                this.fragmentExtensions,
                this.commonDefines,
                this.fragmentDefines,
                defines,
                ShaderChunk.common_frag_def,
                "\n"
            ].filter(_filterEmptyLine).join("\n");

            return prefixContext;
        }

        public initialize() {
            super.initialize();

            (renderState as RenderState) = this;
            //
            const options = paper.Application.options as egret3d.RunOptions;
            this.toneMapping = ToneMapping.LinearToneMapping;
            this.gammaFactor = 2.0;
            this.gammaInput = options.gammaInput !== undefined ? options.gammaInput : false;
            this.gammaOutput = false;
        }
        /**
         *
         */
        public updateRenderTarget(renderTarget: RenderTexture | null): void { }
        /**
         *
         */
        public updateViewport(viewport: Rectangle) { }
        /**
         *
         */
        public clearBuffer(bufferBit: gltf.BufferMask, clearColor?: Readonly<IColor>): void { }
        /**
         *
         */
        public copyFramebufferToTexture(screenPosition: Vector2, target: BaseTexture, level: uint = 0): void { }
        /**
         *
         */
        public clearState() {
            for (const key in this._cacheStateEnable) {
                delete this._cacheStateEnable[key];
            }

            this.renderTarget = null;
        }
        /**
         *
         */
        @paper.editor.property(paper.editor.EditType.CHECKBOX)
        public get logarithmicDepthBuffer(): boolean {
            return this._logarithmicDepthBuffer;
        }
        public set logarithmicDepthBuffer(value: boolean) {
            if (this._logarithmicDepthBuffer === value) {
                return;
            }

            const { defines, fragDepthEnabled } = this;

            if (value) {
                defines.addDefine(ShaderDefine.USE_LOGDEPTHBUF);

                if (fragDepthEnabled) {
                    defines.addDefine(ShaderDefine.USE_LOGDEPTHBUF_EXT);
                }
                else {
                    defines.removeDefine(ShaderDefine.USE_LOGDEPTHBUF_EXT);
                }
            }
            else {
                defines.removeDefine(ShaderDefine.USE_LOGDEPTHBUF);
                defines.removeDefine(ShaderDefine.USE_LOGDEPTHBUF_EXT);
            }

            this._logarithmicDepthBuffer = value; // Cache the new state so the getter stays in sync.
        }
        /**
         *
         */
        @paper.editor.property(paper.editor.EditType.CHECKBOX)
        public get gammaInput(): boolean {
            return this._gammaInput;
        }
        public set gammaInput(value: boolean) {
            if (this._gammaInput === value) {
                return;
            }

            this._gammaInput = value;
            this._updateTextureDefines(ShaderUniformName.EnvMap, this.caches.skyBoxTexture);
            this.onGammaInputChanged.dispatch();
        }
        /**
         *
         */
        @paper.editor.property(paper.editor.EditType.CHECKBOX)
        public get gammaOutput(): boolean {
            return this._gammaOutput;
        }
        public set gammaOutput(value: boolean) {
            if (this._gammaOutput === value) {
                return;
            }

            const define = this.defines.addDefine("Gamma", this._getTexelEncodingFunction("linearToOutputTexel", value ? TextureEncoding.GammaEncoding : TextureEncoding.LinearEncoding), ShaderDefineOrder.EncodingFun);
            if (define) {
                define.isCode = true;
                define.type = DefineLocation.Fragment;
            }

            this._gammaOutput = value;
        }
        /**
         *
         */
        @paper.editor.property(paper.editor.EditType.FLOAT, { step: 0.1 })
        public get gammaFactor(): float {
            return this._gammaFactor;
        }
        public set gammaFactor(value: float) {
            if (value !== value || value < 1.0) { // NaN check, then clamp.
                value = 1.0;
            }

            if (this._gammaFactor === value) {
                return;
            }

            const define = this.defines.addDefine(ShaderDefine.GAMMA_FACTOR, value, ShaderDefineOrder.GammaFactor);
            if (define) {
                define.type = DefineLocation.Fragment;
            }

            this._gammaFactor = value;
        }
        /**
         *
         */
        @paper.editor.property(paper.editor.EditType.LIST, { listItems: paper.editor.getItemsFromEnum((egret3d as any).ToneMapping) }) // TODO
        public get toneMapping(): ToneMapping {
            return this._toneMapping;
        }
        public set toneMapping(value: ToneMapping) {
            if (this._toneMapping === value) {
                return;
            }

            const defineName = "ToneMapping";
            const { defines } = this;

            if (value === ToneMapping.None) {
                defines.removeDefine(ShaderDefine.TONE_MAPPING);
                defines.removeDefine(ShaderChunk.tonemapping_pars_fragment);
                defines.removeDefine(defineName);
            }
            else {
                let define = defines.addDefine(ShaderDefine.TONE_MAPPING);
                if (define) {
                    define.type = DefineLocation.Fragment;
                }

                define = defines.addDefine(ShaderChunk.tonemapping_pars_fragment);
                if (define) {
                    define.isCode = true;
                    define.type = DefineLocation.Fragment;
                }

                define = defines.addDefine(defineName, this._getToneMappingFunction(value));
                if (define) {
                    define.isCode = true;
                    define.type = DefineLocation.Fragment;
                }
            }

            this._toneMapping = value;
        }
        /**
         *
         */
        @paper.editor.property(paper.editor.EditType.CHECKBOX)
        public premultipliedAlpha: boolean = false;
        /**
         *
         */
        @paper.editor.property(paper.editor.EditType.FLOAT, { minimum: 0.0, maximum: 10.0 })
        public toneMappingExposure: float = 1.0;
        /**
         *
         */
        @paper.editor.property(paper.editor.EditType.FLOAT, { minimum: 0.0, maximum: 10.0 })
        public toneMappingWhitePoint: float = 1.0;
    }
    /**
     * Global render state component instance.
     */
    export const renderState: RenderState = null!;
}
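The class above is easiest to understand through its define-toggling pattern: each property setter (`toneMapping`, `gammaOutput`, and so on) adds or removes entries in a `Defines` collection, and `getPrefixVertex`/`getPrefixFragment` later concatenate those entries into the GLSL prefix string. The sketch below is a minimal, self-contained illustration of that pattern only; `MiniDefines` and `MiniRenderState` are invented stand-ins, not the real egret3d `Defines` or `RenderState` API.

```typescript
// Simplified sketch (not the real egret3d API) of the define-toggling pattern:
// setters flip #define entries, and the shader prefix is rebuilt from whatever
// is currently enabled.

class MiniDefines {
    // Map keyed by define name; "" means a value-less #define.
    private readonly _defines = new Map<string, string>();

    public addDefine(name: string, value?: string | number): void {
        this._defines.set(name, value === undefined ? "" : String(value));
    }

    public removeDefine(name: string): void {
        this._defines.delete(name);
    }

    // Emit one "#define" line per entry, like the prefix RenderState builds.
    public toPrefix(): string {
        const lines: string[] = [];
        this._defines.forEach((value, name) => {
            lines.push(value === "" ? `#define ${name}` : `#define ${name} ${value}`);
        });
        return lines.join("\n");
    }
}

class MiniRenderState {
    public readonly defines = new MiniDefines();
    private _toneMapping = "None";

    // Analogous to RenderState's toneMapping setter: None removes the define,
    // anything else adds it.
    public set toneMapping(value: string) {
        if (this._toneMapping === value) {
            return;
        }
        if (value === "None") {
            this.defines.removeDefine("TONE_MAPPING");
        } else {
            this.defines.addDefine("TONE_MAPPING");
        }
        this._toneMapping = value;
    }

    // Analogous to the bone handling in _updateDrawDefines (non-texture path).
    public setBoneCount(boneCount: number): void {
        if (boneCount > 0) {
            this.defines.addDefine("USE_SKINNING");
            this.defines.addDefine("MAX_BONES", boneCount);
        } else {
            this.defines.removeDefine("USE_SKINNING");
            this.defines.removeDefine("MAX_BONES");
        }
    }
}

const state = new MiniRenderState();
state.toneMapping = "Linear";
state.setBoneCount(24);
console.log(state.defines.toPrefix());
// #define TONE_MAPPING
// #define USE_SKINNING
// #define MAX_BONES 24
```

Keeping the defines in a map keyed by name makes the toggling idempotent: setting the same tone mapping twice, or re-sending the same bone count, leaves the generated prefix unchanged, which matters when shader programs are cached against that prefix string.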
Friday, October 03, 2008 Strange timing for Commissioner's departure IT could have seemed to a passing overseas visitor as though political protest in Britain had achieved a fantastically speedy result. For six days, entering the Oval cricket ground, which is the incongruous setting for the Jean Charles de Menezes inquest, one was likely to encounter a couple with a placard saying Sir Ian Blair must Go! And on the seventh day, it was the news vendors' placards which announced that Sir Ian, Commissioner of the Metropolitan Police, had gone. Not as a result of the protests of course, nor of the inquest, which is expected to have another three months to run. Tory London mayor Boris Johnson, having become chair of the Metropolitan Police Authority, had made clear he wanted Sir Ian out, and though it is, properly speaking, the Home Secretary's prerogative to tell Britain's top police chief to go, on the day Boris Johnson took his chair, the Commissioner announced his resignation. He has until December 1 to clear his desk, and his deputy will temporarily take over. The news came through on the day after Detective Chief Inspector Jon Boutcher, a senior Anti-Terrorism squad officer in the control room at New Scotland Yard on July 22, 2005, the day Jean Charles de Menezes was killed, had given evidence. Answering questions from Michael Mansfield QC, acting for the de Menezes family, on how Jean Charles, wrongly identified as a terror suspect, was pursued into Stockwell underground station by firearms officers and shot dead, DCI Boutcher said the same thing could happen again. One question hanging over this affair from the start has been why police made no attempt to detain the subject near his home, which was under surveillance, and why - particularly if they believed he might be a suicide bomber - he was allowed instead to travel on public transport, and go down to the tube.
Another concerns the false claims that Jean Charles had behaved suspiciously, and the Met's initial unwillingness to even admit they had killed the wrong man. Responding to questions from Michael Mansfield, DCI Boutcher claimed to have been told on the morning of the operation that surveillance officers had positively identified a "suspect" leaving flats in Scotia Road, Tulse Hill, although in fact the officer, "Frank", had only said the person leaving was worth a second look. Police records from the time show he was referred to only as an "unidentified male". In fact the failed suicide bomber Hussein Osman, whom the police were supposedly after, was nowhere near Scotia Road, or Stockwell. He had gone to Brighton, later taking the Eurostar to Paris, and was eventually detained in Rome. DCI Boutcher denied having known exactly where police units were during the operation. He also claimed not to have been aware that Jean Charles' wallet, containing his identification, was found on the seat next to where he was killed. With the TV news telling of Sir Ian Blair's resignation, we saw a clip of Sir Ian announcing at a press conference that day in 2005 that the police had killed a known terror suspect. It was as if the police had decided to kill a "terrorist", and then, having killed an innocent man, they decided that he would have to do. People have been saying for some time that Blair, like his namesake in Downing Street at the time, was a B-liar. That might have been good cause for the Commissioner to resign, despite the loyal backing he received from people in the Labour Party, and notably the then Mayor, Ken Livingstone. But why now? Was his resignation anything to do with the killing of Jean Charles de Menezes, or has that provided a misleading diversion from other causes?
Jean Charles' relatives have issued the following statement:

"The Menezes family is shocked by the news of Sir Ian Blair's resignation, as it comes in the middle of the inquest into Jean's death. As head of the Metropolitan Police, Sir Ian Blair should have been ultimately accountable for the death of Jean Charles de Menezes. We believe he certainly bears responsibility for the lies told about Jean and the cover up by the police in the aftermath of the shooting. He even tried to stop the IPCC investigating our cousin's death. The lack of accountability of the country's most senior police officer is one of the most shocking aspects of this tragedy. For Sir Ian Blair to state that he has resigned 'not because of any failures or pressures of the office' therefore reinforces our belief that he and the Metropolitan Police still refuse to accept full responsibility for Jean's death. For the family, Sir Ian Blair resigning does not change anything. Our focus is on the inquest where we hope we can find out the whole truth about Jean's killing. We await the verdict and findings and hope it will bring us closer to justice and for steps to be taken to ensure that no other family has to suffer the anguish we have over the last three years."

* Reactions to Blair's resignation show an unprecedented tangle of politics surrounding the Metropolitan Police command. Brian Paddick, a former deputy assistant commissioner who stood as Lib Dem candidate for mayor of London, said that Home Secretary Jacqui Smith was also instrumental in Blair's departure. He told the BBC "On the day the mayor becomes the chair of the Metropolitan Police Authority he says boo and the commissioner jumps. "Not only that, it is actually only the Home Secretary that could force the commissioner to leave and therefore the Home Secretary could have turned round and said to Ian Blair and to the mayor: 'I'm sorry, you don't have the power, mayor, to do that. I want the commissioner to stay.'
But she didn't; she allowed the commissioner to go." Paddick made his name as the most senior openly gay police officer, and for promoting a liberal policy on drug use - particularly cannabis - along with his efforts to gain trust from young black people when he was commander responsible for policing in Lambeth. Ken Livingstone claimed that there had been a vendetta against Blair from the first day of his commissionership. "The decisive voices were not those who criticised him from the left but those who want an end to what they call 'politically correct' - that is, non-racist – policing in London." (http://www.guardian.co.uk/uk/2008/oct/03/blair.london) The Blair resignation follows rows in the police force itself over accusations by leading black and Asian officers that racialism is still endemic among white colleagues. Meanwhile, back at the Oval, jury and observers were still mulling over responses by Detective Chief Inspector Boutcher to questions as to what went wrong in the operation which resulted in the death of Jean Charles de Menezes. "I am not sure anything did actually go wrong", replied DCI Boutcher. Asked then whether there was a real risk that it could happen again, the officer replied: "There is, Sir, yes."

This is a query re TV news footage of Blair on the afternoon of 22 July 2005, after the Stockwell Menezes events of earlier that day. My memory of Blair shown on TV that evening - given that he claims not to have known who had been killed etc. - is that he commented "any death is unfortunate". I am not seeking to put him in a sympathetic light; I am curious that he made this comment given that he claims not to have known what had actually gone down and that an innocent member of the public had been blown to bits. My memory is also that, following on from his "any death is unfortunate" remark, he was logically put under pressure to disclose who had been killed.
It was then that he said it was all part of an ever-expanding anti-terrorist operation etc. - he sought to deflect the obvious question - well, who has been shot? - to spin away from the actual identity of the victim. In Radio 4's The Media Show, which looked at Blair, he was criticised by the head of the Crime Reporters Association for spinning to the media and basically deceiving at that 22 July 2005 press conference - this is the Head of the CRA making the straight accusation. But I have seen no one to date take up his 22 July 2005 comment that "any death is unfortunate", made at a time when he claims not to have known what had gone down. Police do not do sympathy plaudits when they engage in armed shoot-outs - newspaper libraries have plenty of reports that they propagandise such shoot-outs to serve as a warning to others.
It happened! New York Gov. Andrew Cuomo signed into law a requirement that nurses earn a BSN within 10 years of initial licensure. This new law has many implications for RNs in New York as well as across the country. Why is New York so important? There are 297,331 RNs with a license in New York. That is 8% of all RNs in the U.S. This one state will set a precedent for others attempting to pass similar legislation in their own states. The push for BSN-prepared RNs has been around for a very long time! The American Nurses Association House of Delegates adopted a motion in 1964 supporting baccalaureate education as the educational foundation for the registered nurse, and reconfirmed that position in 2000. The Institute of Medicine's Future of Nursing report calls for 80% of RNs to hold bachelor's degrees by 2020, noting the need for higher education among RNs to care for the higher-complexity patients in our healthcare system. North Dakota did require a BSN until 2003, when the requirement was overturned; as a small state, and the only one requiring a bachelor's degree, it saw the nursing shortage work against the mandate. However, now that New York has passed this into law, and with the support and work of the American Nurses Association, all the state action coalitions, AARP and the Robert Wood Johnson Foundation, this is not going away. This was the tipping point. Without getting into the research on why a BSN should be required, the legislation in New York noted several reasons. Supporting literature noted that because of the increasing complexity of the American healthcare system and rapidly expanding technology, the educational preparation of the RN must be expanded. It also stated that the nurse of the future must be prepared to partner with multiple disciplines as collaborator and manager of complex patients. If you stop and think about it, the RN is usually the least educated discipline on a multidisciplinary team.
PTs, OTs, STs, PharmDs and social workers are all required to have bachelor's, master's or doctoral degrees. Despite being the least educationally prepared, the RN often has one of the most important roles on the healthcare team. So, what does the bill say exactly? The bill, AO1842-B/SO 6768, has two main parts. First, it creates a temporary nursing program evaluation commission to make recommendations on barriers to entry into nursing, availability of and access to baccalaureate programs, and other related issues. This report and its findings are due to the governor within 12 months. The second part states that, "in order to continue to maintain registration as a registered professional nurse in New York state," a nurse must "have attained a baccalaureate degree or higher in nursing within 10 years of initial licensure." This specific section takes effect 18 months after the act became law (Dec. 19, 2017). Current RNs, as well as those enrolled in or pending acceptance into a program preparing registered nurses on the effective date of this act (Dec. 19, 2017), are grandfathered in. This means the provisions of the law shall not apply to them. No doubt nurses have many unanswered questions, and many individuals feel the law doesn't or won't apply to them. I am sure the board for Registered Professional Nurses in New York will be publishing further clarification on this. What does this law mean for you? Licensure in New York: If you hold a license in New York, even if you are not working there, you are grandfathered in. However, if you later enter the profession as an RN and want to be a traveler or hold a license in New York, you will fall under this requirement: you must obtain a bachelor's degree or higher in nursing (the law states nursing) within 10 years of your initial licensure. For example: You are accepted into an ADN program for this next fall in Texas.
You take your initial licensure exam in Texas to become an RN. For certain reasons, you decide to move to New York. Based on the current list of those exempt from BSN in 10, you are not grandfathered in. This means that your clock to get a BSN or higher started with your initial license. Legislation in your state: Many states have considered this legislation and were watching New York with great anticipation. With this bill’s passage into law in New York, you will see many more states move toward proposing legislation over the next few years. New Jersey also has pending legislation. Educational choices: There are many current options for matriculation in nursing, with different state partnerships among diploma, associate and bachelor’s programs. In preparation for this law to go into effect, there will be more options, not only in New York but across the country. There are many RN-to-BSN programs online as well, so I would expect to see these programs expand. Grandfathered: Even if you are grandfathered in as a current RN, you still may want to consider earning your BSN. As more hospitals look to hire BSN-prepared RNs, and as legislation requires a BSN or higher, you may want to consider going back to school. Increasing your education always will give you more options. This bill did not happen overnight. It took more than 14 years of shepherding. I expect we will see the next state add this requirement within the next few years. It is tough work passing legislation. Many colleagues, ANA-New York lobbyists and bill sponsors worked especially hard over the last year to see the successful passage of this bill. “The passage of this bill into law reflects years of working toward a true collaboration of direct-care nurses, associate and baccalaureate faculty, nurse managers and administrators, healthcare facilities and professional associations and consumer advocates,” said Karen Ballard, MA, RN, FAAN, past executive director of ANA New York. 
“In the end, it is a win for all RNs and our patients!” Courses related to ‘earning a bachelor’s degree in nursing’ WEB309: RN to BSN: Aligning Your Personality Characteristics with Your Career Goals (1 contact hr) With the recommendation that 80% of nurses hold a bachelor’s degree by 2020, many RNs may be considering advancing their education. Have you considered what areas within nursing you might like to explore? Might certain personality characteristics help you enjoy some nursing specialties more than others? Is your dream to work in management, administration, education or research? Is your desire to avoid specific job duties such as management? Try to align your strengths and personality characteristics with a nursing role you might enjoy! Perhaps there is an area of nursing you haven’t considered as a possibility for you. As you decide to further your education, an analysis of research and individual personality characteristics may help you align your goals with the nursing areas you might enjoy the most. WEB299: Progressing to School Successfully: Is Now the Time for a BSN? (1 contact hr) Technology changes. Healthcare changes. And nursing is changing. Advance your career by progressing to school successfully! With the 2020 goal of 80% of nurses holding a bachelor’s degree, what is the current distribution of degrees within nursing? What information do you need to consider to help you pursue your BSN and become a part of the 80%? Become informed and motivated with this webinar. CE171-60: Earning Degrees By Distance Education (1 contact hr) Advancing in the nursing profession, and in some cases even maintaining a current position, may require a return to academic education. Returning to school can be daunting for adult learners. Balancing work, family, and traditional classes feels like an impossible burden. These factors make distance education a viable, desirable, and often the only alternative. 
This module will provide nurses with information about obtaining academic credentials through distance education.
56*h - 56*h + 32*h**2 in the form z*h**2 + t + u*h and give z. 126 Express -u - 68 + 1032317962*u**2 + 2*u**4 - 1032317962*u**2 in the form z*u**4 + k*u**2 + v + b*u**3 + p*u and give v. -68 Express (-1642*y - 3390*y + 7803*y)*(0*y + 2*y + 0*y) - 3*y**2 - 2*y**2 + 6*y**2 as a*y**2 + i + t*y and give a. 5543 Express (-2*s - s + 4*s)*(3 + 2 - 3)*(-107 - 96 + 26)*(s + 0*s - 5*s) as h*s + t*s**2 + a and give t. 1416 Express -2*i**3 + 150186*i - 75001*i + 27 - 75109*i + 2*i**2 as c + w*i + b*i**2 + r*i**3 and give b. 2 Express (-90*h + 34*h - 45*h)*(8 - 59 + 9) as p + m*h and give p. 0 Express (31 - 6*j**3 - 31)*(-2*j - 78*j - 46*j)*(-1 + 2 - 4) in the form v*j**4 + o*j**2 + z + f*j + y*j**3 and give v. -2268 Express (-2*i - i + 5*i)*(108 - 48 + 64 + (0 + 3 - 1)*(3 - 2 - 3))*((0 - i + 0)*(1 + 6 - 2) + 6*i - 2*i - i) in the form c*i + l + q*i**2 and give q. -480 Rearrange 374 - 374 - 1110*x + 3244*x to the form q + b*x and give b. 2134 Express (1 - 5 + 2)*(-3*w**3 + 218779 - 218781 + 630*w**3) as j*w + c + z*w**2 + v*w**3 and give v. -1254 Express (-33*d + 5*d**2 + 33*d)*((0 + 0 + 1)*(-2*d - 1 + 1) + (-1 - 2 - 5)*(-3 + 3 + 11*d)) as x*d**3 + b*d + u*d**2 + j and give x. -450 Rearrange (2 - 4447*k**2 - 84*k - 184*k + 4444*k**2)*(16 - 16 - 6*k)*(-3 + 3 + 2*k) to i*k**2 + d*k**3 + a*k + t*k**4 + n and give t. 36 Express (10715*g**2 - 5576*g**2 - 14782*g**2 - 16179*g**2)*(-1 + 1 - 3) in the form c + h*g + w*g**2 and give h. 0 Express (7*w**2 - 5*w + 5*w)*(-4434*w + 1 + 11645*w - 4726*w) as y*w**3 + a + h*w**2 + r*w and give a. 0 Express 3 + 218*b - 651*b - 3 + 332*b in the form v + c*b and give c. -101 Express (0 - 1 + 3)*((22 - 42 + 7)*(69*q + 35*q - 134*q) - 3*q + 2*q - 2*q) as f + p*q and give p. 774 Express -260*w**2 + 518*w**2 + 0 - 2*w + 158*w**3 - 260*w**2 + 4 as q*w**3 + y + n*w**2 + o*w and give o. -2 Rearrange -878*l + 627*l + 3113*l + 999*l + 2217*l to q + z*l and give z. 
6078 Rearrange (3*f**3 + 0*f + 0*f)*(-7 + 19*f + 7 + (27 - 24 - 21)*(0*f + 0*f - 2*f)) to the form l*f**3 + i*f**4 + o*f**2 + x*f + c and give i. 165 Rearrange -2755*k - 1492*k + 1494*k - 666*k - 2711*k to v*k + a and give v. -6130 Express 9*o - 149 + 308 + 69*o**4 + o**2 - 19*o**4 + o**3 - 156 as k*o**3 + a*o + x*o**2 + b + y*o**4 and give y. 50 Rearrange m - 2*m + 5*m - 11445 + 447*m + 11445 + (3 - 5 + 1)*(2*m - m + 0*m) to y + x*m and give x. 450 Express (-78*w - 8*w - 16*w)*(-19*w + 32*w - 6*w) as k*w + s + c*w**2 and give c. -714 Rearrange -8*h**3 + h**3 - 14 + 4*h**3 + 3*h - 80 to the form f + z*h**2 + q*h**3 + d*h and give q. -3 Express 7037*t**2 - 98567 + 98567 as o*t**2 + y*t + s and give o. 7037 Express v**2 - 9*v + 3*v - 23 + 0 - 269 + 12 in the form i*v**2 + b*v + q and give b. -6 Rearrange 5*k - 499 + 2*k - 6*k + 527 - 3*k**2 - k**3 to t*k**2 + o + r*k + q*k**3 and give o. 28 Express (3*s**2 + 2*s**2 - 3*s**2)*(-4 + 1 + 0)*(3 + 2 + 0)*(1405*s + 6948 - 6948) as c + k*s**2 + b*s + o*s**3 and give o. -42150 Express -14*w + 25*w - 759*w**2 - 1245*w**2 - 11*w as z*w**2 + j*w + u and give j. 0 Express -804 + 6*z**2 - 795 + 1516 in the form x*z + b*z**2 + t and give t. -83 Express (2 + 2 - 10*u + 13*u)*(4 + 1 - 1)*(-107 - 7*u**2 - 113 + 216) as h*u**3 + g*u + n + q*u**2 and give q. -112 Express ((-1 + 2 - 3)*(-k + 0*k + 0*k) + 24*k - 1062 + 1062 + 2*k + k + 10 - 4*k)*(-2 - 2 + 9) as g + u*k and give u. 125 Express (2*w - 2 + 2)*(-26*w + 84 - 9 - 27) as m*w + f*w**2 + d and give f. -52 Express (-10*q**2 + 10*q**2 + 3*q**3)*(-151 - 23 - 30) as r*q + p*q**3 + h + n*q**2 and give p. -612 Rearrange (65*q**3 + q - 67*q**3 - 8*q)*(-548*q + 849*q + 1347*q + 83*q + 1211*q) to the form x*q**3 + a*q**2 + d*q + o + t*q**4 and give a. -20594 Rearrange (-1 - 17 - 1)*(4*f + 8*f + 3*f) - 1 - 2*f + 1 + (-f - 3*f - 4*f)*(2 + 1 - 5) to g*f + n and give n. 0 Rearrange 20 - 4 + 2*u**2 - 39 + (14 - 5 + 8)*(-u**2 - u**2 + 0*u**2) to the form f*u**2 + v + r*u and give f. 
-32 Express (608 + 606 - 23)*(1 + 0 - 2*b**2 + 1) + 20*b**2 + 33*b - 33*b - 2*b**2 - 2*b**2 + 6*b**2 as i*b**2 + u + y*b and give u. 2382 Rearrange -53*q + 2*q + q**3 + 6*q - 34*q**2 + 20*q to n*q**3 + u*q + i*q**2 + p and give u. -25 Rearrange 3*g**2 + 3*g + 685 + g**2 - 3*g**2 - 3*g**2 - 715 to the form k*g**2 + t*g + y and give k. -2 Express 802*h**2 - 17 - 3 - 783*h**2 - 2 as w*h + t*h**2 + r and give w. 0 Express -383*z**2 + 175*z**2 + 939*z**2 + (-2*z + 2 - 2)*(0 + 0 - z) as n + p*z + y*z**2 and give y. 733 Express 215 - 214 + 5*h - 67*h + 4*h - 54*h in the form r*h + j and give r. -112 Rearrange -85046*w**3 - 3*w**2 + 0*w**2 + 84083*w**3 to y*w**2 + h + f*w + u*w**3 and give y. -3 Express -1581*t - 854*t + 808*t - 771*t as r*t + i and give r. -2398 Rearrange (4877*v**2 + 3594*v**2 - 3575*v**2)*(0 + 0 + v) to the form r*v + d + q*v**3 + b*v**2 and give b. 0 Rearrange 0*b**4 + 18*b**2 + 3*b**4 - 38*b**2 + b**3 + 8*b**2 - 5*b + 2 + 65*b**2 to j*b**3 + x + r*b**2 + l*b + v*b**4 and give v. 3 Rearrange -37 - 37 + 74 + 9041*f + 2599*f to w*f + u and give w. 11640 Rearrange (-2*k + k + 3*k)*(260 + 212 + 137 - 2*k)*(k - k + 2*k) to c*k + f + m*k**3 + g*k**2 and give g. 2436 Rearrange (0*b - 2*b + 4*b)*(-1936*b + 1073*b + 1091*b)*(-2 - 1 + 5) to a + y*b + p*b**2 and give p. 912 Express -4*n + 30340013 + 0*n**2 + 0*n**2 - 6*n**3 - 30340083 in the form y + h*n**3 + g*n + w*n**2 and give y. -70 Express -5*u**2 + 297 + 27*u - 309 - 27*u in the form i*u + r + t*u**2 and give t. -5 Rearrange -42 - 20*s + 159 - 401 - 232 + 22*s + s**2 to x*s + w*s**2 + o and give x. 2 Express (-1 - 192*k**2 + 1 + 5*k**2 + 0)*(-24291*k - 245*k**2 + 24291*k) in the form g + u*k**2 + i*k + s*k**4 + y*k**3 and give s. 45815 Rearrange -38 - 454*v + 907*v + 4*v**2 + 2*v**3 - 10 - 455*v to the form f*v**2 + p + s*v + i*v**3 and give i. 2 Rearrange 95*g**3 - 17*g - 106*g**3 + 152*g**2 - 2 + 75*g - 57*g to the form o*g**3 + w + d*g + z*g**2 and give w. 
-2 Rearrange 7253 + t**2 - 17*t**3 - 42*t**3 + 13*t - 7252 to y*t**2 + x*t + z*t**3 + k and give y. 1 Rearrange 6898*p + 2602*p + 1475*p + 1815*p to the form d + g*p and give g. 12790 Rearrange (-98*v + 52 - 52)*(155*v**3 + 53 - 24 - 29) to the form g*v**4 + k*v + u*v**2 + n*v**3 + y and give y. 0 Rearrange -141*z**4 + 2*z**3 + 11*z**2 + 276*z**4 + 1 + 15*z - 13*z**2 - 128*z**4 + 6*z**2 to the form d*z**2 + l*z**4 + o*z**3 + x + u*z and give x. 1 Rearrange 0*m**2 - 9*m - 730*m**3 - 10*m + 3*m**2 - 9*m + 29*m to b + z*m**3 + g*m + a*m**2 and give a. 3 Express ((3 + h - 3)*(2 + 4 + 1) + 2*h + 0*h - 3*h)*(5 - 70 + 35 - 83) in the form p*h + n and give p. -678 Express -118*g - 1573 + 535 + 525 + 523 in the form s + d*g and give s. 10 Rearrange -109*o + 75*o - o**4 + o**3 + 35*o to g + v*o**3 + d*o**2 + a*o**4 + b*o and give a. -1 Express -17*n**2 + 18*n**2 + 440 - 154 + 32 + 4*n**3 in the form d + i*n**3 + g*n**2 + o*n and give g. 1 Rearrange 1540*c - 825*c + 12 - 824*c to t*c + p and give t. -109 Rearrange -323 + 1765*a - 1768*a - 51 - 28 to j*a + v and give j. -3 Rearrange -2 + 42*i**2 + 27*i**2 + 51*i**2 + 15*i - 13*i - 86*i**2 to f*i**2 + g*i + b and give g. 2 Express (40*i - 40*i + 4*i**3)*((-1 - 1 + 4)*(2*i - i - 8*i) - i + i + 4*i) in the form n + l*i**2 + v*i**3 + z*i**4 + a*i and give z. -40 Express 905 - 1758*u + 2*u**3 + 1757*u + 19*u**2 - 927 in the form g + j*u + z*u**2 + k*u**3 and give k. 2 Express -898229 + 3*f + 43*f**4 + 898231 - 3*f**2 + 5*f**2 as s*f**4 + a*f**3 + k*f**2 + q*f + y and give y. 2 Express -312 - 308 + 936 - 510*z - 316 in the form i + r*z and give r. -510 Express -226*b**2 - 196*b**4 + 65*b**4 + 66*b**4 - 2 + 66*b**4 - 4*b**3 - 1 as a*b**2 + x*b**3 + p*b**4 + n*b + c and give p. 1 Express 6476 + 15*o**4 + 7*o**2 - 6476 + (-o**2 + 2*o**2 - 2*o**2)*(-3*o**2 - 2*o**2 + 3*o**2) as y + t*o**3 + c*o**2 + a*o + l*o**4 and give c. 7 Express -30*r - 856 + 284 + 283 + 289 in the form b*r + t and give b. 
-30 Rearrange (127 + 77 + 48 - 23)*(9*v + 10*v**2 - 338 + 338) to r*v**2 + j*v + w and give r. 2290 Express -2176*p - 5175*p + 12586*p as k*p + g and give k. 5235 Rearrange -44*d**3 + 32*d + 316*d + d**2 + d**4 + 43*d**3 to the form k*d + z*d**3 + r + m*d**4 + f*d**2 and give z. -1 Rearrange 38*s**2 - 35*s**2 + 2 - 3*s**2 - 3*s + 12*s**3 to the form v*s**3 + g*s**2 + b + m*s and give v. 1
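Every drill above reduces to the same mechanical task: expand a sum or product of polynomials and read off one coefficient. A minimal sketch of that task in Python, representing a polynomial as a dict mapping exponent to coefficient (the helper names `p_add` and `p_mul` are illustrative, not part of the exercises):

```python
from collections import defaultdict

def p_add(a, b):
    """Add two polynomials stored as {exponent: coefficient} dicts."""
    out = defaultdict(int)
    for poly in (a, b):
        for exp, coeff in poly.items():
            out[exp] += coeff
    return dict(out)

def p_mul(a, b):
    """Multiply two polynomials stored as {exponent: coefficient} dicts."""
    out = defaultdict(int)
    for ea, ca in a.items():
        for eb, cb in b.items():
            out[ea + eb] += ca * cb
    return dict(out)

# "Express -2176*p - 5175*p + 12586*p as k*p + g and give k."  -> k = 5235
poly = p_add(p_add({1: -2176}, {1: -5175}), {1: 12586})
print(poly[1])  # 5235

# "Express (2*w - 2 + 2)*(-26*w + 84 - 9 - 27) as m*w + f*w**2 + d and give f."  -> f = -52
poly = p_mul({1: 2, 0: 0}, {1: -26, 0: 48})
print(poly[2])  # -52
```

Collecting constants first (here 84 - 9 - 27 = 48) before multiplying keeps the dicts small and makes the requested coefficient a single lookup.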
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals

import numpy as np  # type: ignore

import onnx
from ..base import Base
from . import expect


class NonMaxSuppression(Base):

    @staticmethod
    def export_nonmaxsuppression_suppress_by_IOU():  # type: () -> None
        node = onnx.helper.make_node(
            'NonMaxSuppression',
            inputs=['boxes', 'scores', 'max_output_boxes_per_class', 'iou_threshold', 'score_threshold'],
            outputs=['selected_indices']
        )
        boxes = np.array([[
            [0.0, 0.0, 1.0, 1.0],
            [0.0, 0.1, 1.0, 1.1],
            [0.0, -0.1, 1.0, 0.9],
            [0.0, 10.0, 1.0, 11.0],
            [0.0, 10.1, 1.0, 11.1],
            [0.0, 100.0, 1.0, 101.0]
        ]]).astype(np.float32)
        scores = np.array([[[0.9, 0.75, 0.6, 0.95, 0.5, 0.3]]]).astype(np.float32)
        max_output_boxes_per_class = np.array([3]).astype(np.int64)
        iou_threshold = np.array([0.5]).astype(np.float32)
        score_threshold = np.array([0.0]).astype(np.float32)
        selected_indices = np.array([[0, 0, 3], [0, 0, 0], [0, 0, 5]]).astype(np.int64)

        expect(node, inputs=[boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold],
               outputs=[selected_indices], name='test_nonmaxsuppression_suppress_by_IOU')

    @staticmethod
    def export_nonmaxsuppression_suppress_by_IOU_and_scores():  # type: () -> None
        node = onnx.helper.make_node(
            'NonMaxSuppression',
            inputs=['boxes', 'scores', 'max_output_boxes_per_class', 'iou_threshold', 'score_threshold'],
            outputs=['selected_indices']
        )
        boxes = np.array([[
            [0.0, 0.0, 1.0, 1.0],
            [0.0, 0.1, 1.0, 1.1],
            [0.0, -0.1, 1.0, 0.9],
            [0.0, 10.0, 1.0, 11.0],
            [0.0, 10.1, 1.0, 11.1],
            [0.0, 100.0, 1.0, 101.0]
        ]]).astype(np.float32)
        scores = np.array([[[0.9, 0.75, 0.6, 0.95, 0.5, 0.3]]]).astype(np.float32)
        max_output_boxes_per_class = np.array([3]).astype(np.int64)
        iou_threshold = np.array([0.5]).astype(np.float32)
        score_threshold = np.array([0.4]).astype(np.float32)
        selected_indices = np.array([[0, 0, 3], [0, 0, 0]]).astype(np.int64)

        expect(node, inputs=[boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold],
               outputs=[selected_indices], name='test_nonmaxsuppression_suppress_by_IOU_and_scores')

    @staticmethod
    def export_nonmaxsuppression_flipped_coordinates():  # type: () -> None
        node = onnx.helper.make_node(
            'NonMaxSuppression',
            inputs=['boxes', 'scores', 'max_output_boxes_per_class', 'iou_threshold', 'score_threshold'],
            outputs=['selected_indices']
        )
        boxes = np.array([[
            [1.0, 1.0, 0.0, 0.0],
            [0.0, 0.1, 1.0, 1.1],
            [0.0, 0.9, 1.0, -0.1],
            [0.0, 10.0, 1.0, 11.0],
            [1.0, 10.1, 0.0, 11.1],
            [1.0, 101.0, 0.0, 100.0]
        ]]).astype(np.float32)
        scores = np.array([[[0.9, 0.75, 0.6, 0.95, 0.5, 0.3]]]).astype(np.float32)
        max_output_boxes_per_class = np.array([3]).astype(np.int64)
        iou_threshold = np.array([0.5]).astype(np.float32)
        score_threshold = np.array([0.0]).astype(np.float32)
        selected_indices = np.array([[0, 0, 3], [0, 0, 0], [0, 0, 5]]).astype(np.int64)

        expect(node, inputs=[boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold],
               outputs=[selected_indices], name='test_nonmaxsuppression_flipped_coordinates')

    @staticmethod
    def export_nonmaxsuppression_limit_output_size():  # type: () -> None
        node = onnx.helper.make_node(
            'NonMaxSuppression',
            inputs=['boxes', 'scores', 'max_output_boxes_per_class', 'iou_threshold', 'score_threshold'],
            outputs=['selected_indices']
        )
        boxes = np.array([[
            [0.0, 0.0, 1.0, 1.0],
            [0.0, 0.1, 1.0, 1.1],
            [0.0, -0.1, 1.0, 0.9],
            [0.0, 10.0, 1.0, 11.0],
            [0.0, 10.1, 1.0, 11.1],
            [0.0, 100.0, 1.0, 101.0]
        ]]).astype(np.float32)
        scores = np.array([[[0.9, 0.75, 0.6, 0.95, 0.5, 0.3]]]).astype(np.float32)
        max_output_boxes_per_class = np.array([2]).astype(np.int64)
        iou_threshold = np.array([0.5]).astype(np.float32)
        score_threshold = np.array([0.0]).astype(np.float32)
        selected_indices = np.array([[0, 0, 3], [0, 0, 0]]).astype(np.int64)

        expect(node, inputs=[boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold],
               outputs=[selected_indices], name='test_nonmaxsuppression_limit_output_size')

    @staticmethod
    def export_nonmaxsuppression_single_box():  # type: () -> None
        node = onnx.helper.make_node(
            'NonMaxSuppression',
            inputs=['boxes', 'scores', 'max_output_boxes_per_class', 'iou_threshold', 'score_threshold'],
            outputs=['selected_indices']
        )
        boxes = np.array([[
            [0.0, 0.0, 1.0, 1.0]
        ]]).astype(np.float32)
        scores = np.array([[[0.9]]]).astype(np.float32)
        max_output_boxes_per_class = np.array([3]).astype(np.int64)
        iou_threshold = np.array([0.5]).astype(np.float32)
        score_threshold = np.array([0.0]).astype(np.float32)
        selected_indices = np.array([[0, 0, 0]]).astype(np.int64)

        expect(node, inputs=[boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold],
               outputs=[selected_indices], name='test_nonmaxsuppression_single_box')

    @staticmethod
    def export_nonmaxsuppression_identical_boxes():  # type: () -> None
        node = onnx.helper.make_node(
            'NonMaxSuppression',
            inputs=['boxes', 'scores', 'max_output_boxes_per_class', 'iou_threshold', 'score_threshold'],
            outputs=['selected_indices']
        )
        boxes = np.array([[
            [0.0, 0.0, 1.0, 1.0],
            [0.0, 0.0, 1.0, 1.0],
            [0.0, 0.0, 1.0, 1.0],
            [0.0, 0.0, 1.0, 1.0],
            [0.0, 0.0, 1.0, 1.0],
            [0.0, 0.0, 1.0, 1.0],
            [0.0, 0.0, 1.0, 1.0],
            [0.0, 0.0, 1.0, 1.0],
            [0.0, 0.0, 1.0, 1.0],
            [0.0, 0.0, 1.0, 1.0]
        ]]).astype(np.float32)
        scores = np.array([[[0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9]]]).astype(np.float32)
        max_output_boxes_per_class = np.array([3]).astype(np.int64)
        iou_threshold = np.array([0.5]).astype(np.float32)
        score_threshold = np.array([0.0]).astype(np.float32)
        selected_indices = np.array([[0, 0, 0]]).astype(np.int64)

        expect(node, inputs=[boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold],
               outputs=[selected_indices], name='test_nonmaxsuppression_identical_boxes')

    @staticmethod
    def export_nonmaxsuppression_center_point_box_format():  # type: () -> None
        node = onnx.helper.make_node(
            'NonMaxSuppression',
            inputs=['boxes', 'scores', 'max_output_boxes_per_class', 'iou_threshold', 'score_threshold'],
            outputs=['selected_indices'],
            center_point_box=1
        )
        boxes = np.array([[
            [0.5, 0.5, 1.0, 1.0],
            [0.5, 0.6, 1.0, 1.0],
            [0.5, 0.4, 1.0, 1.0],
            [0.5, 10.5, 1.0, 1.0],
            [0.5, 10.6, 1.0, 1.0],
            [0.5, 100.5, 1.0, 1.0]
        ]]).astype(np.float32)
        scores = np.array([[[0.9, 0.75, 0.6, 0.95, 0.5, 0.3]]]).astype(np.float32)
        max_output_boxes_per_class = np.array([3]).astype(np.int64)
        iou_threshold = np.array([0.5]).astype(np.float32)
        score_threshold = np.array([0.0]).astype(np.float32)
        selected_indices = np.array([[0, 0, 3], [0, 0, 0], [0, 0, 5]]).astype(np.int64)

        expect(node, inputs=[boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold],
               outputs=[selected_indices], name='test_nonmaxsuppression_center_point_box_format')

    @staticmethod
    def export_nonmaxsuppression_two_classes():  # type: () -> None
        node = onnx.helper.make_node(
            'NonMaxSuppression',
            inputs=['boxes', 'scores', 'max_output_boxes_per_class', 'iou_threshold', 'score_threshold'],
            outputs=['selected_indices']
        )
        boxes = np.array([[
            [0.0, 0.0, 1.0, 1.0],
            [0.0, 0.1, 1.0, 1.1],
            [0.0, -0.1, 1.0, 0.9],
            [0.0, 10.0, 1.0, 11.0],
            [0.0, 10.1, 1.0, 11.1],
            [0.0, 100.0, 1.0, 101.0]
        ]]).astype(np.float32)
        scores = np.array([[[0.9, 0.75, 0.6, 0.95, 0.5, 0.3],
                            [0.9, 0.75, 0.6, 0.95, 0.5, 0.3]]]).astype(np.float32)
        max_output_boxes_per_class = np.array([2]).astype(np.int64)
        iou_threshold = np.array([0.5]).astype(np.float32)
        score_threshold = np.array([0.0]).astype(np.float32)
        selected_indices = np.array([[0, 0, 3], [0, 0, 0], [0, 1, 3], [0, 1, 0]]).astype(np.int64)

        expect(node, inputs=[boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold],
               outputs=[selected_indices], name='test_nonmaxsuppression_two_classes')

    @staticmethod
    def export_nonmaxsuppression_two_batches():  # type: () -> None
        node = onnx.helper.make_node(
            'NonMaxSuppression',
            inputs=['boxes', 'scores', 'max_output_boxes_per_class', 'iou_threshold', 'score_threshold'],
            outputs=['selected_indices']
        )
        boxes = np.array([[[0.0, 0.0, 1.0, 1.0],
                           [0.0, 0.1, 1.0, 1.1],
                           [0.0, -0.1, 1.0, 0.9],
                           [0.0, 10.0, 1.0, 11.0],
                           [0.0, 10.1, 1.0, 11.1],
                           [0.0, 100.0, 1.0, 101.0]],
                          [[0.0, 0.0, 1.0, 1.0],
                           [0.0, 0.1, 1.0, 1.1],
                           [0.0, -0.1, 1.0, 0.9],
                           [0.0, 10.0, 1.0, 11.0],
                           [0.0, 10.1, 1.0, 11.1],
                           [0.0, 100.0, 1.0, 101.0]]]).astype(np.float32)
        scores = np.array([[[0.9, 0.75, 0.6, 0.95, 0.5, 0.3]],
                           [[0.9, 0.75, 0.6, 0.95, 0.5, 0.3]]]).astype(np.float32)
        max_output_boxes_per_class = np.array([2]).astype(np.int64)
        iou_threshold = np.array([0.5]).astype(np.float32)
        score_threshold = np.array([0.0]).astype(np.float32)
        selected_indices = np.array([[0, 0, 3], [0, 0, 0], [1, 0, 3], [1, 0, 0]]).astype(np.int64)

        expect(node, inputs=[boxes, scores, max_output_boxes_per_class, iou_threshold, score_threshold],
               outputs=[selected_indices], name='test_nonmaxsuppression_two_batches')
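The expected outputs in these test cases can be reproduced with a small NumPy reference implementation of greedy, single-class NMS over corner-format boxes. This is a sketch for illustration only (it is not the kernel an ONNX backend actually runs, and it skips the `center_point_box=1` conversion and the batch/class loop):

```python
import numpy as np

def nms(boxes, scores, max_output, iou_threshold, score_threshold):
    """Greedy single-class NMS over [y1, x1, y2, x2] corner boxes.

    Returns indices of kept boxes in descending-score order, mirroring the
    per-class selection NonMaxSuppression performs for one batch/class pair.
    """
    # Normalize corners so flipped coordinates (y1 > y2, x1 > x2) still work.
    y1 = np.minimum(boxes[:, 0], boxes[:, 2])
    y2 = np.maximum(boxes[:, 0], boxes[:, 2])
    x1 = np.minimum(boxes[:, 1], boxes[:, 3])
    x2 = np.maximum(boxes[:, 1], boxes[:, 3])
    areas = (y2 - y1) * (x2 - x1)

    order = np.argsort(-scores)                  # candidates, best score first
    order = order[scores[order] > score_threshold]
    keep = []
    while order.size > 0 and len(keep) < max_output:
        i, order = order[0], order[1:]
        keep.append(int(i))
        # Intersection-over-union of the kept box with every remaining one.
        ih = np.maximum(0.0, np.minimum(y2[i], y2[order]) - np.maximum(y1[i], y1[order]))
        iw = np.maximum(0.0, np.minimum(x2[i], x2[order]) - np.maximum(x1[i], x1[order]))
        inter = ih * iw
        iou = inter / (areas[i] + areas[order] - inter)
        order = order[iou <= iou_threshold]      # suppress heavy overlaps
    return keep

boxes = np.array([[0.0, 0.0, 1.0, 1.0],
                  [0.0, 0.1, 1.0, 1.1],
                  [0.0, -0.1, 1.0, 0.9],
                  [0.0, 10.0, 1.0, 11.0],
                  [0.0, 10.1, 1.0, 11.1],
                  [0.0, 100.0, 1.0, 101.0]], dtype=np.float32)
scores = np.array([0.9, 0.75, 0.6, 0.95, 0.5, 0.3], dtype=np.float32)
print(nms(boxes, scores, 3, 0.5, 0.0))  # [3, 0, 5], as in suppress_by_IOU
print(nms(boxes, scores, 3, 0.5, 0.4))  # [3, 0], as in suppress_by_IOU_and_scores
```

The third column of each `selected_indices` row in the tests is exactly this per-class `keep` list; the first two columns are the batch and class indices.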
--- abstract: 'Compton scattering of a laser beam with a relativistic electron beam has been used to generate intense, highly polarized and nearly monoenergetic x-ray or gamma-ray beams at many facilities. The ability to predict the spatial, spectral and temporal characteristics of a Compton gamma-ray beam is crucial for the optimization of the operation of a Compton light source as well as for the applications utilizing the Compton beam. In this paper, we present two approaches, one based upon analytical calculations and the other based upon Monte Carlo simulations, to study the Compton scattering process for various electron and laser beam parameters as well as different gamma-beam collimation conditions. These approaches have been successfully applied to characterize Compton gamma-ray beams, after being benchmarked against experimental results at the High Intensity Gamma-ray Source (HI$\gamma$S) facility at Duke University.' author: - 'C. Sun[^1]' - 'Y. K. Wu' bibliography: - 'compton\_scattering.bib' title: Theoretical and simulation studies of characteristics of a Compton light source --- Introduction ============ Compton scattering of a laser beam with a relativistic electron beam has been successfully used to generate intense, highly polarized and nearly monoenergetic x-ray or gamma-ray beams with a tunable energy at many facilities [@H.R.Weller; @Nakano:2001xp; @2003SPIE.5197..241K]. These unique Compton photon beams have been used in a wide range of basic and application research fields from nuclear physics to astrophysics, from medical research to homeland security and industrial applications [@H.R.Weller]. The ability to predict the spectral, spatial and temporal characteristics of a Compton gamma-ray beam is crucial for the optimization of the gamma-ray beam production as well as for research applications utilizing the beam. 
While the theory of particle-particle (or electron-photon) Compton scattering, which is equivalent to the scattering between a monoenergetic electron beam and a monoenergetic laser beam with zero transverse sizes, is well documented in the literature [@QED_landau; @CFT_landau; @Jackson], there remains a need to fully understand the characteristics of the gamma-ray beam produced by Compton scattering of a laser beam and an electron beam with specific spatial and energy distributions, i.e., beam-beam scattering. Studies of beam-beam Compton scattering have recently been reported in [@Hartemann:2005zz; @Brown:2004zz]. However, the algorithms used in these works are based upon the Thomson scattering cross section, i.e., an elastic scattering of electromagnetic radiation by a charged particle without the recoil effect. For scattering of a high energy electron beam with a laser beam, the recoil of the electron must be taken into account. The Compton scattering cross section was used to study characteristics of Compton gamma-ray beams by Duke scientists in the 1990s [@Vladimir; @Park_thesis]. However, the effects of the incoming beam parameters and of gamma-beam collimation were not fully taken into account. In this paper, we present two different methods, a semi-analytical calculation and a Monte Carlo simulation, to study the Compton scattering process of a polarized (or unpolarized) laser beam with an unpolarized electron beam in the linear Compton scattering regime. Using these two methods, we are able to characterize a Compton gamma-ray beam for various laser and electron beam parameters, arbitrary collision angles, and different gamma-beam collimation conditions. This paper is organized as follows. In Section II, we first review the calculation of the Compton scattered photon energy for an arbitrary collision angle, and then introduce the scattering cross section in a Lorentz invariant form. 
Based upon this cross section, the spatial and spectral distributions as well as the polarization of a Compton gamma-ray beam are investigated in particle-particle scattering cases. In Section III, we discuss beam-beam Compton scattering, considering the effects of the incoming beam parameters as well as the effect of gamma-ray beam collimation. Two methods, a semi-analytical calculation and a Monte Carlo simulation, are then presented. Based upon the algorithms of these methods, two computing codes, a numerical integration code and a Monte Carlo simulation code, have been developed at Duke University. The benchmarking results and applications of these two codes are presented in Section IV. A summary is given in Section V. Particle-particle scattering ============================ Scattered photon energy ------------------------ A review of the calculation of scattered photon energies in the particle-particle scattering case is in order. Figure \[electron\_lab\_frame\] shows the geometry of Compton scattering of an electron and a photon in a laboratory frame coordinate system $(x_e,y_e,z_e)$ in which the incident electron with a momentum $\vec{p}$ is moving along the $z_e$-direction. The incident photon with a momentum $\hbar\vec{k}$ ($\hbar$ is the reduced Planck constant) propagates along the direction given by the angles $(\theta_i, \phi_i)$. The collision occurs at the origin of the coordinate system. After the collision, the photon with a momentum $\hbar\vec{k}^\prime$ is scattered into the direction of $(\theta_f, \phi_f)$. ![\[electron\_lab\_frame\] Geometry of Compton scattering of an electron and a photon in a lab frame coordinate system $(x_e,y_e,z_e)$ in which the electron is incident along the $z_e$-direction. The incident photon is propagating along the direction given by the polar angle $\theta_i$ and azimuthal angle $\phi_i$. The collision occurs at the origin of the coordinate system. 
After the scattering, the scattered photon propagates in the direction given by the polar angle $\theta_f$ and azimuthal angle $\phi_f$. $\theta_p$ is the angle between the momenta of the incident and scattered photons, $\vec{k}$ and $\vec{k^\prime}$. The electron after scattering is not shown in the figure.](electron_photon_scattering.eps){width="\columnwidth"} According to the conservation of the 4-momenta before and after the scattering, we have $$p+ k=p^\prime+ k^\prime, \label{conservation_law}$$ where $p=(E_e/c, \vec{p})$ and $k=(E_p/c,\hbar \vec{k})$ are the 4-momenta of the electron and photon before the scattering, respectively; $p^\prime=(E^\prime_e/c, \vec{p}^\prime)$ and $k^\prime=(E_g/c,\hbar \vec{k}^\prime)$ are their 4-momenta after the scattering; $E_e$ and $E_p$ are the energies of the electron and photon before the scattering; $E_e^\prime$ and $E_g$ are their energies after the scattering; and $c$ is the speed of light. Squaring both sides of Eq. (\[conservation\_law\]) and following some simple manipulations, we obtain the scattered photon energy as follows, $$E_g = \frac{(1-\beta \cos\theta_i)E_p}{(1-\beta \cos\theta_f)+(1-\cos\theta_p)E_p/E_e}, \label{scatteredphotonenergy}$$ where $\beta = v/c$ is the speed of the incident electron relative to the speed of light, and $\theta_p$ is the angle between the momenta of the incident and scattered photons (Fig. \[electron\_lab\_frame\]). For a head-on collision ($\theta_i = \pi$ and $\theta_p = \pi-\theta_f$), Eq. (\[scatteredphotonenergy\]) simplifies to $$E_g = \frac{(1+\beta)E_p}{(1-\beta\cos\theta_f)+(1+\cos\theta_f)E_p/E_e}. \label{energy_head_on}$$ Clearly, given the energies of the incident electron and photon, $E_e$ and $E_p$, the scattered photon energy $E_g$ depends only on the scattering angle $\theta_f$, independent of the azimuthal angle $\phi_f$. The relation between the scattered photon energy $E_g$ and scattering angle $\theta_f$ is demonstrated in Fig. \[sim\_en\_sp\_dist\]. 
In this figure, the scattered photon energies $E_g$ are indicated by the quantities associated with the concentric circles in the observation plane, and the scattering angles $\theta_f$ are represented by the radii $R$ of the circles, i.e, $\theta_f = R/L$, where $L=60$ meters is the distance between the collision point and the observation plane. We can see that the scattered photons with higher energies are concentrated around the center ($\theta_f = 0$), while lower energy photons are distributed away from the center. Such a relation, in principle, allows the formation of a scattered photon beam with a small energy-spread using a simple geometrical collimation technique. ![\[sim\_en\_sp\_dist\]The relation between the scattered photon energy (in MeV) and scattering angle in an observation plane, which is $60$ meters downstream from the collision point. The scattered photons are produced by $800$ nm photons scattering with $500$ MeV electrons. Each concentric circle is an equi-energy contour curve of the energy distribution of scattered photons.](simp_en_sp_dist.eps){width="\columnwidth"} For a small scattering angle ($\theta_f\ll1$) and an ultra-relativistic electron ($\gamma\gg1$), Eq. ($\ref{energy_head_on}$) can be simplified to $$E_g \approx \frac{4\gamma^2E_p}{1+\gamma^2\theta_f^2+4\gamma^2 E_p/E_e}, \label{scatteredphotonenergy_headon}$$ where $\gamma = E_e/(mc^2)$ is the Lorentz factor of the electron and $mc^2$ is its rest energy. When the photon is scattered into the backward direction of the incident photon (i.e., $\theta_f = 0$, sometimes called backscattering), the scattered photon energy will reach the maximum value given by $$E_g^{max} = \frac{4\gamma^2E_p}{1+4\gamma^2 E_p/E_e}. \label{max_energy_compton}$$ Neglecting the recoil effect, i.e., $4\gamma^2 E_p/E_e\ll1$, Eq. (\[max\_energy\_compton\]) can be reduced to the result given by the *relativistic Thomson scattering* theory [@Brown:2004zz] $$E_g^{max} \approx 4\gamma^2E_p. 
\label{energy_thomson}$$ We can see that the incident photon energy $E_p$ is boosted by a factor of approximately $4\gamma^2$ after the backscattering. Therefore, Compton scattering of photons with relativistic electrons can be used to produce high energy photons, i.e., gamma-ray photons. Under the conditions $\theta_i \approx \pi$ and $\theta_f\approx 0$, the uncertainties of the scattered photon energy $E_g$ due to the uncertainties of the variables ($E_e,~E_p,~\theta_f$ and $\theta_i$) in Eq. (\[scatteredphotonenergy\]) can be estimated [@Park_thesis; @Park_paper]. For example, the relative uncertainty of the scattered photon energy $\Delta E_g/E_g$ due to the uncertainty of the electron beam energy $\Delta E_e/E_e$ is obtained by taking the derivative of Eq. (\[scatteredphotonenergy\]) with respect to $E_e$, i.e., $$\frac{\Delta E_g}{ E_g} \approx 2\left(1-\frac{2\gamma^2E_p/E_e}{1+4\gamma^2E_p/E_e}\right)\frac{\Delta E_e}{E_e}\approx 2 \frac{\Delta E_e}{E_e}.$$ Contributions to $\Delta E_g/E_g$ associated with the other variables are summarized in Table \[depedences\].

  Variable     Contribution                                                                           Approximated contribution
  -----------  -------------------------------------------------------------------------------------  ---------------------------------
  $E_e$        $2\left(1-\frac{2\gamma^2E_p/E_e}{1+4\gamma^2E_p/E_e}\right)\frac{\Delta E_e}{E_e}$     $2\frac{\Delta E_e}{E_e}$
  $E_p$        $\frac{1}{1+4\gamma^2E_p/E_e}\frac{\Delta E_p}{E_p}$                                    $\frac{\Delta E_p}{E_p}$
  $\theta_f$   $-\frac{\gamma^2}{1+4\gamma^2E_p/E_e}\Delta \theta_f^2$                                 $-\gamma^2\Delta \theta_f^2$
  $\theta_i$   $-\frac{\beta}{4}\Delta \theta_i^2$                                                     $-\frac{1}{4}\Delta \theta_i^2$

  : \[depedences\]Relative uncertainty of the scattered photon energy $\Delta E_g/ E_g$ due to the uncertainties of various variables in Eq. (\[scatteredphotonenergy\]), under the assumptions $\theta_i \approx \pi$ and $\theta_f\approx 0$.

Scattering cross section ------------------------ ### Lorentz invariant form The general problem concerning the collision is to find the probabilities of the final states for a given initial state of the system, i.e., the scattering cross section. Using Quantum Electrodynamics (QED) theory, the Compton scattering cross section in the Lorentz invariant form has been calculated in [@QED_landau; @Grozin_book; @Grozin_paper], and the result for unpolarized electrons scattering with polarized photons is given by $$\begin{aligned} \frac{\mathrm{d}\sigma}{\mathrm{d}Y\mathrm{d}\phi_f}&=&\frac{2 r^2_e}{X^2}\left\lbrace \left(\frac{1}{X}-\frac{1}{Y}\right)^2+\frac{1}{X}-\frac{1}{Y}+\frac{1}{4}\left(\frac{X}{Y}+\frac{Y}{X}\right)-(\xi_3+\xi^\prime_3)\left[\left(\frac{1}{X}-\frac{1}{Y}\right)^2+\frac{1}{X}-\frac{1}{Y}\right]\right.\nonumber\\ &&\left.+\xi_1\xi^\prime_1\left(\frac{1}{X}-\frac{1}{Y}+\frac{1}{2}\right)+\xi_2\xi^\prime_2\frac{1}{4}\left(\frac{X}{Y}+\frac{Y}{X}\right)\left(1+\frac{2}{X}-\frac{2}{Y}\right)+\xi_3\xi^\prime_3\left[\left(\frac{1}{X}-\frac{1}{Y}\right)^2+\frac{1}{X}-\frac{1}{Y}+\frac{1}{2}\right]\right\rbrace, \label{covariant_crosssection}\end{aligned}$$ where $r_e$ is the *classical electron radius*; $\phi_f$ is the azimuthal angle of the scattered photon; $\xi_{1,2,3}$ and $\xi^\prime_{1,2,3}$ are Stokes parameters describing the incident and scattered photon polarizations in their respective coordinate systems; and $X$ and $Y$ are the Lorentz invariant variables defined as follows $$X = \frac{s-(mc)^2}{(mc)^2},~Y = \frac{(mc)^2-u}{(mc)^2}, \label{invariant_quantities}$$ where $s$ and $u$ are the *Mandelstam variables* [@QED_landau] given by $$s=(p+k)^2,~u=(p-k^\prime)^2.$$ $X$ and $Y$ satisfy the inequalities [@QED_landau] $$\frac{X}{X+1} \leq Y \leq X. \label{inequality}$$ Since the scattering cross section of Eq. 
(\[covariant\_crosssection\]) is expressed in Lorentz invariants, it can easily be evaluated in terms of the collision parameters defined in any specific frame of reference. ### Polarization description in lab frame ![\[scattering plane\]Coordinate systems of Compton scattering of an electron and a photon in a laboratory frame. ($x_e,y_e,z_e$) is the coordinate system for the incident electron ($\vec{p}$) moving along the $z_e$-axis direction. For the head-on collision, the incident photon ($\vec{k}$) comes along the negative $z_e$-axis, and the scattered photon ($\vec{k}^\prime$) moves along the direction given by the polar angle $\theta_f$ and azimuthal angle $\phi_f$. The momentum vectors $\vec{k}$ and $\vec{k}^\prime$ form the scattering plane. $(\tilde{x}, \tilde{y}, \tilde{z})$ is a right-hand coordinate system attached to the scattering plane. The $\tilde{z}$-axis is along the direction of $\vec{k}$; the $\tilde{x}$-axis is perpendicular to the scattering plane; and the $\tilde{y}$-axis is in the scattering plane. $(\tilde{x}^\prime, \tilde{y}^\prime, \tilde{z}^\prime)$ is another right-hand coordinate system attached to the scattering plane. The $\tilde{z}^\prime$-axis is along the direction of $\vec{k}^\prime$; the $\tilde{x}^\prime$-axis is the same as the $\tilde{x}$-axis; and the $\tilde{y}^\prime$-axis lies in the scattering plane.](scattering_plane.eps){width="\columnwidth"} In the laboratory frame, three right-hand coordinate systems are used in Eq. (\[covariant\_crosssection\]) to describe the motion and polarization of the incident electron $(x_e,y_e,z_e)$, the incident photon $(\tilde{x}, \tilde{y}, \tilde{z})$, and the scattered photon $(\tilde{x}^\prime, \tilde{y}^\prime,\tilde{z}^\prime)$ (Fig. \[scattering plane\]). The coordinate system $(x_e,y_e,z_e)$ is fixed in the lab frame, and its $z_e$-axis is along the incident direction of the electron.
$(\tilde{x}, \tilde{y}, \tilde{z})$ and $(\tilde{x}^\prime, \tilde{y}^\prime, \tilde{z}^\prime)$ are the local coordinate systems attached to the scattering plane formed by the momenta of the incident and scattered photons, $\vec{k}$ and $\vec{k}^\prime$. For $(\tilde{x}, \tilde{y}, \tilde{z})$, the $\tilde{x}$-axis is perpendicular to the scattering plane; the $\tilde{y}$- and $\tilde{z}$-axes are in the scattering plane with the $\tilde{z}$-axis along the direction of $\vec{k}$. For $(\tilde{x}^\prime,\tilde{y}^\prime,\tilde{z}^\prime)$, the $\tilde{x}^\prime$-axis is the same as the $\tilde{x}$-axis for the incident photon, perpendicular to the scattering plane; and the $\tilde{z}^\prime$-axis is along the direction of $\vec{k}^\prime$. The Stokes parameters $\xi^{(\prime)}_{1,2,3}$ of the incident and scattered photons in Eq. (\[covariant\_crosssection\]) are defined in their local coordinate systems, respectively. The parameter $\xi^{(\prime)}_3$ describes the linear polarization of the photon along the $\tilde{x}^{(\prime)}$- or $\tilde{y}^{(\prime)}$-axis; the parameter $\xi_1^{(\prime)}$ describes the linear polarization along the direction at $\pm45^\circ$ angles relative to the $\tilde{x}^{(\prime)}$-axis; and the parameter $\xi_2^{(\prime)}$ represents the degree of circular polarization of the photon. The polarization of the photon is always defined in its local coordinate system with its momentum being one of the axes. For Compton scattering described by Eq. (\[covariant\_crosssection\]), these local coordinate systems $(\tilde{x}, \tilde{y}, \tilde{z})$ and $(\tilde{x}^\prime, \tilde{y}^\prime, \tilde{z}^\prime)$ are different for different scattering planes. 
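These conventions can be made concrete numerically. The sketch below (function names are ours, not from the text) constructs the two local triads from the photon momenta with cross products, taking $\tilde{x}$ along $\vec{k}\times\vec{k}^\prime$ as a normal to the scattering plane, and checks that both triads come out right-handed with $\tilde{x}^\prime=\tilde{x}$:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def unit(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def local_bases(k, kp):
    """Local triads attached to the scattering plane of k and k'."""
    z_t = unit(k)               # z~ along the incident photon momentum
    x_t = unit(cross(k, kp))    # x~ normal to the scattering plane
    y_t = cross(z_t, x_t)       # y~ in the scattering plane (right-handed)
    z_tp = unit(kp)             # z~' along the scattered photon momentum
    y_tp = cross(z_tp, x_t)     # y~' in the scattering plane; x~' = x~
    return (x_t, y_t, z_t), (x_t, y_tp, z_tp)

# head-on geometry: incident photon along -z_e, scattered photon at a
# small polar angle theta_f and azimuthal angle phi_f
theta_f, phi_f = 1e-3, 0.7
k = (0.0, 0.0, -1.0)
kp = (math.sin(theta_f) * math.cos(phi_f),
      math.sin(theta_f) * math.sin(phi_f),
      math.cos(theta_f))
(x_t, y_t, z_t), (x_tp, y_tp, z_tp) = local_bases(k, kp)
```

Because $\tilde{x}$ is orthogonal to both $\vec{k}$ and $\vec{k}^\prime$, the same axis can serve both triads, which is what makes the shared-normal convention of the text consistent.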
However, when the photons and electrons collide nearly head-on to produce high-energy photons with small scattering angles, it becomes possible to conveniently, if approximately, express the polarization of the incident and scattered photons using a fixed coordinate system, for example, the lab-frame electron coordinate system $(x_e, y_e, z_e)$. Let us consider the incident photon with its $\tilde{z}$-axis approximately parallel to the negative $z_e$-axis. The Stokes parameters of the incident photon can be related to the degrees of polarization defined in the fixed electron coordinate system through the following equations [@CFT_landau; @ginzburg], $$\begin{aligned} \xi_1 &\approx& P_t \sin (2\tau-2\phi_f), \nonumber\\ \xi_2 &\approx& P_c, \nonumber\\ \xi_3 &\approx& -P_t \cos (2\tau-2\phi_f), \label{stokes_lab_initial_photon}\end{aligned}$$ where $P_t$ and $P_c$ are the degrees of linear and circular polarization of the incident photon defined in the coordinate system $(x_e,y_e,z_e)$, respectively; $\tau$ is the azimuthal angle of the linear polarization $P_t$ with respect to the $x_e$-axis; and $\phi_f$ is the azimuthal angle of the scattering plane. For Compton scattering involving an ultra-relativistic electron, scattered photons are concentrated within a small scattering angle ($\theta_f < 1/\gamma$). For these high-energy photons with small scattering angles, their $\tilde{z}^\prime$-axes are approximately parallel to the $z_e$-axis. Neglecting the polar angle (i.e.
$\theta_f \ll 1$), the Stokes parameters of the scattered photon can be expressed approximately using a set of Stokes parameters defined in the fixed electron coordinate system as [@ginzburg], $$\begin{aligned} \xi^\prime_1&\approx&-\bar{\xi}^\prime_1\cos2\phi_f+\bar{\xi}^\prime_3\sin2\phi_f, \nonumber\\ ~\xi^\prime_2&\approx&\bar{\xi}^\prime_2,\nonumber\\ ~\xi^\prime_3&\approx&-\bar{\xi}^\prime_1\sin2\phi_f-\bar{\xi}^\prime_3\cos2\phi_f, \label{final_photon_stokes}\end{aligned}$$ where $\bar{\xi}'_{1,2,3}$ are the Stokes parameters defined in the coordinate system $(x_e, y_e, z_e)$. Spatial and energy distributions of scattered photons ----------------------------------------------------- Based upon Eqs. (\[covariant\_crosssection\]), (\[stokes\_lab\_initial\_photon\]) and (\[final\_photon\_stokes\]), we can calculate the spatial and energy distributions of a gamma-ray beam produced by Compton scattering of a monoenergetic electron and laser beams with zero transverse beam sizes, i.e., the particle-particle scattering. Let us consider Compton scattering of an unpolarized electron and a polarized laser photon without regard to their polarizations after the scattering. The differential cross section is obtained by setting $\xi^\prime_{1,2,3}$ to zero in Eq. (\[covariant\_crosssection\]) and multiplying the result by a factor of two for the summation over the polarizations of the scattered photons [@QED_landau]. Thus, the differential cross section is given by [@Park_paper] $$\begin{aligned} \frac{\mathrm{d}\sigma}{\mathrm{d}Y\mathrm{d}\phi_f} & = & \frac{4r^2_e}{X^2}\left\lbrace (1-\xi_3)\left[\left(\frac{1}{X}-\frac{1}{Y}\right)^2+\frac{1}{X}-\frac{1}{Y} \right]\right.\nonumber\\ &&\left. +\frac{1}{4}\left(\frac{X}{Y}+\frac{Y}{X}\right)\right\rbrace . \label{angular_dif_crosssection}\end{aligned}$$ The total cross section can be obtained by integrating Eq. 
(\[angular\_dif\_crosssection\]) with respect to $Y$ and $\phi_f$, $$\begin{aligned} \sigma_{tot} &=& 2\pi r_e^2\frac{1}{X}\left\lbrace\left(1-\frac{4}{X}-\frac{8}{X^2} \right)\log(1+X)\right.\nonumber\\ &&\left.+\frac{1}{2} +\frac{8}{X}-\frac{1}{2(1+X)^2}\right\rbrace. \label{tot_scat_cross}\end{aligned}$$ Note that the Stokes parameter $\xi_3$ depends on $\phi_f$; however, after integration over $\phi_f$ the dependence vanishes. Neglecting the recoil effect ($X\ll1$), we obtain $$\sigma_{tot} = \frac{8\pi r_e^2}{3}(1-X)\approx \frac{8\pi r_e^2}{3}, \label{total_cross_section}$$ which is just the *classical Thomson cross section*. ### Spatial distribution For a head-on collision ($\theta_i=\pi$) in a laboratory frame, according to Eq. (\[invariant\_quantities\]) the Lorentz invariant quantities $X$ and $Y$ are given by $$X = \frac{2\gamma E_p(1+\beta)}{mc^2},~Y=\frac{2\gamma E_g(1-\beta\cos\theta_f)}{mc^2}, \label{invariant_lab_frame}$$ and $$\mathrm{d}Y=2 \left(\frac{E_g}{mc^2}\right)^2\sin\theta_f\mathrm{d}\theta_f. \label{dy_domega}$$ Substituting $\mathrm{d}Y$ into Eq. (\[angular\_dif\_crosssection\]), the angular differential cross section is given by $$\begin{aligned} \frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}\!\!\! &=& \!\!\!\frac{8r^2_e}{X^2}\!\!\left\lbrace \![1\!+\!P_t\cos(2\tau\!-\!\!2\phi_f)]\!\!\left[\left(\frac{1}{X}-\frac{1}{Y} \right)^2\!\!+\!\!\frac{1}{X}\!-\!\frac{1}{Y}\right]\right.\nonumber\\ &&\left.+\frac{1}{4}\left(\frac{X}{Y}+\frac{Y}{X}\right)\right\rbrace \left(\frac{E_g}{mc^2}\right)^2, \label{crosssection-1}\end{aligned}$$ where $\mathrm{d}\Omega = \sin\theta_f \mathrm{d}\theta_f \mathrm{d}\phi_f$ and $\xi_3$ has been expressed in terms of $P_t$ (Eq. (\[stokes\_lab\_initial\_photon\])). From Eq. (\[crosssection-1\]), we can see that the differential cross section depends on the azimuthal angle $\phi_f$ of the scattered photon through the term $P_t\cos(2\tau-2\phi_f)$.
For a circularly polarized or unpolarized incident photon beam ($P_t = 0$), this dependency vanishes. Therefore, the distribution of scattered photons is azimuthally symmetric. However, for a linearly polarized incident photon beam ($P_t \neq 0$), the differential cross section is azimuthally modulated, and the gamma photon distribution is azimuthally asymmetric. Figs. \[cir\_dist\] and \[linear\_dist\] illustrate the spatial distributions of Compton gamma photons at a location $60$ meters downstream from the collision point for both circularly and linearly polarized incident photon beams. In these figures we can also see that the distribution of scattered photons peaks sharply along the direction of the incident electron beam. This demonstrates that the gamma-ray photons produced by Compton scattering of a relativistic electron beam and a laser beam are mostly scattered into the electron beam direction within a narrow cone. ------------------------------------ ----------------------------------------- ![image](cir_dist_3d){width="3in"} ![image](cir_dist_contour){width="3in"} ------------------------------------ ----------------------------------------- ------------------------------------ ----------------------------------------- ![image](lin_dist_3d){width="3in"} ![image](lin_dist_contour){width="3in"} ------------------------------------ ----------------------------------------- ### Energy distribution For a head-on collision in the laboratory frame, it can be shown that $$Y = X\frac{\beta E_e-E_g}{\beta E_e-E_p}.$$ Thus, $$\mathrm{d}Y = -X\frac{\mathrm{d} E_g}{\beta E_e-E_p}. \label{dy_dEg}$$ Substituting $\mathrm{d}Y$ in Eq. 
(\[angular\_dif\_crosssection\]) and integrating the result with respect to the azimuthal angle $\phi_f$, we obtain the energy distribution of scattered photons as follows $$\begin{aligned} \frac{\mathrm{d}\sigma}{\mathrm{d}E_g } & = &\frac{8 \pi r^2_e}{X(\beta E_e-E_p)} \left[\left(\frac{1}{X}-\frac{1}{Y}\right)^2+\frac{1}{X}-\frac{1}{Y} \right.\nonumber\\ &&\left.+\frac{1}{4}\left(\frac{X}{Y}+\frac{Y}{X}\right)\right]. \label{crosssection-en}\end{aligned}$$ The energy spectrum calculated using Eq. (\[crosssection-en\]) is shown in Fig. \[sim\_en\_dist\]. The spectrum has a high-energy cutoff edge, which is determined by the incident electron and photon energies according to Eq. (\[max\_energy\_compton\]). In Fig. \[sim\_en\_dist\], we can see that the spectral intensity has a maximum value at the scattering angle $\theta_f=0$ and a minimum value around the scattering angle $\theta_f = 1/\gamma$. The ratio between them is about $2$ when the recoil effect is negligible, as will be shown in the next section. ![\[sim\_en\_dist\]The computed energy distribution of Compton gamma-ray photons produced by a head-on collision of an $800$ nm laser beam with a $500$ MeV electron beam. The scattering angle, scaled by the electron Lorentz factor as $\gamma\theta_f$, is also shown versus the gamma-ray photon energy. The solid line represents the energy distribution of the gamma-ray photons, and the dashed line represents the relation between the scaled scattering angle and photon energy.](simple_en_dist){width="\columnwidth"} Note that the energy spectrum shown in Fig. \[sim\_en\_dist\] is for a Compton gamma-ray beam without collimation. However, if the gamma-ray beam is collimated by a round aperture with a radius of $R$ at a distance $L$ from the collision point, the energy spectrum will have a low-energy cutoff edge, and its value can be calculated using Eq. (\[scatteredphotonenergy\_headon\]) with $\theta_f = R/L$.
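The energy-angle correlation and the collimated low-energy cutoff follow directly from Eq. (\[scatteredphotonenergy\_headon\]). The sketch below evaluates it for the $800$ nm, $500$ MeV case of the figures; the collimator radius is an assumed example value, not a number from the text:

```python
import math

MC2 = 0.5109989e6          # electron rest energy (eV)
HC = 1239.841984           # h*c (eV nm)

def scattered_energy(E_e, lam_nm, theta_f):
    """Head-on Compton photon energy vs. scattering angle,
    Eq. (scatteredphotonenergy_headon); all energies in eV."""
    E_p = HC / lam_nm                       # incident photon energy (eV)
    gamma = E_e / MC2
    return 4 * gamma**2 * E_p / (1 + (gamma * theta_f)**2
                                 + 4 * gamma**2 * E_p / E_e)

E_e = 500e6                                 # 500 MeV electron beam
E_max = scattered_energy(E_e, 800.0, 0.0)   # backscattering edge, ~5.9 MeV

# low-energy cutoff behind a round aperture: theta_f = R / L
R, L = 0.012, 60.0                          # assumed collimator radius and distance (m)
E_cut = scattered_energy(E_e, 800.0, R / L)
```

Because the energy falls monotonically with $\theta_f$, the aperture selects the band $[E_{cut}, E_{max}]$, which is the geometric collimation idea stated earlier.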
### Observations for a small recoil effect For a small recoil effect ($X\ll1$), we can approximate Eqs. (\[crosssection-1\]) and (\[crosssection-en\]) to draw several useful conclusions. For convenience, we first define $$f(Y) = \left(\frac{1}{X}-\frac{1}{Y}\right)^2+\frac{1}{X}-\frac{1}{Y}+\frac{1}{4} \left(\frac{X}{Y}+\frac{Y}{X}\right). \label{f_term}$$ Using the inequality Eq. (\[inequality\]), it can be found that $$\frac{1}{4(1+X)} \leq f(Y) \leq \frac{2+X}{4},$$ or, approximately, for a negligible recoil effect ($X\ll1$), $$\frac{1}{4}\leq f(Y)\leq \frac{1}{2}.$$ Thus, the maximum and minimum spectral flux of the Compton gamma-ray beam are given by $$(\frac{\mathrm{d}\sigma}{\mathrm{d}E_g})_{max} = \frac{8 \pi r^2_e}{X(\beta E_e-E_p)}\frac{2+X}{4}, \label{maximum_spectrum_intensity}$$ and $$(\frac{\mathrm{d}\sigma}{\mathrm{d}E_g})_{min} = \frac{8 \pi r^2_e}{X(\beta E_e-E_p)}\frac{1}{4(1+X)}.$$ The ratio between them is $$\frac{(\mathrm{d}\sigma/\mathrm{d}E_g)_{max}}{(\mathrm{d}\sigma/\mathrm{d}E_g)_ {min}} = (2+X)(1+X)\approx 2,$$ as shown in Fig. \[sim\_en\_dist\]. When $\theta_f = 0$, we find $$E_g \approx 4\gamma^2E_p,~Y \approx X(1-X).$$ Substituting $Y$ in Eq. (\[f\_term\]), we have $f(Y) \approx 1/2$. Thus, the spectral flux has a maximum value around the scattering angle $\theta_f = 0$. When $\theta_f = 1/\gamma$, we find $$E_g \approx 2\gamma^2E_p,~Y \approx X(1-\frac{X}{2}).$$ Substituting $Y$ into Eq. (\[f\_term\]), we have $f(Y)\approx 1/4$. Therefore, the spectral flux has a minimum value around the scattering angle $\theta_f = 1/\gamma$. These results are illustrated in Fig. \[sim\_en\_dist\]. Expressed in terms of the total scattering cross section of Eq.
(\[total\_cross\_section\]), the fraction of scattered photons in the energy range $[E_g^{max}-\Delta E_g^{max}, E_g^{max}]$ can be found approximately as $$\frac{\Delta \sigma_{max}}{\sigma_{tot}} \approx \frac{3(2+X)}{4(1-X)}\frac{\Delta E_g^{max}}{E_g^{max}}\approx 1.5\frac{\Delta E_g^{max}}{E_g^{max}}.$$ This simple formula can be used to estimate the portion of the total gamma-ray flux within a desired energy spread $\Delta E_g^{max}$ after collimation. For a circularly polarized or unpolarized incident photon beam, according to Eq. (\[crosssection-1\]), it can also be calculated that the angular intensity of scattered gamma-ray photons at the scattering angle $\theta_f = 1/\gamma$ is about $1/8$ of the maximum intensity at the scattering angle $\theta_f = 0$, i.e., $$\frac{(\mathrm{d}\sigma/\mathrm{d}\Omega)_{\theta_f=1/\gamma}}{(\mathrm{d} \sigma/\mathrm{d}\Omega)_{\theta_f=0}}\approx\frac{1}{8}.$$ In addition, integrating Eq. (\[angular\_dif\_crosssection\]) over the entire solid angle of the cone with a half-opening angle of $1/\gamma$, i.e., integrating $Y$ over the range of $X(1-X) \leqslant Y \leqslant X(1-X/2)$ and $\phi_f$ over the range from $0$ to $2\pi$, we obtain $$\sigma_1 = \int_0^{2\pi}\mathrm{d}\phi\int_0^{1/\gamma}\frac{\mathrm{d}\sigma}{\mathrm{d} \Omega}\sin\theta \mathrm{d}\theta\approx\frac{4\pi r_e^2}{3}=\frac{1}{2}\sigma_{tot}. \label{flux_in_cone}$$ Comparing Eq. (\[flux\_in\_cone\]) to the total cross section of Eq. (\[total\_cross\_section\]), we can conclude that about half of the total gamma-ray photons are scattered into the $1/\gamma$ cone. This can be explained by considering the Compton scattering in the electron rest frame. In this frame, the Compton scattering process is just like “dipole” radiation: the gamma-ray photons are scattered in all directions, with half of them scattered into the forward hemisphere and the other half into the backward hemisphere.
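These fractions can be checked numerically. For an unpolarized incident beam, integrating Eq. (\[angular\_dif\_crosssection\]) over $\phi_f$ gives $\mathrm{d}\sigma/\mathrm{d}Y = (8\pi r_e^2/X^2)f(Y)$; the sketch below (in units of $r_e^2$) integrates this over the full kinematic range $X/(1+X)\leq Y\leq X$, recovering Eq. (\[tot\_scat\_cross\]), and over the $1/\gamma$ cone, whose share comes out close to one half for small $X$:

```python
import math

def f(Y, X):
    """Eq. (f_term)."""
    A = 1.0 / X - 1.0 / Y
    return A * A + A + 0.25 * (X / Y + Y / X)

def sigma_tot(X):
    """Eq. (tot_scat_cross), in units of the classical electron radius squared."""
    return (2 * math.pi / X) * ((1 - 4 / X - 8 / X**2) * math.log(1 + X)
                                + 0.5 + 8 / X - 0.5 / (1 + X)**2)

def integral(a, b, X, n=20000):
    """Trapezoidal integral of dsigma/dY = (8 pi / X^2) f(Y) over [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a, X) + f(b, X)) + sum(f(a + i * h, X) for i in range(1, n))
    return (8 * math.pi / X**2) * s * h

X = 0.01                                            # small recoil parameter
full = integral(X / (1 + X), X, X)                  # whole kinematic range
cone = integral(X * (1 - X), X * (1 - X / 2), X)    # 0 <= theta_f <= 1/gamma
```

The full-range integral reproduces the closed form essentially exactly, while the cone fraction differs from $1/2$ only by recoil corrections of order $X$.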
When transformed to the laboratory frame, the gamma-ray photon scattered into the forward direction in the rest frame will be concentrated in the $1/\gamma$ cone in the laboratory frame. Polarization of scattered photons\[polarization\_study\] -------------------------------------------------------- For polarized photons scattering with unpolarized electrons without regard to the final electron polarization, the cross section is given by Eq. (\[covariant\_crosssection\]). Substituting $\xi_{1,2,3}$ and $\xi^\prime_{1,2,3}$ using Eqs. (\[stokes\_lab\_initial\_photon\]) and (\[final\_photon\_stokes\]), and assuming the linear polarization of the incident photon beam is along the $x_e$-axis, i.e., $\tau = 0$, we can get $$\frac{\mathrm{d}\sigma}{\mathrm{d}Y\mathrm{d}\phi_f}=\frac{2r^2_e}{X^2} \left(\Phi_0+\displaystyle\sum_{i=1}^3\Phi_i\bar{\xi}^\prime_i\right), \label{finalphoton-polar}$$ where $$\begin{aligned} \Phi_0&=&\left(\frac{1}{X}-\frac{1}{Y}\right)^2+\frac{1}{X}-\frac{1}{Y}+\frac{1} {4}\left(\frac{X}{Y}+\frac{Y}{X}\right)\nonumber\\ &&+\left[\left(\frac{1}{X}-\frac{1}{Y}\right)^2+\frac{1}{X}-\frac{1}{Y}\right] P_t\cos2\phi_f,\nonumber\\ \Phi_1&=&\frac{1}{2}\left(\frac{1}{X}-\frac{1}{Y} +1\right)^2P_t\sin4\phi_f\nonumber\\ &&+\left[\left(\frac{1}{X}-\frac{1}{Y}\right)^2+\frac{1}{X}-\frac{1}{Y}\right] \sin2\phi_f,\nonumber\\ \Phi_2&=&\frac{1}{4}\left(\frac{X}{Y}+\frac{Y}{X}\right)\left(\frac{2}{X}-\frac{ 2}{Y}+1\right)P_c,\nonumber\\ \Phi_3&=&-\left(\frac{1}{X}-\frac{1}{Y}+\frac{1}{2} \right)P_t\sin^22\phi_f\nonumber\\ &&+\left[\left(\frac{1}{X}-\frac{1}{Y}\right)^2+\frac{1}{X}-\frac{1}{Y}+\frac{1} {2}\right]P_t\cos^22\phi_f\nonumber\\ &&+\left[\left(\frac{1}{X}-\frac{1}{Y}\right)^2+\frac{1}{X}-\frac{1}{Y}\right] \cos2\phi_f.\end{aligned}$$ It should be noted that the Stokes parameters $\bar{\xi}^\prime_{1,2,3}$ describe the polarization of the scattered photon selected by a detector, not the polarization of the photon itself [@QED_landau]. 
In order to distinguish them from the detected Stokes parameters $\bar{\xi}^\prime_{1,2,3}$, we denote the Stokes parameters of the scattered photon itself by $\xi^f_{1,2,3}$. According to the rules presented in section 65 of [@QED_landau], $\xi^f_{1,2,3}$ are given by $$\xi^f_i = \frac{\Phi_i}{\Phi_0},~~i=1,2,3.$$ Integrating Eq. (\[finalphoton-polar\]) over the azimuthal angle $\phi_f$ gives $$\frac{\mathrm{d}\sigma}{\mathrm{d}Y}=\frac{2r^2_e}{X^2}\left\lbrace \langle\Phi_0\rangle+\displaystyle\sum_{i=1}^3\langle\Phi_i\rangle\langle\bar{ \xi}^\prime_i\rangle\right\rbrace,$$ where $$\begin{aligned} \langle\Phi_0\rangle &=&2\pi\left[\left(\frac{1}{X}-\frac{1}{Y}\right)^2+\frac{1}{X}-\frac{1}{Y} +\frac{1}{4}\left(\frac{X}{Y}+\frac{Y}{X}\right)\right],\nonumber\\ \langle\Phi_1\rangle&=&0,\nonumber\\ \langle\Phi_2\rangle&=&\frac{\pi}{2}\left(\frac{X}{Y}+\frac{Y}{X} \right)\left(\frac{2}{X}-\frac{2}{Y}+1\right)P_c,\nonumber\\ \langle\Phi_3\rangle&=&\pi\left(\frac{1}{X}-\frac{1}{Y}\right)^2P_t.\end{aligned}$$ Therefore, the averaged Stokes parameters of the scattered photons over the angle $\phi_f$ are given by $\langle\xi^f_i\rangle=\langle\Phi_i\rangle/\langle\Phi_0\rangle$, which depend on the incident photon polarization and the variables $X$ and $Y$. For example, for 100% horizontally polarized ($P_t=1, P_c=0,\tau=0$) incident photons scattering with unpolarized electrons, the average Stokes parameters of the scattered photons are given by $$\begin{aligned} \!\!\!\!\langle\xi^f_1\rangle &=&\frac{\langle\Phi_1\rangle}{\langle\Phi_0\rangle}=0, ~~~~~~~~~~\langle\xi^f_2\rangle =\frac{\langle\Phi_2\rangle}{\langle\Phi_0\rangle}=0,\nonumber\\ \!\!\!\!\langle\xi^f_3\rangle &=&\frac{\langle\Phi_3\rangle}{\langle\Phi_0\rangle}=\frac{2(\frac{1}{X}-\frac{1 }{Y})^2}{4(\frac{1}{X}-\frac{1}{Y})^2+\frac{4}{X}-\frac{4}{Y}+\frac{X}{Y}+\frac{ Y}{X}}.\end{aligned}$$ Clearly, the scattered photons retain the polarization of the incident photons.
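The closed form for $\langle\xi^f_3\rangle$ is easy to evaluate numerically; a minimal sketch for $800$ nm photons on $500$ MeV electrons (function name ours) shows the polarization transfer at the high-energy edge, $Y = X/(1+X)$:

```python
def avg_xi3(X, Y):
    """Azimuth-averaged Stokes parameter <xi3^f> for P_t = 1 incident photons."""
    A = 1.0 / X - 1.0 / Y
    return 2 * A * A / (4 * A * A + 4 * A + X / Y + Y / X)

MC2 = 0.5109989e6                    # electron rest energy (eV)
gamma = 500e6 / MC2                  # 500 MeV electrons
E_p = 1239.841984 / 800.0            # 800 nm photon energy (eV)
X = 4 * gamma**2 * E_p / 500e6       # recoil parameter (beta ~ 1)

xi3_edge = avg_xi3(X, X / (1 + X))   # at the maximum gamma-ray energy
```

At the edge the result is within a fraction of a percent of unity, consistent with the statement that the highest-energy photons are nearly $100$% horizontally polarized, and it falls off toward lower energies (larger $Y$).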
$\langle\xi^f_3\rangle$ as a function of the scattered photon energy is shown in Fig. \[fig-stokes1\] for $800$ nm laser photons colliding head-on with $500$ MeV electrons. It can be seen that the average Stokes parameter $\langle\xi^f_3\rangle$ of scattered gamma-ray photons is almost equal to 1 around the maximum scattered photon energy, as in this case the recoil effect is negligible. This means that the scattered gamma-ray photons with the maximum energy are almost $100$% horizontally polarized. ![The average Stokes parameter $\langle\xi^f_3\rangle$ of Compton gamma-ray photons produced by $100$% horizontally polarized ($P_t=1, P_c=0,\tau=0$) $800$ nm laser photons colliding head-on with unpolarized $500$ MeV electrons. []{data-label="fig-stokes1"}](avestokes3_linear){width="\columnwidth"} Beam-beam scattering ==================== In the previous section we discussed the spatial and spectral distributions of a gamma-ray beam produced by Compton scattering of monoenergetic electron and laser beams with zero transverse beam sizes, i.e., particle-particle scattering. In reality, however, the incoming electron and laser beams have finite spatial and energy distributions, which will change the distributions of the scattered gamma-ray beam. Therefore, there remains a need to understand the characteristics of a Compton gamma-ray beam produced by scattering of a laser beam and an electron beam with specific spatial and energy distributions, i.e., the beam-beam scattering. In this section, we discuss the beam-beam Compton scattering process. First, we derive a simple formula to calculate the total flux of the Compton gamma-ray beam. Then, we present two methods, a semi-analytical calculation and a Monte Carlo simulation, to study the spatial and spectral distributions of the gamma-ray beam. Based upon these methods, two computing codes, a numerical integration code and a Monte Carlo simulation code, have been developed.
These two codes have been benchmarked against experimental results at the High Intensity Gamma-ray Source (HI$\gamma$S) facility at Duke University. Geometry of beam-beam scattering -------------------------------- Figure \[lab\_laser\_frame\] shows Compton scattering of a pulsed electron beam and a pulsed laser beam in a laboratory frame. Two coordinate systems are used: $(x,y,z)$ for the electron beam moving along the $z$-direction, and $(x_l,y_l,z_l)$ for the laser beam propagating in the negative $z_l$-direction. These two coordinate systems share a common origin. The time $t=0$ is chosen as the instant when the centers of the electron beam and laser pulse arrive at the origin. The definition of these two coordinate systems allows the study of the Compton scattering process with an arbitrary collision angle, i.e., the angle between the $z$-axis and the negative $z_l$-axis. For a head-on collision, the collision angle equals $\pi$. In this case, the electron and laser coordinate systems coincide.
In these coordinate systems, the electron and laser beams with Gaussian distributions in their phase spaces can be described by their respective intensity functions as follows [@Vladimir] $$\begin{aligned} f_e(x,y,z,x^\prime,y^\prime,p,t)\!\!\!&=&\!\!\!\frac{1}{ (2\pi)^3\varepsilon_x\varepsilon_y\sigma_p\sigma_z}\!\exp\!\left[-\frac{ \gamma_xx^2+2\alpha_xxx^\prime+\beta_xx^{\prime2}}{2\varepsilon_x}\!-\!\frac{ \gamma_yy^2+2\alpha_yyy^\prime+\beta_yy^{\prime2}}{2\varepsilon_y}\!-\!\frac{ (p-p_0)^2}{2\sigma^2_p}\!-\!\frac{(z-ct)^2}{2\sigma^2_z}\right],\nonumber\\ f_p(x_l,y_l,z_l,k,t)\!\!\!&=&\!\!\!\frac{1}{4\pi^2\sigma_l\sigma_k\sigma_w^2} \!\exp\!\left[-\frac{x_l^2+y_l^2}{2\sigma_w^2}-\frac{(z_l+ct)^2}{2\sigma_l^2} -\frac{(k-k_0)^2}{2\sigma^2_k}\right],~\sigma_w =\sqrt{\frac{\lambda \beta_0}{4\pi}\left(1+\frac{z_l^2}{\beta_0^2}\right)}, \label{electron-photon-dist}\end{aligned}$$ where $p$ is the momentum of an electron, and $p_0$ is the centroid momentum of the electron beam; $x^\prime$ and $y^\prime$ are the angular divergences of the electron beam in the $x$- and $y$-directions, respectively; $\alpha_{x,y},\beta_{x,y}$ and $\gamma_{x,y}$ are the Twiss parameters of the electron beam; $\sigma_p$, $\sigma_z$ and $\varepsilon_{x,y}$ are the electron beam momentum spread, RMS bunch length, and transverse emittances, respectively; $k$ and $\lambda$ are the wavenumber and wavelength of a laser photon, and $k_0$ is the centroid wavenumber of the laser beam; $\beta_0$, $\sigma_k$ and $\sigma_l$ are the Rayleigh range, RMS energy spread, and RMS bunch length of the laser beam, respectively. Note that the waist of the laser beam is assumed to be at the origin of both coordinate systems.
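The laser envelope $\sigma_w$ in Eq. (\[electron-photon-dist\]) is the standard Gaussian-beam profile, with the RMS size growing by $\sqrt{2}$ one Rayleigh range from the waist. A minimal sketch (wavelength and Rayleigh range are assumed example values):

```python
import math

def sigma_w(z_l, lam, beta0):
    """RMS transverse laser size from Eq. (electron-photon-dist):
    waist at z_l = 0, Rayleigh range beta0 (all lengths in meters)."""
    return math.sqrt(lam * beta0 / (4 * math.pi) * (1 + (z_l / beta0)**2))

lam, beta0 = 800e-9, 1.0        # assumed: 800 nm laser, 1 m Rayleigh range
w0 = sigma_w(0.0, lam, beta0)   # RMS waist size, about a quarter millimeter
```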
Total flux ---------- The number of collisions occurring during a time $\mathrm{d}t$ and inside a phase space volume $\mathrm{d}^3p~\mathrm{d}^3k~\mathrm{d}V$ is given by [@CFT_landau] $$\begin{aligned} \mathrm{d}N(\vec{r},\vec{p},\vec{k},t)&=&\sigma_{tot}(\vec{p},\vec{k})c(1-\vec{ \beta}\cdot\vec{k}/|\vec{k}|)n_e(\vec{r},\vec{p},t)\nonumber\\ &&\times n_p(\vec{r},\vec{k},t)\mathrm{d}^3p~\mathrm{d}^3k~\mathrm{d}V\mathrm{d}t, \label{collisions_rate}\end{aligned}$$ where $\sigma_{tot}(\vec{p},\vec{k})$ is the total Compton scattering cross section, which is determined by the momenta of the incident electron and laser photon, $\vec{p}$ and $\hbar\vec{k}$; $\vec{\beta}=\vec{v}_e/c$ is the normalized velocity of the incident electron; $n_e(\vec{r},\vec{p},t)= N_e f_e(\vec{r},\vec{p},t)$ and $n_p(\vec{r},\vec{k},t) = N_p f_p(\vec{r},\vec{k},t)$, where $f_e(\vec{r},\vec{p},t)$ and $f_p(\vec{r},\vec{k},t)$ are the phase space intensity functions of the electron beam and laser pulse, and $N_e$ and $N_p$ are the total numbers of electrons and laser photons in their respective pulses. To calculate the total number of scattered gamma-ray photons produced by the collision, Eq. (\[collisions\_rate\]) needs to be integrated over the entire phase space and the collision time, i.e., $$\begin{aligned} N_{tot} &=& \int \mathrm{d} N(\vec{r},\vec{p},\vec{k},t)\nonumber\\ &=& N_e N_p\int \sigma_{tot}(\vec{p},\vec{k})c(1-\beta\cos\theta_i)\nonumber\\ &&\times f_e(\vec{r},\vec{p},t)f_p(\vec{r},\vec{k},t)\mathrm{d}^3p~\mathrm{d}^3k~\mathrm{d} V\mathrm{d}t, \label{tot_flux_integration}\end{aligned}$$ where $\theta_i$ is the collision angle between the incident electron and laser photon. Assuming collisions occur at the waists of both beams $(\alpha_x =\alpha_y = 0, \sigma_w =\sqrt{\lambda \beta_0/(4\pi)} )$, the spatial and momentum parts of the density functions can be separated, i.e., $f_e(\vec{r},\vec{p},t) = f_e(\vec{r},t)f_e(\vec{p})$ and $f_p(\vec{r},\vec{k},t) = f_p(\vec{r},t)f_p(\vec{k})$.
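With this separation, the flux reduces to $N_{tot}=N_eN_p\mathcal{L}_{sc}\overline{\sigma_{tot}}$, where for a head-on collision at both waists the single-collision luminosity is $\mathcal{L}_{sc} = 1/\bigl(2\pi\sqrt{\lambda\beta_0/4\pi+\beta_x\varepsilon_x}\,\sqrt{\lambda\beta_0/4\pi+\beta_y\varepsilon_y}\bigr)$, as derived in the text. A numerical sketch in the Thomson limit, with all beam parameters below being hypothetical example values (not numbers from the text):

```python
import math

def lumi_single(lam, beta0, beta_x, eps_x, beta_y, eps_y):
    """Single-collision luminosity (1/m^2) for a head-on collision at both waists."""
    ax = lam * beta0 / (4 * math.pi) + beta_x * eps_x
    ay = lam * beta0 / (4 * math.pi) + beta_y * eps_y
    return 1.0 / (2 * math.pi * math.sqrt(ax * ay))

SIGMA_T = 6.6524587e-29               # Thomson cross section (m^2)
# hypothetical beam parameters
lam, beta0 = 800e-9, 2.0              # laser wavelength (m), Rayleigh range (m)
beta_x = beta_y = 5.0                 # electron beta functions at the waist (m)
eps_x = eps_y = 1e-8                  # transverse emittances (m rad)
N_e, N_p, f0 = 6.2e9, 4.0e15, 5.58e6  # electrons, photons, collision rate (Hz)

L_sc = lumi_single(lam, beta0, beta_x, eps_x, beta_y, eps_y)
flux = N_e * N_p * L_sc * SIGMA_T * f0   # gamma rays per second (Thomson limit)
```

The structure of $\mathcal{L}_{sc}$ makes the design trade-off explicit: the flux is set by whichever of the laser spot area $\lambda\beta_0/4\pi$ or the electron spot area $\beta\varepsilon$ dominates at the collision point.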
Since the cross section $\sigma_{tot}(\vec{p},\vec{k})$ depends only on $\vec{p}$ and $\vec{k}$, we obtain $$N_{tot} = N_e N_p\int \mathcal{L}_{sc} \sigma_{tot}(\vec{p},\vec{k}) f_e(\vec{p})f_p(\vec{k})\mathrm{d}^3p~\mathrm{d}^3k, \label{tot_flux_lumin}$$ where $$\mathcal{L}_{sc} = c (1-\beta\cos\theta_i) \int f_e(\vec{r},t)f_p(\vec{r},t)\mathrm{d}V\mathrm{d}t \label{luminosity_single}$$ is the single-collision luminosity, defined as the number of scattering events produced per unit scattering cross section, which has dimensions of 1/area [@Chao:1999qt]. For a head-on collision ($\theta_i = \pi$) of a relativistic electron ($\beta \approx 1$) and a photon, the single-collision luminosity can be simplified to $$\mathcal{L}_{sc} = \frac{1}{2\pi\sqrt{\frac{\lambda\beta_0}{4\pi}+\beta_x\varepsilon_x}\sqrt{\frac{ \lambda\beta_0}{4\pi}+\beta_y\varepsilon_y}}.$$ Thus, Eq. (\[tot\_flux\_lumin\]) can be rewritten in the simple form $$N_{tot} = N_e N_p \mathcal{L}_{sc} \overline{\sigma_{tot}},$$ where $\overline{\sigma_{tot}}$ is the total Compton scattering cross section averaged over the momenta of the incident electrons and photons. Neglecting the energy spreads of the electrons and photons, $\overline{\sigma_{tot}}$ can be approximated by $\sigma_{tot}$ of Eq. (\[tot\_scat\_cross\]), which can be further simplified to the *classical Thomson cross section* if the recoil effect is negligible. If the beam-beam collision rate is $f_0$, the gamma-ray flux is given by $$\frac{\mathrm{d}N_{tot}}{\mathrm{d}t} = N_e N_p \mathcal{L}_{sc} \overline{\sigma_{tot}} f_0.$$ Spatial and energy distributions: semi-analytical calculation ------------------------------------------------------------- To obtain the spatial and energy distributions of a Compton gamma-ray beam, the differential cross section should be used instead of the total cross section in Eq. (\[tot\_flux\_integration\]). In addition, two constraints need to be imposed during the integration of Eq.
(\[tot\_flux\_integration\]) [@Vladimir; @Park_thesis]. ![\[collimator-geometry\]Geometric constraint for a scattered gamma-ray photon. The diagram only shows the projection of the constraint in the $x$-$z$ plane.](collimator_limitation.eps){width="\columnwidth"} First, let us consider the geometric constraint, which ensures that a gamma-ray photon generated at the location $\vec{r}$ can reach the location $\vec{r}_d$ shown in Fig. \[collimator-geometry\]. In terms of the position vectors, this constraint is given by $$\frac{\vec{k}^\prime}{|\vec{k}^\prime|}=\frac{\vec{r}_d-\vec{r}}{|\vec{r}_d-\vec {r}|}, \label{geometray_constraint}$$ where $\vec{k}^\prime$ represents the momentum of the gamma-ray photon; $\vec{r} = (x,y,z)$ denotes the location of the collision; and $\vec{r}_d=(x_d,y_d,z_d)$ denotes the location where the scattered gamma-ray photon is detected. Due to the finite spatial distribution and angular divergence of the electron beam, a gamma-ray photon reaching the location $\vec{r}_d$ can be scattered from an electron at different collision points with different angular divergences. The constraint of Eq. (\[geometray\_constraint\]) projected in the $x$-$z$ and $y$-$z$ planes is given by $$\theta_x+x^\prime = \frac{x_d-x}{L}, ~\theta_y+y^\prime = \frac{y_d-y}{L}. \label{constraint_projected}$$ Here, $\theta_x$ and $\theta_y$ are the projections of the scattering angle $\theta_f$ in the $x$-$z$ and $y$-$z$ planes, i.e., $\theta_x = \theta_f\cos\phi_f$, $\theta_y = \theta_f\sin\phi_f$ and $\theta_f^2 = \theta_x^2+\theta_y^2$, where $\theta_f$ and $\phi_f$ are the angles defined in the electron coordinate system ($x_e,y_e,z_e$) in which the electron is incident along the $z_e$-direction (Fig. \[collimator-geometry\]). $x^\prime$ and $y^\prime$ are the angular divergences of the incident electron, i.e., the angles between the electron momentum and the $z$-axis. $L$ is the distance between the collision point and the detection plane (or the collimation plane).
Note that a far field detection (or collimation) has been assumed, i.e., $L\gg |\vec{r}|$ and $ L\approx |\vec{r}_d|$. The second constraint is the energy conservation. Due to the finite energy spread of the electron beam, the gamma-ray photon with an energy of $E_g$ can be produced by electrons with various energies and scattering angles. Mathematically, this constraint is given by $$\delta(\bar{E}_g-E_g), \label{energycondition}$$ where $$\bar{E}_g = \frac{4\bar{\gamma}^2E_p}{1+\bar{\gamma}^2\theta_f^2+4\bar{\gamma} E_p/mc^2}.$$ Imposing the geometric and energy constraints in Eq. (\[tot\_flux\_integration\]), the spatial and energy distributions of a Compton gamma-ray beam can be obtained by integrating all the individual scattering events, i.e., $$\begin{aligned} \frac{\mathrm{d}N(E_g,x_d,y_d)}{ \mathrm{d}\Omega_d \mathrm{d}E_g}&\approx&N_eN_p\int \frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}\delta(\bar{E} _g-E_g)c(1+\beta)\nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\times f_e(x,y,z,x^\prime,y^\prime,p,t)\nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\times f_p(x,y,z,k,t)\mathrm{d}x^\prime\mathrm{d}y^\prime \mathrm{d}p\mathrm{d}k\mathrm{d}V\mathrm{d}t, \label{spatialenergydist}\end{aligned}$$ where $\mathrm{d}\Omega_d = \mathrm{d}x_d \mathrm{d}y_d/L^2$, and $\mathrm{d}\sigma/\mathrm{d}\Omega$ is the differential Compton scattering cross section. Note that a head-on collision between electron and laser beams has been assumed, and the density function $f_e(\vec{r},\vec{p},t)$ has been replaced with $f_e(x,y,z,x^\prime,y^\prime,p,t)$ of Eq. (\[electron-photon-dist\]) under the approximation $p_z\approx p$ for a relativistic electron beam. In addition, the integration $\int \cdots f_p(\vec{r},\vec{k},t)\mathrm{d}^3k$ is replaced with $\int \cdots f_p(x,y,z,k,t) \mathrm{d}k$, where $f_p(x,y,z,k,t)$ is defined in Eq. (\[electron-photon-dist\]). 
Integrations over $\mathrm{d}k_x$ and $\mathrm{d}k_y$ have been carried out since the differential cross section has a very weak dependency on $k_x$ and $k_y$ for a relativistic electron beam. Assuming head-on collisions for each individual scattering event ($\theta_i = \pi$ and $\mathrm{d}\sigma/\mathrm{d}\Omega$ is given by Eq. (\[crosssection-1\])), neglecting the angular divergences of the laser beam and replacing $x^\prime$ and $y^\prime$ with $\theta_x$ and $\theta_y$, we can integrate Eq. (\[spatialenergydist\]) over $\mathrm{d}V,\mathrm{d}t$ and $\mathrm{d}p$ to yield the following result (see Appendix \[append\]), $$\begin{aligned} \frac{\mathrm{d}N(E_g,x_d,y_d)}{\mathrm{d}E_g \mathrm{d}x_d \mathrm{d}y_d}&=&\frac{r_e^2L^2N_eN_p}{4\pi^3\hbar c\beta_0\sigma_\gamma\sigma_k}\int^{\infty}_0 \int^{\sqrt{4E_p/E_g}}_{-\sqrt{4E_p/E_g}}\int^{\theta_{xmax}}_{-\theta_{xmax}} \frac{1}{\sqrt{\zeta_x\zeta_y}\sigma_{\theta x}\sigma_{\theta y}}\frac{\gamma}{1+2\gamma E_p/mc^2}\nonumber \\ &&\times\left\lbrace\frac{1}{4}\left[\frac{4\gamma^2E_p}{ E_g(1+\gamma^2\theta_f^2)}+\frac{E_g(1+\gamma^2\theta_f^2)}{4\gamma^2E_p}\right] -2\cos^2(\tau-\phi_f)\frac{\gamma^2\theta_f^2}{(1+\gamma^2\theta_f^2)^2} \right\rbrace \nonumber\\ &&\times\exp\left[-\frac{(\theta_x-x_d/L)^2}{2\sigma_{\theta_x}^2}-\frac{ (\theta_y-y_d/L)^2}{2\sigma_{\theta_y}^2}-\frac{(\gamma-\gamma_0)^2}{ 2\sigma_\gamma^2}-\frac{(k-k_0)^2}{2\sigma_k^2}\right]\mathrm{d}\theta_x \mathrm{d}\theta_y \mathrm{d}k, \label{spatialenergydist3}\end{aligned}$$ where $$\begin{aligned} \xi_x = 1+(\alpha_x-\frac{\beta_x}{L})^2+\frac{2k\beta_x\varepsilon_x}{\beta_0}, ~\zeta_x =1+\frac{2k\beta_x\varepsilon_x}{\beta_0},~\sigma_{\theta x} = \sqrt{\frac{\varepsilon_x\xi_x}{\beta_x\zeta_x}},\nonumber\\ \xi_y = 1+(\alpha_y-\frac{\beta_y}{L})^2+\frac{2k\beta_y\varepsilon_y}{\beta_0}, ~\zeta_y =1+\frac{2k\beta_y\varepsilon_y}{\beta_0},~\sigma_{\theta y} = \sqrt{\frac{\varepsilon_y\xi_y}{\beta_y\zeta_y}},\nonumber\\ \theta_f = 
\sqrt{\theta_x^2+\theta_y^2},~\theta_{xmax} =\sqrt{ 4E_p/E_g-\theta_y^2},~\sigma_\gamma = \frac{\sigma_{E_e}}{mc^2},\nonumber\\ \gamma=\frac{2E_g E_p/mc^2}{4E_p-E_g\theta_f^2}\left(1+\sqrt{1+\frac{4E_p-E_g\theta_f^2}{ 4E_p^2E_g/(mc^2)^2}}\right), \end{aligned}$$ and $\sigma_{E_e}$ is the RMS energy spread of the electron beam. In a storage ring, the vertical emittance of the electron beam is typically much smaller than the horizontal emittance. For a Compton scattering occurring at a location with similar horizontal and vertical beta functions ($\beta_x \sim \beta_y$), the vertical divergence of the electron beam can be neglected. In addition, the photon energy spread of a laser beam is small, and its impact can also be neglected in many practical cases. Under these circumstances, the cross section term in Eq. (\[spatialenergydist3\]) has a weak dependence on $\theta_y$ ($\approx y_d/L$) and $k$ ($\approx k_0$). With the assumption of an unpolarized or circularly polarized laser beam, Eq. (\[spatialenergydist3\]) can be simplified further after integrating $\theta_y$ and $k$: $$\begin{aligned} \frac{\mathrm{d}N(E_g,x_d,y_d)}{\mathrm{d}E_g \mathrm{d}x_d\mathrm{d}y_d}&\approx& \frac{r_e^2 L^2N_eN_{p}}{2\pi^2 \hbar c \beta_0\sqrt{\zeta_x}\sigma_\gamma\sigma_{\theta x}} \int^{\theta_{xmax}}_{-\theta_{xmax}}\frac{\gamma}{1+2\gamma E_p/mc^2} \left\lbrace \frac{1}{4}\left[\frac{4\gamma^2 E_p}{E_g(1+\gamma^2\theta_f^2)}+ \frac{E_g(1+\gamma^2\theta_f^2)}{4\gamma^2 E_p}\right]\right.\nonumber\\ &&\left.-\frac{\gamma^2\theta_f^2}{(1+\gamma^2\theta_f^2)^2}\right\rbrace \times\exp\left[-\frac{(\theta_x-x_d/L)^2}{2\sigma_{\theta_x}^2}- \frac{(\gamma-\gamma_0)^2}{2\sigma_\gamma^2}\right] \mathrm{d}\theta_x, \label{dist_no_vertical}\end{aligned}$$ where $\theta_{xmax}=\sqrt{4E_p/E_g-(y_d/L)^2}$. The integrations with respect to $k, ~\theta_y$ and $\theta_x$ in Eq. (\[spatialenergydist3\]) or $\theta_x$ in Eq. (\[dist\_no\_vertical\]) must be carried out numerically. 
To carry out these integrations, a numerical integration Compton scattering code (CCSC) has been developed in the C++ programming language to evaluate the integrals of Eqs. (\[spatialenergydist3\]) and (\[dist\_no\_vertical\]). With the detailed spatial and energy distributions of the Compton gamma-ray beam $\mathrm{d}N(E_g,x_d,y_d)/(\mathrm{d}E_g \mathrm{d}x_d\mathrm{d}y_d)$, the energy spectrum of the gamma-ray beam collimated by a round aperture with a radius of $R$ can be obtained by integrating $\mathrm{d}N(E_g,x_d,y_d)/(\mathrm{d}E_g \mathrm{d}x_d\mathrm{d}y_d)$ over the variables $x_d$ and $y_d$ for the entire opening aperture, i.e., $\sqrt{x_d^2+y_d^2}\leqslant R$. The effect of a transverse misalignment of the collimator on the gamma-ray beam distributions can be introduced by replacing $x_d$ and $y_d$ with $x_d+\Delta x$ and $y_d+\Delta y$ in Eq. (\[spatialenergydist3\]) or Eq. (\[dist\_no\_vertical\]), where $\Delta x$ and $\Delta y$ are the collimator offset errors in the horizontal and vertical directions, respectively.

Spatial and energy distributions: Monte Carlo simulation
--------------------------------------------------------

In the previous section, we derived an analytical formula for the spatial and energy distributions of a Compton gamma-ray beam. To simplify the calculation, however, several approximations were made: a head-on collision for each individual scattering event, a negligible angular divergence of the laser beam, and far-field collimation. A completely different approach to studying the Compton scattering process is a Monte Carlo simulation. With this numerical technique, effects that cannot easily be included in an analytical method can be properly accounted for; for example, the scattering process can be studied for an arbitrary collision angle. With this motivation, we developed a Monte Carlo Compton scattering code. In the following, the algorithm of this code is presented.
### Simulation setup At the beginning of the collision, both the electron and laser pulses are located some distance away from the origin (Fig. \[lab\_laser\_frame\]), and two pulse centers arrive at the origin at the same time ($t=0$). The collision duration is divided into a number of time steps, and the time step number represents the time in the simulation. Due to a large number of electrons in the bunch, it is not practical to track each electron in the simulation. Therefore, the electron bunch is divided into a number of macro particles (for example, $10^6$) which are tracked in the simulation. The phase space coordinates of each macro particle are sampled at time $t=0$. For an electron beam with Gaussian distributions in phase space, the coordinates are sampled according to the electron beam Twiss parameters as follows [@CAIN; @chao_1] $$\begin{aligned} x(0) &=& \sqrt{2 u_1\varepsilon_x\beta_x }\cos\phi_1,\nonumber\\ x^\prime(0) &=&-\sqrt{2 u_1\varepsilon_x /\beta_x }(\alpha_x\cos\phi_1+\sin\phi_1),\nonumber\\ y(0) &=& \sqrt{2 u_2 \varepsilon_y \beta_y }\cos\phi_2,\nonumber\\ y^\prime(0) &=&-\sqrt{2 u_2 \varepsilon_y /\beta_y}(\alpha_y\cos\phi_2+\sin\phi_2),\nonumber\\ z(0) &=& \sigma_z r_1,\nonumber\\ E_e&=&E_0(1+\sigma_{E_e} r_2), \label{electron_parameters}\end{aligned}$$ where $u_{1,2}$ are random numbers generated using an exponential distribution with a unit mean parameter (i.e., $e^{-u_{1,2}}$), $r_{1,2}$ are random numbers generated according to a Gaussian distribution with a zero mean and unit standard deviation, and $\phi_{1,2}$ are uniformly distributed random numbers between $0$ and $2\pi$. The coordinates of macro particles at any other time ($t\neq0$) can then be obtained by transforming the coordinates given by Eq. (\[electron\_parameters\]). The Compton scattering is simulated according to the local intensity and momentum of the laser beam at the collision point. 
The intensity of the laser beam at the collision point $(x,y,z)$ in the electron-beam coordinate system can be calculated according to Eq. (\[electron-photon-dist\]) using the laser-beam coordinates $(x_l,y_l,z_l)$ transformed from $(x,y,z)$. The momentum direction $\hat{k}$ of the photon at the collision point $(x,y,z)$ can be calculated by treating the laser beam as an electromagnetic wave. For a Gaussian laser beam, the propagation phase $\psi(x_l,y_l,z_l)$ in the laser-beam coordinate system is given by [@CAIN; @AESiegman] $$\psi(x_l,y_l,z_l) =-ik_lz_l-ik_lz_l\frac{x_l^2+y_l^2}{2(\beta_0^2+z_l^2)};$$ the wavevector (the momentum of the photon, $\vec{k}_l$) is given by $\vec{k}_l = \nabla \psi(x_l,y_l,z_l)$. Thus, $$\hat{k}_l\approx-\frac{1}{\sqrt{1+c_1^2+c_2^2}}(c_1\hat{x}_l+c_2\hat{y}_l+\hat{z}_l), \label{local_vector}$$ where $$c_1 = \frac{x_l z_l}{\beta_0^2+z^2_l},~c_2 = \frac{y_l z_l}{\beta_0^2+z^2_l}.$$ The unit vector $\hat{k}_l$ expressed in the electron-beam coordinate system gives the momentum direction of the laser photon in this coordinate system.

### Simulation procedures

At each time step, the Compton scattering process is simulated for each macro particle. The simulation proceeds in two stages. In the first stage, the scattering probability is calculated using the local intensity and momentum of the laser beam. According to this probability, the scattering event is sampled. If a scattering occurs, a gamma-ray photon is generated, and the simulation proceeds to the next stage. In the second stage, the energy and scattering angles (the polar and azimuthal angles) of the gamma-ray photon are sampled according to the differential Compton scattering cross section. The detailed simulation procedures for these two stages are presented below.
### First stage: scattering event Since the energy and scattering angles of the gamma-ray photon are not the concern at this stage, the total scattering cross section is used to calculate the scattering probability. According to Eq. (\[collisions\_rate\]), the scattering probability $P(\vec{r},\vec{p},\vec{k},t)$ in the time step $\Delta t$ for the macro particle at the collision point $(x,y,z)$ is given by $$P(\vec{r},\vec{p},\vec{k},t) =\sigma_{tot}(\vec{p},\vec{k}) c(1-\vec{\beta}\cdot\vec{k}/|\vec{k}|)n_p(x,y,z,k,t)\Delta t, \label{collisions_prob}$$ where $n_p(x,y,z,k,t)$ and $\vec{k}$ are the local density and wavevector of the photon beam, respectively; $\sigma_{tot}(\vec{p},\vec{k})$ is the total scattering cross section given by Eq. (\[tot\_scat\_cross\]). According to the probability $P(\vec{r},\vec{p},\vec{k},t)$, the scattering event is sampled using the *rejection* method as follows [@penelope; @Nelson:1985ec]: first, a random number $r_3$ is uniformly generated in the range from $0$ to $1$; if $r_3\leq P(\vec{r},\vec{p},\vec{k},t)$, Compton scattering happens; otherwise the scattering does not happen, and the above sampling process is repeated for the next macro particle. ### Second stage: scattered photon energy and direction When a Compton scattering event happens, a gamma-ray photon is generated. The simulation proceeds to the next stage to determine the energy and scattering angles of the gamma-ray photon. For convenience, the sampling probability for generating gamma-ray photon parameters is calculated in the electron-rest frame coordinate system $(x_e^\prime,y_e^\prime,z_e^\prime)$ in which the electron is at rest and the laser photon is propagated along the $z_e^\prime$-axis direction. 
Since the momenta of macro particles and laser photons have been expressed in the electron-beam coordinate system ($x,y,z$) in the lab frame, we need to transform the momenta to those defined in the electron-rest frame coordinate system $(x_e^\prime, y_e^\prime, z_e^\prime)$. After transformations, the sampling probability for generating the scattered gamma-ray photon energy and direction will be calculated as follows. In the electron-rest frame coordinate system $(x_e^\prime, y_e^\prime, z_e^\prime)$, according to Eq. (\[scatteredphotonenergy\]) the scattered photon energy is given by $$\frac{1}{E_g^\prime} = \frac{1}{E_p^\prime}+\frac{1}{mc^2}(1-\cos\theta^\prime), \label{gamma_energy_rest_frame}$$ where $\theta^\prime$ is the scattering angle between the momenta of the scattered and incident photons; $E_g^\prime$ and $E_p^\prime$ are the energies of the scattered and incident photons, and $E_g^\prime$ is in the range of $$\frac{E_p^\prime}{1+2E_p^\prime/mc^2}\leq E_g^\prime \leq E_p^\prime. \label{omega_range}$$ In the electron-rest frame coordinate system, we can simplify the Lorentz invariant quantities $X$ and $Y$ of Eq. (\[angular\_dif\_crosssection\]) to $X = 2E_p^\prime/mc^2$ and $Y = 2E_g^\prime/mc^2$. As a result, the differential cross section is given by $$\begin{aligned} \frac{\mathrm{d}^2\sigma}{\mathrm{d}E_g^\prime \mathrm{d}\phi^\prime}\!\!\!&=&\!\!\!\frac{ mc^2 r_e^2}{2 E_p^{\prime2}}\!\!\left\lbrace \!\!\left[1+P_t \cos(2\tau^\prime\!-\!2\phi^\prime)\right]\!\!\left[\!\!\left(\frac{mc^2}{ E_p^\prime}\!-\!\frac{mc^2}{ E_g^\prime}\right)^2\right. 
\right.\nonumber\\ \!\!\!&&\left.\left.+2\left(\frac{mc^2}{E_p^\prime}-\frac{mc^2}{E_g^\prime} \right)\right]+\frac{E_p^\prime}{E_g^\prime} +\frac{E_g^\prime}{E_p^\prime}\right\rbrace, \label{cross_section}\end{aligned}$$ where $\tau^\prime$ is the azimuthal angle of the linear polarization direction of the incident photon beam defined in the system $(x_e^\prime, y_e^\prime, z_e^\prime)$, and $\phi^\prime$ is the azimuthal angle of the scattered photon. Note that the quantity $P_t$, the degree of linear polarization of the incident photon beam, is invariant under Lorentz transformations. The scattered photon energy $E_g^\prime$ and the azimuthal angle $\phi^\prime$ are sampled according to the differential cross section of Eq. (\[cross\_section\]). Since Eq. (\[cross\_section\]) depends on both $E_g^\prime$ and $\phi^\prime$, the *composition and rejection* sampling method [@penelope; @Nelson:1985ec] is used to sample these two variables. To sample the scattered gamma-ray photon energy $E_g^\prime$, Eq. (\[cross\_section\]) is first integrated over the azimuthal angle $\phi^\prime$ and written as $$\frac{\mathrm{d}\sigma}{\mathrm{d}E_g^\prime}=\pi r_e^2\frac{mc^2}{E_p^{\prime2}}\left(2+\frac{2E_p^\prime}{mc^2}\right)f(E_g^\prime),$$ where $$\begin{aligned} f(E_g^\prime) &=& \frac{1}{2+2E_p^\prime/mc^2}\left[\left(\frac{mc^2}{E_p^\prime}-\frac{mc^2}{ E_g^\prime}\right)^2\right.\nonumber\\ &&+2\left(\frac{mc^2}{E_p^\prime}-\frac{mc^2}{E_g^\prime}\right) \left.+\frac{E_p^\prime}{E_g^\prime} +\frac{E_g^\prime}{E_p^\prime}\right],\end{aligned}$$ and $0\leq f(E_g^\prime)\leq 1$ for any $E_g^\prime$. The scattered gamma-ray photon energy $E_g^\prime$ can then be sampled according to $f(E_g^\prime)$ as follows: first, a candidate energy $E_g^\prime$ is generated uniformly in the range given by Eq. (\[omega\_range\]), and a uniform random number $r_4$ is generated in the range from $0$ to $1$; if $r_4\leq f(E_g^\prime)$, $E_g^\prime$ is accepted; otherwise, the sampling process is repeated until an $E_g^\prime$ is accepted.
If $E_g^\prime$ is accepted, the scattering angle $\theta^\prime$ can be calculated using Eq. (\[gamma\_energy\_rest\_frame\]). After the scattered gamma-ray photon energy $E_g^\prime$ is determined, the azimuthal angle $\phi^\prime$ is sampled according to $$g(\phi^\prime) =\frac{\mathrm{d}^2\sigma}{\mathrm{d}E_g^\prime \mathrm{d}\phi^\prime} /\frac{\mathrm{d}\sigma}{\mathrm{d}E_g^\prime}.$$ After obtaining the gamma-ray photon energy $E_g^\prime$ and the angles $\theta^\prime$ and $\phi^\prime$ in the electron-rest frame coordinate system, we transform these parameters back to the lab-frame coordinate system. At the same time, the momentum of the scattered electron is computed. This electron can still interact with laser photons in subsequent time steps, which allows the code to correctly model multiple scattering between the electrons and laser photons.

Benchmark and applications of Compton scattering codes
======================================================

Based upon the algorithms discussed in Section III, we have developed two computer codes in the C++ programming language: the numerical integration Compton scattering code CCSC and the Monte Carlo Compton scattering code MCCMPT. Below, we briefly discuss the benchmarking and applications of these two codes.
Energy Distribution
-------------------

![\[CCSC\_MCCMPT\] Compton gamma-ray beam energy spectra generated using the computer codes MCCMPT, CCSC and CAIN2.35. The stairs plot represents the spectrum simulated using MCCMPT, the dashed line represents the spectrum calculated using CCSC, and the circles represent the spectrum from CAIN2.35. The electron beam energy and RMS energy spread are $400$ MeV and $0.2$%, respectively. The electron beam horizontal emittance is $10$ nm-rad, and the vertical emittance is neglected. The laser wavelength is $600$ nm with a negligible photon beam energy spread. The gamma-ray beam is collimated by an aperture with a radius of $12$ mm located $60$ meters downstream from the collision point.](analytical_simulated_smooth "fig:"){width="\columnwidth"}

![\[CCSC\_Measured\] Comparison between the measured and calculated energy spectra of a Compton gamma-ray beam. The solid line represents the spectrum calculated using the CCSC code, and the circles represent the measured gamma-beam energy distribution after removing the escape peaks and the Compton plateau using a spectrum unfolding technique. The gamma-ray beam is produced by Compton scattering of a $466$ MeV electron beam and a $790$ nm laser beam at the HI$\gamma$S facility. The RMS energy spread of the electron beam is $0.1$%, and the horizontal and vertical emittances are $7.8$ and $1.0$ nm-rad, respectively. A collimator with an aperture radius of $12.7$ mm is placed $60$ meters downstream from the collision point.](5MeV_measured_analytical "fig:"){width="\columnwidth"}

Our Compton scattering codes MCCMPT and CCSC have been benchmarked against CAIN2.35, a well-known beam-beam collision code developed at KEK for the International Linear Collider [@CAIN]. The energy spectra of Compton gamma-ray beams generated using these three codes are shown in Fig. \[CCSC\_MCCMPT\]; the three codes produce very similar results. In terms of computing time, CCSC, MCCMPT and CAIN2.35 took about $10$, $150$ and $1200$ minutes, respectively, to generate these spectra on a single-core machine. Compared to the multi-purpose beam-beam collision code CAIN2.35, the dedicated Compton scattering codes CCSC and MCCMPT are much faster and easier to use.
At the HI$\gamma$S facility, the Compton gamma-ray beam is usually measured using a high-purity germanium (HPGe) detector. Because of the non-ideal response of the detector, the measured spectrum consists of a full-energy peak, single and double escape peaks, and a Compton plateau. To unfold the measured energy spectrum, a novel end-to-end spectrum reconstruction method has recently been developed [@c.sun_2]. The comparison between the measured gamma-ray spectrum and the spectrum calculated using the CCSC code is shown in Fig. \[CCSC\_Measured\]; very good agreement between them is observed.

Using the Monte Carlo simulation code, we can study the Compton scattering process for an arbitrary collision angle. The spectra simulated using MCCMPT are compared with those from CAIN$2.35$ in Fig. \[arb\_ang\]. Again, very good agreement is observed. The gamma-ray beam produced by a head-on collision of an electron beam and a laser beam has the highest energy and flux; with a $90$ degree collision angle, the maximum energy of the gamma-ray beam is only half of that for a head-on collision.

![\[arb\_ang\] Compton gamma-ray beam energy spectra for collision angles of $90^\circ$, $100^\circ$, $120^\circ$, $135^\circ$ and $180^\circ$, simulated using the codes MCCMPT and CAIN2.35. The electron beam and laser beam parameters are the same as those in Fig. \[CCSC\_MCCMPT\]. The solid lines represent the spectra simulated using MCCMPT, and the circles represent the spectra simulated using CAIN2.35.](spectra_collision_angle "fig:"){width="\columnwidth"}

The energy spread of a Compton gamma-ray beam is mainly determined by the degree of collimation of the gamma-ray beam and by the energy spread and angular divergence of the electron beam [@c.sun_2]. The contributions of these parameters to the gamma-ray beam energy spread are summarized in Table \[depedences\]. In some literature [@PhysRevE.54.5657; @1983paac.conf...21S], a simple quadratic sum of the individual contributions was used to estimate the energy spread of the Compton gamma-ray beam. However, the electron beam angular divergence and the gamma-beam collimation introduce non-Gaussian broadening effects on the gamma-beam spectrum [@c.sun_2], causing the spectrum to have a long energy tail (Figs. \[CCSC\_MCCMPT\] and \[CCSC\_Measured\]); the energy spread of the gamma-ray beam therefore cannot be given simply by the quadrature sum of the different broadening mechanisms. The realistic gamma-ray beam energy spread needs to be calculated from its energy spectrum, which can be done using either the numerical integration code CCSC or a Monte Carlo simulation code, MCCMPT or CAIN2.35.
Spatial distribution
--------------------

![image](simulated_dist_cirular){width="2in"} ![image](simulated_dist_linear){width="2in"}\
![image](measured_dist_circular){width="2in"} ![image](measured_dist_linear){width="2in"}

Figure \[simulated\_measured\_dist\] shows the spatial distributions of gamma-ray beams simulated by the MCCMPT code for circularly and linearly polarized incoming laser beams. For comparison, the spatial distributions of gamma-ray beams measured using the recently developed gamma-ray imaging system at the HI$\gamma$S facility [@c.sun_3] are also shown in Fig. \[simulated\_measured\_dist\]. For a circularly polarized incoming laser beam, the distribution is azimuthally symmetric; for a linearly polarized incoming laser beam, the distribution is asymmetric and is "pinched" along the direction of the laser beam polarization. Further applications of the CCSC and MCCMPT codes to study the characteristics of Compton gamma-ray beams can be found in [@c.sun_2; @c.sun_1; @c.sun_4].

\[conclusion\]Summary
=====================

To study the characteristics of a gamma-ray beam produced by Compton scattering of an electron beam and a laser beam, we have developed two algorithms: one based upon an analytical calculation and the other using a Monte Carlo simulation. Based on these algorithms, two computer codes, a numerical integration code (CCSC) and a Monte Carlo simulation code (MCCMPT), have been developed at Duke University. These codes have been extensively benchmarked against the beam-beam collision code CAIN2.35 developed at KEK and against measurement results from the High Intensity Gamma-ray Source (HI$\gamma$S) facility at Duke University.
Using these two codes, we are able to characterize Compton gamma-ray beams with various electron and laser beam parameters, arbitrary collision angles, and different gamma-beam collimation conditions. In this work, the nonlinear Compton scattering process is not considered, and the polarization of the electron beam is not taken into account. Although the polarization of the gamma-ray beam has been calculated in Section II, this calculation is limited to the particle-particle scattering case. Further studies will be carried out to address these issues.

This work is supported in part by the US Department of Defense MFEL Program as administered by the AFOSR under contract number FA9550-04-01-0086 and by the U.S. Department of Energy, Office of Nuclear Physics, under grant number DE-FG02-97ER41033.

Spatial and energy distributions of a Compton gamma-ray beam\[append\]
======================================================================

The spatial and energy distributions of a Compton gamma-ray beam produced by a head-on collision of an electron beam and a photon beam are given by $$\frac{\mathrm{d}N(E_g,x_d,y_d)}{\mathrm{d}\Omega_d \mathrm{d}E_g}\approx \int \frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}\delta(\bar{E}_g-E_g)c(1+\beta)n_e(x,y,z,x^\prime,y^\prime,p,t) n_p(x,y,z,k,t)\mathrm{d}x^\prime\mathrm{d}y^\prime\mathrm{d}p\mathrm{d}k\mathrm{d}V\mathrm{d}t, \label{spatialenergydist_append}$$ where $\mathrm{d}\Omega_d = \mathrm{d}x_d \mathrm{d}y_d/L^2$; $n_e(x,y,z,x^\prime,y^\prime,p, t)$ and $n_p(x,y,z,k,t)$ are the density functions of the electron and photon beams given by Eq. (\[electron-photon-dist\]); and $\mathrm{d}\sigma/\mathrm{d}\Omega$ is the differential cross section given by Eq. (\[crosssection-1\]).
For head-on collisions, we can simplify the differential cross section to $$\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}=8 r_e^2\left\lbrace \frac{1}{4}\left[\frac{4\bar{\gamma}^2 E_p}{\bar{E_g}(1+\bar{\gamma}^2\theta_f^2)}+\frac{\bar{E_g}(1+\bar{\gamma} ^2\theta_f^2)}{4\bar{\gamma}^2 E_p}\right]-2\cos^2(\tau-\phi_f)\frac{\bar{\gamma}^2\theta_f^2}{(1+\bar{\gamma} ^2\theta_f^2)^2}\right\rbrace \left(\frac{\bar{E_g}}{4\bar{\gamma} E_p}\right)^2. \label{crosssection_headon_append}$$ Replacing $x^\prime$ and $y^\prime$ with $\theta_x$ and $\theta_y$ according to Eq. (\[constraint\_projected\]), and neglecting the angular divergence of the laser beam at the collision point, we can integrate Eq. (\[spatialenergydist\_append\]) over $\mathrm{d}V$ and $\mathrm{d}t$ and obtain $$\begin{aligned} \frac{\mathrm{d}N(E_g,x_d,y_d)}{\mathrm{d}E_g \mathrm{d}x_d \mathrm{d}y_d}&=&\frac{L^2N_e N_p}{(2\pi)^3\beta_0\sigma_p\sigma_k}\int\frac{k}{\sqrt{\zeta_x\zeta_y}}\frac{1} {\sigma_{\theta x}\sigma_{\theta_y}} \frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}\delta(\bar{E} _g-E_g)(1+\beta)\nonumber\\ &&\times\exp\left[-\frac{(\theta_x-x_d/L)^2}{2\sigma_{\theta_x}^2}-\frac{ (\theta_y-y_d/L)^2}{2\sigma_{\theta_y}^2}-\frac{(p-p_0)^2}{2\sigma_p^2}-\frac{ (k-k_0)^2}{2\sigma_k^2}\right]\mathrm{d}\theta_x\mathrm{d}\theta_y\mathrm{d} p\mathrm{d}k, \label{spatialenergydist6}\end{aligned}$$ where $$\begin{aligned} \xi_x = 1+(\alpha_x-\frac{\beta_x}{L})^2+\frac{2k\beta_x\varepsilon_x}{\beta_0}, ~\zeta_x =1+\frac{2k\beta_x\varepsilon_x}{\beta_0},~\sigma_{\theta x} = \sqrt{\frac{\varepsilon_x\xi_x}{\beta_x\zeta_x}}, \nonumber\\ \xi_y = 1+(\alpha_y-\frac{\beta_y}{L})^2+\frac{2k\beta_y\varepsilon_y}{\beta_0}, ~\zeta_y =1+\frac{2k\beta_y\varepsilon_y}{\beta_0},~\sigma_{\theta y} =\sqrt{\frac{\varepsilon_y\xi_y}{\beta_y\zeta_y}},\nonumber\\ \theta_f = \sqrt{\theta_x^2+\theta_y^2},~\theta_x =\theta_f\cos\phi_f,~\theta_y =\theta_f\sin\phi_f.\end{aligned}$$ Next, we need to integrate the electron beam momentum 
$\mathrm{d}p$. It is convenient to change the momentum $p$ to the scaled electron beam energy variable $\bar{\gamma}=E_e/(mc^2)$, and to rewrite the delta function $\delta(\bar{E}_g-E_g)$ as $$\delta(\bar{E}_g-E_g)=\delta\left(\frac{4\bar{\gamma}^2E_p}{1+\bar{\gamma}^2\theta_f^2+4\bar{\gamma} E_p/mc^2}-E_g\right)=\delta(\bar{\gamma}-\gamma)\frac{(1+\gamma^2\theta_f^2+4\gamma E_p/mc^2)^2}{8\gamma E_p(1+2\gamma E_p/mc^2)}, \label{delta}$$ where $$\gamma=\frac{2E_g E_p/mc^2}{4E_p-E_g\theta_f^2}\left(1+\sqrt{1+\frac{4E_p-E_g\theta_f^2}{4E_p^2E_g/(mc^2)^2}}\right)$$ is the root of $$E_g=\frac{4\gamma^2E_p}{1+\gamma^2\theta_f^2+4\gamma E_p/mc^2}$$ under the condition $0\leq\theta_f\leq\sqrt{\frac{4E_p}{E_g}}$. Substituting Eqs. (\[crosssection\_headon\_append\]) and (\[delta\]) into Eq. (\[spatialenergydist6\]) and integrating over $\mathrm{d}\bar{\gamma}$, we obtain $$\begin{aligned} \frac{\mathrm{d}N(E_g,x_d,y_d)}{\mathrm{d}E_g\mathrm{d}x_d\mathrm{d}y_d}&=&\frac{r_e^2L^2N_eN_p}{4\pi^3\hbar c\beta_0\sigma_\gamma\sigma_k}\int^{\infty}_0 \int^{\sqrt{4E_p/E_g}}_{-\sqrt{4E_p/E_g}}\int^{\theta_{xmax}}_{-\theta_{xmax}} \frac{1}{\sqrt{\zeta_x\zeta_y}\sigma_{\theta_x}\sigma_{\theta_y}}\frac{\gamma}{1+2\gamma E_p/mc^2}\nonumber \\ &&\times\left\lbrace\frac{1}{4}\left[\frac{4\gamma^2E_p}{E_g(1+\gamma^2\theta_f^2)}+\frac{E_g(1+\gamma^2\theta_f^2)}{4\gamma^2E_p}\right]-2\cos^2(\tau-\phi_f)\frac{\gamma^2\theta_f^2}{(1+\gamma^2\theta_f^2)^2}\right\rbrace \nonumber\\ &&\times\exp\left[-\frac{(\theta_x-x_d/L)^2}{2\sigma_{\theta_x}^2}-\frac{(\theta_y-y_d/L)^2}{2\sigma_{\theta_y}^2}-\frac{(\gamma-\gamma_0)^2}{2\sigma_\gamma^2}-\frac{(k-k_0)^2}{2\sigma_k^2}\right]\mathrm{d}\theta_x\mathrm{d}\theta_y\mathrm{d}k, \label{spatialenergydist3_append}\end{aligned}$$ where $$\theta_{xmax}=\sqrt{4E_p/E_g-\theta_y^2}.$$ [^1]: Currently at Lawrence Berkeley National Laboratory.
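The Jacobian in Eq. (\[delta\]) follows from the standard identity $\delta(f(\bar{\gamma}))=\sum_i\delta(\bar{\gamma}-\gamma_i)/|f^\prime(\gamma_i)|$. As a quick check, with $f(\bar{\gamma})=\bar{E}_g(\bar{\gamma})-E_g$ and $\bar{E}_g(\bar{\gamma})$ as above, $$\frac{\mathrm{d}\bar{E}_g}{\mathrm{d}\bar{\gamma}}=\frac{8\bar{\gamma}E_p\left(1+\bar{\gamma}^2\theta_f^2+4\bar{\gamma}E_p/mc^2\right)-4\bar{\gamma}^2E_p\left(2\bar{\gamma}\theta_f^2+4E_p/mc^2\right)}{\left(1+\bar{\gamma}^2\theta_f^2+4\bar{\gamma}E_p/mc^2\right)^2}=\frac{8\bar{\gamma}E_p\left(1+2\bar{\gamma}E_p/mc^2\right)}{\left(1+\bar{\gamma}^2\theta_f^2+4\bar{\gamma}E_p/mc^2\right)^2},$$ whose reciprocal, evaluated at $\bar{\gamma}=\gamma$, reproduces the magnitude of the Jacobian factor $(1+\gamma^2\theta_f^2+4\gamma E_p/mc^2)^2/[8\gamma E_p(1+2\gamma E_p/mc^2)]$.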
Friday, November 23, 2007 A few weeks ago, Will Dungee (a friend and a pastor at my church) and I went prayer walking in a part of Glenwood that had been hopping with drugs and recent violence (someone had been robbed and then shot and killed at a convenience store in that area). Our hope was to pray over the neighborhood and also to talk with folks there and ask them how we could pray for them and see where the Lord might take it from there. Eventually we happened upon “Rick”, who was hanging out with a couple of other guys and appeared to be concluding some sort of deal involving either bootleg CDs, drugs, or both. I suggested we go talk to him, and Will agreed, and so I practically ran up to "Rick" (I didn’t know him at that point) while Will walked much more coolly behind me. We struck up a conversation, during which we learned that his mom was an evangelist, that he wouldn’t consider himself a Christian because of how he was living, that he had been shot at least once, and that he figured that each of us is supposed to be the best we can be at what we are doing – if you are a Christian, be the best one you can be; if you are a dealer, be the best one that you can be (seriously). We asked him how we might pray for him, and he said just to ask God to let him live another day (which honestly seemed pretty generic), and so we did that and went on our way. I wasn’t sure I would see him again, especially since my prayer walks in Soflo (South Florida Street) were not a regular part of my week. About a week or two later, I was walking my dog Joe around the block and was passing by a different convenience store that is notorious for shady folks hanging around outside. As I passed by a car parked on the street, I looked in, and who did I see but "Rick"! So I stopped and we did the whole “fist pound” thing (if you’re not from the ‘hood like me, you might not be as hip as me on that {sarcasm}).
The first time we met, "Rick" had guessed that I didn’t live in Glenwood, and so at this meeting, I asked him what he was doing in my part of the ‘hood (neither of us took me seriously as I said that). Then I told him that it looked like God did answer prayers. He looked confused for a moment, and I reminded him that we had prayed that God would let him see another day, and well, here he was. He smiled and said, “Yeah, that’s right,” and I could tell that something flickered inside, the part of him that was created to know God and connect to Him. That was the extent of our conversation, and as I walked away, I couldn’t help but think that I had just taken part in a Kingdom moment. Will and I meeting "Rick" had transformed him in my eyes. I normally would have ignored him, not even noticed him on my walk with Joe, or just dismissed him as a punk dealer. But because I had a relationship with "Rick", I saw him that day, and so we talked again. I think it also transformed me in his eyes. He normally would have ignored me, once he saw I wasn’t interested in buying, not even noticing me on my walk, dismissing me as a rich white guy. But because he knew me on some level, he saw me, and so we talked again. Saturday, November 17, 2007 So I have held off as long as I could and have inaugurated my Christmas music listening tonight with Andrew Peterson's album Behold the Lamb of God. So to celebrate, I thought I might list my top five Christmas albums (#1 being most favorite, but #2's a favorite, too, just not as much), and then invite my legions of loyal readers to share your lists or comment on mine. I'm always looking for new music, so show me the way! 5) Harry Connick, Jr., When My Heart Finds Christmas - This album takes me back to some good ol' days when I was at Carolina, and I just continue to enjoy it year after year. 3) Ed Cash, Bebo Norman, Allen Levi, Joy!
- These three guys are really talented musicians and singers, and they really do have joy on this album as they perform many classic Christmas hymns. Plus, their original songs are super, and it's just a cool blend of three unique voices. 1) Andrew Peterson, Behold the Lamb of God - This is an amazing original CD which tells the story of Christmas beginning in Exodus and moving through the Gospels. The songs stand alone, but are meant to be listened to as a whole body of work. All of the songs are originals, and in true Andrew Peterson form, the lyrics are profound. One of the highlights is "Matthew's Begats", which is simply the genealogy of Jesus sung to an upbeat tune. If you have the chance to see this concert in person, I would take it. Tuesday, November 13, 2007 The other day as I was having a quiet time, Eliza asked me to do something with her. Used to being interrupted, I said, "Not right now, I'm spending time with Jesus. When I'm finished talking with Jesus, then we can play." Her response was, "I pray sometimes too, Mommy." I said, "Really? When do you do that?", thinking she was talking about prayer time before bed or at the dinner table. But she answered, "Well, I just close my eyes and bow my head and pray to God alone in my room." I said, "What do you pray about?" "I just sit quiet and pray, Mommy." "I know, but what do you talk to God about when you pray?" "Nothing, Mommy, I just sit quiet and listen to God." Sunday, November 11, 2007 You know, it seems like the only time I ever hear Psalm 23 is at funerals. But is that all it’s good for, to remind us of God in the midst of the valley of the shadow of death? It has become such a somber Psalm to me, repeated by rote. But as I have grown in margin, I am finding great comfort and margin in this short oldie-but-goodie. The Lord as our shepherd gives us hope that it’s not all up to us. There is someone greater than ourselves looking out for our interests, pastoring us.
There is a roominess in knowing that with God as our shepherd, we shall not want. Even in the midst of trouble, of things not looking all right, of financial questions, God says that He is there for us with protection and presence and provision. God not only wants us to eat and to move, but also to rest, and as our shepherd, He loves us enough to make us lie down because He knows our need for rest more than we do. He leads us to green pastures and quiet waters, simple evidence of His goodness and love. And what has stood out to me over anything else in this psalm has been, “He restores my soul.” Our souls are our mind, will, and emotions, and this verse reminds us that God doesn’t just desire for us to have physical provision and rest, but He cares about our inner life as well. He wants to redeem our ways of thinking about Him, restoring our mind. He wants to heal the ways we feel about ourselves and our circumstances, restoring our emotions. He wants to transform our choices to reflect a trusting love relationship with Him, restoring our wills. Backing all of this up is the Lord’s goodness and mercy, following us, urging us on towards our good home with the Lord. In the midst of our messes, in the midst of our forgetting, there is a quiet assurance that goodness and love will follow. And we are reminded of our future home with the Lord forever. Knowing that our future is secure gives us freedom and margin in the present to stop striving so hard. The Lord is my shepherd. I shall not want. He makes me lie down. He leads me. Surely goodness and love will follow me. This is not a somber psalm. It is a psalm of confidence that allows us to take a step back from our hurry and our efforts at self-provision and self-protection, allowing us to make room for margin. Wednesday, November 07, 2007 Make a Budget: One of the best defenses against marginless finances and one of the best ways to ensure that we can give generously to God’s work is simply to have a budget.
In fact, I don't know how people manage their money and have funds to give without having a budget. Having a plan for where your money is going before you even get it and knowing where you are spending your income allows you to establish margin from the get-go. If giving/tithing is a priority for us, then rather than waiting to see if there is anything left at the end to give, we ought to make “Giving” a line item in our budget (perhaps THE line item) and adjust our "want-to" spending around it. Diane and I have determined a percentage that we want to give, and so we adjust our giving percentage-wise to the amount of income that comes in. We use two tools to help us budget. One is a simple Excel sheet on which we list every conceivable area of spending. There are line items for personal spending money for me and Diane, money set aside each month towards Christmas presents, vacation, and car repairs, money for clothes for our kids, and more. We do a "zero-balance budget", which means that we have a place to put every dollar that is coming in, even if that place is "extra funds." The other is an online program called mVelopes (www.mvelopes.com). The way that mVelopes works is that it allows you to put money from your bank account into virtual “envelopes” so that you know where every dollar is going. So when you use your debit card at Food Lion, for example, mVelopes downloads that transaction from your bank account. Then you drag and drop that into your envelope for “grocery store”, and it subtracts that amount from what you budgeted for the month. This system enables you to budget for many different areas of life, it tracks every dollar that you spend, and it helps you know when to say when. For example, when you have used up all of your “eating out money”, your envelope is at $0 and you know that it’s time to pack your lunch for the rest of the month.
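If you like to tinker, the zero-balance and envelope mechanics described above fit in a few lines of code. This is just a toy illustration of the idea, not the actual mVelopes software; the envelope names and dollar amounts are made up:

```python
# Toy zero-balance envelope budget: every dollar of income is assigned to a
# named envelope before the month starts, and each transaction draws its
# envelope down. Envelope names and amounts below are invented examples.

def make_budget(income, envelopes):
    """Assign all income to envelopes; a zero-balance budget must account
    for every dollar, so any unassigned remainder lands in 'extra funds'."""
    remainder = income - sum(envelopes.values())
    if remainder < 0:
        raise ValueError("budgeted more than the income coming in")
    budget = dict(envelopes)
    budget["extra funds"] = budget.get("extra funds", 0) + remainder
    return budget

def spend(budget, envelope, amount):
    """Drop a downloaded transaction into its envelope and return what is
    left; $0 left in 'eating out' means pack a lunch until next month."""
    budget[envelope] -= amount
    return budget[envelope]

budget = make_budget(3000, {"giving": 300, "groceries": 450, "eating out": 80})
spend(budget, "groceries", 62)   # a Food Lion run comes out of 'groceries'
spend(budget, "eating out", 80)  # this envelope is now at $0 for the month
```

The point of the `extra funds` envelope is exactly the "place to put every dollar" rule: the assignments always sum to the income, so nothing drifts unaccounted for.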
For us, mVelopes has been nothing short of amazing, and I would highly recommend the free, 30-day trial you can get online. An emergency fund helps with margin because if you know that you have $1,000 set aside for nothing but Murphy’s Law, it makes things like a busted radiator ($600 for my 1995 Honda Civic, I found out last month) nothing more than a blip on the radar. It’s covered. Financial advisor Dave Ramsey (http://www.daveramsey.com/) suggests that after paying all of your “have-to” bills (including minimum balances on credit cards), putting all extra money towards building a $1,000 emergency fund that you DO NOT TOUCH is a necessity for margin. Having that margin allows for a measure of peace in times that could easily feel like crisis. The Debt Snowball: Once that emergency fund is established, if you have non-house debt, your extra money should go towards the principal of your lowest amount owed. Then when that debt is paid off, take its minimum payment and apply that, with any extra each month, to the principal of your next lowest debt, and so on. This is what Ramsey calls the “debt snowball.” Eliminating debt removes some of the “have-to” payments and allows us to give more and save more. It’s also wise not to incur further debt, so for example, don’t buy a more expensive car than you can afford to pay off immediately (or within a few months). Now, let’s say you have your $1,000, you are putting all your extra money into your debt, and then an emergency happens and you dip into your fund. Focus next on replenishing your emergency fund, then back to debt. Diane and I have been following this 3-step plan for a few years, and we have reduced our non-house debt by over $12,000, and have experienced a great deal of financial peace and have seen our ability and desire to give increase. While we don’t have tons of room for all the “extras” that we desire, we have found our way to having joy in the financial margins.
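For the curious, the snowball ordering described above can be simulated in a short script. This is a simplified sketch (interest is ignored, and the example debts and amounts are invented), not financial advice or Ramsey's own tool:

```python
def debt_snowball(debts, extra):
    """Toy month-by-month debt snowball simulation (interest ignored).
    debts: list of (name, balance, minimum monthly payment).
    extra: surplus cash available each month beyond all the minimums.
    Every open debt gets its minimum; the smallest balance also gets the
    surplus plus the minimums freed up by debts already paid off.
    Returns (months until debt-free, payoff order)."""
    debts = sorted(debts, key=lambda d: d[1])          # smallest balance first
    balances = {name: bal for name, bal, _ in debts}
    minimums = {name: m for name, _, m in debts}
    months, order = 0, []
    while any(b > 0 for b in balances.values()):
        months += 1
        # the "snowball": extra cash plus the minimums of retired debts
        surplus = extra + sum(minimums[n] for n, b in balances.items() if b == 0)
        for name, _, _ in debts:
            if balances[name] == 0:
                continue
            available = minimums[name] + surplus
            applied = min(available, balances[name])
            surplus = available - applied              # overpayment rolls onward
            balances[name] -= applied
            if balances[name] == 0:
                order.append(name)
    return months, order

# Invented example: a $500 card ($25 minimum), a $2,000 car loan ($100
# minimum), and $75 of extra money each month.
months, order = debt_snowball([("card", 500, 25), ("car", 2000, 100)], 75)
```

Once the card is gone, its $25 minimum joins the $75 extra, so the car loan suddenly shrinks by $200 a month instead of $100; that acceleration is the whole point of the snowball.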
Sunday, November 04, 2007 Having margin is not limited to having time to relax and room for relationship. Financial margin is having some space between our income and our outflow, which allows room in our finances for giving, for saving, and for the emergencies that inevitably come. This is another area where margin is key for us to have peace, and it’s an area where so many in our culture and in the Church are way out of whack. It’s hard to have financial margin here in America. Advertisers are after us from the time we are pre-schoolers, telling us about one more toy or cereal that we can’t live without, and they don’t ease off as we get older. In fact, the “toys” and “cereals” get more expensive, and the benefits that they promise seem more and more alluring, because they promise us beauty, status, happiness, sex, and fulfillment. And so we are encouraged to live right up to our financial limits, spending every dollar as soon as we get it, and even to go beyond our limits, charging things on credit. We make choices to have payments and bills for things that we may or may not need, and then spend a lot of time worrying about how to pay those bills, or spend more time working in order to afford what we bought. One of the costs of living this way is that our ability to participate financially in building God’s Kingdom is severely limited. Many times, in our heart of hearts we want to give more than we do, but there is just not room left when we add up what needs to go out plus the things that we want. Another cost is that our time and attention are taken up by a focus on money, leaving us less and less to give to resting with God, to being in relationship with others, and more. Financial stress is one of the leading sources of stress in America and one of the leading causes of divorce.
Such a premium has been placed on money, and it has been elevated to such a place of delivering hope and happiness that when we don’t have as much as we think we should, it can consume us. But the Lord doesn’t want us consumed with money and worrying about that. Jesus said, “Why do you worry about clothes and food? Your Father knows you need those things. Seek God and His kingdom first, and everything else will fall into place.” Financial margin gives us room for that seeking. Next we will look at three simple ways to move towards financial margin.
Patent application by Daniel Joseph Filip, San Jose, CA; assigned to Google Inc.

Abstract: Near real-time imagery of a given location may be provided to a user upon request. Most popularly viewed geographic locations are determined, and a 360 degree image capture device is positioned at one or more of the determined locations. The image capture device may continually provide image information, which is processed, for example, to remove personal information and filter spam. Such image information may then be provided to users upon request. The image capture device continually captures multiple views of the given location, and the requesting user can select which perspective to view.

Claims:

1. A computer-implemented method comprising: receiving, by one or more computing devices, live images of a geographical location from at least one image capture device; processing, by the one or more computing devices, the received live images; receiving, by the one or more computing devices, a request for map data corresponding to the geographical location; providing, by the one or more computing devices, the requested map data; receiving, by the one or more computing devices, a request for live imagery corresponding to the requested map data; determining, by the one or more computing devices and based on the request for live imagery, a point of view associated with the requested live images; and providing, by the one or more computing devices, processed live images corresponding to the determined point of view and the requested map information.

2. The method of claim 1, further comprising: determining, using the one or more computing devices, geographical locations for which imagery is most often requested by users; and positioning the image capture device at the determined geographical location.

3. The method of claim 1, wherein the received images include a continuous 360 degree field of view around the image capture device.

4. The method of claim 1, wherein processing the received live images comprises detecting personal information and blurring the detected personal information.

5. The method of claim 4, wherein the personal information includes human faces.

6. The method of claim 1, wherein processing the received live images comprises filtering spam data.

7. A system comprising: at least one image capture device positioned at a geographical location; one or more processors in communication with the image capture device, the one or more processors programmed to: receive live images of a geographical location from at least one image capture device; process the received live images; receive a request for map data corresponding to the geographical location; provide the requested map data; receive a request for live imagery corresponding to the requested map data; determine, based on the request for live imagery, a point of view associated with the requested live images; and provide processed live images corresponding to the determined point of view and the requested map information.

8. The system of claim 7, wherein the one or more processors are further programmed to determine geographical locations for which imagery is most often requested by users, and wherein the at least one image capture device is positioned at the determined geographical location.

9. The system of claim 7, wherein the received images include a continuous 360 degree field of view around the image capture device.

10. The system of claim 7, wherein processing the received live images comprises detecting personal information and blurring the detected personal information.

11. The system of claim 10, wherein the personal information includes human faces.

12. The system of claim 7, wherein processing the received live images comprises filtering spam data.

13. A non-transitory computer-readable medium storing information and instructions executable by a processor for performing a method of providing live imagery, the method comprising: receiving live images of a geographical location from at least one image capture device; processing the received live images; receiving a request for map data corresponding to the geographical location; providing the requested map data; receiving a request for live imagery corresponding to the requested map data; determining, based on the request for live imagery, a point of view associated with the requested live images; and providing processed live images corresponding to the determined point of view and the requested map information.

14. The non-transitory computer-readable medium of claim 13, the method further comprising determining geographical locations for which imagery is most often requested by users, wherein the image capture device is positioned at the determined geographical location.

15. The non-transitory computer-readable medium of claim 13, wherein the received images include a continuous 360 degree field of view around the image capture device.

16. The non-transitory computer-readable medium of claim 13, wherein processing the received live images comprises detecting personal information and blurring the detected personal information.

17. The non-transitory computer-readable medium of claim 16, wherein the personal information includes human faces.

18. The non-transitory computer-readable medium of claim 13, wherein processing the received live images comprises filtering spam data.

Description:

BACKGROUND

Upon request, map data for a given location and associated imagery may be provided to a user. Such associated imagery is typically captured by a vehicle-mounted camera as the vehicle drives through the given location, and then stored in a database. Because of the passage of time between image capture and providing the image to the user, the imagery may depict information that is irrelevant or out of date.
For example, the imagery may depict construction that is no longer ongoing, or a business that is no longer operational.

SUMMARY

Near real-time imagery of a given location may be provided to a user upon request. Most popularly viewed geographic locations may be determined, and a 360 degree image capture device may be positioned at such locations. The image capture device may continually provide image information, which is processed, for example, to remove personal information and filter spam. Such image information may then be provided to users upon request. Because the image capture device continually captures multiple views of the given location, the requesting user can select which perspective to view.

One aspect of the disclosure provides a computer-implemented method for providing live imagery to users upon request. In this method, one or more computing devices receive live images of a geographical location from at least one image capture device, and process the received live images. Further, the one or more computing devices receive a request for map data corresponding to the geographical location, and provide the requested map data. The one or more computing devices further receive a request for live imagery corresponding to the requested map data, and determine, based on the request for live imagery, a point of view associated with the requested live images. The one or more computing devices provide processed live images corresponding to the determined point of view and the requested map information. According to one example, the one or more computing devices further determine geographical locations for which imagery is most often requested by users, and the image capture device is positioned at the determined geographical location. The received images may include a continuous 360 degree field of view around the image capture device.
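To make the point-of-view selection concrete: for a continuous 360 degree frame, a requested compass heading maps to a horizontal slice of the panorama with simple arithmetic. The helper below is a hypothetical illustration (the function, its parameters, and the equirectangular layout are assumptions, not part of the disclosure):

```python
def view_segment(pano_width, heading_deg, fov_deg=90):
    """Map a requested compass heading to the horizontal pixel range of a
    360 degree equirectangular frame (column 0 corresponds to heading 0).
    Returns a list with one (start, end) column range, or two ranges when
    the requested view wraps around the edge of the frame."""
    px_per_deg = pano_width / 360.0
    left_edge = (heading_deg - fov_deg / 2.0) % 360.0
    start = int(left_edge * px_per_deg)
    width = int(fov_deg * px_per_deg)
    end = start + width
    if end <= pano_width:
        return [(start, end)]
    # view straddles the frame edge: split into two column ranges
    return [(start, pano_width), (0, end - pano_width)]
```

For a 3600-pixel panorama, a heading of 90 degrees with a 90 degree field of view yields columns 450-1350, while a heading of 0 wraps around the frame edge and comes back as two ranges. The modulo handles that wraparound, which is exactly what lets the requesting user pick any perspective from one continuously captured frame.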
Processing the received live images may include detecting personal information, such as human faces and license plate numbers, and blurring the detected personal information. Alternatively or additionally, processing the received images may include filtering spam data.

Another aspect of the disclosure provides a system comprising at least one image capture device positioned at a geographical location, and one or more processors in communication with the image capture device. The one or more processors are programmed to receive live images of a geographical location from at least one image capture device, process the received live images, receive a request for map data corresponding to the geographical location, provide the requested map data, receive a request for live imagery corresponding to the requested map data, determine, based on the request for live imagery, a point of view associated with the requested live images, and provide processed live images corresponding to the determined point of view and the requested map information.

Yet another aspect of the disclosure provides a non-transitory computer-readable medium storing information and instructions executable by a processor. When executed, the instructions perform a method comprising receiving live images of a geographical location from at least one image capture device, processing the received live images, receiving a request for map data corresponding to the geographical location, and providing the requested map data. This method further includes receiving a request for live imagery corresponding to the requested map data, determining, based on the request for live imagery, a point of view associated with the requested live images, and providing processed live images corresponding to the determined point of view and the requested map information.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a functional diagram of a system in accordance with aspects of the disclosure. FIG. 2 is a pictorial diagram of the system of FIG. 1. FIG. 3 is an example screen shot in accordance with aspects of the disclosure. FIG. 4 is another example screen shot in accordance with aspects of the disclosure. FIG. 5 is another example screen shot in accordance with aspects of the disclosure. FIG. 6 is a flow diagram of an example method in accordance with aspects of the disclosure.

DETAILED DESCRIPTION

Upon request by a user, live imagery of a given location may be provided to the user over the Internet in association with map data for the given location. For example, an image capture device may be positioned at the given location and may continually provide imagery to one or more computing devices. The one or more computing devices process the imagery to, for example, remove personal information (e.g., faces and/or license plate numbers) and filter spam. A user may request map data for the given location, and may also request live imagery of the given location. In response to the request, the one or more processors provide the processed live images associated with the requested map data.

The image capture device may be, for example, a 360 degree video camera. In this regard, the image capture device may continually capture a 360 degree field of view around the image capture device. According to one example, in requesting the live imagery, the user may specify a viewpoint for the imagery. For example, the user may submit directional information with the request for imagery, and in response receive a segment of the captured imagery.

Positioning of the image capture device may be determined based on popularity. For example, the one or more computing devices may determine for which geographical locations the most requests for map data or imagery are received. Image capture devices may be positioned at the determined locations. Preferably, the image capture devices are positioned so as to prevent tampering.

The processing performed on the captured images may be automated.
For example, the one or more processors may automatically detect personal information, such as faces, license plates, or other information. In response to detecting such information, the one or more processors blur or otherwise obscure the information such that it is not provided to a user in response to a request. Moreover, the one or more processors may detect and filter spam. For example, it may be determined that images from an unauthorized image capture device or other unauthorized content are being received in addition to or in place of approved images. Accordingly, the unauthorized content and images may be filtered.

FIGS. 1 and 2 include an example system 100 in which the features described above may be implemented. It should not be considered as limiting the scope of the disclosure or usefulness of the features described herein. In this example, system 100 can include one or more computing devices 110, which may be connected to further computing devices 160 and 170 over a network 150.

Computing devices 110 can contain one or more processors 120, memory 130 and other components typically present in general purpose computing devices. The memory 130 can store information accessible by the one or more processors 120, including instructions 132 that can be executed by the one or more processors 120. Memory 130 can also include data 134 that can be retrieved, manipulated or stored by the processor. The memory can be of any non-transitory type capable of storing information accessible by the processor, such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories.

The instructions 132 can be any set of instructions to be executed directly, such as machine code, or indirectly, such as scripts, by the one or more processors. In that regard, the terms "instructions," "application," "steps" and "programs" can be used interchangeably herein.
The instructions can be stored in object code format for direct processing by a processor, or in any other computing device language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. The instructions 132 can be executed to perform operations such as detecting personal information in received images, modifying the received images to blur or obscure such information, or the like. The instructions 132 may also be executed to perform spam detection and filtering. Functions, methods and routines of the instructions are explained in more detail below.

Data 134 can be retrieved, stored or modified by the one or more processors 120 in accordance with the instructions 132. For instance, although the subject matter described herein is not limited by any particular data structure, the data can be stored in computer registers, in a relational database as a table having many different fields and records, or in XML documents. The data can also be formatted in any computing device-readable format such as, but not limited to, binary values, ASCII or Unicode. Moreover, the data can comprise any information sufficient to identify the relevant information, such as numbers, descriptive text, proprietary codes, pointers, references to data stored in other memories such as at other network locations, or information that is used by a function to calculate the relevant data. According to one example, the data may include map information for geographical locations. Moreover, the data 134 may include information related to image capture device 190, such as an identifier and location information.

The one or more processors 120 can be any conventional processors, such as a commercially available CPU. Alternatively, the processors can be dedicated components such as an application specific integrated circuit ("ASIC") or other hardware-based processor.
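The blur-or-obscure operation mentioned above can be illustrated with a toy redaction routine. This is a pure-Python sketch under stated assumptions: the image is a small grid of grayscale values, and the detection boxes are assumed to come from some external face/plate detector, which is not implemented here:

```python
def redact(image, boxes):
    """image: 2D list of grayscale pixel values (rows of ints).
    boxes: (row, col, height, width) regions flagged by some detector
    (the detector itself is assumed, not implemented here).
    Each flagged region is flattened to its own mean value, destroying
    the detail that made it personally identifiable before the image
    is served to a requesting user."""
    for (r, c, h, w) in boxes:
        values = [image[i][j] for i in range(r, r + h) for j in range(c, c + w)]
        mean = sum(values) // len(values)
        for i in range(r, r + h):
            for j in range(c, c + w):
                image[i][j] = mean
    return image
```

A production pipeline would use a proper blur kernel on real image buffers rather than flattening to a mean, but the structure is the same: detect regions, then overwrite only those regions before the processed frame leaves the server.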
One or more of computing devices 110 may include specialized hardware components to perform specific computing processes, such as decoding video, matching video frames with images, distorting videos, encoding distorted videos, etc., faster or more efficiently.

Although FIG. 1 functionally illustrates the processor, memory, and other elements of computing device 110 as being within the same block, the processor, computer, computing device, or memory can actually comprise multiple processors, computers, computing devices, or memories that may or may not be stored within the same physical housing. For instance, the memory can be a hard drive or other storage media located in housings different from that of the computing devices 110. As another example, various methods described below as involving a single component (e.g., processor 120) may involve a plurality of components (e.g., multiple computing devices distributed over a network of computing devices, computers, "racks," etc. as part of a parallel or distributed implementation). Further, the various functions performed by the embodiments may be executed by different computing devices at different times as load is shifted among computing devices. Similarly, various methods described below as involving different components (e.g., device 110 and device 160) may involve a single component (e.g., rather than device 160 performing a determination described below, device 160 may send the relevant data to device 110 for processing and receive the results of the determination for further processing or display). Accordingly, references to a processor, computer, computing device, or memory will be understood to include references to a collection of processors, computers, computing devices, or memories that may or may not operate in parallel. For example, the computing devices 110 may include server computing devices operating as a load-balanced server farm, distributed system, etc.
Yet further, although some functions described below are indicated as taking place on a single computing device having a single processor, various aspects of the subject matter described herein can be implemented by a plurality of computing devices, for example, communicating information over network 150.

Each of the computing devices 110 can be at different nodes of the network 150 and capable of directly and indirectly communicating with other nodes of network 150. Although only a few computing devices are depicted in FIGS. 1-2, it should be appreciated that a typical system can include a large number of connected computing devices, with each different computing device being at a different node of the network 150. The network 150 and intervening nodes described herein can be interconnected using various protocols and systems, such that the network can be part of the Internet, World Wide Web, specific intranets, wide area networks, or local networks. The network can utilize standard communications protocols, such as Ethernet, WiFi and HTTP, protocols that are proprietary to one or more companies, and various combinations of the foregoing. Although certain advantages are obtained when information is transmitted or received as noted above, other aspects of the subject matter described herein are not limited to any particular manner of transmission of information.

As an example, each of the computing devices 110 may include web servers capable of communicating with a storage system 140, image capture device 190, and computing devices 160, 170 via the network 150. For example, one or more of server computing devices 110 may receive live imagery from the image capture device 190 through the network 150, and may further transmit processed imagery to the client devices 160, 170 using the network 150.
As another example, one or more of server computing devices 110 may use network 150 to transmit and present information to a user, such as user 191, 192, on a display, such as displays 165 of computing devices 160, 170. In this regard, computing devices 160, 170 may be considered client computing devices and may perform all or some of the features described herein.

Each of the client computing devices 160, 170 may be configured similarly to the server computing devices 110, with one or more processors 162 and memory, including data 163 and instructions 164 as described above. Each client computing device 160, 170 may be a personal computing device intended for use by a user 191, 192 and have all of the components normally used in connection with a personal computing device such as a central processing unit (CPU), memory (e.g., RAM and internal hard drives) storing data and instructions, a display 165 (e.g., a monitor having a screen, a touch-screen, a projector, a television, or other device that is operable to display information), and user input device 166 (e.g., a mouse, keyboard, touch-screen or microphone). The client computing device may also include a camera 167 for recording video streams, speakers, a network interface device, and all of the components used for connecting these elements to one another.

Although the client computing devices 160, 170 may each comprise a full-sized personal computing device, they may alternatively comprise mobile computing devices capable of wirelessly exchanging data with a server over a network such as the Internet. By way of example only, client computing device 160 may be a mobile phone or a device such as a wireless-enabled PDA, a tablet PC, or a netbook that is capable of obtaining information via the Internet. In another example, client computing device 170 may be a head-mounted computing system.
As an example, the user may input information using a small keyboard, a keypad, a microphone, visual signals with a camera, or a touch screen.

As with memory 114, storage system 140 can be of any type of computerized storage capable of storing information accessible by the server computing devices 110, such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories. In addition, storage system 140 may include a distributed storage system where data is stored on a plurality of different storage devices which may be physically located at the same or different geographic locations. Storage system 140 may be connected to the computing devices via the network 150 as shown in FIG. 1 and/or may be directly connected to any of the computing devices 110, 160, 170.

Storage system 140 may store images and associated information such as image identifiers, orientation, location of the camera that captured the image, intrinsic camera settings (such as focal length, zoom, etc.), depth information, as well as references to other, target images. Storage system 140 may also include information used for processing live imagery received by the one or more servers 110 from the image capture device 190. For example, the storage system 140 may include data associated with previously identified spam, such that the data can be used to identify and filter spam from the live imagery.

The image capture device 190 may be a camera, such as a video camera, or any other device capable of capturing images of a particular geographical location. According to one example, the image capture device 190 is a 360 degree video camera, which continually captures a 360 degree field of view around itself. According to another example, multiple image capture devices may be deployed at one geographical location. The image capture device may be positioned at the geographical location in such a way as to prevent or mitigate potential tampering with it.
According to some examples, the image capture device 190 is positioned at geographical locations selected based on popularity. For example, the one or more computing devices 110 may determine geographical locations for which map data and/or imagery are most often requested by users, and image capture devices 190 may be placed at those determined locations.

Using the system described above, live imagery of geographical locations is provided to users upon request. The live imagery may be received at the one or more server computing devices from the image capture device and processed, for example, to remove personal information, such as faces and license plate numbers, and to filter spam. The live imagery can include 360 degree panoramas. Users can request map data and imagery for the geographical location, and receive the processed live imagery in response. The users may also specify a particular point of view, and a corresponding portion of the 360 degree panorama is provided.

FIG. 3 illustrates an example screenshot 300 providing map information for a given geographical location corresponding to an address entered in search field 310. The map information includes, for example, a roadgraph 320. A place marker 322 may indicate a position on the roadgraph 320 corresponding to the entered location. View option buttons 325, 335, 345, 355 are also provided, wherein each button provides an option for a different representation of the geographical location. For example, the map button 325 may correspond to a roadgraph, such as the roadgraph 320. The street button 335 may correspond to still imagery of the geographical location taken from a perspective of someone standing at street level. The satellite button 345 may correspond to satellite imagery, showing a view of the geographical location from space.
The live button 355 may correspond to live imagery captured by an image capture device dedicated to obtaining imagery of the specified geographical location, such as the image capture device 190 (FIGS. 1-2).

FIG. 4 provides an example screenshot 400 illustrating the live imagery associated with the specified geographical location and provided to the user. For example, the geographical location corresponding to address 415 is depicted by roadgraph 420, on which a position viewpoint indicator 462 and a directional viewpoint indicator 464 are placed. Live imagery appearing in viewing field 450 corresponds to the address 415 and the roadgraph 420. The live images may be viewed by the user, for example, by selecting live view button 455 among option buttons 425, 435, 455. The images provided in viewing field 450 may include a portion of the images actually captured and provided to the server computing devices. For example, while the image capture device positioned at the geographical location may obtain images with a continuous 360 degree field of view, only a segment of such field of view may be shown in viewing field 450. That segment corresponds to a position and direction of indicators 462, 464. According to other examples, the full images captured, such as the entire 360 degree field of view panorama, may be provided to the user in one or more viewing fields.

The position indicator 462 and directional indicator 464 may be manipulated by the user, for example, to receive images of a different viewpoint. FIG. 5 illustrates another example screenshot providing live imagery corresponding to a different view of the same geographical location as in FIG. 4. In particular, while position indicator 562 remains in the same position as position indicator 462 (FIG. 4), directional indicator 564 has been manipulated to point in a different direction, such as towards North.
Accordingly, the live imagery provided in viewing field 550 shows a different area of the geographical location. According to the example where a 360 degree image capture device positioned at the location is providing the images, the images shown in the viewing field 550 may be another portion of the 360 degree panorama. In this regard, the user may repeatedly request different portions of the 360 degree frame. According to some examples, because the imagery provided in the viewing field 550 is live, objects in the imagery may appear to be moving. Moreover, the imagery may be continually updated as new images are received.

The imagery provided in FIGS. 4-5 is processed by one or more computing devices prior to being provided to users. For example, personal information and spam may be removed. As an example of removing personal information, an automatic face detection and blurring operation may be performed on the imagery. Similarly, license plate numbers and other personal information may be detected and blurred or otherwise obscured. As an example of spam filtering, spam such as people putting their faces close up to the camera or people holding up signs with slogans may be present in the received images. Such spam may be detected, for example, using face detection or text detection algorithms. Detected spam may be blurred in the images or obscured by a black box or other object. Thus, while the imagery is described as being "live," it should be understood that the imagery may actually be subject to a small delay, such as a few seconds to a few minutes. According to another example, images including detected spam may not be sent to users. For example, the last available clean live imagery from the given geographical location, which does not include spam, can be provided.
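The detect-and-obscure processing described above can be illustrated with a short sketch. The following Python/NumPy fragment is only a minimal illustration, not the implementation contemplated by the disclosure: the detector itself (face, license plate, or sign-text detection) is out of scope here, so detected bounding boxes are assumed to be given, and all function and parameter names are invented for illustration.

```python
import numpy as np

def obscure_regions(image, boxes, mode="blur", ksize=9):
    """Obscure detected regions (faces, plates, spam) in a grayscale image.

    image: 2-D numpy array; boxes: list of (row, col, height, width) tuples
    produced by some detector. mode "blur" applies a separable box blur to
    each region; mode "block" fills the region with black, as in the
    black-box example above. Names are illustrative only.
    """
    out = image.astype(float).copy()
    for r, c, h, w in boxes:
        region = out[r:r + h, c:c + w]  # view into `out`
        if mode == "block":
            region[:] = 0.0
        else:
            pad = ksize // 2
            padded = np.pad(region, pad, mode="edge")
            kernel = np.ones(ksize) / ksize
            # Box blur = 1-D averaging applied along rows, then columns.
            for axis in (0, 1):
                padded = np.apply_along_axis(
                    lambda v: np.convolve(v, kernel, mode="same"), axis, padded)
            region[:] = padded[pad:-pad, pad:-pad]
    return out
```

A real pipeline would run per-frame on the live stream and would use a hardware-accelerated blur, but the region-wise modify-in-place structure is the same.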
In some instances, such last available clean images can be provided with a timestamp or note indicating the delay.

According to one example, crowd-sourcing techniques may be used as part of the spam detection and filtering process. For example, users may submit reports identifying spam included in the live imagery for a given location. In response to receiving a predetermined number of reports for the given location, the last available clean images may be provided to users in place of the more recent images that include spam.

FIG. 6 provides a flow diagram illustrating an example method 600. The following operations do not have to be performed in the precise order described below. Rather, various steps can be handled in a different order or simultaneously. Steps can also be omitted unless otherwise stated.

In block 610, one or more computing devices receive live images of a geographical location from at least one image capture device. The at least one image capture device may be, for example, a 360 degree camera that continually captures images in directions all around a vertical axis. Such image capture devices may be positioned at selected geographical locations throughout the world. According to one example, the geographical locations may be selected by determining the locations for which imagery and/or map data is most often requested by users.

In block 620, the received live images may be processed, for example, to remove personal information and filter spam. For example, the one or more server computing devices may automatically detect objects such as faces, license plates, etc. Once detected, the received imagery may be modified to obscure those objects. For example, the detected objects may be blurred, covered, or the like. Spam may affect the received images in various ways. For example, it may be determined that images from an unauthorized image capture device or other unauthorized content are being received in addition to or in place of approved images.
Such spam may be automatically filtered using any of a number of techniques.

In block 630, a request for map data corresponding to the geographical location is received by the one or more computing devices. For example, a user may enter an address, point of interest, or other relevant information in a search field of an interface. In response, the requested map data is provided to the user (block 640). For example, an address and/or a roadgraph or other depiction of the geographical location may be provided.

In block 650, a request for live imagery corresponding to the requested map data is received. For example, the user may select an option to view live imagery from among several other types of views. Further, the user may identify in the request a specific area of the geographical location to view. For example, the user may identify position and/or directional information associated with the requested imagery. Such information may be indicated by the user by manipulating icons, entering text, providing speech commands, navigating through a depiction of the location, or the like.

In block 660, a point of view associated with the requested live image is determined based on the request for live imagery. For example, the one or more computing devices may determine from information received from the user which specific area of the geographical location the user would like to see live.

In block 670, processed live images corresponding to the determined point of view and the requested map information are provided to the user. For example, a portion of the captured 360 degree panorama may be provided to the user, wherein the provided portion corresponds to the position and direction specified in the user's request. According to another example, the full 360 degree panorama may be provided in one or more viewing fields.
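The portion selection of block 670, which maps a requested direction and field of view to a segment of the captured 360 degree frame, can be sketched as below. This is an illustrative sketch only: it assumes an equirectangular frame whose columns span headings 0 to 360 degrees, and the function and parameter names are invented rather than taken from the disclosure.

```python
import numpy as np

def panorama_segment(frame, heading_deg, fov_deg=90.0):
    """Return the slice of a 360 degree frame centered on a heading.

    frame: H x W array covering headings 0..360 from left to right
    (assumed layout). heading_deg: direction the viewpoint indicator
    points (0 = North, say). fov_deg: horizontal field of view of the
    viewing field.
    """
    h, w = frame.shape[:2]
    center = int(round((heading_deg % 360.0) / 360.0 * w))
    half = int(round(fov_deg / 360.0 * w / 2))
    # Column indices, wrapping around the seam of the panorama.
    cols = np.arange(center - half, center + half) % w
    return frame.take(cols, axis=1)
```

When the user drags the directional indicator, the server (or client) simply re-slices the latest processed frame with a new heading, which is why repeated requests for different portions of the same 360 degree frame are cheap.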
Because the imagery is continually captured, the imagery provided to the user may be continually updated.

The above-described features may be advantageous in that they provide users with the most up-to-date information regarding a specified location. For example, users can become informed about weather, traffic, construction, events, or other details associated with a geographic location. Such information may be more reliable than other sources of the same information, because the users can view it firsthand, regardless of their current location. Using such information, users can make decisions about visiting the geographic location, or simply become better educated about it.

As these and other variations and combinations of the features discussed above can be utilized without departing from the subject matter defined by the claims, the foregoing description of the embodiments should be taken by way of illustration rather than by way of limitation of the subject matter defined by the claims. As an example, the preceding operations do not have to be performed in the precise order described above. Rather, various steps can be handled in a different order or simultaneously. Steps can also be omitted unless otherwise stated. In addition, the provision of the examples described herein, as well as clauses phrased as "such as," "including" and the like, should not be interpreted as limiting the subject matter of the claims to the specific examples; rather, the examples are intended to illustrate only one of many possible embodiments. Further, the same reference numbers in different drawings can identify the same or similar elements.
---
abstract: 'Hybrid analog and digital beamforming transceivers are instrumental in addressing the challenge of expensive hardware and high training overheads in the next generation millimeter-wave (mm-Wave) massive MIMO (multiple-input multiple-output) systems. However, lack of fully digital beamforming in hybrid architectures and short coherence times at mm-Wave impose additional constraints on the channel estimation. Prior works on addressing these challenges have focused largely on narrowband channels wherein optimization-based or greedy algorithms were employed to derive hybrid beamformers. In this paper, we introduce a deep learning (DL) approach for joint channel estimation and hybrid beamforming for frequency-selective, wideband mm-Wave systems. In particular, we consider a massive MIMO Orthogonal Frequency Division Multiplexing (MIMO-OFDM) system and propose three different DL frameworks comprising convolutional neural networks (CNNs), which accept the received pilot signal as input and yield the hybrid beamformers at the output. Numerical experiments demonstrate that, compared to the current state-of-the-art optimization and DL methods, our approach provides higher spectral efficiency, lower computational cost, and greater tolerance to deviations in the received pilot data, corrupted channel matrix, and propagation environment.'
author:
- 'Ahmet M. Elbir and Kumar Vijay Mishra [^1] [^2]'
bibliography:
- 'IEEEabrv.bib'
- 'references\_047\_journal.bib'
title: 'Deep Learning Strategies For Joint Channel Estimation and Hybrid Beamforming in Multi-Carrier mm-Wave Massive MIMO Systems'
---

Channel estimation, deep learning, hybrid beamforming, mm-Wave, wideband massive MIMO.

Introduction {#sec:Introduciton}
============

Conventional cellular communications systems suffer from a spectrum shortage while the demand for wider bandwidth and higher data rates is continuously increasing [@mimoOverview].
In this context, the millimeter wave (mm-Wave) band is a preferred candidate for fifth-generation (5G) communications technology because it provides higher data rates and wider bandwidth [@mimoOverview; @mishra2019toward; @5GwhatWillItBe; @hodge2019reconfigurable; @ayyar2019robust]. Compared to sub-6 GHz transmissions envisaged in 5G, the mm-Wave signals encounter a more complex propagation environment that is characterized by higher scattering, severe penetration losses, lower diffraction, and higher path loss for fixed transmitter and receiver gains [@mimoHybridLeus1; @mimoHybridLeus2]. The mm-Wave systems leverage massive antenna arrays - usually in a multiple-input multiple-output (MIMO) configuration - to achieve array and multiplexing gain, and thereby compensate for the propagation losses at high frequencies [@mimoRHeath]. However, such a large array requires a dedicated radio-frequency (RF) chain for each antenna, resulting in an expensive system architecture and high power consumption. In order to address this, hybrid analog and baseband beamforming architectures have been introduced, wherein a small number of phase-only analog beamformers are employed to steer the beams. The down-converted signal is then processed by baseband beamformers, each of which is dedicated to a single RF chain [@mimoHybridLeus1; @mimoHybridLeus2; @mimoRHeath; @mimoScalingUp]. This combination of high-dimensional phase-only analog and low-dimensional baseband digital beamformers significantly reduces the number of RF chains while also maintaining sufficient beamforming gain [@mmwaveKeyElements; @mimoRHeath]. However, lack of fully digital beamforming in hybrid architectures poses challenges in mm-Wave channel estimation [@channelEstLargeArrays; @channelEstLargeArrays2; @channelEstimation1; @channelEstimation1CS; @channelModelSparseBajwa; @channelModelSparseSayeed].
The instantaneous channel state information (CSI) is essential for massive MIMO communications because precoding at downlink or decoding at uplink transmission requires highly accurate CSI to achieve spatial diversity and multiplexing gain [@mimoHybridLeus1; @mimoHybridLeus2]. In practice, pilot signals are periodically transmitted and the received signals are processed to estimate the CSI [@channelEstLargeArrays2]. Further, the mm-Wave environments such as indoor and vehicular communications are highly variable with short coherence times [@coherenceTimeRef], necessitating the use of channel estimation algorithms that are robust to deviations in the channel data. Once the CSI is obtained, the hybrid analog and baseband beamformers are designed using either the instantaneous channel matrix or the channel covariance matrix (CCM). Beamforming based on the latter provides lower spectral efficiency [@widebandHBWithoutInsFeedback] because the CCM does not reflect the instantaneous profile of the channel. Hence, it is more common to utilize the channel matrix for hybrid beamforming [@mimoHybridLeus3; @hybridBFAltMin; @hybridBFLowRes; @sohrabiOFDM]. In recent years, several techniques have been proposed to design the hybrid precoders in mm-Wave MIMO systems. Initial works have focused on narrow-band channels [@mimoHybridLeus1; @mimoHybridLeus2; @mimoHybridLeus3; @mimoRHeath; @hybridBFLowRes]. However, to effectively utilize the mm-Wave MIMO architectures with relatively larger bandwidth, there are recent and concerted efforts toward developing broadband hybrid beamforming techniques. The key challenge in hybrid beamforming for a broadband frequency-selective channel is designing a common analog beamformer that is shared across all subcarriers while the digital (baseband) beamformer weights need to be specific to a subcarrier.
This difference in hybrid beamforming design of frequency-selective channels from the flat-fading case is the primary motivation for considering hybrid beamforming for orthogonal frequency division multiplexing (OFDM) modulation. The optimal beamforming vector in a frequency-selective channel depends on the frequency, i.e., a subcarrier in OFDM, but the analog beamformer in any of the narrow-band hybrid structures cannot vary with frequency. Thus, a common analog beamformer must be designed in consideration of its impact on all subcarriers, thereby making the hybrid precoding more difficult than in the narrow-band case. Among prior works, [@widebandChannelEst1; @widebandChannelEst2] consider channel estimation for wideband mm-Wave massive MIMO systems. The hybrid beamforming design was investigated in [@alkhateeb2016frequencySelective; @sohrabiOFDM; @widebandHBWithoutInsFeedback; @widebandMLbased], where OFDM-based frequency-selective structures are designed. In particular, [@alkhateeb2016frequencySelective] proposes a Gram-Schmidt orthogonalization based approach for hybrid beamforming (GS-HB) under the assumption of perfect CSI; GS-HB selects the precoders from a finite codebook obtained from the instantaneous channel data. Using the same assumption on CSI, [@sohrabiOFDM] proposes a phase extraction approach for hybrid precoder design. In [@zhu2016novel], a unified analog beamformer is designed based on the second-order spatial channel covariance matrix of a wideband channel. In [@zhang2016low], the Eckart-Young-Mirsky matrix approximation is employed to find the wideband beamforming matrices that have the minimum Euclidean distance from the optimal solutions. In [@lee2014matrix], the wideband beamformer design is cast as a search for a common basis matrix for the subspaces spanned by all subcarriers’ channel matrices and the higher order singular value decomposition (HOSVD) method is applied.
In [@chen2018hybrid], antenna selection is also introduced to wideband hybrid beamforming. It exploits the asymptotic orthogonality of array steering vectors and proposes two angular-information-based beamforming schemes to relax the assumption of full CSI at the transmitter such that knowledge of only the angles of departure is required. Nearly all of the aforementioned methods strongly rely on perfect CSI knowledge. This is very impractical given the highly dynamic nature of the mm-Wave channel [@coherenceTimeRef]. To relax this dependence and obtain robust performance against the imperfections in the estimated channel matrix, we examine a deep learning (DL) approach. DL is capable of uncovering complex relationships in data/signals and, thus, can achieve better performance. This has been demonstrated in several successful applications of DL in wireless communications problems such as channel estimation [@mimoDLChannelEstimation; @deepCNN_ChannelEstimation], analog beam selection [@mimoDLHybrid; @hodge2019multi], and also hybrid beamforming [@mimoDLHybrid; @mimoDLChannelModelBeamformingFacebook; @mimoDeepPrecoderDesign; @elbirDL_COMML; @elbirQuantizedCNN2019; @elbirHybrid_multiuser]. In particular, DL-based techniques have been shown [@deepCNN_ChannelEstimation; @deepLearningCommOverAir; @elbirIETRSN2019; @elbirQuantizedCNN2019; @elbirDL_COMML] to be computationally efficient in searching for optimum beamformers and tolerant to imperfect channel inputs when compared with the conventional methods. However, these works investigated only narrow-band channels [@mimoDeepPrecoderDesign; @mimoDLChannelModelBeamformingFacebook; @elbirDL_COMML; @elbirQuantizedCNN2019]. The DL-based design of hybrid precoders for broadband mm-Wave massive MIMO systems, despite its high practical importance, remains unexamined so far. In this paper, we propose a DL-based joint channel estimation and hybrid beamformer design for wideband mm-Wave systems.
The proposed framework constructs a non-linear mapping between the received pilot signals and the hybrid beamformers. In particular, we employ convolutional neural networks (CNNs) in three different DL structures. In the first framework (F1), a single CNN maps the received pilot signals directly to the hybrid beamformers. In the second (F2) and third (F3) frameworks, we employ multiple CNNs to also estimate the channel separately. In F2, entire subcarrier data are fed to a single CNN for channel estimation. This is a less complex architecture, but it does not allow the flexibility of controlling each channel individually. Therefore, we tune the performance of F2 in F3, which has a dedicated CNN for each subcarrier. The proposed DL framework operates in two stages: offline training and online prediction. During training, several received pilot signals and channel realizations are generated, and the hybrid beamforming problem is solved via the manifold optimization (MO) approach [@hybridBFAltMin; @manopt] to obtain the network labels. In the prediction stage, when the CNNs operate in real time, the channel matrix and the hybrid beamformers are estimated by simply feeding the CNNs with the received pilot data. The proposed approach is advantageous because it does not require the perfect channel data in the prediction stage, yet it provides robust performance. Moreover, our CNN structure takes less computational time to produce hybrid beamformers when compared to the conventional approaches. The rest of the paper is organized as follows. In the following section, we introduce the system model for the wideband mm-Wave channel. We formulate the joint channel estimation and beamforming problem in Section \[sec:probform\]. We then present our approaches toward both of these problems in Sections \[sec:ice\] and \[sec:bb\_hb\], respectively. We introduce our various DL frameworks in Section \[sec:HD\_Design\] and follow it with numerical simulations in Section \[sec:Sim\].
We conclude in Section \[sec:Conc\]. Throughout this paper, we denote the vectors and matrices by boldface lower and upper case symbols, respectively. In case of a vector $\mathbf{a}$, $[\mathbf{a}]_{i}$ represents its $i$th element. For a matrix $\mathbf{A}$, $[\mathbf{A}]_{:,i}$ and $[\mathbf{A}]_{i,j}$ denote the $i$th column and the $(i,j)$th entry, respectively. $\mathbf{I}_N$ is the identity matrix of size $N\times N$; $\mathbb{E}\{\cdot\}$ denotes the statistical expectation; $\textrm{rank}(\cdot)$ denotes the rank of its matrix argument; $\|\cdot\|_\mathcal{F}$ is the Frobenius norm; $(\cdot)^{\dagger}$ denotes the Moore-Penrose pseudo-inverse; and $\angle\{\cdot\}$ denotes the angle of a complex scalar/vector. The notation expressing a convolutional layer with $N$ filters/channels of size $D\times D$ is given by $N$@$ D\times D$.

System Model {#sec:SystemModel}
============

We consider hybrid precoder design for a frequency-selective wideband mm-Wave massive MIMO-OFDM system with $M$ subcarriers (Fig. \[fig\_SystemArchitecture\]). The base station (BS) has $N_\mathrm{T}$ antennas and $N_\mathrm{RF}$ $(N_\mathrm{RF} \leq N_\mathrm{T})$ RF chains to transmit $N_\mathrm{S}$ data streams. In the downlink, the BS first precodes $N_\mathrm{S}$ data symbols $\mathbf{s}[m] = [s_1[m],s_2[m],\dots,s_{N_\mathrm{S}}[m]]^\textsf{T}\in \mathbb{C}^{N_\mathrm{S}}$ at each subcarrier by applying the subcarrier-dependent baseband precoders $\mathbf{F}_{\mathrm{BB}}[m] = [\mathbf{f}_{\mathrm{BB}_1}[m],\mathbf{f}_{\mathrm{BB}_2}[m],\dots,\mathbf{f}_{\mathrm{BB}_{N_\mathrm{S}}} [m]]\in \mathbb{C}^{N_{\mathrm{RF}}\times N_\mathrm{S}}$. Then, the signal is transformed to the time-domain via $M$-point inverse fast Fourier transforms (IFFTs). After adding the cyclic prefix, the transmitter employs a subcarrier-independent RF precoder $\mathbf{F}_{\mathrm{RF}}\in \mathbb{C}^{N_\mathrm{T}\times N_{\mathrm{RF}}}$ to form the transmitted signal.
Given that $\mathbf{F}_{\mathrm{RF}}$ consists of analog phase shifters, we assume that the RF precoder has constant equal-norm elements, i.e., $|[\mathbf{F}_{\mathrm{RF}}]_{i,j}|^2 =1$. Additionally, we have the power constraint $\sum_{m=1}^{M}\|\mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}}[m] \|_\mathcal{F}^2= MN_\mathrm{S}$ that is enforced by the normalization of the baseband precoder $\{\mathbf{F}_{\mathrm{BB}}[m] \}_{m\in \mathcal{M}}$, where $\mathcal{M} = \{1,\dots,M\}$. Thus, the $N_\mathrm{T}\times 1$ transmit signal is $$\begin{aligned} \mathbf{x}[m] = \mathbf{F}_{\mathrm{RF}} \mathbf{F}_{\mathrm{BB}}[m] \mathbf{s}[m]. \end{aligned}$$ In mm-Wave transmission, the channel is represented by a geometric model with limited scattering [@mimoChannelModel1]. The channel matrix $\mathbf{H}[m]$ includes the contributions of $L$ clusters, each of which has the time delay $\tau_l$ and $N_\mathrm{sc} $ scattering paths/rays within the cluster. Hence, each ray in the $l$th cluster has a relative time delay $\tau_{{r}}$, angle-of-arrival (AOA) $\theta_l \in [-\pi,\pi]$, angle-of-departure (AOD) $\phi_l \in [-\pi,\pi]$, relative AOA (AOD) shift $\vartheta_{rl}$ ($\varphi_{rl}$) between the center of the cluster and each ray [@alkhateeb2016frequencySelective], and complex path gain $\alpha_{l,r}$ for $r = \{1,\dots, N_\mathrm{sc}\}$.
Let $p(\tau)$ denote a pulse shaping function for $T_\mathrm{s}$-spaced signaling evaluated at $\tau$ seconds [@channelModelSparseSayeed]; then the mm-Wave delay-$d$ MIMO channel matrix is $$\begin{aligned} \label{eq:delaydChannelModel} \mathbf{H}[d] = & \sqrt{\frac{ N_\mathrm{T} N_{\mathrm{R}} } {N_\mathrm{sc}L}}\sum_{l=1}^{L} \sum_{r=1}^{N_\mathrm{sc}}\alpha_{l,r} p(dT_\mathrm{s} - \tau_l - \tau_{{r}}) \nonumber \\ & \times \mathbf{a}_\mathrm{R}(\theta_{l} - \vartheta_{rl}) \mathbf{a}_\mathrm{T}^\textsf{H}(\phi_l - \varphi_{rl}), \end{aligned}$$ where $\mathbf{a}_\mathrm{R}(\theta)$ and $\mathbf{a}_\mathrm{T}(\phi)$ are the $N_\mathrm{R} \times 1$ and $N_\mathrm{T}\times 1$ steering vectors representing the array responses of the receive and transmit antenna arrays, respectively. Let $\lambda_m = \frac{c_0}{f_m}$ be the wavelength for the subcarrier $m$ with frequency $f_m$. Since the operating frequency is relatively higher than the bandwidth in mm-Wave systems and the subcarrier frequencies are close to each other (i.e., $f_{m_1} \approx f_{m_2}$, $m_1,m_2 \in\mathcal{M}$), we use a single operating wavelength $\lambda = \lambda_{1} = \dots = \lambda_{M} = \frac{c_0}{f_c}$, where $c_0$ is the speed of light and $f_c$ is the central carrier frequency [@sohrabiOFDM]. This approximation also allows for a single frequency-independent analog beamformer for each subcarrier. Then, for a uniform linear array (ULA), the array response of the transmit array is $$\begin{aligned} \mathbf{a}_\mathrm{T}(\phi) = \big[ 1, e^{j\frac{2\pi}{\lambda} \overline{d}_\mathrm{T}\sin(\phi)},\dots,e^{j\frac{2\pi}{\lambda} (N_\mathrm{T}-1)\overline{d}_\mathrm{T}\sin(\phi)} \big]^\textsf{T}, \end{aligned}$$ where $\overline{d}_\mathrm{T}=\overline{d}_\mathrm{R} = \lambda/2$ is the antenna spacing and $\mathbf{a}_\mathrm{R}(\theta)$ can be defined in a similar way as for $\mathbf{a}_\mathrm{T}(\phi)$.
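For concreteness, the ULA array response above can be evaluated numerically. The following NumPy sketch is our own illustration (function and argument names are not from the paper); it assumes the half-wavelength spacing $\overline{d}_\mathrm{T} = \lambda/2$ stated in the text, so the phase of the $n$th element is $2\pi (\overline{d}_\mathrm{T}/\lambda)\, n \sin\phi = \pi n \sin\phi$.

```python
import numpy as np

def ula_steering(phi, n_antennas, spacing_over_lambda=0.5):
    """ULA steering vector a(phi) = [1, e^{j 2*pi*(d/lambda) sin(phi)}, ...]^T.

    phi: angle in radians; spacing_over_lambda: antenna spacing divided by
    the operating wavelength (0.5 in the text's model).
    """
    n = np.arange(n_antennas)
    return np.exp(1j * 2 * np.pi * spacing_over_lambda * n * np.sin(phi))
```

Every element has unit modulus, consistent with the phase-shifter-only analog hardware, and $\mathbf{a}_\mathrm{R}(\theta)$ is obtained the same way with $N_\mathrm{R}$ elements.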
Using the delay-$d$ channel model in (\[eq:delaydChannelModel\]), the channel matrix at subcarrier $m$ is $$\begin{aligned} \mathbf{H}[m] = \sum_{d=0}^{D-1}\mathbf{H}[d]e^{-j\frac{2\pi m}{M} d}, \end{aligned}$$ where $D$ is the length of the cyclic prefix [@channelModelSparseBajwa]. With the aforementioned block-fading channel model [@mmWaveModel1], the received signal at subcarrier $m$ is $$\begin{aligned} \label{arrayOutput} \mathbf{y}[m] = \sqrt{\rho}\mathbf{H}[m] \mathbf{F}_\mathrm{RF}\mathbf{F}_\mathrm{BB}[m]\mathbf{s}[m] + \mathbf{n}[m], \end{aligned}$$ where $\rho$ represents the average received power, $\mathbf{H}[m]\in \mathbb{C}^{N_\mathrm{R}\times N_\mathrm{T}}$ is the channel matrix, and $\mathbf{n}[m] \sim \mathcal{CN}(\mathbf{0},\sigma^2 \mathbf{I}_\mathrm{N_\mathrm{R}})$ is the additive white Gaussian noise (AWGN) vector. The received signal is first processed by the analog combiner $\mathbf{W}_\mathrm{RF}$. Then, the cyclic prefix is removed from the processed signal and $N_\mathrm{RF}$ $M$-point FFTs are applied to yield the signal in the frequency domain. Finally, the receiver employs low-dimensional $N_\mathrm{RF}\times N_\mathrm{S}$ digital combiners $\{\mathbf{W}_\mathrm{BB}[m]\}_{m\in \mathcal{M}}$. The received and processed signal is obtained as $\widetilde{\mathbf{y}}[m] = \mathbf{W}_\mathrm{BB}^\textsf{H}[m]\mathbf{W}_\mathrm{RF}^\textsf{H}\mathbf{y}[m]$, i.e., $$\begin{aligned} \label{sigModelReceived} \widetilde{\mathbf{y}}[m] = & \sqrt{\rho}\mathbf{W}_\mathrm{BB}^\textsf{H}[m]\mathbf{W}_\mathrm{RF}^\textsf{H}\mathbf{H}[m] \mathbf{F}_\mathrm{RF}\mathbf{F}_\mathrm{BB}[m]\mathbf{s}[m] \nonumber \\ &+ \mathbf{W}_\mathrm{BB}^\textsf{H}[m]\mathbf{W}_\mathrm{RF}^\textsf{H}\mathbf{n}[m], \end{aligned}$$ where the analog combiner $\mathbf{W}_\mathrm{RF}\in \mathbb{C}^{N_\mathrm{R}\times N_\mathrm{RF}}$ has the constraint $\big[[\mathbf{W}_\mathrm{RF}]_{:,i}[\mathbf{W}_\mathrm{RF}]_{:,i}^\textsf{H}\big]_{i,i}=1$, similar to the RF precoder.
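The subcarrier-domain transform above is simply an $M$-point DFT of the zero-padded delay taps, which the sketch below cross-checks with `np.fft.fft` on random placeholder taps (all sizes illustrative):

```python
import numpy as np

# Subcarrier-domain channel H[m] = sum_{d=0}^{D-1} H[d] e^{-j 2π m d / M},
# computed from random delay taps; D, M, and the array sizes are illustrative.
rng = np.random.default_rng(1)
N_R, N_T, D, M = 4, 8, 4, 16

H_d = rng.standard_normal((D, N_R, N_T)) + 1j * rng.standard_normal((D, N_R, N_T))

def subcarrier_channel(H_taps, m, M):
    D = H_taps.shape[0]
    phases = np.exp(-2j * np.pi * m * np.arange(D) / M)
    return np.tensordot(phases, H_taps, axes=1)

H_m = np.stack([subcarrier_channel(H_d, m, M) for m in range(M)])

# Cross-check: the same transform is an M-point FFT of the zero-padded taps.
H_fft = np.fft.fft(np.concatenate([H_d, np.zeros((M - D, N_R, N_T))]), axis=0)
```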
Problem Formulation {#sec:probform} =================== In practice, estimating the channel matrix is a challenging task, especially with the large number of antennas deployed in massive MIMO communications [@channelEstLargeArrays; @channelEstimation1]. Further, the short coherence time of the mm-Wave channel implies that the channel characteristics change rapidly [@coherenceTimeRef]. The literature offers several mm-Wave channel estimation techniques [@mimoChannelModel2; @channelEstimation1CS; @channelEstimation1; @mimoAngleDomainFaiFai; @mimoHybridLeus2]. In our DL framework, the channel estimation is performed by a deep network which accepts the received pilot signals as input and yields the channel matrix estimate at the output layer [@deepCNN_ChannelEstimation]. During the pilot transmission process, the transmitter activates only one RF chain to transmit the pilot on a single beam; the receiver meanwhile turns on all RF chains [@mimoHybridLeus2]. Hence, unlike other DL-based beamformers [@elbirDL_COMML; @elbirQuantizedCNN2019; @mimoDLChannelModelBeamformingFacebook; @mimoDeepPrecoderDesign] that presume knowledge of the channel, our framework exploits DL for both channel matrix approximation and beamforming. Specifically, we focus on designing the hybrid precoders $\mathbf{F}_\mathrm{RF},\mathbf{F}_\mathrm{BB}[m]$, $\mathbf{W}_\mathrm{RF},\mathbf{W}_\mathrm{BB}[m]$ by maximizing the overall spectral efficiency of the system under a power spectral density constraint for each subcarrier. Let $R[m]$ be the overall spectral efficiency of subcarrier $m$.
Assuming that Gaussian symbols are transmitted through the mm-Wave channel [@mimoRHeath; @mimoHybridLeus1; @mimoHybridLeus2; @alkhateeb2016frequencySelective], $R[m]$ is $$\begin{aligned} &R[m] = \textrm{log}_2 \bigg| \mathbf{I}_{N_\mathrm{S}} +\frac{\rho}{N_\mathrm{S}}\boldsymbol{\Lambda}_\mathrm{n}^{-1}[m]\mathbf{W}_\mathrm{BB}^\textsf{H}[m]\mathbf{W}_\mathrm{RF}^\textsf{H} \mathbf{H}[m]\nonumber \\ &\;\;\;\;\;\; \times\mathbf{{F}}_\mathrm{RF}\mathbf{{F}}_\mathrm{BB}[m]\mathbf{{F}}_\mathrm{BB}^\textsf{H}[m] \mathbf{{F}}_\mathrm{RF}^\textsf{H}\mathbf{H}^\textsf{H}[m]\mathbf{W}_\mathrm{RF}\mathbf{W}_\mathrm{BB}[m] \bigg|, \end{aligned}$$ where $\boldsymbol{\Lambda}_\mathrm{n}[m] = \sigma_n^2 \mathbf{W}_\mathrm{BB}^\textsf{H}[m]\mathbf{W}_\mathrm{RF}^\textsf{H} \mathbf{W}_\mathrm{RF}\mathbf{W}_\mathrm{BB}[m]\in \mathbb{C}^{N_\mathrm{S} \times N_\mathrm{S}}$ corresponds to the noise term in (\[sigModelReceived\]). The hybrid beamformer design is equivalent to the following optimization problem: $$\begin{aligned} \label{HBdesignProblem} &\underset{\mathbf{{F}}_\mathrm{RF},\mathbf{{W}}_\mathrm{RF}, \{\mathbf{{F}}_\mathrm{BB}[m],\mathbf{{W}}_\mathrm{BB}[m]\}_{m\in \mathcal{M}}}{\operatorname*{maximize}} \frac{1}{M}\sum_{m =1}^{M} R[m] \nonumber \\ &\operatorname*{subject \hspace{3pt} to\hspace{3pt}}\; \mathbf{{F}}_\mathrm{RF} \in \mathcal{F}_\mathrm{RF}, \mathbf{{W}}_\mathrm{RF} \in \mathcal{W}_\mathrm{RF}, \nonumber \\ &\sum_{m=1}^{M}||\mathbf{{F}}_\mathrm{RF}\mathbf{{F}}_\mathrm{BB}[m]||_{\mathcal{F}}^2 = M N_\mathrm{S}, \end{aligned}$$ where $\mathcal{F}_\mathrm{RF}$ and $\mathcal{W}_\mathrm{RF}$ are the feasible sets for the RF precoder and combiner, which obey the unit-norm constraint. Solving (\[HBdesignProblem\]) requires finding analog and digital beamformers which, in turn, are obtained by exploiting the structure of the mm-Wave channel matrix.
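The per-subcarrier rate $R[m]$ can be evaluated directly from its log-det expression; the sketch below uses random placeholder beamformers rather than optimized ones, so only the matrix algebra is illustrated:

```python
import numpy as np

# Evaluate the log-det spectral efficiency R[m] with noise covariance Λ_n[m].
# All matrices are random placeholders; dimensions are illustrative.
rng = np.random.default_rng(2)
N_R, N_T, N_S = 8, 16, 2
rho, sigma2 = 1.0, 0.1

H = rng.standard_normal((N_R, N_T)) + 1j * rng.standard_normal((N_R, N_T))
F = rng.standard_normal((N_T, N_S)) + 1j * rng.standard_normal((N_T, N_S))   # F_RF F_BB[m]
W = rng.standard_normal((N_R, N_S)) + 1j * rng.standard_normal((N_R, N_S))   # W_RF W_BB[m]

Lam_n = sigma2 * W.conj().T @ W              # noise covariance (N_S x N_S)
G_eff = W.conj().T @ H @ F                   # effective channel after combining
arg = np.eye(N_S) + (rho / N_S) * np.linalg.inv(Lam_n) @ G_eff @ G_eff.conj().T
R = np.log2(np.abs(np.linalg.det(arg)))
# R is real and non-negative: the determinant argument is I plus a matrix
# similar to a positive semidefinite one, so det >= 1.
```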
Our goal is to recover $\mathbf{F}_\mathrm{RF}$, $\mathbf{F}_\mathrm{BB}[m]$, $\mathbf{W}_\mathrm{RF}$, and $\mathbf{W}_\mathrm{BB}[m]$ from the received pilot signal. In the following section, we describe the channel estimation and the design methodology of hybrid beamformers before introducing the learning-based approach. Channel Estimation {#sec:ice} ================== In our work, a DL network estimates the channel from the received pilot signals in the preamble stage. Consider the downlink scenario in which the transmitter activates a single RF chain and applies the beamformer $\overline{\mathbf{f}}_u[m]\in\mathbb{C}^{N_\mathrm{T}}$ to transmit the pilot signal $\overline{{s}}_u[m]$ on a single beam, where $u = 1,\dots,M_\mathrm{T}$. The receiver then applies the combining vectors $\overline{\mathbf{w}}_v$ for $v = 1,\dots, M_\mathrm{R}$ to process the received pilots [@deepCNN_ChannelEstimation; @mimoHybridLeus2]. Since the number of RF chains at the receiver is limited to $N_\mathrm{RF}$ (usually less than $M_\mathrm{R}$), only $N_\mathrm{RF}$ combining vectors can be employed in a single channel use. Hence, the total number of channel uses in the channel acquisition process is $\lceil \frac{M_\mathrm{R}}{N_\mathrm{RF}}\rceil$. After processing through the combiners, the received pilot signal becomes $$\begin{aligned} \label{receivedSignalPilot} \mathbf{\overline{Y}}[m] = \overline{\mathbf{W}}^\textsf{H}[m] \mathbf{H}[m] \overline{\mathbf{F}}[m]\overline{\mathbf{S}}[m] + \widetilde{\mathbf{N}}[m], \end{aligned}$$ where $\overline{\mathbf{F}}[m] = [\overline{\mathbf{f}}_1[m],\overline{\mathbf{f}}_2[m],\dots,\overline{\mathbf{f}}_{M_\mathrm{T}}[m]]$ and $\overline{\mathbf{W}}[m] = [\overline{\mathbf{w}}_1[m],\overline{\mathbf{w}}_2[m],\dots,\overline{\mathbf{w}}_{M_\mathrm{R}}[m]]$ are the $N_\mathrm{T}\times M_\mathrm{T}$ and $N_\mathrm{R}\times M_\mathrm{R}$ beamformer matrices, respectively.
Here, $\overline{\mathbf{S}}[m] = \mathrm{diag}\{ \overline{s}_1[m],\dots,\overline{s}_{M_\mathrm{T}}[m]\}$ denotes the pilot signals and $\widetilde{\mathbf{N}}[m]= \overline{\mathbf{W}}^\textsf{H} \overline{\mathbf{N}}[m]$ is the effective noise matrix, where $\overline{\mathbf{N}}[m] \sim \mathcal{CN}(0, \sigma_{\overline{\mathbf{N}}}^2)$. The noise corruption of the pilot training data is measured by SNR$_{\overline{\mathbf{N}}}$. Without loss of generality, we assume that $\overline{\mathbf{F}}[m] = \overline{\mathbf{F}}$ and $\overline{\mathbf{W}}[m] = \overline{\mathbf{W}}$, $\forall m$, and $\overline{\mathbf{S}}[m] = \sqrt{P_\mathrm{T}}\mathbf{I}_{M_\mathrm{T}}$, where $P_\mathrm{T}$ is the transmit power. Then, the received signal (\[receivedSignalPilot\]) becomes $$\begin{aligned} \label{receivedSignalPilotMod} \mathbf{\overline{Y}}[m] = \overline{\mathbf{W}}^\textsf{H} \mathbf{H}[m] \overline{\mathbf{F}} + \widetilde{\mathbf{N}}[m]. \end{aligned}$$ The initial channel estimate (ICE) is then $$\begin{aligned} \label{Gm} \mathbf{G}[m] = \mathbf{T}_\mathrm{T} \overline{\mathbf{Y}}[m]\mathbf{T}_\mathrm{R}, \end{aligned}$$ where $$\begin{aligned} \mathbf{T}_\mathrm{T} = \begin{dcases} \overline{\mathbf{W}},& M_\mathrm{R} < N_\mathrm{R} \\ (\overline{\mathbf{W}}\overline{\mathbf{W}}^\textsf{H})^{-1}\overline{\mathbf{W}}, & M_\mathrm{R} \geq N_\mathrm{R}, \end{dcases} \end{aligned}$$ and $$\begin{aligned} \mathbf{T}_\mathrm{R} = \begin{dcases} \overline{\mathbf{F}}^\textsf{H},& M_\mathrm{T} < N_\mathrm{T} \\ \overline{\mathbf{F}}^\textsf{H}(\overline{\mathbf{F}}\overline{\mathbf{F}}^\textsf{H})^{-1}, & M_\mathrm{T} \geq N_\mathrm{T}. \end{dcases} \end{aligned}$$ We consider $\mathbf{G}[m]$ as an initial estimate because, later, we improve this approximation with a deep network that maps $\mathbf{G}[m]$ to $\mathbf{H}[m]$.
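In the regime $M_\mathrm{R} \geq N_\mathrm{R}$ and $M_\mathrm{T} \geq N_\mathrm{T}$, the ICE recovers $\mathbf{H}[m]$ exactly when the pilots are noiseless, since $\mathbf{T}_\mathrm{T}\overline{\mathbf{W}}^\textsf{H} = \mathbf{I}$ and $\overline{\mathbf{F}}\mathbf{T}_\mathrm{R} = \mathbf{I}$. A sketch verifying this with random matrices (sizes illustrative):

```python
import numpy as np

# ICE sketch: with M_R >= N_R, M_T >= N_T and no noise, G[m] equals H[m].
rng = np.random.default_rng(3)
N_R, N_T, M_R, M_T = 4, 6, 5, 8

H = rng.standard_normal((N_R, N_T)) + 1j * rng.standard_normal((N_R, N_T))
W_bar = rng.standard_normal((N_R, M_R)) + 1j * rng.standard_normal((N_R, M_R))
F_bar = rng.standard_normal((N_T, M_T)) + 1j * rng.standard_normal((N_T, M_T))

Y_bar = W_bar.conj().T @ H @ F_bar                            # noiseless pilots

T_T = np.linalg.inv(W_bar @ W_bar.conj().T) @ W_bar           # M_R >= N_R branch
T_R = F_bar.conj().T @ np.linalg.inv(F_bar @ F_bar.conj().T)  # M_T >= N_T branch
G = T_T @ Y_bar @ T_R
# G == H up to numerical precision in this noiseless case.
```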
Hybrid Beamformer Design For Wideband mm-Wave MIMO Systems {#sec:bb_hb} ========================================================== The design problem in (\[HBdesignProblem\]) requires a joint optimization over several matrices. This approach is computationally complex and often intractable. Instead, a decoupled problem is preferred [@mimoRHeath; @sohrabiOFDM; @elbirQuantizedCNN2019; @hybridBFAltMin]. Here, the hybrid precoders $\mathbf{F}_\mathrm{RF},\mathbf{F}_\mathrm{BB}[m]$ are estimated first, and then the hybrid combiners $\mathbf{W}_\mathrm{RF},\mathbf{W}_\mathrm{BB}[m]$ are found. Define the mutual information of the mm-Wave channel that can be achieved at the BS through Gaussian signalling as [@alkhateeb2016frequencySelective] $$\begin{aligned} & \mathcal{I}\{\mathbf{F}_\mathrm{RF},\mathbf{F}_\mathrm{BB}[m]\} = \textrm{log}_2 \bigg| \mathbf{I}_{N_\mathrm{S}} \nonumber \\ &\;\;\;\;\;\; +\frac{\rho}{N_\mathrm{S}}\mathbf{H}[m]\mathbf{{F}}_\mathrm{RF}\mathbf{{F}}_\mathrm{BB}[m]\mathbf{{F}}_\mathrm{BB}^\textsf{H}[m] \mathbf{{F}}_\mathrm{RF}^\textsf{H}\mathbf{H}^\textsf{H}[m] \bigg|. \end{aligned}$$ The hybrid precoders are then obtained by maximizing the mutual information, i.e., $$\begin{aligned} \label{PrecoderDesignProblem} &\underset{\mathbf{{F}}_\mathrm{RF}, \{\mathbf{{F}}_\mathrm{BB}[m]\}_{m\in \mathcal{M}}}{\operatorname*{maximize}} \frac{1}{M}\sum_{m =1}^{M} \mathcal{I}\{\mathbf{F}_\mathrm{RF},\mathbf{F}_\mathrm{BB}[m]\} \nonumber \\ &\operatorname*{subject \hspace{3pt} to\hspace{3pt}}\; \mathbf{{F}}_\mathrm{RF} \in \mathcal{F}_\mathrm{RF}, \nonumber \\ &\sum_{m=1}^{M}||\mathbf{{F}}_\mathrm{RF}\mathbf{{F}}_\mathrm{BB}[m]||_{\mathcal{F}}^2 = M N_\mathrm{S}. \end{aligned}$$ We note here that one could approximate the optimization problem in (\[PrecoderDesignProblem\]) by exploiting the similarity between the hybrid beamformer $\mathbf{F}_\mathrm{RF}\mathbf{F}_\mathrm{BB}[m]$ and the optimal unconstrained beamformer $\mathbf{F}^{\mathrm{opt}}[m]$.
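The unconstrained precoder $\mathbf{F}^{\mathrm{opt}}[m]$ referenced above can be sketched as the $N_\mathrm{S}$ dominant right singular vectors of $\mathbf{H}[m]$ (illustrative sizes, random placeholder channel):

```python
import numpy as np

# F_opt[m]: the N_S dominant right singular vectors of H[m].
rng = np.random.default_rng(4)
N_R, N_T, N_S = 8, 16, 4

H = rng.standard_normal((N_R, N_T)) + 1j * rng.standard_normal((N_R, N_T))
U, S, Vh = np.linalg.svd(H, full_matrices=False)  # singular values descending
F_opt = Vh.conj().T[:, :N_S]                      # N_T x N_S, orthonormal columns
```

Because the columns of `F_opt` are orthonormal, the unconstrained precoder automatically satisfies the transmit power constraint up to scaling.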
The latter is obtained from the right singular matrix of the channel matrix $\mathbf{H}[m]$ [@hybridBFAltMin; @mimoRHeath]. Let the singular value decomposition of the channel matrix be $\mathbf{H}[m] = \mathbf{U}[m] \boldsymbol{\Sigma}[m] \mathbf{V}^\textsf{H}[m]$, where $\mathbf{U}[m]\in \mathbb{C}^{N_\mathrm{R}\times \mathrm{rank}(\mathbf{H}[m])}$ and $\mathbf{V}[m]\in \mathbb{C}^{N_\mathrm{T} \times \mathrm{rank}(\mathbf{H}[m])}$ are the left and right singular vector matrices of the channel matrix, respectively, and $\boldsymbol{\Sigma}[m]$ is the $\mathrm{rank}(\mathbf{H}[m])\times \mathrm{rank}(\mathbf{H}[m])$ diagonal matrix of the singular values of $\mathbf{H}[m]$ in descending order. By decomposing $\boldsymbol{\Sigma}[m]$ and $\mathbf{V}[m]$ as $\boldsymbol{\Sigma}[m] = \mathrm{diag}\{ \widetilde{\boldsymbol{\Sigma}}[m],\overline{\boldsymbol{\Sigma}}[m] \},\hspace{5pt} \mathbf{V}[m] = [\widetilde{\mathbf{V}}[m],\overline{\mathbf{V}}[m]],$ where $\widetilde{\mathbf{V}}[m]\in \mathbb{C}^{N_\mathrm{T}\times N_\mathrm{S}}$, the unconstrained precoder is readily obtained as $\mathbf{F}^{\mathrm{opt}}[m] = \widetilde{\mathbf{V}}[m]$ [@mimoRHeath]. The hybrid precoder design problem for subcarrier $m$ then becomes the minimization of the Euclidean distance between $\mathbf{F}^{\mathrm{opt}}[m]$ and $\mathbf{F}_\mathrm{RF}\mathbf{F}_\mathrm{BB}[m]$, i.e., $$\begin{aligned} \label{PrecoderSingleCarrier} &\underset{\mathbf{F}_\mathrm{RF},\mathbf{F}_\mathrm{BB}[m]}{\operatorname*{minimize}} \big|\big| \mathbf{F}^{\mathrm{opt}}[m] - \mathbf{F}_\mathrm{RF}\mathbf{F}_\mathrm{BB}[m] \big|\big|_\mathcal{F}^2 \nonumber \\ &\operatorname*{subject \hspace{3pt} to\hspace{3pt}}\; \mathbf{{F}}_\mathrm{RF} \in \mathcal{F}_\mathrm{RF}, \nonumber \\ &\big|\big| \mathbf{{F}}_\mathrm{RF}\mathbf{{F}}_\mathrm{BB}[m]\big|\big|_{\mathcal{F}}^2 = N_\mathrm{S}.
\end{aligned}$$ Incorporating all subcarriers in the problem produces $$\begin{aligned} \label{PrecoderAllCarriers} &\underset{\mathbf{F}_\mathrm{RF},\{\mathbf{F}_\mathrm{BB}[m]\}_{m \in \mathcal{M}}}{\operatorname*{minimize}} \big|\big| \widetilde{\mathbf{F}}^{\mathrm{opt}} - \mathbf{F}_\mathrm{RF}\widetilde{\mathbf{F}}_\mathrm{BB} \big|\big|_\mathcal{F}^2 \nonumber \\ &\operatorname*{subject \hspace{3pt} to\hspace{3pt}}\; \mathbf{{F}}_\mathrm{RF} \in \mathcal{F}_\mathrm{RF}, \nonumber \\ &\sum_{m=1}^{M}\big|\big| \mathbf{{F}}_\mathrm{RF}\mathbf{{F}}_\mathrm{BB}[m]\big|\big|_{\mathcal{F}}^2 = MN_\mathrm{S}, \end{aligned}$$ where $$\begin{aligned} \widetilde{\mathbf{F}}^{\mathrm{opt}} = \begin{bmatrix} \mathbf{F}^{\mathrm{opt}}[1] & \mathbf{F}^{\mathrm{opt}}[2] & \cdots & \mathbf{F}^{\mathrm{opt}}[M] \end{bmatrix} \in \mathbb{C}^{N_\mathrm{T}\times MN_\mathrm{S}}, \end{aligned}$$ and $$\begin{aligned} \widetilde{\mathbf{F}}_\mathrm{BB} = \begin{bmatrix} \mathbf{{F}}_\mathrm{BB}[1] & \mathbf{{F}}_\mathrm{BB}[2] & \cdots & \mathbf{{F}}_\mathrm{BB}[M] \end{bmatrix} \in \mathbb{C}^{N_\mathrm{RF}\times MN_\mathrm{S}}, \end{aligned}$$ contain the beamformers for all subcarriers. Once the hybrid precoders are designed, the hybrid combiners $\mathbf{W}_\mathrm{RF},\mathbf{W}_\mathrm{BB}[m]$ are realized by minimizing the mean-squared error (MSE), $\mathbb{E}\{\big|\big| \mathbf{s}[m] - \mathbf{W}_\mathrm{BB}^\textsf{H}[m] \mathbf{W}_\mathrm{RF}^\textsf{H}\mathbf{y}[m] \big|\big|_2^2\}$. The combiner-only optimization is $$\begin{aligned} \label{CombinerOnlyProblem} &\underset{\mathbf{W}_\mathrm{RF}, \mathbf{W}_\mathrm{BB}[m] }{\operatorname*{minimize}} \mathbb{E}\{\big|\big| \mathbf{s}[m] - \mathbf{W}_\mathrm{BB}^\textsf{H}[m] \mathbf{W}_\mathrm{RF}^\textsf{H}\mathbf{y}[m] \big|\big|_2^2\} \nonumber \\ &\operatorname*{subject \hspace{3pt} to\hspace{3pt}}\mathbf{W}_\mathrm{RF} \in{\mathcal{W}}_\mathrm{RF}.
\end{aligned}$$ A more efficient form of (\[CombinerOnlyProblem\]) is due to [@mimoRHeath], where a constant term $\mathrm{Trace}\{\mathbf{W}_{\mathrm{MMSE}}^\textsf{H}[m] \mathbb{E}\{\mathbf{y}[m]\mathbf{y}^\textsf{H}[m]\} \mathbf{W}_{\mathrm{MMSE}}[m] \} - \mathrm{Trace}\{\mathbf{s}[m]\mathbf{s}^\textsf{H}[m] \}$ is added to the cost function. Here, $\mathbf{W}_{\mathrm{MMSE}}[m]$ denotes the minimum MSE (MMSE) estimator defined as $\mathbf{W}_\mathrm{MMSE}[m]= (\mathbb{E}\{\mathbf{s}[m] \mathbf{y}^\textsf{H}[m] \} \mathbb{E}\{\mathbf{y}[m] \mathbf{y}^\textsf{H}[m] \}^{-1})^\textsf{H}$. Then, (\[CombinerOnlyProblem\]) reduces to the optimization problem $$\begin{aligned} \label{CombinerOnlyProblemEquivalent} &\underset{\mathbf{W}_\mathrm{RF}, \mathbf{W}_\mathrm{BB}[m]}{\operatorname*{minimize}} \big|\big| \boldsymbol{\Lambda}_\mathrm{y}^{1/2}[m] (\mathbf{W}_\mathrm{MMSE}[m] - \mathbf{W}_\mathrm{RF} \mathbf{W}_\mathrm{BB}[m] )\big|\big|_\mathcal{F}^2 \nonumber \\ &\operatorname*{subject \hspace{3pt} to\hspace{3pt}}\mathbf{W}_\mathrm{RF} \in{\mathcal{W}}_\mathrm{RF}, \end{aligned}$$ where $\boldsymbol{\Lambda}_\mathrm{y}[m] = \rho\mathbf{H}[m]\mathbf{F}_\mathrm{RF}\mathbf{F}_\mathrm{BB}[m]\mathbf{F}_\mathrm{BB}^\textsf{H}[m]\mathbf{F}_\mathrm{RF}^\textsf{H}\mathbf{H}^\textsf{H}[m] + \sigma_n^2\mathbf{I}_{N_\mathrm{R}}$ is the covariance of the array output in (\[arrayOutput\]). The unconstrained combiner in a compact form is then [@WoptCombiner], $$\begin{aligned} &\mathbf{W}_\mathrm{MMSE}^\textsf{H}[m] = \frac{1}{\rho}\bigg( \mathbf{F}^{\mathrm{opt}^\textsf{H}}[m]\mathbf{H}^\textsf{H}[m]\mathbf{H}[m]\mathbf{F}^{\mathrm{opt}}[m] \nonumber \\ & \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+ \frac{N_\mathrm{S}\sigma_n^2}{\rho}\mathbf{I}_{N_\mathrm{S}} \bigg)^{-1} \mathbf{F}^{\mathrm{opt}^\textsf{H}}[m]\mathbf{H}^\textsf{H}[m].
\end{aligned}$$ In (\[CombinerOnlyProblemEquivalent\]), the multiplicative term $\boldsymbol{\Lambda}_\mathrm{y}^{1/2}[m]$ does not depend on $\mathbf{W}_\mathrm{RF}$ or $\mathbf{W}_\mathrm{BB}[m]$; it therefore has no bearing on the solution and can be ignored. Define $$\begin{aligned} \widetilde{\mathbf{W}}_\mathrm{MMSE} &= \begin{bmatrix}{\mathbf{W}}_\mathrm{MMSE}[1]&{\mathbf{W}}_\mathrm{MMSE}[2]&\cdots&{\mathbf{W}}_\mathrm{MMSE}[M] \end{bmatrix} \nonumber\\ &\in \mathbb{C}^{N_\mathrm{R}\times MN_\mathrm{S}}, \end{aligned}$$ and $$\begin{aligned} \widetilde{\mathbf{W}}_\mathrm{BB} = \begin{bmatrix}{\mathbf{W}}_\mathrm{BB}[1] & {\mathbf{W}}_\mathrm{BB}[2] & \cdots &{\mathbf{W}}_\mathrm{BB}[M] \end{bmatrix}\in \mathbb{C}^{N_\mathrm{RF}\times MN_\mathrm{S}}. \end{aligned}$$ Then, the hybrid combiner design problem becomes $$\begin{aligned} \label{CombinerOnlyProblemAllSubcarriers} &\underset{\mathbf{W}_\mathrm{RF}, \{\mathbf{W}_\mathrm{BB}[m]\}_{m\in \mathcal{M}}}{\operatorname*{minimize}} \big|\big| \widetilde{\mathbf{W}}_\mathrm{MMSE} - \mathbf{W}_\mathrm{RF} \widetilde{\mathbf{W}}_\mathrm{BB}\big|\big|_\mathcal{F}^2 \nonumber \\ &\operatorname*{subject \hspace{3pt} to\hspace{3pt}}\;\;\;\;\;\mathbf{W}_\mathrm{RF} \in{\mathcal{W}}_\mathrm{RF} \nonumber \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \mathbf{W}_\mathrm{BB}[m] = (\mathbf{W}_\mathrm{RF}^\textsf{H} \boldsymbol{\Lambda}_\mathrm{y}[m] \mathbf{W}_\mathrm{RF})^{-1}\nonumber \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\times (\mathbf{W}_\mathrm{RF}^\textsf{H}\boldsymbol{\Lambda}_\mathrm{y}[m]\mathbf{W}_\mathrm{MMSE}[m]). \end{aligned}$$ In [@manopt], the manifold optimization ("Manopt") algorithm is suggested to efficiently solve the optimization problems in (\[PrecoderAllCarriers\]) and (\[CombinerOnlyProblemAllSubcarriers\]). Note that neither of these problems requires a codebook or a set of array responses of the transmit and receive arrays [@mimoRHeath].
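The closed forms above can be sketched as follows; the channel and beamformers are random placeholders, so only the matrix algebra of $\mathbf{W}_\mathrm{MMSE}[m]$ and the baseband-combiner expression for a fixed $\mathbf{W}_\mathrm{RF}$ is illustrated:

```python
import numpy as np

# Unconstrained MMSE combiner and baseband combiner for a fixed analog
# combiner W_RF, following the closed forms above. Sizes are illustrative.
rng = np.random.default_rng(5)
N_R, N_T, N_RF, N_S = 8, 16, 4, 2
rho, sigma2 = 1.0, 0.1

H = rng.standard_normal((N_R, N_T)) + 1j * rng.standard_normal((N_R, N_T))
F = rng.standard_normal((N_T, N_S)) + 1j * rng.standard_normal((N_T, N_S))  # F_RF F_BB[m]

# W_MMSE^H = (1/rho) (F^H H^H H F + (N_S sigma^2 / rho) I)^(-1) F^H H^H
A = F.conj().T @ H.conj().T @ H @ F + (N_S * sigma2 / rho) * np.eye(N_S)
W_mmse = ((1 / rho) * np.linalg.inv(A) @ F.conj().T @ H.conj().T).conj().T  # N_R x N_S

# Covariance of the array output, then W_BB for a given unit-modulus W_RF.
Lam_y = rho * H @ F @ F.conj().T @ H.conj().T + sigma2 * np.eye(N_R)
W_RF = np.exp(1j * rng.uniform(0, 2 * np.pi, (N_R, N_RF)))
W_BB = np.linalg.inv(W_RF.conj().T @ Lam_y @ W_RF) @ (W_RF.conj().T @ Lam_y @ W_mmse)
```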
In fact, the manifold optimization problems for (\[PrecoderAllCarriers\]) and (\[CombinerOnlyProblemAllSubcarriers\]) are initialized at a random point, i.e., beamformers with unit-norm entries and random phases. Learning-Based Joint Channel Estimation and Hybrid Beamformer Design {#sec:HD_Design} ==================================================================== ![Deep learning frameworks for hybrid beamforming in mm-Wave MIMO systems. F1 has a single CNN (MC-HBNet) which maps the received pilot signal data directly into hybrid beamformers. In F2 and F3, multiple CNNs are used for channel estimation and hybrid beamforming sequentially. For channel estimation, a single CNN (MC-CENet) is trained on the data of all subcarriers in F2, whereas a dedicated CNN (SC-CENet) is used for each subcarrier in F3. The final HBNet stage is identical in F2 and F3. []{data-label="fig_DLFrameworks"}](DLFrameworks.PNG){width="1.0\columnwidth"} We introduce three DL frameworks F1, F2, and F3 (Fig. \[fig\_DLFrameworks\]). In all of them, hybrid beamformers are the outputs. The ICE values $\mathbf{G}[m]$ obtained from the received pilot signal in the preamble stage form the inputs. The F1 architecture is the Multi-Carrier Hybrid Beamforming Network (MC-HBNet). It comprises a single CNN which accepts the ICEs jointly for all subcarriers. The input size is $MN_\mathrm{R} \times N_\mathrm{T}$. The ICEs introduce a performance loss if the channel estimates are inaccurate. To address this, F2 employs separate CNNs for channel estimation (Multi-Carrier Channel Estimation Network, or MC-CENet) and hybrid beamforming (HBNet). MC-CENet accepts the ICE of a single subcarrier as input; the other subcarriers are fed sequentially, one at a time. Hence, the training data consist of a single ICE (with input size $N_\mathrm{R}\times N_\mathrm{T}$) for each subcarrier. To make the setup even more flexible, at the cost of higher computational complexity, F3 employs one CNN per subcarrier for estimating the channel.
For the $m$th subcarrier, each Single Carrier Channel Estimation Network (SC-CENet$[m]$, $m\in \mathcal{M}$) feeds into a single HBNet. Input Data ---------- We partition the input ICE data into three components to enrich the input features. In our previous works, similar approaches have provided good features for DL implementations [@elbirQuantizedCNN2019; @elbirDL_COMML; @elbirIETRSN2019; @deepCNN_ChannelEstimation]. In particular, we use the real and imaginary parts and the absolute value of each entry of the ICEs. The absolute-value entry indicates to the DL network that the real and imaginary input feeds are connected. Define the input for MC-HBNet in F1 as $\mathbf{X}_{\mathrm{F1}} = [\mathbf{X}_{\mathrm{F1}}^\textsf{T}[1],\dots, \mathbf{X}_{\mathrm{F1}}^\textsf{T}[M] ]^\textsf{T}$. Then, for an $M_\mathrm{R}\times M_\mathrm{T}$ ICE, the $(i,j)$-th entry of the submatrices per subcarrier is $[[\mathbf{X}_{\mathrm{F1}}[m]]_{:,:,1}]_{i,j} = | [\mathbf{G}[m]]_{i,j}|$ for the first “channel” or input matrix of $\mathbf{X}_{\mathrm{F1}}[m]$. The second and third channels are $[[\mathbf{X}_{\mathrm{F1}}[m]]_{:,:,2}]_{i,j} = \operatorname{Re}\{[\mathbf{G}[m]]_{i,j}\}$ and $[[\mathbf{X}_{\mathrm{F1}}[m]]_{:,:,3}]_{i,j} = \operatorname{Im}\{[\mathbf{G}[m]]_{i,j}\}$, respectively. Hence, the size of $\mathbf{X}_{\mathrm{F1}}$ is $M M_\mathrm{R}\times M_\mathrm{T}\times 3$. In F2, the input data comprise single-subcarrier ICEs. The input for MC-CENet, $\mathbf{X}_{\mathrm{F2}}$, is of size $M_\mathrm{R}\times M_\mathrm{T}\times 3$. The input data for each SC-CENet in F3 are the same as in F2.
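The three-"channel" input construction can be sketched per subcarrier as follows (sizes illustrative, ICE entries random placeholders):

```python
import numpy as np

# Stack |G|, Re{G}, Im{G} along the last axis to form the network input.
rng = np.random.default_rng(6)
M_R, M_T = 4, 6
G = rng.standard_normal((M_R, M_T)) + 1j * rng.standard_normal((M_R, M_T))

X = np.stack([np.abs(G), G.real, G.imag], axis=-1)   # M_R x M_T x 3
# The complex ICE is fully recoverable from channels 2 and 3:
# G = X[..., 1] + 1j * X[..., 2]
```

Feeding the redundant absolute-value channel costs little but exposes the magnitude relation between the real and imaginary feeds directly to the first convolutional layer.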
The inputs of HBNet in both F2 and F3 have the same structure, denoted $\mathbf{X}_{\mathbf{H}} = [\mathbf{X}_{\mathbf{H}}^\textsf{T}[1],\dots, \mathbf{X}_{\mathbf{H}}^\textsf{T}[M] ]^\textsf{T} $, which is of size $M N_\mathrm{R}\times N_\mathrm{T}\times 3$, where $[[\mathbf{X}_{\mathbf{H}}[m]]_{:,:,1}]_{i,j} = | [\mathbf{H}[m]]_{i,j}|$, $[[\mathbf{X}_{\mathbf{H}}[m]]_{:,:,2}]_{i,j} = \operatorname{Re}\{[\mathbf{H}[m]]_{i,j}\}$ and $[[\mathbf{X}_{\mathbf{H}}[m]]_{:,:,3}]_{i,j} = \operatorname{Im}\{[\mathbf{H}[m]]_{i,j}\}$. Labeling -------- The hybrid beamformers are the common output for all three frameworks (Fig. \[fig\_DLFrameworks\]). We represent the output as the vectorized form of the analog beamformers, common to all subcarriers, and the baseband beamformers corresponding to all subcarriers. The output is an $N_\mathrm{RF}\big(N_\mathrm{T} + N_\mathrm{R} + 2MN_\mathrm{S} \big) \times 1 $ real-valued vector $$\begin{aligned} \label{zSU} \hspace{10pt} \mathbf{z} = \begin{bmatrix} \mathbf{z}_\mathrm{RF}^\textsf{T} & \widetilde{\mathbf{z}}_\mathrm{BB}^\textsf{T} \end{bmatrix}^\textsf{T}, \end{aligned}$$ where $\mathbf{z}_\mathrm{RF} = [\mathrm{vec}\{\angle \mathbf{F}_\mathrm{RF} \}^\textsf{T},\mathrm{vec}\{\angle \mathbf{W}_\mathrm{RF} \}^\textsf{T}]^\textsf{T}$ is a real-valued $N_\mathrm{RF}(N_\mathrm{T} + N_\mathrm{R})\times 1$ vector which includes the phases of the analog beamformers.
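The label construction in (\[zSU\]) can be sketched as follows; column-major vectorization is assumed for $\mathrm{vec}\{\cdot\}$, and the baseband part stacks real and imaginary parts of both baseband beamformers per subcarrier (sizes illustrative):

```python
import numpy as np

# Build the real-valued label vector z = [z_RF; z_BB] from the beamformers.
rng = np.random.default_rng(7)
N_T, N_R, N_RF, N_S, M = 8, 4, 3, 2, 4

F_RF = np.exp(1j * rng.uniform(0, 2 * np.pi, (N_T, N_RF)))
W_RF = np.exp(1j * rng.uniform(0, 2 * np.pi, (N_R, N_RF)))
F_BB = rng.standard_normal((M, N_RF, N_S)) + 1j * rng.standard_normal((M, N_RF, N_S))
W_BB = rng.standard_normal((M, N_RF, N_S)) + 1j * rng.standard_normal((M, N_RF, N_S))

# Phases of the analog beamformers (column-major vec).
z_RF = np.concatenate([np.angle(F_RF).ravel('F'), np.angle(W_RF).ravel('F')])

# Real/imag parts of the baseband beamformers, stacked per subcarrier.
z_BB = np.concatenate([
    np.concatenate([F_BB[m].real.ravel('F'), F_BB[m].imag.ravel('F'),
                    W_BB[m].real.ravel('F'), W_BB[m].imag.ravel('F')])
    for m in range(M)])
z = np.concatenate([z_RF, z_BB])
# Length: N_RF*(N_T + N_R) phase entries plus 4*M*N_RF*N_S baseband entries.
```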
The $\widetilde{\mathbf{z}}_\mathrm{BB}\in \mathbb{R}^{2M N_\mathrm{S} N_\mathrm{RF}}$ is composed of the baseband beamformers for all subcarriers as $ \widetilde{\mathbf{z}}_\mathrm{BB} = [\mathbf{z}_\mathrm{BB}^\textsf{T}[1],\mathbf{z}_\mathrm{BB}^\textsf{T}[2],\dots,\mathbf{z}_\mathrm{BB}^\textsf{T}[M]]^\textsf{T} $ where $$\begin{aligned} &\mathbf{z}_\mathrm{BB}[m] = [\mathrm{vec}\{\operatorname{Re}\{ \mathbf{F}_\mathrm{BB}[m]\} \}^\textsf{T}, \mathrm{vec}\{\operatorname{Im}\{ \mathbf{F}_\mathrm{BB}[m]\} \}^\textsf{T}, \nonumber \\ &\;\;\;\;\;\;\;\;\;\;\;\mathrm{vec}\{\operatorname{Re}\{ \mathbf{W}_\mathrm{BB}[m]\} \}^\textsf{T}, \mathrm{vec}\{\operatorname{Im}\{ \mathbf{W}_\mathrm{BB}[m]\} \}^\textsf{T}]^\textsf{T}. \end{aligned}$$ The output label of MC-CENet in F2 is the channel matrix. Given that MC-CENet is fed by the ICE $\mathbf{G}[m]$, the output label for MC-CENet is $$\begin{aligned} \label{zH} \mathbf{z}_{\mathbf{H}[m]} = [\mathrm{vec}\{\operatorname{Re}\{\mathbf{H}[m]\}\}^\textsf{T} , \mathrm{vec}\{\operatorname{Im}\{\mathbf{H}[m]\}\}^\textsf{T} ]^\textsf{T}, \end{aligned}$$ which is a real-valued vector of size $2N_\mathrm{R}N_\mathrm{T}$. The SC-CENet$[m]$ in F3 has the same input and output structure as MC-CENet, but the ICEs are fed to each SC-CENet$[m]$ separately. Network Architectures and Training ---------------------------------- ![Deep network architectures used in DL frameworks F1, F2, and F3 for wideband mm-wave channel estimation and hybrid beamforming. []{data-label="fig_Networks"}](NetworkArchitectures_v02.png){width="1.0\columnwidth"} We design four deep network architectures (Fig. \[fig\_Networks\]). MC-HBNet and HBNet have an input size of $MN_\mathrm{R}\times N_\mathrm{T}\times 3$, whereas the input for MC-CENet and SC-CENet$[m]$ is of size $N_\mathrm{R}\times N_\mathrm{T}\times 3$. The numbers of filters and units for all layers are shown in Fig. \[fig\_Networks\].
There are dropout layers with a $50\%$ probability after each fully connected layer in each network. We use pooling layers after the first and second convolutional layers only in MC-HBNet and HBNet to reduce the dimension by two. The output layer of each network is a regression layer whose size depends on the application, as discussed earlier. The network parameters are fixed after a hyperparameter tuning process that yields the best performance for the considered scenario [@elbirDL_COMML; @elbirQuantizedCNN2019; @elbirIETRSN2019]. The proposed deep networks are realized and trained in MATLAB on a PC with a single GPU and a 768-core processor. We used the stochastic gradient descent algorithm with momentum 0.9 and updated the network parameters with a learning rate of $0.0005$ and a mini-batch size of $128$ samples. We then reduced the learning rate by a factor of $0.9$ after every 30 epochs. We also applied a stopping criterion in training so that the training ceases when the validation accuracy does not improve in three consecutive epochs. Algorithm \[alg:algorithmTraining\] summarizes the steps of training data generation. [**Output:** Training datasets for the networks in Fig. \[fig\_DLFrameworks\]: $\mathcal{D}_{\mathrm{MC-HBNet}}$, $\mathcal{D}_{\mathrm{MC-CENet}}$, $\mathcal{D}_{\mathrm{HBNet}}$ and $\mathcal{D}_{\mathrm{SC-CENet}}$.]{} \[alg:algorithmTraining\] Generate $\{\mathbf{H}^{(n)}[m]\}_{n=1}^N$ for $m \in \mathcal{M}$. Initialize $t= \overline{t}=1$; the dataset lengths are $T=NG$ for MC-HBNet, HBNet, and SC-CENet, and $\overline{T} = MT$ for MC-CENet. **for** $1 \leq n \leq N$ **do** **for** $1 \leq g \leq G$ **do** $[\widetilde{\mathbf{H}}^{(n,g)}[m]]_{i,j} \sim \mathcal{CN}([\mathbf{H}^{(n)}[m]]_{i,j},\sigma_{\mathbf{H}}^2)$.
Generate received pilot signal from (\[receivedSignalPilotMod\]) as $$\begin{aligned} \overline{\mathbf{Y}}^{(n,g)}[m] = \overline{\mathbf{W}}^{\textsf{H}} \mathbf{H}^{(n,g)}[m] \overline{\mathbf{F}} + \widetilde{\mathbf{N}}^{(n,g)}[m]. \nonumber \end{aligned}$$ Construct ${\mathbf{G}}^{(n,g)}[m]$ from (\[Gm\]) by using $\overline{\mathbf{Y}}^{(n,g)}[m]$. Using $\mathbf{H}^{(n,g)}[m]$, find $\hat{\mathbf{F}}_{\mathrm{RF}}^{(n,g)}$ and $\hat{\mathbf{F}}_{\mathrm{BB}}^{(n,g)}[m]$ by solving (\[PrecoderAllCarriers\]). Find $\hat{\mathbf{W}}_{\mathrm{RF}}^{(n,g)}$ and $\hat{\mathbf{W}}_{\mathrm{BB}}^{(n,g)}[m]$ by solving (\[CombinerOnlyProblemAllSubcarriers\]). Input for MC-HBNet: $\mathbf{X}_{\mathrm{F1}}^{(t)} =$ $ [\mathbf{X}_{\mathrm{F1}}^{(t)^\textsf{T}}[1],\dots,\mathbf{X}_{\mathrm{F1}}^{(t)^\textsf{T}}[M] ]^\textsf{T}$ and, for $ m\in \mathcal{M}, \forall i,j$, $$\begin{aligned} &[[\mathbf{X}_{\mathrm{F1}}^{(t)}[m]]_{:,:,1}]_{i,j} = |[{\mathbf{G}}^{(n,g)}[m]]_{i,j}| \nonumber \\ &[[\mathbf{X}_{\mathrm{F1}}^{(t)}[m]]_{:,:,2}]_{i,j}=\operatorname{Re} \{[{\mathbf{G}}^{(n,g)}[m]]_{i,j}\} \nonumber \\ &[[\mathbf{X}_{\mathrm{F1}}^{(t)}[m]]_{:,:,3}]_{i,j} = \operatorname{Im}\{[{\mathbf{G}}^{(n,g)}[m]]_{i,j}\}, \nonumber \end{aligned}$$ Output for MC-HBNet: $\mathbf{z}_\mathrm{HB}^{(t)} = \mathbf{z}^{(t)}$ as in (\[zSU\]). **for** $1\leq m \leq M$ **do** Input for MC-CENet: $\mathbf{X}_\mathrm{F2}^{(\overline{t})} = \mathbf{X}_{\mathrm{F1}}^{(t)}[m]$. Output for MC-CENet: $\mathbf{z}_{\mathrm{MC}-\hspace{-3pt}\mathbf{H}}^{(\overline{t})}\hspace{-5pt} = \hspace{-3pt} \mathbf{z}_{\mathbf{H}[m]}^{(t)}$ as in (\[zH\]). $\overline{t} = \overline{t} + 1$. **end for** Input for HBNet: $\mathbf{X}_\mathbf{H}^{(t)} = [\mathbf{X}_{\mathbf{H}}^{(t)^\textsf{T}}[1],\dots,\mathbf{X}_{\mathbf{H}}^{(t)^\textsf{T}}[M] ]^\textsf{T}$. Output for HBNet: $\mathbf{z}_\mathrm{HB}^{(t)}$. Input for SC-CENet$[m]$: $\mathbf{X}_\mathrm{F3}^{({t})}[m] = \mathbf{X}_{\mathrm{F1}}^{(t)}[m] $. 
Output for SC-CENet$[m]$: $\mathbf{z}_{\mathrm{SC}-\mathbf{H}}^{({t})}[m] =\mathbf{z}_{\mathbf{H}[m]}^{({t})} $. $t = t+1$. **end for** $g$, **end for** $n$, $\mathcal{D}_{\mathrm{MC-HBNet}} = \big((\mathbf{X}_{\mathrm{F1}}^{(1)}, \mathbf{z}_\mathrm{HB}^{(1)}),\dots, (\mathbf{X}_{\mathrm{F1}}^{(T)}, \mathbf{z}_\mathrm{HB}^{(T)})\big).$ $\mathcal{D}_{\mathrm{MC-CENet}} = \big((\mathbf{X}_\mathrm{F2}^{(1)}, \mathbf{z}_{\mathrm{MC}-\mathbf{H}}^{(1)} ),\dots, (\mathbf{X}_\mathrm{F2}^{(\overline{T})}, \mathbf{z}_{\mathrm{MC}-\mathbf{H}}^{(\overline{T})} )\big).$ $\mathcal{D}_{\mathrm{HBNet}} = \big((\mathbf{X}_{\mathbf{H}}^{(1)}, \mathbf{z}_\mathrm{HB}^{(1)}),\dots, (\mathbf{X}_{\mathbf{H}}^{(T)}, \mathbf{z}_\mathrm{HB}^{(T)})\big).$ $\mathcal{D}_{\mathrm{SC\hspace{-1pt}-\hspace{-1pt}CENet}}\hspace{-2pt}[m] \hspace{-3pt}=\hspace{-3pt} \big(\hspace{-1pt}(\mathbf{X}_{\mathrm{F3}}^{(1)}[m], \mathbf{z}_{\mathrm{SC}\hspace{-1pt}-\hspace{-1pt}\mathbf{H}}^{(1)}),\dots,\hspace{-3pt} (\mathbf{X}_{\mathrm{F3}}^{(T)}[m], \mathbf{z}_{\mathrm{SC}\hspace{-1pt}-\hspace{-1pt}\mathbf{H}}^{(T)})\hspace{-1pt} \big).$ To train the proposed CNN structures, we realize $N=100$ scenarios, each with $G=100$ noisy realizations (see Algorithm \[alg:algorithmTraining\]). For each scenario, we generate a channel matrix and a received pilot signal and introduce additive noise on both; the noise levels are defined by SNR$_{\mathbf{H}}$ and SNR$_{\overline{\mathbf{N}}}$, respectively. During training, we use multiple SNR$_{\mathbf{H}}$ and SNR$_{\overline{\mathbf{N}}}$ values to make the networks robust against corrupted input characteristics [@elbirDL_COMML; @elbirQuantizedCNN2019].
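The step-decay learning-rate schedule described above (initial rate $0.0005$, reduced by a factor of $0.9$ every 30 epochs) can be sketched as a small helper; the function name is ours:

```python
# Step-decay learning-rate schedule: base_lr * drop^(epoch // period).
def learning_rate(epoch, base_lr=5e-4, drop=0.9, period=30):
    """Learning rate after the given (0-indexed) training epoch."""
    return base_lr * drop ** (epoch // period)

lr_start = learning_rate(0)    # 0.0005
lr_later = learning_rate(30)   # one drop applied: 0.0005 * 0.9
```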
In particular, we use SNR$_{\overline{\mathbf{N}}} = \{20, 30, 40\}$ dB and SNR$_{\mathbf{H}} =\{15,20,25\}$ dB, where SNR$_{\mathbf{H}} = 20\log_{10}(\frac{|[\mathbf{H}[m]]_{i,j}|^2}{\sigma_{\mathbf{H}}^2})$ and SNR$_{\overline{\mathbf{N}}} = 20\log_{10}(\frac{|[ \mathbf{H}[m] \overline{\mathbf{F}}[m]\overline{\mathbf{S}}[m]]_{i,j}|^2}{\sigma_{\overline{\mathbf{N}}}^2})$. In addition, SNR $=\{-10, 0, 10\}$ dB is selected in the training process. As a result, the sizes of the training data for MC-HBNet, MC-CENet, HBNet and SC-CENet$[m]$ are $MN_\mathrm{R}\times N_\mathrm{T}\times 3 \times 30000 $, $N_\mathrm{R}\times N_\mathrm{T}\times 3 \times 30000 M$, $MN_\mathrm{R}\times N_\mathrm{T}\times 3 \times 30000 $ and $N_\mathrm{R}\times N_\mathrm{T}\times 3 \times 30000 $, respectively. Further, $80\%$ and $20\%$ of the generated data are allocated to the training and validation datasets, respectively. For the prediction process, we conduct $J_T$ Monte Carlo experiments in which the test data are generated separately by adding noise to the received pilot signal at an SNR denoted SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$. This operation corrupts the input data and tests the network against deviations in the input, which resemble the changes in the channel matrix due to the short coherence times of the mm-Wave scenario [@coherenceTimeRef]. Numerical Simulations {#sec:Sim} ===================== We evaluated the performance of the proposed DL frameworks through several experiments. We compared our DL-based hybrid beamforming (hereafter, DLHB) with state-of-the-art hybrid precoding algorithms such as the Gram-Schmidt-orthogonalization-based method (GS-HB) [@alkhateeb2016frequencySelective], the phase-extraction-based method (PE-HB) [@sohrabiOFDM], and another recent DL-based multilayer perceptron (MLP) method [@mimoDeepPrecoderDesign]. As a benchmark, we implemented a fully digital beamformer obtained from the SVD of the channel matrix.
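Taken literally, the SNR definitions above determine the noise variance as $\sigma^2 = P \cdot 10^{-\mathrm{SNR}/20}$ for a signal power $P$; a sketch of this conversion (the helper name is ours):

```python
# Noise variance implied by the SNR definition SNR = 20*log10(P / sigma^2),
# solved for sigma^2. The helper name and example values are ours.
def noise_var(signal_power, snr_db):
    """Return sigma^2 such that 20*log10(signal_power / sigma^2) = snr_db."""
    return signal_power * 10.0 ** (-snr_db / 20.0)

sigma2_H = noise_var(1.0, 20.0)   # unit signal power at SNR_H = 20 dB -> 0.1
```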
We also present the performance of the MO algorithm [@hybridBFAltMin] used for the labels of the hybrid beamforming networks. The MO algorithm constitutes a performance yardstick for DLHB, in the sense that the latter cannot perform better than the MO algorithm because the hybrid beamformers used as labels are obtained from MO itself. Finally, we implemented the spatial frequency CNN (SF-CNN) architecture [@deepCNN_ChannelEstimation] that has recently been proposed for wideband mm-Wave channel estimation, and we compare the performance of our DL-based channel estimation with SF-CNN using the same parameters. We followed the training procedure outlined in Section \[sec:HD\_Design\] with $N_\mathrm{T}=128$ elements, $N_\mathrm{R}=16$ antennas, and $N_\mathrm{RF} = N_\mathrm{S} = 4$ RF chains. Throughout the experiments, unless stated otherwise, we use $M=16$ subcarriers at $f_c = 60$ GHz with $4$ GHz bandwidth, and $L=10$ clusters with $N_\mathrm{sc}=5$ scatterers, with all transmit and receive angles selected uniformly at random from the interval $[-\pi,\pi]$. We selected $\overline{\mathbf{F}}[m]$ and $\overline{\mathbf{W}}[m]$ as the first $M_\mathrm{T}$ columns of an $N_\mathrm{T}\times N_\mathrm{T}$ discrete Fourier transform (DFT) matrix and the first $M_\mathrm{R}$ columns of an $N_\mathrm{R}\times N_\mathrm{R}$ DFT matrix, respectively [@deepCNN_ChannelEstimation], and set $M_\mathrm{T}=128$ and $M_\mathrm{R}=16$. In the prediction stage, the preamble data differ from those of the training stage: we construct $\mathbf{G}[m]$ from (\[receivedSignalPilot\]) and (\[Gm\]) with a completely different realization of the noise $\overline{\mathbf{N}}$ corresponding to SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$.

Spectral efficiency evaluation
------------------------------

Figure \[fig\_SNR\_Rate\] shows the spectral efficiency of the various algorithms for varying test SNR, given SNR$_{\overline{\mathbf{N}}}=20$ dB.
The DLHB techniques, fed with only the received pilot data (i.e., $\mathbf{G}[m]$), outperform GS-HB [@alkhateeb2016frequencySelective] and PE-HB [@sohrabiOFDM], both of which utilize the perfect channel matrix to yield hybrid beamformers. Further, the GS-HB algorithm requires the set of array responses of the received paths, which is difficult to obtain in practice. The MO algorithm is used to obtain the labels of the deep networks for hybrid beamforming, hence the performance of the DL approaches is upper-bounded by the MO algorithm. Note, however, that even the benchmark MO algorithm requires perfect channel information [@hybridBFAltMin]. The gap between the MO algorithm and the DL frameworks is explained by the corruptions in the DL input, which cause deviations from the label data (obtained via MO) at the output regression layer. Our DLHB methods also improve upon other DL-based techniques such as MLP [@mimoDeepPrecoderDesign], which lacks the feature extraction stage provided by the convolutional layers in our networks. Among the DL frameworks, F2 and F3 exhibit superior performance to F1 because the channels estimated by MC-CENet and SC-CENet have higher accuracy; F1, which uses the ICEs directly as input, cannot achieve a similar improvement. While F2 and F3 have similar hybrid beamforming performance, F3 is computationally more complex because of the presence of $M$ CNNs in its channel estimation stage. In order to compare the algorithms with the same input channel data, we use the channel matrix estimate obtained from MC-CENet for MO, GS-HB, PE-HB and MLP when SNR $=0$ dB. Figure \[fig\_CE\_SNR\_N\_Rate\] shows the spectral efficiency so obtained with respect to SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$, which determines the noise added to the received pilot data.
For SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}\geq 0$ dB, the non-DL methods perform rather imperfectly, but their performance is at least similar to the true-channel-matrix case shown in Fig. \[fig\_SNR\_Rate\]. The DL-based techniques compare favorably and exhibit higher tolerance against the corrupted channel data corresponding to SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$. F2 and F3 quickly reach the maximum efficiency once SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$ is increased to $-15$ dB. Again, F1 fares poorly because it is fed directly by the ICEs and lacks a channel estimation network.

Error in channel estimation {#subsec:ch_est}
---------------------------

Figure \[fig\_CE\_SNRonReceivedSignal\] shows the normalized MSE (NMSE) (Fig. \[fig\_CE\_SNRonReceivedSignal\](a)) of the channel estimates and the spectral efficiency (Fig. \[fig\_CE\_SNRonReceivedSignal\](b)) of the DL approaches with respect to SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$ when SNR $=0$ dB. Here, the NMSE is $$\begin{aligned} \textrm{NMSE} = \frac{1}{M J_T } \sum_{m=1}^{M}\sum_{i=1}^{J_T} \frac{|| \mathbf{H}[m] - \hat{\mathbf{H}}_i[m] ||_\mathcal{F}}{|| \mathbf{H}[m] ||_\mathcal{F} } , \end{aligned}$$ where $J_T$ is the number of trials. We observe that all of the DL frameworks improve as SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$ increases, but F3, in particular, surpasses all other methods. We remark that the DLHB approaches outperform the recently proposed SF-CNN because the latter lacks fully connected layers and relies only on several convolutional layers (see Table 1 in [@deepCNN_ChannelEstimation]). While convolutional layers are good at extracting the features inherent in the input, fully connected layers are more efficient at non-linearly mapping the input to the labeled data [@vggRef].
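The NMSE metric defined above can be evaluated directly from stacked channel tensors; the array shapes below are our assumption about how the $M$ subcarriers and $J_T$ trials are organized.

```python
import numpy as np

def nmse(H_true, H_est):
    """NMSE as defined in the text: average over subcarriers m and trials i of
    ||H[m] - H_hat_i[m]||_F / ||H[m]||_F.

    H_true: (M, N_R, N_T) true channels; H_est: (J_T, M, N_R, N_T) estimates.
    """
    J_T, M = H_est.shape[0], H_true.shape[0]
    total = 0.0
    for m in range(M):
        denom = np.linalg.norm(H_true[m])          # ||H[m]||_F
        for i in range(J_T):
            total += np.linalg.norm(H_true[m] - H_est[i, m]) / denom
    return total / (M * J_T)
```

A perfect estimator gives NMSE $=0$, and the trivial all-zero estimate gives NMSE $=1$, which makes the metric easy to sanity-check.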
Further, SF-CNN [@deepCNN_ChannelEstimation] draws on a single SNR$_{\overline{\mathbf{N}}}$ in training and works well only when SNR$_{\overline{\mathbf{N}}}=$ SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$. This is impractical because it requires re-training whenever SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$ changes. No such requirement is imposed on our DLHB method because we use multiple SNR$_{\overline{\mathbf{N}}}$s during the training stage. Again, F3 leverages multiple CNNs to outclass F2. While both have largely similar results in Fig. \[fig\_SNR\_Rate\], we observe from Fig. \[fig\_CE\_SNRonReceivedSignal\](b) that F3 attains higher spectral efficiency even at SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$ as low as $-5$ dB when compared with F1, F2, and MLP. We conclude that, effectively, the channel estimation improvement in F3 also leads to capacity enhancement at very low SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$. Next, Fig. \[fig\_CE\_SNRonReceivedSignal\](b) illustrates that F1 performs well only when SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$ exceeds 15 dB. In summary, F2 yields the highest spectral efficiency with reasonable network complexity. We observe in Fig. \[fig\_CE\_SNRonReceivedSignal\](a) that the performance of the DL-based algorithms maxes out once SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$ reaches $5$ dB. This is because, being biased estimators, deep networks do not provide unlimited accuracy. This problem can be mitigated by increasing the number of units in the various network layers; unfortunately, that may lead to the network memorizing the training data and performing poorly when the test data differ from those seen in training. To balance this trade-off, we used noisy datasets during training so that the network attains reasonable tolerance to corrupted/imperfect inputs.
Although the spectral efficiency of the DLHB frameworks remains largely unchanged at high SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$, it is an improvement over MLP, as can be ascertained from both Fig. \[fig\_SNR\_Rate\] and Fig. \[fig\_CE\_SNRonReceivedSignal\](b).

Effect of noise contamination
-----------------------------

We examined the performance of the DL approaches for corrupted pilot data when SNR $=0$ dB and SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}= 10$ dB. In this experiment, we added noise determined by SNR$_{\overline{\mathbf{S}}-\mathrm{TEST}}$ to the pilot signal matrix $\overline{\mathbf{S}}$ in (\[receivedSignalPilot\]). All networks are trained with $\overline{\mathbf{S}} = \sqrt{P_T} \mathbf{I}_{M_\mathrm{T}}$. Figure \[fig\_CE\_PilotContamination\](a) shows that F3 has lower NMSE than both F2 and SF-CNN. Here, the performance of the algorithms maxes out once SNR$_{\overline{\mathbf{S}}-\mathrm{TEST}}$ is increased to $15$ dB; the channel estimation improvement is very incremental for all deep networks except ICE, where the preamble noise is determined by SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$. The degradation in the accuracy of the DL methods can be explained as in Section \[subsec:ch\_est\]. Nevertheless, the hybrid beamforming performance of F2 and F3 is better than MLP even though the channel estimation improvement is modest. Moreover, the performance of F2 and F3 quickly reaches its best after SNR$_{\overline{\mathbf{S}}-\mathrm{TEST}} = -15$ dB (Fig. \[fig\_CE\_PilotContamination\](b)).

Effect of angle and cluster mismatch
------------------------------------

We imposed further challenges on our techniques by introducing an angle mismatch in the receiver AOA/AOD angles (also used as training data). In the prediction stage, we generated a different channel matrix by inserting an angular mismatch into each of the path angles.
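This mismatch injection can be sketched as follows; the half-wavelength ULA steering-vector geometry and the helper names are our assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def steering_vector(n, theta):
    """ULA steering vector with half-wavelength element spacing (assumed geometry)."""
    k = np.arange(n)
    return np.exp(1j * np.pi * k * np.sin(theta)) / np.sqrt(n)

def mismatched_angles(angles, sigma_deg, rng):
    """Perturb each path angle with zero-mean Gaussian noise of std sigma_deg (degrees),
    mirroring theta_l ~ N(theta_l, sigma_Theta^2)."""
    return angles + np.deg2rad(sigma_deg) * rng.standard_normal(angles.shape)

# L cluster AOAs drawn uniformly from [-pi, pi], then mismatched with sigma = 4 degrees
L = 10
aoa = rng.uniform(-np.pi, np.pi, size=L)
aoa_mis = mismatched_angles(aoa, 4.0, rng)
A_true = np.stack([steering_vector(16, t) for t in aoa])
A_mis = np.stack([steering_vector(16, t) for t in aoa_mis])
```

Rebuilding the channel matrix from the perturbed steering vectors `A_mis` then yields the mismatched test channels used in the prediction stage.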
Figure \[fig\_AngleMismatch\] illustrates the spectral efficiency achieved with respect to the standard deviation of the mismatch angle, $\sigma_\Theta$. For the AOA/AOD angles ${\theta}_l,{\phi}_l$ of the $l$th cluster, the mismatched angles are given by $\widetilde{\theta}_l \sim \mathcal{N}(\theta_l, \sigma_\Theta^2)$ and $\widetilde{\phi}_l \sim \mathcal{N}(\phi_l, \sigma_\Theta^2)$, respectively. For both $L=10$ (Fig. \[fig\_AngleMismatch\]a) and $L=3$ (Fig. \[fig\_AngleMismatch\]b) clusters, the DLHB methods tolerate up to $4^{\circ}$ of angular mismatch, which other learning-based methods such as MLP cannot. As the mismatch grows, it leads to significant deviations in the channel matrix data (arising from the multiplication of the deviated steering vectors in (\[eq:delaydChannelModel\])). We also evaluated the effect of a mismatch in the number of clusters $L$ between training and prediction data. We trained the networks for $L=10$ and $L=5$ with different channel realizations. During testing, we generated a new channel matrix for a different number of clusters. Figures \[fig\_PathMismatch\](a) and (b) illustrate the spectral efficiency for $L=10$ and $L=5$, respectively. F2 and F3 reach their maximum performance when $L$ reaches the value used in training, whereas the performance of F1 and MLP worsens as $L$ increases. Note that in the prediction stage, the first 10 (5) cluster angles, as in Fig. \[fig\_PathMismatch\]a (b), are the same as those used for training; the remaining cluster angles are selected uniformly at random as mentioned earlier. As $L$ increases, the input data becomes “more familiar” to the deep network. The spectral efficiency does not degrade after the addition of randomly generated cluster paths because DLHB designs the hybrid beamformers according to the received paths that are already present in the training data.
As a result, the deep networks provide robust performance even with additional received paths and a channel matrix different from that of the training stage. However, the loss of cluster paths present in the training data would deteriorate the performance, because the input data becomes “unfamiliar” to the deep network and the hybrid beamformer designs suffer as a result.

Computational complexity
------------------------

  MC-HBNet   MC-CENet   SC-CENet$[m]$   HBNet   SF-CNN   MLP
  ---------- ---------- --------------- ------- -------- ------
  45.6       95.3       76.6            43.8    85.1     41.4

  : Training Times for Networks (in Minutes) []{data-label="tableComp_Networks"}

  DLHB-F1   DLHB-F2   DLHB-F3
  --------- --------- ---------
  45.6      138.5     1270.3

  : Training Times for DLHB Frameworks (in Minutes) []{data-label="tableComp_Frameworks"}

  MC-HBNet   MC-CENet   SC-CENet$[m]$   HBNet    SF-CNN   MLP
  ---------- ---------- --------------- -------- -------- --------
  0.0053     0.0056     0.0057          0.0059   0.0057   0.0056

  : Run Times for Networks (in Seconds) []{data-label="tableComp_Networks2"}

  DLHB-F1   DLHB-F2   DLHB-F3   MO      GS-HB    PE-HB
  --------- --------- --------- ------- -------- --------
  0.0053    0.0113    0.0778    3.204   0.0132   0.0152

  : Run Times for Algorithms (in Seconds) []{data-label="tableComp_Frameworks2"}

We assessed the training times of all DLHB frameworks, using the same simulation settings as in Section \[sec:HD\_Design\]. For $M=16$, Tables \[tableComp\_Networks\] and \[tableComp\_Frameworks\] list the training times for each network (Fig. \[fig\_Networks\]) and each DLHB framework (Fig. \[fig\_DLFrameworks\]), respectively. The simple structure and smaller input/output layer sizes of MC-HBNet, HBNet, and MLP give them lower training times than the CE networks. Similarly, F1 is the fastest to train while F3 is the slowest. Note that we trained each SC-CENet separately, one after the other; the training time of F3 would be reduced if all SC-CENet networks were trained jointly in parallel.
Designing hybrid beamformers by solving (\[PrecoderAllCarriers\]) and (\[CombinerOnlyProblemAllSubcarriers\]) with the MO algorithm introduces computational overhead. While this process is tedious, our proposed DLHB incurs this complexity only during training. In the prediction stage, DLHB exhibits far shorter computation times than the other algorithms. For the sake of completeness, Tables \[tableComp\_Networks2\] and \[tableComp\_Frameworks2\] list the prediction-stage computation times of the networks and frameworks, respectively. All networks show similar run times because the deep networks are processed in parallel on GPUs. Among the DLHB frameworks, F1 is the fastest due to its structural simplicity. The MO algorithm takes the longest, since it must solve its inherent optimization problem at run time. While GS-HB and PE-HB are quicker than F3, they are fed with the true channel matrix and lack any channel estimation stage. F2 has slightly shorter execution times than GS-HB and PE-HB and provides more robust performance without requiring the CSI. Hence, we conclude that the proposed DL frameworks are computationally efficient and more tolerant to many different corruptions in the input data.

Summary {#sec:Conc}
=======

We introduced three DL frameworks for joint channel estimation and hybrid beamformer design in wideband mm-Wave massive MIMO systems. Unlike prior works, the proposed DL frameworks do not require knowledge of the perfect CSI to design the hybrid beamformers. We investigated the performance of the DLHB approaches through several numerical simulations and demonstrated that they provide higher spectral efficiency and more tolerance to corrupted channel data than the state-of-the-art. The robust performance results from training the deep networks on several different channel scenarios, each corrupted by synthetic noise, an aspect that has been ignored in earlier works.
We showed that the trained networks provide robust hybrid beamforming even when the received path angles deviate by up to $4^{\circ}$ from the training channel data. This allows the deep networks to operate for sufficiently long periods without re-training, which addresses the common problem of short coherence times in mm-Wave systems. Even in terms of channel estimation accuracy, our DLHB frameworks outperform other DL-based approaches such as SF-CNN. Our experiments show that the channel estimation performance of all DL methods maxes out in the high-SNR$_{\overline{\mathbf{N}}}$ regime, which is explained by the fact that deep networks are biased estimators.

Acknowledgements {#acknowledgements .unnumbered}
================

K. V. M. acknowledges Prof. Robert W. Heath Jr. of The University of Texas at Austin for helpful discussions and suggestions.

[^1]: A. M. E. is with the Department of Electrical and Electronics Engineering, Duzce University, Duzce, Turkey. E-mail: ahmetelbir@duzce.edu.tr, ahmetmelbir@gmail.com.

[^2]: K. V. M. is with The University of Iowa, Iowa City, IA 52242 USA. E-mail: kumarvijay-mishra@uiowa.edu.
Welcome to our website! As we have the ability to list over one million items on our website (our selection changes all of the time), it is not feasible for a company our size to record and playback the descriptions on every item on our website. However, if you are an American with a disability we are here to help you. Please call our disability services phone line at 919-834-0395 during regular business hours and one of our kind and friendly personal shoppers will help you navigate through our website, help conduct advanced searches, help you choose the item you are looking for with the specifications you are seeking, read you the specifications of any item and consult with you about the products themselves. There is no charge for the help of this personal shopper for any American with a disability. Finally, your personal shopper will explain our Privacy Policy and Terms of Service, and help you place an order if you so desire. Graff Wilkinson Supply Co in Raleigh, NC is an authorized dealer of Graff Products. American ingenuity and European craftsmanship are the cornerstones of GRAFF's design commitment to create innovative, cutting-edge mixers (faucets) and plumbing accessories. Supported by over 80 years of plumbing and hardware manufacturing experience, GRAFF's luxury kitchen and bath offerings include a range of contemporary, transitional and traditional products. So if you are looking for Graff products in Raleigh, Durham, Carrboro, Wilson, Chapel Hill, Cary, Apex, Holly Springs, Morrisville, Wilson and Rocky Mount, or if you have any questions about Graff products, please feel free to call us at 919-834-0395 or simply stop by Wilkinson Supply Co at any time and we would be glad to help you. The award-winning Ametis Shower System creates a truly exceptional showering experience. Engineered with many high-tech features, the Ametis Shower System offers a soothing halo effect using LED chromotherapy lighting. 
The LED lighting is positioned within the shower ring to add a new dimension to the column, thanks to indirect lighting, still a seldom-used concept in bathroom design. Aqua-Sense is an innovative, highly technological shower collection for the most demanding tastes. Water, light and sound orchestrated in harmonic balance, allow for a deeper sense of wellness. With various handle options and numerous shower components, Aqua-Sense can deliver a physical and emotional experience. Drawing inspiration from a traditional water pump, G+Design Studio transformed an outdated product into an elegant and modern object for everyday use. In each model, Bali retains its unique ties to both the past and the present. Bollero offers refined looks and understated styles, an inclusive approach to design fulfilling the needs of every user. With intuitive operation and effortless style, Bollero appeals to consumers who are both steadfast about cooking and want to create a beautifully practical space. Traditional meets contemporary with the sophisticated Camden Collection. Designed by GRAFF's G+ Design Studio, the collection's style is transitional and highly unique, allowing it to fit into both traditional and contemporary settings. Blending Victorian and Edwardian aesthetic sensibilities with modern principles and technologies, each fixture exhibits a luxurious artistic quality. Canterbury represents the perfect choice for those who search for a traditional Victorian feel in their bathroom. Whether in the exposed shower version or in the traditional showerhead, this collection is always elegant and distinctive. Available with cross handles, porcelain handles, or metal handles, the whole collection has been developed in several precious and long-lasting finishes. Staying true to its namesake, the Conical collection is composed using fluid lines that broaden at the base and taper into a graceful neck.
With a matching bar faucet, this pair creates a magnetic combination and adds a fresh perspective to the kitchen space. A graceful form with enduring elegance, Corsica withstands trends, exuding a timeless appeal. Within its classical design, Corsica offers a matching side spray to allow water to be used in different ways that suit the user. A tradition-inspired silhouette, Duxbury stands dignified and graceful. With a beveled neck and obtuse base, Duxbury stays true to itself while transforming the kitchen with a comfortable, casual ambiance. The side spray offers even more flexibility in the heart of the home, the kitchen. Architectural details combine with clean lines to form the Finezza collection. A perfect blend of grace and elegance, this full suite offers an exquisite array of choices from faucets to shower elements. As you descend the profile of the Fontaine Collection, this contemporary design broadens in its form. Clearly its shape is meant to delineate a luxurious and clean style. The contour of the handles is a dominant element that embodies this distinct silhouette. The design of the contemporary collection was derived from the stylings of classic motorcycles, fusing an industrial aesthetic with details nostalgic of the all-American icons. Conceptualized by GRAFF's G+Design Studio, the faucet's noteworthy handle, recalling a car steering wheel, offers a unique eclecticism and adaptability to contemporary and technical environments. Each piece is crafted with a focus on engineering and ingenuity, resulting in a minimalist and even composition. Complete with a full line of matching shower components and accessories, Incanto can fulfill the functional and design needs of your next project. With distinctive, designer handles, the Infinity kitchen faucet displays a unique, modern style best suited for contemporary designs. For both cooking and clean-up, effortlessly utilize the side spray for even more functionality within the kitchen.
Grace personified. That's the best way to describe the Lauren collection. The gentle curve of the showerhead and the slim perfectly-appointed handles can take on a stronger look when shown in the gold plated finish. The Lauren Collection knows that there is no need to shout to be heard. Arched to perfection, the sleek crescents of the Luna Collection carry over into the thermostatic shower system. A wall-mounted base with an overhead rainfall setting, the Luna shower may be used every day but will never feel routine. The M-Series thermostatic module lets you create, customize and transform your shower experience. Pushing the boundaries of design and opening up new possibilities in shower functionality, the M-Series minimalistic beauty comes to life in a new and personalized way. M.E. 25 embodies the best that minimalism has to offer. As one of the most versatile collections on the market, personalization of each bathroom is a simple job. ADA compliant, available in four finishes, M.E. 25 is perfectly suited to most bath environments. Never compromise on your decisions. Inspired by the city skyline, the Manhattan collection delivers a streamlined design coupled with exceptional functionality. Crisp lines and simple forms melt together for a sleek appeal - bringing a striking new life to a contemporary or transitional kitchen space. Traditional in nature, when shown in polished chrome the shower shines through a more contemporary light. The perfect complement to a classic bath space, Nantucket speaks to a time when life moved at a slower, more contemplative pace. Practical and stylish, Oscar is perfectly suited to environments with a contemporary taste, giving the kitchen a new allure. Its twist and lock sprayhead and adjustable stream allow a simple and effective use while the pull-out spray and handle have a rubber grip for easier handling. 
A simple divergence in the neck spurs thoughts of the past while traditional styling adds a vintage feel to the Pesaro Collection. Bringing about an air of timelessness, Pesaro represents the perfect complement to an ageless space. The Phase Collection's clean and simple shape gives tribute to the perfect union of sensuality and precision. A contemporary collection with slim lines, it is suited to every type of interior project. It is a timeless creation, a group of elements which fits perfectly in today's ever-changing society. Sometimes simplicity is deceiving. Qubic could not reflect this concept any better. Polished and elegant, it represents an architectural element with a defined contemporary design. Square and cube-shaped, as already announced by its name, Qubic confers strength and stability to the sink and the bathroom as a whole. “Lightness and strength” are the principles that inspired Angeletti Ruzza Design when they created this collection. Its minimal yet sensual design is defined by clean, simple lines that result in a strong visual impact. The series offers a wide range of options to satisfy both practical and aesthetic desires. This contemporary shower stands as the main protagonist in the bathroom and seduces with the purity of its shape. The minimal design consists of a geometrical composition of cubes, rectangles and right angles. Besides the traditional polished chrome and Steelnox® finishes, the version in matte black adds a concrete and substantial look to each item of the Solar collection. Clean lines and smooth surfaces identify this very unique collection. Designed by G+Design Studio, Structure delivers, with its right angles, an idea of artistic perfection. As geometrical as poetry, this shower is the purest expression of engineering brilliance disguised as simplicity. Modern and refined, Targa is outstanding with smooth and slightly convex handles. The arrangement uses a softly bent lever.
All elements of this collection are premium-design products, with a superior level in terms of manufacturing standards. Behind a natural profile, simple and free from excess, as essential as the element from which it takes its name, Terra offers modern cutting-edge solutions that are ecologically sound and look forward to technological progress. The cylindrical, smooth and bright shape, recreates a relaxing atmosphere of harmony in the bathroom, like a journey into nature, where the flow of the water and the fascinating forms, capture you and lull you gently. An art-deco touch makes the Topaz Collection as unique as its jewel shaped handles. The showerhead features an unmistakable hexagonal shape. No redundant detail, no excessive movements: the Topaz Collection is perfect in its modern and timeless finishes. With a zen-inspired appeal, Tranquility speaks a soft, gentle language. Contemporary in its finishes and forms, the Tranquility Collection brings a slight Asian influence to the bath environment. The soft slope of the handle, while recalling a traditional style, makes a more transitional statement. The handle resembles a bamboo shoot and its gentle curve is soothing to the touch. The Vintage Collection draws inspiration from the design of classic fire hose nozzles, pairing a modern spout with bold handles. Each element, from the rounded brim at the spout's top to the undulating handles complete with carefully designed cut outs, resembles the traditional forms of the fire house featured in the historic Chicago Fire Department logo. The elegant styling of the Vista Collection displays fine design and sophistication. As the curvature of the spout reaches its pinnacle, its quiet slope concludes with a modest flair. The simple structure of the two handled base conveys a vintage charm. A style as unique as the place it's named after, Wellington stands proud as it blends a fascination of the past with a fury for the present.
The uncommon shape is made for those who like to deviate from the norm with a tendency towards progression.
Looking Ahead: An Interview with Michael A. Marletta

On January 1, Michael A. Marletta took office as president and CEO of The Scripps Research Institute. Here, he speaks with Mika Ono of News&Views about topics including his background, priorities, and vision for the future.

What led you to The Scripps Research Institute?

Twenty years ago, Scripps offered me a position. I was at the University of Michigan at the time. I thought long and hard about it, and decided I still enjoyed the full spectrum of a complicated university with many thousands of undergraduates. Just over 10 years ago, I moved to UC Berkeley. At Berkeley, I served as chair of the chemistry department for five years and found I enjoyed leading a complex, driven, and diverse group of people. A few times over the years, Richard Lerner [former president] would say, “Look, if you are ready to make a move…” I visited Scripps a number of times, and I’ve always admired the place. So when this opportunity came along, I thought it was a long shot but I applied.

What excites you about the job as president?

I’m excited about the potential of learning how biology works and applying that knowledge to medical problems—and that’s really being excited about the mission of The Scripps Research Institute. Others at Scripps are excited about that, too, and that’s great.

You’ve been here since July. What are some of your first impressions of the institute?

The most encouraging thing I learned is that, in general, the faculty and staff have an intense devotion to this place. I walked into Beckman the other day and struck up a conversation with the security guard at the desk, Marcus Bilbee, and it was clear he cares a lot. When the faculty start to talk about what they have been able to discover here, it’s clear they have an attachment. That has been deeper and more intense than I expected. That is going to help us in the long run. No place is perfect. Scripps has its challenges, areas for improvement.
But if you feel strongly about the place where you work, you are willing to help and be part of the solution.

What are the biggest challenges you see?

There are financial pressures. Scripps is a soft-money institution. One question I could ask in return is, “Why do faculty come to Scripps?” They could stay in a university and, even with no research support, collect nine months of salary for teaching. But for that, their days would be broken up with all kinds of university responsibilities. I did those for many years. Some of those are enjoyable, but sometimes they take you away from research when you would rather sit in a lab talking to students about a particular result. At Scripps, you can come in at the beginning of the day and if somebody finds something unexpected or a big experiment works, you could spend all day thinking about it, talking about it, writing about it… That never happens in a university environment. Faculty come here because they can do unencumbered research. For that, there’s the risk of raising money to fund the research you want to do. Faculty also come and stay because of the infrastructure here—the very best in equipment. So we need to generate resources to keep that infrastructure at the highest level. We need to generate resources to recruit the next generation of new faculty. We need to have resources to keep our faculty who will get offers from other places. While there are different issues in Florida, in La Jolla the financial pressures are significant. We have had long-time relationships with “big pharma” that are not going to be repeated in the current environment. Florida is still in the growth phase, still with money from the State of Florida, so there is empty space because we are still recruiting principal investigators. We’re on track to meet the Florida benchmarks.
All of this boils down to the fact that the biggest issue facing us is how to move forward in a situation where the federal government will not be the partner it has been in the past. That will put even more pressure on us to raise internal funds. We’re looking at a combination of philanthropy and a return on our investment in intellectual property [IP]. IP is going to ramp up. Not having a first-rights agreement as we have had in the past will make us look farther into the future for financial benefit from our IP, but we will own all of what we discover here and that should be a direct benefit to us. Could you talk a little more about philanthropy? Why should people give here versus elsewhere? People give because there is something about what we are doing that strikes a chord in them. Each of us can rattle off parents, siblings, aunts, uncles, cousins who suffered from some disease. It’s just inevitable. When disease strikes, we often like to do something about it. It’s one of the common aspects of private giving here. Donors hear about what we’re doing and want to support it. Of course, we have to tell them what we’re doing, and I’m spending some of my time doing that. Sometimes what strikes a chord is an individual they meet, say a faculty member working on a particular disease. When they make a contribution, they have the opportunity to see that person be successful, working on something they believe in or a disease they want to see wiped out. So it’s often deeply personal. That’s why philanthropy is all about relationships—listening to what potential donors find interesting and then showing them we have the potential to make a major discovery that they can be a part of. Isn’t basic research somewhat of a double-edged sword—you’re years away from medically applied research, although the fundamental discovery may ultimately have a larger impact? I actually don’t agree with that. 
Let’s use the recent example of Jeff Kelly’s tafamidis [now approved in Europe for the treatment of familial amyloid polyneuropathy]. At the heart of it, I’d say Jeff probably has two passions. One is to come up with a drug that helps treat disease. He’s just done that. But the other passion is for the science itself. So Jeff’s driving force was understanding how proteins fold, and what happens when they misfold—very basic, fundamental work, but also necessary to make a drug. Benlysta® is the only treatment for lupus, a very complicated disease. Richard Lerner’s antibodies are the technology that drug was based on. There was Humira® before that. Humira® will soon be the largest selling drug in the world. To me, Scripps represents the very best in fundamental research coupled with looking outward for the translational piece, which takes fundamental discovery and turns it into drugs. When I was at Michigan, in the medical school’s biological chemistry department, the clinicians would say, “You’re so far away from [the clinic].” It appeared more like that then, because you made a fundamental discovery, you published it, and that was more or less the end of it. But at Scripps, it’s not just about basic discovery, but also what you can do with it. That’s different. I tell donors what our fundamental discoveries can do. I tell them we are about discovery—that’s what we do—but we don’t let it rest there, and we’ve got examples to show it. Here, basic research and potential applications go hand-in-hand. Your own work has bridged fundamental discovery and medical application. We started a company. My father said I finally must have done something important! We spent years trying to understand the remarkable finding that a molecule such as nitric oxide, this toxic molecule, is regulating blood pressure and is involved in learning, in memory. Everything in moderation; a glass of wine is good, 10 is probably bad. It’s the same with a molecule like nitric oxide. 
Biology has learned how to handle it. It’s extremely toxic, yet we’re making it and using it in some important physiological processes. It just turns out we don’t make very much of it. Over the years, we asked questions about how biology handles such a toxic molecule to carry out these important physiological processes. We then started to ask how biology tells the difference between nitric oxide, carbon monoxide, and oxygen. Biology has to look at all three and tell the difference from a chemical perspective, and it’s not so easy. In figuring that out, we realized that we could use our fundamental understanding to deliver nitric oxide or carbon monoxide or oxygen to particular tissues, and there are good, practical reasons for wanting to do all three. So we wrote some patents, and there’s a little company [Omniox] that’s operating right now in San Francisco. Hopefully, it will be successful. How did you get interested in science in the first place? I have a 16-year-old. I watched him when he was a baby. Every kid is a scientist. They are all trying to figure out the world—whether they are lying on the floor and whacking at something or trying to figure out where the ball is going to go when it rolls across the floor. I found it interesting to watch him. I thought about myself and from my earliest memories, I always wanted to know how things work. But the catalyzing moment was October 4, 1957, when the Soviets launched Sputnik. I was six, so I was too young to be afraid. This was in upstate New York. It was pretty cold as I remember it, an October night, and I put on a heavy jacket and went out and stood on the front lawn of the little house we lived in and watched Sputnik fly over. Even though I didn’t understand there was engineering and science at the time, I became convinced that whatever that was I wanted to be a part of it. Christmas was right around the corner, so I asked for a telescope. 
Since I was six, I guess it would have been Santa who brought it to me. Then the next year, I asked for a microscope and I got that. And the next year I asked for a chemistry set, a Gilbert chemistry set, and I didn’t get that. My father was worried I was going to blow up the house, although there was nothing you could blow up with a Gilbert chemistry set. But by this time, it was maybe 1960 and you could still buy a lot of chemicals, which I did because I had a paper route. I built my own lab and I almost did blow up the house… I was always fascinated by the periodic table and the idea that everything on this planet was composed of those elements, and you could mix and match them to make things already in nature or make new things with properties nobody expected. I thought that was it. Then I took a biology course and realized that the master chemist is biology. Since then I’ve been walking between the two worlds. Is it too early to ask you your vision of Scripps? It’s a little early, but people have asked. As I mentioned, you have to have the best infrastructure possible. You’ve got really smart people who already have great ideas. You need to recognize talent, keep the best talent, and then basically get out of the way. That said, I think that it would be important for Scripps to engage with serious issues in human health. I would like us to work on some big problems, like the combination of obesity and metabolic diseases like diabetes. We already have people working in these areas, but there is some opportunity. As enzymologists—I would describe myself as an enzymologist—we study one enzyme in a test tube, one at a time. We understand a lot, but when you put that one enzyme with a thousand others all working together in us, it doesn’t quite work like it works in a test tube. So, in fact, we’re talking about metabolism, an old moniker. 
When you think about the spectrum of metabolic diseases, they include not only diabetes and aspects of obesity, but also cancer, which is now being reinterpreted as something called the Warburg effect—oxygen consumption by cancer cells. I would like us to be as good at metabolomics as we are at proteomics—where we are one of the best in the world due to our investment in talent and infrastructure. With infrastructure in metabolomics, not only can our faculty take advantage of these resources, we’ll also be able to tackle diseases that confront the Western world. If we don’t solve those problems, as a society we’re going to have an albatross around our neck. We need to understand the processes, and we need to do something about those diseases. So, I see investment in that kind of infrastructure and then doing what I do best, which is 1) taking advantage of it in my own research, and 2) getting out of the way. Are there any other messages you want to get out there to employees, to donors, to faculty? I mostly want people to know I’m excited. The more I learn about Scripps, the more excited I am. Also, I’m going to work hard to make sure that Scripps remains the kind of institution that it has been and moves forward with new discoveries, but I need everybody’s help—faculty and staff—everybody. Send comments to: press@scripps.edu Michael A. Marletta took office as president and CEO of The Scripps Research Institute on January 1. (Photo by Dave Freeman, BioMedical Graphics.)
June 26, 2008 Justice Scalia sells out felon gun rights, but on what basis exactly? Here are sets of quotes from the majority opinion in Heller that I have a hard time adding up: We start therefore with a strong presumption that the Second Amendment right is exercised individually and belongs to all Americans. (Slip op. at 10, emphasis added.) It was plainly the understanding in the post-Civil War Congress that the Second Amendment protected an individual right to use arms for self-defense. (Slip op. at 44, emphasis added.) As the quotations earlier in this opinion demonstrate, the inherent right of self-defense has been central to the Second Amendment right. The [DC] handgun ban amounts to a prohibition of an entire class of “arms” that is overwhelmingly chosen by American society for that lawful purpose. The prohibition extends, moreover, to the home, where the need for defense of self, family, and property is most acute. Under any of the standards of scrutiny that we have applied to enumerated constitutional rights, banning from the home “the most preferred firearm in the nation to ‘keep’ and use for protection of one’s home and family,” 478 F. 3d, at 400, would fail constitutional muster. (Slip op. at 56-57, emphasis added.) A broader point about the laws that JUSTICE BREYER cites: All of them punished the discharge (or loading) of guns with a small fine and forfeiture of the weapon (or in a few cases a very brief stay in the local jail), not with significant criminal penalties.... [W]e do not think that a law imposing a 5-shilling fine and forfeiture of the gun would have prevented a person in the founding era from using a gun to protect himself or his family from violence, or that if he did so the law would be enforced against him. The District law, by contrast, far from imposing a minor fine, threatens citizens with a year in prison (five years for a second violation) for even obtaining a gun in the first place. (Slip op. at 61-62, emphasis added.) 
Summing up, it would seem that the majority holds that, pursuant to the Second Amendment, "all Americans" have an "individual right to use arms for self-defense." And, the Second Amendment would be most problematically transgressed when this right is severely restricted in the "home, where the need for defense of self, family, and property is most acute" through the threat of years in prison rather than just a minor fine. As regular readers know, I think all these assertions add up to making constitutionally questionable the threat of severe sentences on felons in possession of firearms. After all, felons are Americans with a need to protect themselves and their families through keeping guns in their home. And yet, all felons (even non-violent ones like Lewis Libby and Martha Stewart) face the threat of 10 years in federal prison for just possessing a firearm. Nevertheless, the majority opinion boldly and baldly asserts that "nothing in our opinion should be taken to cast doubt on longstanding prohibitions on the possession of firearms by felons and the mentally ill." (Slip op. at 54.) Really? How can that (unjustified and unsupported) dicta be squared with all that has been said before? To his credit, Justice Stevens properly asserts in this context that felons are not categorically excluded from exercising First and Fourth Amendment rights and thus the majority "offers no way to harmonize its conflicting pronouncements." Time and litigation will tell if holdings or dicta end up dominating the application of the Second Amendment in future cases. Comments Doug, since time immemorial, criminals have lost certain rights. It's that simple. The right to vote is precious, and it can be taken away. Posted by: federalist | Jun 26, 2008 11:26:09 AM So federalist, is it your position that a state can someone who commits perjury from practicing his or her religion? We've all read the trite point that criminals may lost certain rights. 
The question becomes, then, whether they may arbitrarily lose rights that have no connection with what they've done. Posted by: | Jun 26, 2008 11:36:21 AM Let me try again: So federalist, is it your position that a state can ban someone who commits perjury from practicing his or her religion? We've all read the trite point that criminals may lose certain rights. The question becomes whether they may arbitrarily lose rights that have no connection with what they've done. Posted by: | Jun 26, 2008 11:38:02 AM Felons also retain their rights to be free from having troops quartered upon them in peacetime without their consent, and in wartime, except as provided by law, under the Third Amendment; their rights to the due process of law, and freedom from compulsory self-incrimination, under the Fifth Amendment; their rights to have the assistance of counsel, to confront the witnesses against them, and to have their fate in a criminal case determined by a jury, under the Sixth Amendment; their right to a trial by jury in Federal court in an action at common law, where the amount in controversy exceeds $20, under the Seventh Amendment; and their rights to be free from excessive fines, from cruel and unusual punishments, and from having to post excessive bail, under the Eighth Amendment. The Fourteenth Amendment prohibits states from abridging the privileges and immunities of citizens of the United States. Heller does not seem in any way to turn upon citizenship, and thus does not make prohibition possible if one is an alien. Therefore, what is there about the Second Amendment, or about its right to possess weapons one might reasonably use in self-defense (which seems to be the issue the Court would like us to focus on in Heller), that excludes felons from the protection of this provision of the Bill of Rights? Posted by: Greg Jones | Jun 26, 2008 11:45:59 AM There are certain rights that cannot be taken away. 
The "inherent right of self-defense" in one's home may be one of them. There are two separate issues/groups in this debate. First, there are violent felons and non-violent felons (and by non-violent, I mean truly non-violent like perjury, insider trading, tax violations, environmental violations, etc.) Second, there is "inside the home" versus "outside the home". One could picture a grid with two columns and two rows, each box depicting the validity of taking away the right. In my opinion, non-violent felons, at a minimum, must retain their inherent right to self defense in the home. Outside of the home is not as certain, and this is where the "standard of scrutiny" is important. Violent felons should also maintain their inherent right to self defense in the home. However, IMO, they can lose it outside of the home, where the right is not "most acute." Posted by: DEJ | Jun 26, 2008 11:47:04 AM They did not use language "for example" or "such as" felons or mentally ill. The majority in SCOTUS used clear language on who may be restricted. Lautenberg seems to be in jeopardy, as well as the California "ugly gun" bans. Extremes of licensing requirements for purchasing/possessing a firearm are also out the door. Posted by: Mike | Jun 26, 2008 11:50:27 AM I wonder if we are jumping to conclusions. Did Scalia (in what seems to amount to dicta) foreclose the possibility that some firearm restrictions on felons are unconstitutional (particularly given the reliance on self-defense)? Perhaps he only means they are not necessarily unreasonable...there may be some leeway to require the feds to insert a "reasonable component" to felon firearm bans. (For instance, creating a reasonable application process for non-violent offenders to regain the right, putting time limits on the ban, or requiring the government to exercise reasonable and non-arbitrary discretion in deciding whether a certain convicted felon should be allowed to possess a firearm). 
Such reasonable restrictions, IMHO, would be more reasonable than many restrictions placed on convicted sex offenders. I would not foreclose the possibility of someone convicted of a minor non-violent felony decades ago successfully challenging the ban. (Perhaps he could be an otherwise upstanding citizen, have a family of four, and live in a dangerous neighborhood too). Posted by: Nathan | Jun 26, 2008 12:00:39 PM Based on an initial reading of the majority opinion, the real "Heller challenge" lies in challenging a charge under 18 USC 924(c)(1)(A)(i). Why should someone (who is not otherwise a prohibited person) involved in a drug conspiracy face a separate 5 year mandatory minimum for exercising his fundamental right to self defense? Posted by: Anon. Law Clerk | Jun 26, 2008 12:18:40 PM The state is entitled to opine that someone who commits a felony is more likely to use the weapon for illegal purposes than for (lawful) self-defense purposes. Posted by: Steve | Jun 26, 2008 12:19:14 PM Mike - did you actually read the Court's opinion? Scalia specifically states that governments are allowed to require a license for the purchase or possession of a gun as long as licenses are not being denied for "arbitrary and capricious" reasons. And that was not dicta, but instead it was part of the holding (that DC would be required to grant Mr. Heller a license if he met their criteria). Make no mistake - this sort of thing is exactly why the NRA never wanted this case before the Supreme Court. Far from saying existing gun control laws are in jeopardy, the majority at the end of its opinion actually seems to be calling for more gun control laws to be passed! I wonder which one of the 5 justices this was needed for to get them to vote that a total ban was out. I repeatedly predicted that this opinion would ultimately be a big zero (and maybe even a hidden victory for gun control advocates because it would legitimize most forms of gun control short of an outright ban). 
I think that is exactly what happened. It's really simple. It used to be that the ATF could reinstate firearm privileges, until Congress stopped funding the program, I believe in 1992. I think it might be time to start funding that program again. The people who apply would most likely be reformed felons and would think twice before committing another felony. Posted by: noway | Jun 26, 2008 12:28:25 PM I'm not surprised at all that the felon in possession laws would not be affected, but are the federal laws banning felons from owning firearms subject to amendment now? I mean there are a lot of nonviolent offenders with old felony convictions, and I mean some are very old, that have straightened out their lives and should not be subject to a blanket federal prohibition. Having said that and taking note of 18 U.S.C. 921 (a) (20), I'm glad I live in Louisiana where felons can own firearms if they keep out of trouble for 10 years from the date they complete their sentence, and in some cases of nonviolent, non-enumerated offenses they can get the right back upon completion of their sentence. See RS 14:95.1, Article 1 Section 20 of the Louisiana constitution and United States v. Dupaquier. See also the Louisiana first offender pardon statute. Posted by: Paul | Jun 26, 2008 12:31:47 PM I think the greater concern with the opinion is regulation of types of weapons. Scalia recognizes striking down the handgun ban is in tension with the prefatory clause. The majority provides absolutely no guidance on what types of guns may be regulated and what types may not. You may disagree with the majority's statement that this opinion does not disturb the present rulings upholding felon in possession laws, but the Court is fairly clear on that issue. They didn't say "we express no opinion," but said "no doubt should be cast." Rightly or wrongly, five Supreme Court justices have said (albeit in dicta) that the felon in possession cases are still good law notwithstanding Heller. 
From page 55 of the slip opinion: We also recognize another important limitation on the right to keep and carry arms. Miller said, as we have explained, that the sorts of weapons protected were those “in common use at the time.” But arguably one of the reasons some of these guns are not "in common use" is because there are federal and state regulations which ban or severely limit their purchase. Apparently, the government may ban some guns but not others, because they aren't in common use because they are illegal. In other words, the guns may be banned because they have been banned, a wholly unsatisfactory conclusion. The court does say that the prohibition of dangerous and unusual weapons is permissible, but provides no guidance on what that means. Unusual suffers from the same problem as "common use." If unusual were enough, D.C. might have claimed pistols were unusual in D.C., and therefore might be banned because they have been banned. Dangerous might be workable, but is problematic. Consider for instance the sawed-off shotgun the Supreme Court considered in the Miller case. The sawed-off shotgun is not, in the ordinary, plain sense of the word, more dangerous. It does not fire faster; the shorter barrel length decreases the energy of the pellets and diminishes the gun's effective range. But it does make the gun more concealable and so more dangerous in the sense that it is more adaptable to illicit use. But how about a high capacity magazine for a pistol? Does that make a pistol more dangerous? Dangerous might work for some obvious weapons (e.g. an AK-47 is more dangerous than a 9mm semi-automatic pistol, but how dangerous is dangerous enough to regulate?). But there are significant problems with its application. The majority seems to indicate the fully automatic versions of the M-16 may be banned, but does that mean the Government can ban the civilian semi-automatic versions? 
I'm not sure that the civilian version is substantially less dangerous within the meaning of the word. It's more common, but again that may be because federal law has restricted the purchase of fully automatic weapons for some time. Posted by: NK | Jun 26, 2008 12:41:08 PM Mr. Heller is/was a resident of DC with no right to vote or representation in Congress and as a consequence had to sue in federal court over a city ordinance? It seems to me the benefits/damage of this case are very limited. Did the supremes select this case because it was narrow to begin with? Posted by: John Neff | Jun 26, 2008 12:54:29 PM I wonder if we are jumping to conclusions. Did Scalia (in what seems to amount to dicta) foreclose the possibility that some firearm restrictions on felons are unconstitutional (particularly given the reliance on self-defense)? Perhaps he only means they are not necessarily unreasonable...there may be some leeway to require the feds to insert a "reasonable component" to felon firearm bans. (For instance, creating a reasonable application process for non-violent offenders to regain the right, putting time limits on the ban, or requiring the government to exercise reasonable and non-arbitrary discretion in deciding whether a certain convicted felon should be allowed to possess a firearm). Such reasonable restrictions, IMHO, would be more reasonable than many restrictions placed on convicted sex offenders. I would not foreclose the possibility of someone convicted of a minor non-violent felony decades ago successfully challenging the ban. (Perhaps he could be an otherwise upstanding citizen, have a family of four, and live in a dangerous neighborhood too). What I'm thinking is that Congress may now amend the federal firearms law, specifically 922(g) and 921(a)(20), so that nonviolent felons can enjoy their Second Amendment right without having to qualify under such strict exemptions as laid out in 921(a)(20). 
Posted by: Paul | Jun 26, 2008 12:58:05 PM Hi, I'm a patriotic blogger, 7th generation American. I'm named for my 7th Great-Grandfather, so really more than 7 generations, but he was riding horses in the revolutionary war to the Continental Congress with important documents and such, so they made the statue of him. Pigeons like it. :) Scalia is in felony violation of the USA Patriot Act as he persists in maintaining the false information on terrorism he provided in his recent supreme court decision regarding the Guantanamo and other terrorist detainees. So, how would this square with the (moderately clear) right to vote and losing that right upon the conviction of a felony? Is that also an area in which you would argue that we have a defect in constitutional reasoning, Prof. Berman? Posted by: Jonathan | Jun 27, 2008 1:05:55 PM He wouldn't argue that because he's probably aware that Section 2 of the 14th Amendment provides support for felon-disenfranchisement. Posted by: | Jun 27, 2008 1:34:41 PM DAB, I am a Colorado citizen that was wrongly convicted. Not my opinion DAB, buddy, I have court transcripts that show it! To the rest: Whoever thought that all Americans have rights clearly has to be stoned. You lose rights at the whim of the system, even if you're not a criminal. They can strip you of more than just the right to defend yourself, they can (and have) strip you of the right to a fair trial. They can strip you of your freedom of religion. The ACLU fought and won a battle over a law that stopped felons from voting. Now they can vote. But the ACLU won't touch the right to defend yourself. We have to do that ourselves. Posted by: Colorado citizen | Jun 28, 2008 6:33:43 PM The now-unconstitutional sentencing guidelines already distinguish between felons. Firearm possession by a felon not convicted of a violent or drug crime is an automatic level 6 as long as the firearm is possessed for a lawful purpose. 
It seems a small leap for the courts to allow non-violent felons to possess firearms... Posted by: Matt | Jul 3, 2008 6:16:35 PM Zack - Had you also read the opinion, Scalia quite clearly required a license to be issued to Mr. Heller if he was not otherwise barred from one - and Scalia explicitly stated weapons are prohibited to felons and the mentally ill. He wrote that twice in his opinion. Clearly, there is no licensing requirement past that check that would pass his version of "constitutional muster". "Arbitrary and capricious" would be requiring training, the location of your home, your reason for purchase, etc. Fees imposed as a condition of exercising a constitutional right are already unconstitutional. Posted by: Mike | Jul 3, 2008 11:35:59 PM I'm a felon. When I took my Alford Plea to an attempted charge, my alleged crime wasn't considered violent. Because of a change in wording, I'm now, many years later, considered a violent offender. So according to many posters here, before the courts changed my status, it would have been A-OK for me to protect my family, myself, and my business with a firearm, but now that I'm a "violent" offender, I can no longer be trusted not to "go postal" and gun down my fellow citizens? Need I remind you that, were I to decide to break the law by killing people, I'd not balk at the idea of illegally possessing a firearm? DAB: I'm just a second-class citizen Posted by: Never been to Whitechapel | Jul 23, 2008 8:45:01 AM i am a convicted felony a low grade the lowest non-violent i cant protect my family plus i was convicted of a non violent crime convicted for protection because of this i am now a convicted felont 2 years but served 14 months for good behavior. the people i tried to help well they had to go to court as a person that had the crime forced on them numerous time the court found the people nothing wrong they had the proff they said because i intervene justice was served. 
homemaker disable disable at the time Posted by: | Jul 28, 2008 7:33:08 PM How would the court ruling affect my one and only run-in with the law? They confiscated my rifle and made me forfeit it. What about a drug user (marijuana) in possession of a firearm? Legal rifle found in home safe with a small bag of pot. I have federal sentencing in court next month. Possible 10 years. Is this reason for appeal? Posted by: ed | Jul 28, 2008 9:52:47 PM I've become more in touch with this issue over the past couple of years. I'm 52, and at age 17 in 1973 I was arrested and charged with the offense of burglary. Three of us broke into a doctor's office. Myself and one of the others were caught. While out on bond I was arrested (not convicted) for possession of a couple of joints. The arrest caused me to be sentenced to 2 years in prison. I served my time and completed parole. At that time I was not particularly concerned about gun rights. I worked and have never been convicted of a crime except for a DUI in 1985. After the DUI I woke up, went to college, graduated with a MSW, and have worked as a clinical social worker in addictions, psychiatry, a college counseling center, and as a family therapist working with juveniles and their parents who are at high risk for criminal convictions. I'm married with no children and own my home in a suburb. We have had a few "strange" visitors, but I've not been concerned with the need for a firearm to protect our home. I work with some very rough families and in some tough neighborhoods where self-defense awareness is a necessity. Now the best I can do is duck or accelerate the gas pedal. About 2 years ago I realized that I'm not in the shape I was in the past, and if someone attempted to harm me or my family the likelihood of being able to defend us was pretty poor. I applied for a pardon but was denied because the governor is "very conservative". The prosecutor of my case supported my application for the pardon. 
I've been researching and educating myself about the felon-with-firearm issue, and my opinion is that the underlying intent of this was to remove access to firearms in the urban ghettos. I consider it to be discriminatory because the law came about during the Civil Rights era and the rioting of the late 60's. (BTW, I'm white). I'm adamant that restricting a felon's right to own firearms discriminates against them. The 2nd Amendment is the only right that is entirely restricted for a crime. I wonder if those who use religion as an excuse for their crimes should have their religious freedom restricted. Does an "inciting to riot" or "threat to harm" offense cause someone to lose their right to free speech? It doesn't. I wish this law would be overturned, but I know it's anathema to many that a felon should have a 2nd Amendment right. My best hope is a pardon, which is not likely unless I can make a large donation. Posted by: Bruce | Aug 6, 2008 8:37:41 PM Do like I do. I'm a convicted felon but I have firearms. Some laws you just ignore. I'd rather be jailed for shooting someone protecting my home than be 6 ft. under. Just be careful. Posted by: Glenn | Aug 15, 2008 6:11:41 PM Question: Can a member of a felon's family have a gun in the same home as the felon? If not, the member of that family loses their rights. Say man and wife: husband is a felon, wife a non-felon. Posted by: question | Aug 17, 2008 10:21:32 PM My son is a convicted felon and was charged with possession of a firearm, and the bad part is they didn't find a weapon on his person. They found it on the ground in the dark and said it was his. Mind you, this was a crime area where anyone could have put it there because police were around. 
They didn't do ballistics on the weapon to see if his fingerprints were on it. They also said he stole it from a police officer in Texas, then turned around and said they made a mistake and it was bought at auction. Someone is not telling the truth. They went by the word of the police, but they lie too; they are just human like us. Can anyone tell me what is the procedure for convicting a felon with a firearm? A very mad mom at our justice system. Posted by: Tanya | Aug 21, 2008 11:53:45 PM This is Tanya again; email me at angleeyes45@aol.com Posted by: Tanya | Aug 21, 2008 11:57:46 PM Louisiana law: what is the procedure for conviction? Posted by: Tanya | Aug 22, 2008 12:00:40 AM The United States justice system is just as bad as its health care system. When presidents and politicians can lie to the people on prime-time TV, what hope do we have that a normal citizen has a true chance of justice? Unless you're rich and famous or connected, you are out of luck. There is no hope for justice in America. Most people don't see the violations of rights and written law until it's too late. Like me: I thought a trial by a jury of your peers was just that, "a trial by a jury of your peers." Well, it's not. It's a trial by a jury of prosecutor-supported elite citizens supporting the desired outcome. Posted by: anon | Aug 30, 2008 5:03:12 PM Obviously the ban on felons possessing handguns is a hat-tip by Scalia to the "evolving standards" / "living Constitution" wing of the Court. [[I'm being tongue-in-cheek: I'm sure there is a good originalist argument against felons possessing handguns, but there's no consistent way for the minority in Heller to protest Scalia's conclusion using their own methodology.]] Posted by: AndyK | Aug 31, 2008 2:59:44 PM Scalia and this gang of co-conspirators didn't interpret the Second Amendment. They just patched up a "solution" to please some but essentially left the status quo unchanged.
If Scalia and his cronies had had basic independent judicial reasoning, they would have recognized that Congress and the States have no constitutional authority to "regulate" the Second Amendment, because the Amendment grants no such right to either. In fact, nothing in the Second Amendment states that "Congress and the States, or Congress or the States, SHALL have power to enforce this Act by Appropriate Legislation". This is pure tyranny no matter how Scalia puts it. As for the so-called "Interstate Commerce Clause", this act not only didn't give Congress any right to "criminalize" interstate commerce (only to regulate it), but it was foreclosed, becoming null and void, by the passage of the Amendment to the Constitution which clarified the limited power of Congress while expanding the greater power of the States and the People of every State. Scalia's mind is degenerating rapidly, and he has limited legal knowledge of what's really happening. When he says that "nothing in our opinion should be taken to cast doubt on longstanding prohibitions on the possession of firearms by felons and the mentally ill," he is not telling the truth. What he says only applies to people convicted of federal felonies and not to those convicted of state crimes, who can by application of state laws easily regain the right to vote, to run for office, and to serve on a jury, thereby re-acquiring the right to possess firearms and even satisfying federal rules. I believe that now it's time for a person with a federal felony to come forward and generate another review of this case. Otherwise, at some point, possessing anything manufactured in another state could be a felony for people already convicted of a prior felony!!! Posted by: Allisio Rex | Sep 19, 2008 2:48:40 PM Job: history researcher. "Law enforcement agencies and personnel have no duty to protect individuals from the criminal acts of others; instead their duty is to preserve the peace and arrest law breakers for the protection of the general public." (Lynch v.
NC Dept. Justice) ". . . a government and its agents are under no general duty to provide public services, such as police protection, to any particular individual citizen."--Warren v. District of Columbia, 444 A.2d 1 (D.C. App. 1981) If there is an individual right to self-protection, and there is no right to police protection, then how can the felon exercise the right to self-protection where police protection is denied? Do they become wards of the state, given a right to police protection, or does the state relinquish their right to self-protection, while simultaneously maintaining its power to not protect? There is a moral conundrum there. Are they less worthy of protection, or self-protection? Further, where they are pushed to the edges of society, is it any better for them, where they live in a more dangerous environment, to lack the power as well as the right to self-protection? If they can be punished for exercising the right to self-protection, does it still remain a right? My understanding, which is not from personal experience at all but just from talking with judges who handle these, is that they issue hundreds or thousands of these, and that all the movants have to do is come in and say, basically, "I'm afraid of my boyfriend or my husband," and they get this order. Now, is there more required than that? The movant was found guilty based on no proof required, per the above statement by federal court judge Shubb. Court record transcript of Jan 7th 2005, pages 155 to 167; Jan 7th 2005, page 169: Court: "But what is your understanding of what has to be proven at the hearing, other than the fact that the woman is afraid of him?" Attorney Smith (government): "The statute does not speak to that; it gives the judge discretion." Posted by: larry | Oct 10, 2008 6:27:15 PM I am a convicted felon. Student Financial Aid Fraud. Can anyone make a legitimate argument as to how this affects my ability to safely and legally operate a firearm?
I was convicted of unauthorized use of a motor vehicle at age 18 in 1966 in Virginia. Recently the Governor of Virginia restored all my rights except the right to own a gun. Virginia says get it from my state of residence. My state of residence (La.) says get it from Virginia. Now I live and work in a hurricane-prone area and need to be on the job as essential personnel (federal) for hurricanes. Post-Katrina it was very scary to be in New Orleans with no self-defense. If anyone reading this decides they need a guinea pig to challenge this unfair denial of non-violent felons' right to own a gun, please contact me. jamesmetairie@yahoo.com Posted by: jack | Nov 24, 2008 12:05:48 PM I'm a retired United States merchant marine with 28 years' service, presently a federal blue-collar employee with over 12 years' service, aged 60, and I still can't have a gun for home defense, etc. Posted by: jack | Nov 24, 2008 12:18:40 PM I too am a convicted felon. The image that most people think of when someone says the term "convicted felon" is some nefarious person lurking in the shadows waiting to do evil things to others. But the reality is that as the web of laws grows larger and larger, so does the corral of malum prohibitum laws and victimless crimes for which we may all be prosecuted. In the early days, being a member of Congress was only a part-time job! I was convicted my second semester of college for possessing a small amount of marijuana with the intent to sell it to my peers. I spent four months incarcerated. A few years earlier, when I was 16, I was convicted of burglarizing a house I had never set foot in. I was guilty because I was an unwitting accomplice and told the authorities what had happened. Needless to say, the sum of my experiences on the wrong side of the justice system has been illuminating. The purpose of my post isn't to rant about the lack of justice in the system.
Rather, I would like to shed some light on why the notion of a blanket ban on firearm possession by felons is a violation of their constitutional rights, which may safely be restored to them. The good news is that I got my life together. I graduated university with honors in the field of business and was hired by a large Fortune 500 company before ultimately pursuing a career in real estate. I now own a real estate brokerage and a manufactured home dealership. The process for obtaining these licenses took me 5 years while the state contemplated whether or not I was fit to oversee other licensees and ensure the public's best interest. But I'm a patient person and at last was allowed to operate a brokerage & dealership. With a little extra paperwork, I can meander through most encounters with the government. I have a steady girlfriend and am held in high regard by my peers, who are all accomplished professionals. My actions are those of an upstanding citizen. When Katrina hit, I watched the looting and the complete chaos erupt. Between Katrina and the attacks on 9-11, I began to realize that it is inevitable that events will take place in our lives for which our government cannot be there for us every time. However, I can be there every time, and therefore the ownership of firearms would be a reasonable exercise of one's 2nd Amendment rights. Only, I don't have them anymore. So should I be predisposed to becoming the victim of a violent crime because of a couple of past convictions dating back over 14 years? I've kept my nose clean and even excelled in life. Moreover, had all this not taken place in Kalifornia, but rather another state, I would once again be able to own firearms. So Americans' right to bear arms does not seem equally protected under the law. Though it would be very easy to circumvent the law and possess a firearm, I have chosen to do the right thing. A pardon looks unlikely according to my former gun-rights attorney.
I’ve even looked into joining the military, as they have expressed the potential for restoring my rights in exchange for service. I’m keeping that option on the table, but at this stage of my life, and being the only kin responsible for the care of my 91-year-old grandmother, I can’t help but wonder if there isn’t another way… I am currently looking for a solution to my legal issues. What must I do? What challenges must I overcome? Who can help? Posted by: Flavio | Dec 1, 2008 4:12:52 AM Was there a racist reason the Gun Control Act of 1968 was passed? Do the Black Panthers ring a bell? What about drug laws: was there a racist element in passing those? It's a shame that the war on drugs and denying felons the right to protect themselves can come from laws that have an element of racism to them. If you don't know what I'm talking about: the GCA of 1968 was also called the "keep guns from n@ggers act". Posted by: | Dec 19, 2008 12:06:10 AM Here's a more poignant question: if in some states felons do not have the legal right to say no to a search without a warrant, what exactly are the legal rights of a felon? Posted by: Eric Holt | Mar 12, 2009 11:06:45 AM Scalia is wrong. Felons, even those who have committed violent felonies, do get back firearm rights following a state conviction, either by operation of state laws or by application. Only federal felonies preclude such remedies. This is unequal treatment under our laws. It's clear now that the states finally protect our civil rights while Congress "stomps" on them. What a reversal! I would like to add that it is a common understanding shared by constitutional lawyers that Congress has no authority to regulate firearms within a state, not only by virtue of the Second Amendment but of all the enumerated Amendments, which place a heavy restriction on what Congress can do. In fact, the Amendments to the Constitution fully render the so-called "Interstate Commerce Clause" null and void.
Posted by: Geaorge Alleni | Apr 30, 2009 2:09:56 PM Hello, whatever happened to the movement to restore rights after a sentence has been served? Does anyone know? Interesting post. I am a prior felon who lives in a not-so-good area of town (as most felons do, because it is hard to find a job with a record). I have spent many hours researching this issue. I would like to own a firearm to protect my home, but legally I can't and don't. I can't risk the consequences of owning one. Many years ago I asked my probation officer how I was to defend my home, and he said "call the police" or "hide". I told him that the response time from the police in my area was about an hour, and he told me "I better hide good then!". He thought it was unfair that we are not allowed to defend ourselves as well. The whole thing is ridiculous. I made a mistake many years ago in my life and now I am to be a second-class citizen my whole life. I am a legal, productive citizen involved in politics (I have restored my voting rights) and family life. I am not the person I was a decade ago. I can't believe people that say they are for gun rights and then say they believe in "restrictions", a form of infringement that directly contradicts the Second Amendment; they engage in "doublespeak" and "doublethink". We really only possess the rights that the lowest class of citizen in our country does. We set a dangerous trend when we start having classes of citizens in this country. Just remember that when they can restrict my rights, they set the precedent so they can restrict, and eventually will restrict, yours. Posted by: Chris | Jul 24, 2009 1:03:33 PM I am a legal researcher by occupation, and a felon. Here is one for you guys. I am preparing to bring suit against the feds for denying my right to firearms. I was convicted of one felony 15 years ago. Ten years ago Ohio granted me a Relief of Disability, O.R.C.
2923.14, which states I am "allowed to own and possess firearms as allowed by state and federal law...this does not apply to dangerous ordnance." In my NICS appeal, the feds denied me based on the "unless clause" of 922, stating that the state limited my firearms ownership by not granting dangerous ordnance. According to O.R.C. 2923.11, dangerous ordnance is exactly what the feds require all citizens to apply for with a Class 3 Form 4. Therefore, the state does not even have the power to grant permission for ownership or possession of dangerous ordnance. This creates several issues. First, the feds are denying my rights based on the state not granting me rights that THEY say the state does not have the power to grant! Next, the will of the Ohio legislature was to allow me to petition the court to be allowed to own and possess firearms, which I did and was subsequently granted. The feds' denial of my rights is a direct defiance of the will of the Ohio legislature. Also, the Gun Control Act itself is not constitutional. How does the interstate commerce clause give the feds the right to regulate the use and ownership of a product once the interstate commerce is complete? What about items that were not shipped interstate but were strictly intrastate? (See the Firearms Freedom Act and the current Montana challenge.) I am seriously doing this, working closely with my attorney, lobbyists, and sovereignty and firearms groups. If anyone would like to help me in developing a defense or whatever, please respond and email me: theadvocate35@yahoo.com My book, entitled "The Second Amendment: the State, the Felon, the Right to Keep and Bear Arms," can be found at www.scribd.com. It covers pragmatic experiences about the use and necessity of a firearm and the arbitrary state laws in Ohio and other states that define the soft felon, those that have never gone to prison, as equal in temper and attitude to a murderer, rapist, or burglar.
For the most part, the soft felon is normally not a recusant. Such behavior should be viewed as socially beneficial rather than inimical. Usually unbridled circumstances are the cause of the soft felon's criminal misfortunes; that is, external or perhaps internal forces, or both, beyond his or her immediate control. All soft felons need to band together to form a coalition for the purpose of the restoration of all civil rights in the USA and the right to keep and bear a handgun in the home for personal protection against unwanted intrusion. You can also go to www.webcommentary.com and read my commentaries on the subject as well. Chris, I feel your pain and wish you the best. It's starting to become more and more like a dictatorship rather than a free democracy when the feds are constantly trying to find loopholes in the system to attack our right to self-defense. Instead of concentrating on making laws that work against the criminals of America, they are restricting law-abiding citizens' right to self-defense. Ever research "The Black Laws"? Most enlightening on this and several other points of state-created ability to "lose" rights (as opposed to having privileges withdrawn or withheld). By the way, I am not "Black", just an observer of the ways bad laws become ensconced and worse. Posted by: Throsso | Aug 17, 2010 12:22:24 PM "The right to keep and bear arms shall not be infringed." There is no exception made for "convicted felons" of any stripe. ALL gun control is unreasonable and UNCONSTITUTIONAL. An unconstitutional statute is null and void. Cowards should protect themselves instead of leaving it to their government ass-wipers. Felon or not, buy a firearm (no NICS on a private transaction) and learn to use it against ANYONE who would try to harm you, REGARDLESS of the outfit they wear or authority they claim.
Bottom line, it's better to be judged by twelve than carried by six (and the loved ones who trust their lives to YOU would agree). "It is the duty of all good men to disobey unjust laws." Posted by: Thoroughly Provoked | Dec 28, 2010 7:06:40 PM Chris said: "I would like to own a firearm to protect my home, but legally I can't and don't. I can't risk the consequences of owning one." Consequences? In prison on your child's birthday, or free at your child's funeral. The choice is yours. Posted by: Thoroughly Provoked | Dec 28, 2010 7:20:42 PM If something is a right, no institution can legally take it away. So if felons don't have a right to keep and bear arms, it is safe to assume that they are no longer "citizens" and therefore are either slaves or non-entities. It is my humble opinion that the Second Amendment does not say "unless the government doesn't want you to". If an institution or government entity has to give you permission to do something, it then becomes a privilege, and the government has placed itself incorrectly above you, who should by law be the controller or boss of the government. Posted by: Eric Holt | Jan 14, 2012 3:05:50 PM An American. FYI to all out there (didn't see anyone mention this): the federal definition of "firearm" excludes anything from before 1898, including 'modern replicas'. So we have the right to keep and bear arms up to 1898, anyway. That's some deadly weaponry, if you think about it. Also see "United States vs. Simmons"; you might not be as felonious as you thought... Posted by: john m. | Aug 7, 2012 5:19:47 AM
--- abstract: 'The ionic-liquid-gating technique can be applied to the search for novel physical phenomena at low temperatures because of its wide controllability of the charge carrier density. Ionic-liquid-gated field-effect transistors are often fragile upon cooling, however, because of the large difference between the thermal expansion coefficients of frozen ionic liquids and solid target materials. In this paper we provide a practical technique for setting up ionic-liquid-gated field-effect transistors for low-temperature measurements. It allows stable measurements and reduces the electronic inhomogeneity by reducing the shear strain generated in frozen ionic liquid.' author: - Yamaguchi Takahide - Yosuke Sasama - Hiroyuki Takeya - Yoshihiko Takano - Taisuke Kageura - Hiroshi Kawarada bibliography: - 'EDLTsetupbib.bib' title: 'Ionic-liquid-gating setup for stable measurements and reduced electronic inhomogeneity at low temperatures' --- Introduction ============ The application of the ionic-liquid-gating technique to low-temperature physics has attracted considerable attention because it can control the charge carrier density over an extremely wide range.[@Bis17] This technique uses an ionic liquid (organic salt in the liquid phase at room temperature) as the gate dielectric in field-effect transistors in which a target material acts as a channel of charge carriers. In this electric double layer transistor (EDLT), the large capacitance of the electric double layer at the surface of the target material allows the channel to have a large concentration of charge carriers on the order of $10^{13}-10^{14}$ cm$^{-2}$. Even when the EDLT is cooled to below the freezing point of the ionic liquid, the electric double layer and the resulting charge carrier density are preserved. Therefore, one can measure the low-temperature electronic properties of the sample with a charge carrier density controlled in a wide range. 
Electric-field-induced superconductivity has been obtained in various materials by using ionic-liquid gating.[@Ye10; @Bol11; @Uen11; @Len11; @Dub12; @Ye12; @Jo15; @Shi15; @Sai15; @Lu15; @Li16; @Cos16; @Zen18] ![(a) Schematic of a diamond EDLT with no structure for reducing the shear strain in the ionic liquid. Drain, source, and gate electrodes have been fabricated on the surface of a single-crystal diamond. Only the channel of the EDLT on the diamond surface is hydrogen-terminated, which assists the accumulation of holes. The rest of the surface is oxygen-terminated. The oxygen termination electrically isolates the channel from the gate. A drop of ionic liquid is applied to cover both the channel and the gate electrode. (b) Temperature dependence of the sheet resistance of two diamond EDLTs at a gate voltage of $-1$ V. Sample A1 was set up as shown in (a) and measured in a two-point configuration. The ionic liquid on sample A2 was covered with a small, unfixed piece of glass. Sample A2 was measured in a four-point configuration. The resistance of A1 suddenly increased (the current decreased to the noise level) at 73 K, which was caused by the fracture of the frozen ionic liquid. The source and drain electrodes on the hydrogen-terminated diamond surface were also partially peeled off. Similar destruction of the device structure occurred for A2 at 127 K. (c) The large thermal expansion coefficient of frozen ionic liquid compared to that of diamond leads to shear strain (and resulting slips) at the interface. (d,e) Optical microscope images of a droplet of ionic liquid (DEME-BF$_4$) on a hydrogen-terminated diamond surface at 200 K (d) and 8 K (e). The contraction of the frozen ionic liquid was observed, presumably because the hydrogen-terminated surface of diamond is highly “non-stick”.
The droplet size at each temperature and at 300 K is depicted by dashed lines.](Fig1_rev.pdf){width="7truecm"} However, there is a practical problem for such low-temperature measurements on EDLTs: frozen ionic liquids often fracture at low temperatures. This induces some detrimental effects on measurements, such as a sudden jump in the resistance-temperature curve (Fig. 1). A large electronic inhomogeneity possibly due to the local detachment of frozen ionic liquid from the sample has also been reported for WS$_2$ and MoS$_2$ EDLTs.[@Jo15; @Cos16] These problems are presumably due to the shear strain caused by the large difference in the thermal expansion coefficient between the frozen ionic liquid and target sample or its substrate. In this paper we introduce an experimental technique that suppresses the shear strain and leads to stable measurements on EDLTs at low temperatures. Our setups for diamond and silicon EDLTs[@Yam13; @Yam14; @Yam16; @Sas17; @Sas172] are shown as examples. This technique will allow stable and efficient low-temperature experiments with EDLTs and studies of high-quality samples with reduced electronic inhomogeneity. Results and discussion ====================== A key feature of our setup is a counter plate placed above the sample/substrate surface (Fig. 2). Ionic liquid is inserted between the counter plate and sample/substrate surface. A similar setup has been used in previous experiments.[@Bol11; @Dub12] We propose here that an adequate spacing between the sample/substrate surface and the counter plate can reduce the shear strain that appears when the device is cooled. Our idea is to compensate for the cooling-induced shrinkage of frozen ionic liquid in a mechanical manner by using the shrinkage of the counter plate support. If the shrinkage of the support along the z axis is larger than that of the ionic liquid, the ionic liquid is compressed along the z axis and expands along the xy plane. 
If the shrinkage of the ionic liquid along the xy plane due to cooling cancels this expansion, there should be no shear strain along the xy plane. The counter plate can be used as a gate electrode if its surface is electrically conductive and proper wiring is made. ![(a,b) Schematic of a diamond EDLT with a counter plate. (c) Optical image of a diamond EDLT set on a sample holder with a counter plate. The sample holder is shown here on a holder stand, which is removed when the sample holder is sealed with indium (Fig. 3). The dimensions of the diamond sample are 2.6 mm $\times$ 2.6 mm $\times$ 0.3 mm.](Fig2_rev.pdf){width="7truecm"} ![Schematic of the sample holder assembly for sealing in a glove box. A TO-5 header is soldered to the main part of the sample holder for electrical feedthrough.](Fig3_rev.pdf){width="4.6truecm"} ![Temperature dependence of the sheet resistance of 10 different diamond EDLTs. The resistance was measured with a four-point configuration. From top to bottom at the lowest temperature, the samples are numbered B1-B10. The surface orientation, ionic liquid, and gate voltage are as follows. B1: (100), DEME-TFSI, $-1.8$ V; B2: (100), DEME-BF$_4$, $-1.8$ V; B3: (111), DEME-BF$_4$, $-1.8$ V; B4: (100), DEME-TFSI, $-1.49$ V; B5: (100), DEME-BF$_4$, $-1.44$ V; B6: (100), TMPA-TFSI + HTFSI, $-2.58$ V; B7: (100), DEME-BF$_4$, $-1.96$ V; B8: (111), DEME-BF$_4$, $-2.4$ V; B9: (111), DEME-BF$_4$, $-2.2$ V; B10: (111), DEME-BF$_4$, $-1.8$ V. Boron-doped diamond was used as the source, drain, and gate electrodes for B6. The diamond surface of B10 is atomically flat.[@Yam14] The curves measured while the sample was cooled down and warmed up are both shown for B2 and B10. (Thick gray lines are for warming up.)
The resistance variation for different samples despite similar gate voltages is attributed to different amounts of charged adsorbates on the diamond surface.[@Yam13]](Fig4_rev.pdf){width="6.5truecm"} Let us examine the adequate spacing between the sample/substrate surface and the counter plate. We assume that the counter plate and its support are thick and rigid enough that they are not deformed by external force. We also assume that the thermal expansion coefficient of the sample (or substrate) is small and can be neglected. This is the case when diamond or silicon is used as a substrate or a sample, because at temperatures below 293 K the thermal expansion coefficients of diamond[@Sto11] and silicon[@Mid15] are less than $1{\times}10^{-6}$ and $3{\times}10^{-6}$ (K$^{-1}$), respectively, which are smaller than those of most other materials. This assumption is only for simplification of the calculation shown below. With a straightforward modification, our scheme can be applied to any material. The length variations of the frozen ionic liquid along the x direction due to the temperature change ${\Delta}T$ and along the z direction due to mechanical force are given by $$\begin{aligned} \frac{{\Delta}x_\mathrm{IL}}{x_\mathrm{IL}}=\alpha_\mathrm{IL}{\Delta}T-{\sigma}_\mathrm{IL}\frac{{\Delta}z_\mathrm{IL}}{z_\mathrm{IL}},\\ {\Delta}z_\mathrm{IL}=-z_\mathrm{IL}\alpha_\mathrm{IL}{\Delta}T+z_\mathrm{sup}\alpha_\mathrm{sup}{\Delta}T.\end{aligned}$$ Here ${\alpha}_\mathrm{IL}$ and ${\alpha}_\mathrm{sup}$ are the thermal expansion coefficients of the frozen ionic liquid and the counter plate support, and ${\sigma}_\mathrm{IL}$ is the Poisson’s ratio of the frozen ionic liquid.
For the shear strain caused by the temperature change ${\Delta}T$ to be minimized, $$\begin{aligned} \frac{{\Delta}x_\mathrm{IL}}{x_\mathrm{IL}}=0.\end{aligned}$$ Then, $$\begin{aligned} z_\mathrm{IL}=\frac{{\sigma}_\mathrm{IL}}{1+{\sigma}_\mathrm{IL}}\frac{\alpha_\mathrm{sup}}{\alpha_\mathrm{IL}}z_\mathrm{sup}.\end{aligned}$$ The value of Poisson’s ratio $\sigma$ for most materials is $0.3-0.4$. We use brass and copper for the support of the counter plate. The thermal expansion coefficient of copper is $10{\times}10^{-6}$ (K$^{-1}$) at 100 K and $15{\times}10^{-6}$ (K$^{-1}$) at 200 K.[@Nix41] It is difficult to find the data of thermal expansion of frozen ionic liquids at temperatures below their freezing point. We directly observed the thermal contraction of droplets of ionic liquid (DEME-BF$_4$; freezing point: 238 K, melting point: 282 K[@Kim05]) on a hydrogen-terminated diamond surface under an optical microscope \[Figs. 1(d) and 1(e)\]. The diameter of the droplets shrank by $0.7{\pm}0.1{\%}$ when the temperature decreased from 200 to 8 K. If we assume that the thermal expansion coefficient of the frozen ionic liquid depends linearly on temperature, it is estimated to be $(35{\pm}5){\times}10^{-6}$ (K$^{-1}$) at 100 K and $(70{\pm}10){\times}10^{-6}$ (K$^{-1}$) at 200 K, which we use in the following calculation. These values are in the same range as those of organic charge-transfer salts, which are $(40-80){\times}10^{-6}$ (K$^{-1}$) at 100 K and $(40-80){\times}10^{-6}$ (K$^{-1}$) at 200 K.[@Mul02; @Sou08; @Fou13] The height $z_\mathrm{sup}$ of the support of the counter plate is $0.45-0.5$ mm in our experimental setup for diamond EDLTs. If we use these values, the thickness $z_\mathrm{IL}$ of ionic liquid should be $30-50$ $\mu$m for 100 K and $20-40$ $\mu$m for 200 K to minimize the shear strain. 
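As a quick numerical illustration of Eq. 4, the sketch below (plain Python; the function name is ours, and a Poisson's ratio of 0.35 is assumed as the midpoint of the $0.3-0.4$ range quoted above) reproduces the diamond-EDLT estimate using the copper-support and DEME-BF$_4$ expansion coefficients given in the text:

```python
def min_strain_il_thickness(z_sup_um, alpha_sup, alpha_il, poisson=0.35):
    """Ionic-liquid thickness z_IL of Eq. 4 that nulls the in-plane strain:
    z_IL = sigma/(1 + sigma) * (alpha_sup / alpha_il) * z_sup."""
    return poisson / (1.0 + poisson) * (alpha_sup / alpha_il) * z_sup_um

# Copper support, z_sup ~ 475 um (midpoint of 0.45-0.5 mm); DEME-BF4
# expansion coefficients estimated in the text from the droplet observation.
z_il_100K = min_strain_il_thickness(475, alpha_sup=10e-6, alpha_il=35e-6)
z_il_200K = min_strain_il_thickness(475, alpha_sup=15e-6, alpha_il=70e-6)
print(f"{z_il_100K:.0f} um at 100 K, {z_il_200K:.0f} um at 200 K")
# falls within the 30-50 um (100 K) and 20-40 um (200 K) ranges quoted above
```

Varying $\sigma_\mathrm{IL}$ over the full $0.3-0.4$ range shifts these values by only a few micrometers, which is why a single spacer thickness can serve both temperatures reasonably well.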
If a softer material with a larger $\alpha_\mathrm{sup}$ is used for the support (for example, polymer), then it is better to increase the ratio $z_\mathrm{IL}/z_\mathrm{sup}$. If the sample/substrate is fixed on the sample holder using adhesive tape, its large thermal expansion coefficient should also be taken into consideration. An optical microscope image of our setup for a diamond EDLT is shown in Fig. 2(c). The diamond is fixed using two copper claws, without the use of adhesive tape. As a counter plate, we used a Ti/Pt or Ti/Au deposited glass (or silicon) plate or a diamond substrate with a boron-doped layer on the surface. Here the counter plate also acted as a gate electrode. The thickness of the diamond differed from sample to sample because the original thickness of the diamond substrate and the amount of surface polishing differed. We adjusted the spacing between a sample and counter plate to be ${\approx}20-30$ $\mu$m each time by using some metal spacer plates with different thicknesses: 20, 30, 40, 50, 80, and 100 $\mu$m. This sample holder was also designed so that it can be sealed with indium in an Ar-filled glove box[@Bon00; @Bon01; @Brass] to prevent water contamination of the ionic liquid (Fig. 3). We have not observed any significant jumps in the temperature dependence of resistance of diamond EDLTs and could perform stable measurements at low temperatures with this setup. The temperature dependence of the resistances of ten different diamond EDLTs is shown in Fig. 4. The curves vary in a monotonic manner, although a few curves cross, possibly due to the difference in the surface crystallographic orientation. Furthermore, there is almost no difference between the resistance-temperature curves measured while the sample is cooled down and warmed up. This indicates that the local detachment of ionic liquid[@Jo15] is negligible during the thermal process. 
With this setup, we observed an electric-field-induced insulator-metal transition and Shubnikov-de Haas oscillations in diamond.[@Yam13; @Yam14] Anomalous low-temperature magnetotransport of the electric-field-induced charge carriers was also observed in diamond with the (100) surface.[@Yam16] ![Optical images of a sample holder for silicon EDLTs. (a) The main part of the sample holder. (b) The lid of the sample holder. (c) A silicon EDLT ready for low-temperature measurements. First, small pieces of indium for the electrical wiring and the seal of the sample holder are placed on the lid. Then, a silicon chip with a hydrogen-terminated channel, Hall bar electrodes, and a gate electrode[@Sas17; @Sas172] is fixed on the main part of the sample holder by a copper claw in an Ar-filled glove box. After a drop of ionic liquid is applied, the lid is screwed on. This completes the electrical wiring, the sealing of the sample holder, and the insertion of the ionic liquid between the silicon and the counter plate (lid) in a single step. The dimensions of the silicon chip are approximately 6.0 mm $\times$ 6.0 mm $\times$ 0.38 mm. (d) Optical image of the Hall bar and gate electrode on the silicon chip.](Fig5_rev.pdf){width="7truecm"} ![Temperature dependence of the sheet resistance of a silicon EDLT for different gate voltages. The resistance was measured in a four-point configuration. It could not be measured accurately at $T{\le}220$ K and $T{\le}60$ K for $V_g=0$ and $-0.4$ V, respectively, because the contact resistance became very high. The resistance decreased with increasing negative gate voltage for $V_g{\ge}-1.2$ V but increased for $V_g{\le}-1.2$ V. See Ref. 19 for details.](Fig6_rev.pdf){width="8.2truecm"} We also studied silicon EDLTs.[@Sas17; @Sas172] Another type of sample holder (Fig. 5) was fabricated for the silicon EDLTs for the following reason. The silicon surface of the channel is hydrogen-terminated to reduce the trap density.
This hydrogen termination is crucial for device operation[@Sas17; @Sas172] but, unlike the hydrogen termination of the diamond surface, is easily destroyed by air exposure. Therefore, the electrical wiring between the sample and the sample holder cannot be performed in air for the silicon EDLTs. The sample holder is designed so that the electrical wiring can be performed with small pieces of indium in an Ar-filled glove box, and the holder can also be sealed with indium in the glove box. It is made of polychlorotrifluoroethylene (PCTFE), and the lid acts as a counter plate. The counter-plate support consists of ${\approx}0.40$ mm thick PCTFE and ${\approx}0.1-0.15$ mm thick indium: $z_\mathrm{sup}{\approx}0.50-0.55$ mm. The thermal expansion coefficient of PCTFE is $34{\times}10^{-6}$ (K$^{-1}$) at 100 K and $47{\times}10^{-6}$ (K$^{-1}$) at 200 K.[@PCTFE] The coefficient ($\alpha_v/3$) for indium is $27{\times}10^{-6}$ (K$^{-1}$) at 100 K and $28{\times}10^{-6}$ (K$^{-1}$) at 200 K.[@Smi64] Using Eq. 4, $z_\mathrm{IL}$ for minimized shear strain is estimated to be $100-170$ $\mu$m at 100 K and $60-110$ $\mu$m at 200 K; we set $z_\mathrm{IL}{\approx}120-170$ $\mu$m in the actual setup. By using this setup and reducing the contact resistance by ion implantation underneath the electrodes, we were able to measure detailed low-temperature transport properties of silicon EDLTs[@Sas172] (Fig. 6). The proposed method minimizes the shear strain in the frozen ionic liquid, but perfectly eliminating this strain over the entire temperature range is difficult because the temperature dependences of $\alpha_\mathrm{IL}$ and $\alpha_\mathrm{sup}$ generally differ; small residual strains may remain at low temperatures.
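For a layered support like this one, the product $\alpha_\mathrm{sup}z_\mathrm{sup}$ in Eq. 4 is naturally replaced by the sum of each layer's contribution $\sum_i \alpha_i z_i$. A sketch with the PCTFE/indium values quoted above; the min/max pairing of the uncertainty ranges and the helper name `z_il_composite` are our assumptions:

```python
def z_il_composite(sigma_range, layers, alpha_il_range):
    """(min, max) z_IL for a support built from (alpha, (z_min, z_max)) layers.

    Replaces alpha_sup * z_sup in Eq. 4 by sum_i alpha_i * z_i.
    Thicknesses in um, expansion coefficients in 1/K."""
    az_lo = sum(a * z[0] for a, z in layers)
    az_hi = sum(a * z[1] for a, z in layers)
    lo = sigma_range[0] / (1 + sigma_range[0]) * az_lo / alpha_il_range[1]
    hi = sigma_range[1] / (1 + sigma_range[1]) * az_hi / alpha_il_range[0]
    return lo, hi

# 100 K: PCTFE 34e-6 /K over 400 um, indium 27e-6 /K over 100-150 um
lo, hi = z_il_composite((0.3, 0.4), [(34e-6, (400, 400)), (27e-6, (100, 150))],
                        (30e-6, 40e-6))
print(f"100 K: z_IL = {lo:.0f}-{hi:.0f} um")
# 200 K: PCTFE 47e-6 /K, indium 28e-6 /K, ionic liquid (70+-10)e-6 /K
lo, hi = z_il_composite((0.3, 0.4), [(47e-6, (400, 400)), (28e-6, (100, 150))],
                        (60e-6, 80e-6))
print(f"200 K: z_IL = {lo:.0f}-{hi:.0f} um")
```

The results reproduce the $100-170$ $\mu$m (100 K) and $60-110$ $\mu$m (200 K) estimates in the text.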
The Shubnikov-de Haas oscillations observed in diamond EDLTs suggest a spatial inhomogeneity of the charge carrier density and mobility at low temperatures.[@Yam14] Further work is necessary to elucidate whether this inhomogeneity has an intrinsic origin[@DezArXiv] or is caused by local distortion of the frozen ionic liquid due to residual shear strains. Detailed measurements of the thermal expansion coefficient and Poisson’s ratio of ionic liquids at different temperatures are also desirable. It may be possible to reduce the shear strain further by setting the spacing $z_\mathrm{IL}$ so that the integral of $(1/x_\mathrm{IL})(\mathrm{d}x_\mathrm{IL}/\mathrm{d}T)$ (which depends on $\alpha_\mathrm{IL}(T)$, $\sigma_\mathrm{IL}(T)$, and $\alpha_\mathrm{sup}(T)$) between the temperature of interest and the freezing temperature of the ionic liquid is zero. Conclusions =========== We proposed a practical method to reduce the shear strain in frozen ionic liquid for stable measurements of electric double layer transistors at low temperatures. The reduction is achieved by mechanically compensating for the cooling-induced shrinkage of the frozen ionic liquid using a counter plate and its support. This simple setup can be used for various materials and allows stable and efficient experiments at low temperatures. In particular, it prevents the frozen ionic liquid from detaching from the sample surface and thus prevents device breakdown during cooling. It also reduces the electronic inhomogeneity caused by the shear strain and thus helps reveal the intrinsic properties of the target materials. Acknowledgments =============== We appreciate helpful comments from Y. Ootuka and thank E. Watanabe, H. Osato, D. Tsuya, S. Hamada and S. Tanigawa for device fabrication in the early stage of this study. This study was supported by Grants-in-Aid for Fundamental Research (Grant Nos. 25287093 and 26220903) and the “Nanotechnology Platform Project” of MEXT, Japan.
?x0 .\nM2 ns:film.film.directed_by ?x0 .\nM2 ns:film.film.edited_by ?x0 .\nM2 ns:film.film.executive_produced_by ?x0 .\nM2 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM2 ns:film.film.written_by ?x0\n}" }, { "frequency": 80.0, "value": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by ?x1 .\n?x0 ns:film.film.edited_by ?x1 .\n?x0 ns:film.film.executive_produced_by ?x1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies ?x1 .\n?x0 ns:film.film.written_by ?x1 .\n?x1 ns:people.person.spouse_s/ns:people.marriage.spouse|ns:fictional_universe.fictional_character.married_to/ns:fictional_universe.marriage_of_fictional_characters.spouses M0 .\nFILTER ( ?x1 != M0 )\n}" } ], "unique": "70043" }, "type": "STRING" }, { "path": { "step": [ "question" ] }, "stringStats": { "avgLength": 64.36601, "commonStats": { "avgNumValues": 1.0, "maxNumValues": "1", "minNumValues": "1", "numNonMissing": "95743", "numValuesHistogram": { "buckets": [ { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 } ], "type": "QUANTILES" }, "totNumValues": "95743" }, "rankHistogram": { "buckets": [ { "label": "Who wrote and produced a prequel of M1", "sampleCount": 1.0 }, { "highRank": "1", "label": "Who wrote and produced M1 's sequel", "lowRank": "1", "sampleCount": 1.0 }, { "highRank": "2", "label": "Who wrote and produced M1 's prequel", "lowRank": "2", "sampleCount": 1.0 }, { "highRank": "3", "label": 
"Who wrote and produced M1", "lowRank": "3", "sampleCount": 1.0 }, { "highRank": "4", "label": "Who wrote and executive produced a prequel of M1", "lowRank": "4", "sampleCount": 1.0 }, { "highRank": "5", "label": "Who wrote and executive produced a film executive produced by M2 and M3 and directed by M4", "lowRank": "5", "sampleCount": 1.0 }, { "highRank": "6", "label": "Who wrote and executive produced M1 's sequel", "lowRank": "6", "sampleCount": 1.0 }, { "highRank": "7", "label": "Who wrote and executive produced M1 's prequel", "lowRank": "7", "sampleCount": 1.0 }, { "highRank": "8", "label": "Who wrote and executive produced M1", "lowRank": "8", "sampleCount": 1.0 }, { "highRank": "9", "label": "Who wrote and edited a sequel of M1", "lowRank": "9", "sampleCount": 1.0 } ] }, "topValues": [ { "frequency": 1.0, "value": "Who wrote and produced a prequel of M1" }, { "frequency": 1.0, "value": "Who wrote and produced M1 's sequel" }, { "frequency": 1.0, "value": "Who wrote and produced M1 's prequel" }, { "frequency": 1.0, "value": "Who wrote and produced M1" }, { "frequency": 1.0, "value": "Who wrote and executive produced a prequel of M1" }, { "frequency": 1.0, "value": "Who wrote and executive produced a film executive produced by M2 and M3 and directed by M4" }, { "frequency": 1.0, "value": "Who wrote and executive produced M1 's sequel" }, { "frequency": 1.0, "value": "Who wrote and executive produced M1 's prequel" }, { "frequency": 1.0, "value": "Who wrote and executive produced M1" }, { "frequency": 1.0, "value": "Who wrote and edited a sequel of M1" } ], "unique": "95743" }, "type": "STRING" } ], "numExamples": "95743" } }, { "name": "validation", "numBytes": "5875751", "shardLengths": [ "11968" ], "statistics": { "features": [ { "path": { "step": [ "query" ] }, "stringStats": { "avgLength": 383.1173, "commonStats": { "avgNumValues": 1.0, "maxNumValues": "1", "minNumValues": "1", "numNonMissing": "11968", "numValuesHistogram": { "buckets": [ { "highValue": 
1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 } ], "type": "QUANTILES" }, "totNumValues": "11968" }, "rankHistogram": { "buckets": [ { "label": "SELECT count(*) WHERE {\nM0 ns:film.director.film M1 .\nM0 ns:film.director.film M2 .\nM0 ns:film.director.film M3 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.editor.film M2 .\nM0 ns:film.editor.film M3 .\nM0 ns:film.producer.films_executive_produced M1 .\nM0 ns:film.producer.films_executive_produced M2 .\nM0 ns:film.producer.films_executive_produced M3 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M2 .\nM0 ns:film.producer.film|ns:film.production_company.films M3 .\nM0 ns:film.writer.film M1 .\nM0 ns:film.writer.film M2 .\nM0 ns:film.writer.film M3\n}", "sampleCount": 68.0 }, { "highRank": "1", "label": "SELECT count(*) WHERE {\nM0 ns:film.film.directed_by M1 .\nM0 ns:film.film.directed_by M2 .\nM0 ns:film.film.edited_by M1 .\nM0 ns:film.film.edited_by M2 .\nM0 ns:film.film.executive_produced_by M1 .\nM0 ns:film.film.executive_produced_by M2 .\nM0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\nM0 ns:film.film.produced_by|ns:film.film.production_companies M2 .\nM0 ns:film.film.written_by M1 .\nM0 ns:film.film.written_by M2\n}", "lowRank": "1", "sampleCount": 56.0 }, { "highRank": "2", "label": "SELECT count(*) WHERE {\nM0 ns:film.actor.film/ns:film.performance.film M1 .\nM0 
ns:film.cinematographer.film M1 .\nM0 ns:film.director.film M1 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.film_art_director.films_art_directed M1 .\nM0 ns:film.film_costumer_designer.costume_design_for_film M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.writer.film M1\n}", "lowRank": "2", "sampleCount": 50.0 }, { "highRank": "3", "label": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M1 .\n?x0 ns:film.film.edited_by M1 .\n?x0 ns:film.film.executive_produced_by M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\n?x0 ns:film.film.sequel M0 .\n?x0 ns:film.film.written_by M1\n}", "lowRank": "3", "sampleCount": 39.0 }, { "highRank": "4", "label": "SELECT DISTINCT ?x0 WHERE {\n?x0 a ns:people.person .\n?x0 ns:film.actor.film/ns:film.performance.film M1 .\n?x0 ns:film.cinematographer.film M1 .\n?x0 ns:film.director.film M1 .\n?x0 ns:film.editor.film M1 .\n?x0 ns:film.film_art_director.films_art_directed M1 .\n?x0 ns:film.producer.films_executive_produced M1 .\n?x0 ns:film.producer.film|ns:film.production_company.films M1 .\n?x0 ns:film.writer.film M1\n}", "lowRank": "4", "sampleCount": 37.0 }, { "highRank": "5", "label": "SELECT count(*) WHERE {\nM0 ns:film.director.film M1 .\nM0 ns:film.director.film M2 .\nM0 ns:film.director.film M3 .\nM0 ns:film.director.film M4 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.editor.film M2 .\nM0 ns:film.editor.film M3 .\nM0 ns:film.editor.film M4 .\nM0 ns:film.producer.films_executive_produced M1 .\nM0 ns:film.producer.films_executive_produced M2 .\nM0 ns:film.producer.films_executive_produced M3 .\nM0 ns:film.producer.films_executive_produced M4 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M2 .\nM0 ns:film.producer.film|ns:film.production_company.films M3 .\nM0 ns:film.producer.film|ns:film.production_company.films M4 .\nM0 ns:film.writer.film M1 .\nM0 ns:film.writer.film M2 .\nM0 
ns:film.writer.film M3 .\nM0 ns:film.writer.film M4\n}", "lowRank": "5", "sampleCount": 33.0 }, { "highRank": "6", "label": "SELECT count(*) WHERE {\nM0 ns:film.actor.film/ns:film.performance.film M1 .\nM0 ns:film.cinematographer.film M1 .\nM0 ns:film.director.film M1 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.film_art_director.films_art_directed M1 .\nM0 ns:film.producer.films_executive_produced M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.writer.film M1\n}", "lowRank": "6", "sampleCount": 32.0 }, { "highRank": "7", "label": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M0 .\n?x0 ns:film.film.directed_by M1 .\n?x0 ns:film.film.directed_by M2 .\n?x0 ns:film.film.directed_by M3 .\n?x0 ns:film.film.edited_by M0 .\n?x0 ns:film.film.edited_by M1 .\n?x0 ns:film.film.edited_by M2 .\n?x0 ns:film.film.edited_by M3 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M0 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M2 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M3 .\n?x0 ns:film.film.written_by M0 .\n?x0 ns:film.film.written_by M1 .\n?x0 ns:film.film.written_by M2 .\n?x0 ns:film.film.written_by M3\n}", "lowRank": "7", "sampleCount": 31.0 }, { "highRank": "8", "label": "SELECT DISTINCT ?x0 WHERE {\n?x0 a ns:people.person .\n?x0 ns:film.actor.film/ns:film.performance.film M1 .\n?x0 ns:film.cinematographer.film M1 .\n?x0 ns:film.director.film M1 .\n?x0 ns:film.editor.film M1 .\n?x0 ns:film.film_art_director.films_art_directed M1 .\n?x0 ns:film.film_costumer_designer.costume_design_for_film M1 .\n?x0 ns:film.producer.film|ns:film.production_company.films M1 .\n?x0 ns:film.writer.film M1\n}", "lowRank": "8", "sampleCount": 31.0 }, { "highRank": "9", "label": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M0 .\n?x0 ns:film.film.edited_by M0 .\n?x0 ns:film.film.executive_produced_by M0 .\n?x0 
ns:film.film.prequel M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M0 .\n?x0 ns:film.film.written_by M0\n}", "lowRank": "9", "sampleCount": 30.0 } ] }, "topValues": [ { "frequency": 68.0, "value": "SELECT count(*) WHERE {\nM0 ns:film.director.film M1 .\nM0 ns:film.director.film M2 .\nM0 ns:film.director.film M3 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.editor.film M2 .\nM0 ns:film.editor.film M3 .\nM0 ns:film.producer.films_executive_produced M1 .\nM0 ns:film.producer.films_executive_produced M2 .\nM0 ns:film.producer.films_executive_produced M3 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M2 .\nM0 ns:film.producer.film|ns:film.production_company.films M3 .\nM0 ns:film.writer.film M1 .\nM0 ns:film.writer.film M2 .\nM0 ns:film.writer.film M3\n}" }, { "frequency": 56.0, "value": "SELECT count(*) WHERE {\nM0 ns:film.film.directed_by M1 .\nM0 ns:film.film.directed_by M2 .\nM0 ns:film.film.edited_by M1 .\nM0 ns:film.film.edited_by M2 .\nM0 ns:film.film.executive_produced_by M1 .\nM0 ns:film.film.executive_produced_by M2 .\nM0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\nM0 ns:film.film.produced_by|ns:film.film.production_companies M2 .\nM0 ns:film.film.written_by M1 .\nM0 ns:film.film.written_by M2\n}" }, { "frequency": 50.0, "value": "SELECT count(*) WHERE {\nM0 ns:film.actor.film/ns:film.performance.film M1 .\nM0 ns:film.cinematographer.film M1 .\nM0 ns:film.director.film M1 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.film_art_director.films_art_directed M1 .\nM0 ns:film.film_costumer_designer.costume_design_for_film M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.writer.film M1\n}" }, { "frequency": 39.0, "value": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M1 .\n?x0 ns:film.film.edited_by M1 .\n?x0 ns:film.film.executive_produced_by M1 .\n?x0 
ns:film.film.produced_by|ns:film.film.production_companies M1 .\n?x0 ns:film.film.sequel M0 .\n?x0 ns:film.film.written_by M1\n}" }, { "frequency": 37.0, "value": "SELECT DISTINCT ?x0 WHERE {\n?x0 a ns:people.person .\n?x0 ns:film.actor.film/ns:film.performance.film M1 .\n?x0 ns:film.cinematographer.film M1 .\n?x0 ns:film.director.film M1 .\n?x0 ns:film.editor.film M1 .\n?x0 ns:film.film_art_director.films_art_directed M1 .\n?x0 ns:film.producer.films_executive_produced M1 .\n?x0 ns:film.producer.film|ns:film.production_company.films M1 .\n?x0 ns:film.writer.film M1\n}" }, { "frequency": 33.0, "value": "SELECT count(*) WHERE {\nM0 ns:film.director.film M1 .\nM0 ns:film.director.film M2 .\nM0 ns:film.director.film M3 .\nM0 ns:film.director.film M4 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.editor.film M2 .\nM0 ns:film.editor.film M3 .\nM0 ns:film.editor.film M4 .\nM0 ns:film.producer.films_executive_produced M1 .\nM0 ns:film.producer.films_executive_produced M2 .\nM0 ns:film.producer.films_executive_produced M3 .\nM0 ns:film.producer.films_executive_produced M4 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M2 .\nM0 ns:film.producer.film|ns:film.production_company.films M3 .\nM0 ns:film.producer.film|ns:film.production_company.films M4 .\nM0 ns:film.writer.film M1 .\nM0 ns:film.writer.film M2 .\nM0 ns:film.writer.film M3 .\nM0 ns:film.writer.film M4\n}" }, { "frequency": 32.0, "value": "SELECT count(*) WHERE {\nM0 ns:film.actor.film/ns:film.performance.film M1 .\nM0 ns:film.cinematographer.film M1 .\nM0 ns:film.director.film M1 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.film_art_director.films_art_directed M1 .\nM0 ns:film.producer.films_executive_produced M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.writer.film M1\n}" }, { "frequency": 31.0, "value": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M0 .\n?x0 ns:film.film.directed_by M1 .\n?x0 
ns:film.film.directed_by M2 .\n?x0 ns:film.film.directed_by M3 .\n?x0 ns:film.film.edited_by M0 .\n?x0 ns:film.film.edited_by M1 .\n?x0 ns:film.film.edited_by M2 .\n?x0 ns:film.film.edited_by M3 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M0 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M2 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M3 .\n?x0 ns:film.film.written_by M0 .\n?x0 ns:film.film.written_by M1 .\n?x0 ns:film.film.written_by M2 .\n?x0 ns:film.film.written_by M3\n}" }, { "frequency": 31.0, "value": "SELECT DISTINCT ?x0 WHERE {\n?x0 a ns:people.person .\n?x0 ns:film.actor.film/ns:film.performance.film M1 .\n?x0 ns:film.cinematographer.film M1 .\n?x0 ns:film.director.film M1 .\n?x0 ns:film.editor.film M1 .\n?x0 ns:film.film_art_director.films_art_directed M1 .\n?x0 ns:film.film_costumer_designer.costume_design_for_film M1 .\n?x0 ns:film.producer.film|ns:film.production_company.films M1 .\n?x0 ns:film.writer.film M1\n}" }, { "frequency": 30.0, "value": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M0 .\n?x0 ns:film.film.edited_by M0 .\n?x0 ns:film.film.executive_produced_by M0 .\n?x0 ns:film.film.prequel M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M0 .\n?x0 ns:film.film.written_by M0\n}" } ], "unique": "8859" }, "type": "STRING" }, { "path": { "step": [ "question" ] }, "stringStats": { "avgLength": 67.91536, "commonStats": { "avgNumValues": 1.0, "maxNumValues": "1", "minNumValues": "1", "numNonMissing": "11968", "numValuesHistogram": { "buckets": [ { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, 
"sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 } ], "type": "QUANTILES" }, "totNumValues": "11968" }, "rankHistogram": { "buckets": [ { "label": "Who wrote , produced , executive produced , and edited M1 , M2 , and M3", "sampleCount": 1.0 }, { "highRank": "1", "label": "Who wrote , produced , edited , and directed M1 , M2 , and M3", "lowRank": "1", "sampleCount": 1.0 }, { "highRank": "2", "label": "Who wrote , produced , directed , and edited M1 , M2 , and M3", "lowRank": "2", "sampleCount": 1.0 }, { "highRank": "3", "label": "Who wrote , executive produced , produced , and edited M1 , M2 , and M3", "lowRank": "3", "sampleCount": 1.0 }, { "highRank": "4", "label": "Who wrote , executive produced , produced , and directed M1 , M2 , M3 , and M4", "lowRank": "4", "sampleCount": 1.0 }, { "highRank": "5", "label": "Who wrote , executive produced , edited , and produced M1 , M2 , and M3", "lowRank": "5", "sampleCount": 1.0 }, { "highRank": "6", "label": "Who wrote , executive produced , and edited M1 , M2 , M3 , and M4", "lowRank": "6", "sampleCount": 1.0 }, { "highRank": "7", "label": "Who wrote , executive produced , and directed M1 , M2 , and M3", "lowRank": "7", "sampleCount": 1.0 }, { "highRank": "8", "label": "Who wrote , edited , executive produced , and produced M1 , M2 , and M3", "lowRank": "8", "sampleCount": 1.0 }, { "highRank": "9", "label": "Who wrote , edited , directed , and produced M1 , M2 , and M3", "lowRank": "9", "sampleCount": 1.0 } ] }, "topValues": [ { "frequency": 1.0, "value": "Who wrote , produced , executive produced , and edited M1 , M2 , and M3" }, { "frequency": 1.0, "value": "Who wrote , produced , edited , and directed M1 , M2 , and M3" }, { "frequency": 1.0, "value": "Who wrote , produced , directed , and edited M1 , M2 
, and M3" }, { "frequency": 1.0, "value": "Who wrote , executive produced , produced , and edited M1 , M2 , and M3" }, { "frequency": 1.0, "value": "Who wrote , executive produced , produced , and directed M1 , M2 , M3 , and M4" }, { "frequency": 1.0, "value": "Who wrote , executive produced , edited , and produced M1 , M2 , and M3" }, { "frequency": 1.0, "value": "Who wrote , executive produced , and edited M1 , M2 , M3 , and M4" }, { "frequency": 1.0, "value": "Who wrote , executive produced , and directed M1 , M2 , and M3" }, { "frequency": 1.0, "value": "Who wrote , edited , executive produced , and produced M1 , M2 , and M3" }, { "frequency": 1.0, "value": "Who wrote , edited , directed , and produced M1 , M2 , and M3" } ], "unique": "11968" }, "type": "STRING" } ], "numExamples": "11968" } } ], "supervisedKeys": { "input": "question", "output": "query" }, "version": "1.2.0" }
Peter Lang’s ‘Solar Realities’ paper and its associated discussion thread have generated an enormous amount of interest on BraveNewClimate (435 comments to date). Peter and I have greatly appreciated the feedback (although we have not always agreed with the critiques!), and this has led Peter to prepare: (a) an updated version of ‘Solar Realities’ (download the updated v2 PDF here) and (b) a response paper (download PDF here). Below I reproduce the response, and also include Peter’s sketched analysis of the scale and cost of the electricity transmission infrastructure (PDF here).

———————————————–

Comparison of capital cost of nuclear and solar power

By Peter Lang

(Peter is a retired geologist and engineer with 40 years’ experience on a wide range of energy projects throughout the world, including managing energy R&D and providing policy advice for government and opposition. His experience includes coal, oil, gas, hydro, geothermal, nuclear power plants, nuclear waste disposal, and a wide range of energy end-use management projects.)

Introduction

This paper compares the capital cost of three electricity generation technologies based on a simple analysis. The comparison is on the basis that the technologies can supply the National Electricity Market (NEM) demand without fossil fuel back-up. The NEM demand in winter 2007 was:

20 GW base load power;
33 GW peak power (at 6:30 pm);
25 GW average power; and
600 GWh of energy per day (450 GWh between 3 pm and 9 am).

The three technologies compared are:

1. Nuclear power;
2. Solar photo-voltaic with energy storage; and
3. Solar thermal with energy storage.

(Solar thermal technologies that can meet this demand do not exist yet. Solar thermal is still in the early stages of development and demonstration. On the technology life cycle, solar thermal is before the “bleeding edge” – refer: http://en.wikipedia.org/wiki/Technology_lifecycle.)

This paper is an extension of the paper “Solar Power Realities”.
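The demand figures quoted for winter 2007 are internally consistent, which is worth verifying before they are used in the cost estimates: 25 GW of average power sustained over 24 hours corresponds to 600 GWh per day, and the same average over the 18 hours from 3 pm to 9 am corresponds to 450 GWh. A minimal arithmetic check:

```python
# Sanity check of the NEM winter 2007 demand figures quoted in the paper.
average_power_gw = 25   # average power, GW
hours_per_day = 24
overnight_hours = 18    # 3 pm to 9 am is 18 hours

daily_energy_gwh = average_power_gw * hours_per_day        # 600 GWh/day
overnight_energy_gwh = average_power_gw * overnight_hours  # 450 GWh

print(daily_energy_gwh)      # 600
print(overnight_energy_gwh)  # 450
```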
That paper provides information that is essential for understanding this paper. The estimates are ‘ball-park’ and intended to provide a ranking of the technologies rather than exact costs. The estimates should be considered as +/- 50%.

Nuclear Power

25 GW @ $4 billion/GW = $100 billion (The settled-down cost of nuclear may be 25% to 50% of this figure if we reach consensus that we need to cut emissions from electricity to near zero as quickly as practicable.)

8 GW pumped hydro storage @ $2.5 billion/GW = $20 billion

Total capital cost = $120 billion

Australia already has about 2 GW of pumped hydro storage, so we would need an additional 6 GW to meet this requirement. If sufficient pumped hydro storage sites are not available, we can use an additional 8 GW of nuclear or chemical storage (e.g. sodium-sulphur batteries). The additional 8 GW of nuclear would increase the cost by $12 billion to $132 billion (the cost of the extra 8 GW of nuclear less the cost of 8 GW of pumped hydro storage; i.e. $32 billion – $20 billion).

Solar Photo-Voltaic

Capital cost of a PV system with 30 days of pumped hydro storage = $2,800 billion. (In reality, we do not have sites available for even 1 day of pumped hydro storage.)

Capital cost of a PV system with 5 days of sodium-sulphur battery storage = $4,600 billion.

Solar Thermal

The system must be able to supply the power to meet demand at all times, even during long periods of overcast conditions. We must design for the worst conditions. We’ll consider two worst-case scenarios:

1. All power stations are under cloud at the same time for 3 days.
2. At all times between 9 am and 3 pm at least one power station, somewhere, has direct sunlight, but all other power stations are under cloud.

Assumptions: The average capacity factor for all the power stations when under cloud for 3 days is 1.56% (to be consistent with the PV analysis in “Solar Power Realities”; refer to Figure 7 and the table on page 10).
The capacity factor in midwinter, when not under cloud, is 15% (refer Figure 7 in “Solar Power Realities”). But the clouds move, so all the power stations need this generating capacity. To maximise the probability that at least one power station is in the sun, we need many power stations spread over a large geographic area. If we have, say, 20 power stations spread across south-east South Australia, Victoria, NSW and southern Queensland, we would need 3,300 GW – assuming only the power station in the sun is generating. If we want redundancy for the power station in the sun, we’d need to double the 3,300 GW to 6,600 GW. Of course the power stations under cloud will also contribute. Let’s say they are generating at a 1.56% capacity factor. Without going through the calculations we can see the capacity required will be between the 1,600 GW calculated for Scenario 1 and the 3,300 GW calculated here. However, it is a relatively small reduction (CF 3% / 60% = 5% reduction), so I have ignored it in this simple analysis.

So, Scenario 2 requires 450,000 MWh of storage and 3,300 GW of generating capacity. It also requires a very much greater transmission capacity, but we’ll ignore that for now. This would be the cost if the sun were always shining brightly on all the solar power stations. This is about five times the cost of nuclear. However, that is not all. This system may have an economic life expectancy of perhaps 30 years, so it will need to be replaced at least once during the life of a nuclear plant. The costs should therefore be doubled for a fair comparison with a nuclear plant.

In order to estimate the costs for Scenario 1 and Scenario 2 we need costs for power and for energy storage as separate items. The input data and the calculations are shown in the Appendix. The costs for the two scenarios (see Appendix for the calculations) are:

[Table: Summary of cost estimates for the options considered]

The conclusion stated in the “Solar Power Realities” paper is confirmed.
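The headline multiples follow directly from the ball-park figures quoted in this paper: nuclear at $4 billion/GW for 25 GW plus $2.5 billion/GW for 8 GW of pumped hydro, against the PV system totals given earlier. A rough cross-check (all dollar figures in billions, and, as stated, +/- 50%):

```python
# Cross-check of the ball-park capital costs quoted in the paper ($ billions).
nuclear = 25 * 4.0 + 8 * 2.5       # 25 GW nuclear + 8 GW pumped hydro storage
nuclear_all_nuc = (25 + 8) * 4.0   # variant: no hydro sites, extra 8 GW nuclear
pv_pumped_hydro = 2800.0           # PV + 30 days pumped hydro storage
pv_nas_battery = 4600.0            # PV + 5 days sodium-sulphur batteries

print(nuclear)                              # 120.0
print(nuclear_all_nuc)                      # 132.0
print(round(pv_pumped_hydro / nuclear, 1))  # 23.3 -- roughly 20x nuclear
print(round(pv_nas_battery / nuclear, 1))   # 38.3
```

The least-cost solar option comes out at roughly twenty-odd times the nuclear estimate, consistent with the stated conclusion.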
The capital cost of solar power would be 20 times more than that of nuclear power to provide the NEM demand. Solar PV is the least-cost of the solar options. The much greater investment worldwide in solar PV than in solar thermal corroborates this conclusion.

Some notes on cloud cover

A quick scan of the Bureau of Meteorology satellite images revealed the following: This link provides satellite views. A loop through the midday images for each day of June, July and August 2009 shows that much of south-east South Australia, Victoria, NSW and southern Queensland was cloud covered on June 1, 2, 21 and 25 to 28. July 3 to 6, 10, 11, 14, 16, and 22 to 31 also had widespread cloud cover (the 26th was the worst), as did August 4, 9, 10, 21 and 22. This was not a rigorous study.

Note that, although this table includes calculations for the cost of a system with 3 and 5 days of continuous operation at full power, the technology does not exist, and current evidence is that it is impracticable. The figure is used in this comparison, but is highly optimistic.

———————————————–

Eraring to Kemps Creek 500 kV transmission line

Each of the double-circuit 500 kV lines from Eraring to Kemps Creek can carry 3,250 MW. The 500 kV lines are double circuit, 3 phase, quad Orange, i.e. 2 circuits × 3 phases × 4 conductors per bundle, i.e. 24 wires per tower. Orange is ACSR (Aluminium Conductor Steel Reinforced), with 54 strands of 3.25 mm diameter aluminium surrounding 7 strands of 3.25 mm diameter steel. Roughly 1/3 of the cost of a line is in the wires, 1/3 in the steel towers and 1/3 in the easements required to run the line.

Capital Cost of Transmission for Renewable Energy

Following is a ‘ball-park’ calculation of the cost of a trunk transmission system to support wind and solar farms spread across the continent and generating all our electricity. The idea of distributed renewable energy generators is that at least one region will be able to meet the total average demand (25 GW) at any time.
Applying the principle that ‘the wind is always blowing somewhere’ and ‘the sun will always be shining somewhere in the daytime’, there will be times when all the power would be supplied by just one region – let’s call it the ‘Somewhere Region’. The scenario to be costed is as follows: Wind power stations are located predominantly along the southern strip of Australia from Perth to Melbourne. Solar thermal power stations, each with their own on-site energy storage, are distributed throughout our deserts, mostly in the east-west band across the middle of the continent. All power (25 GW) must be able to be provided by any region. We’ll base the costs on building a trunk transmission system from Perth to Sydney, with five north-south transmission lines linking from the solar thermal regions at around latitude 23 degrees. The Perth to Sydney trunk line is 4,000 km and the five north-south lines average 1,000 km each. Add 1,000 km to distribute to Adelaide, Melbourne and Brisbane. Total line length is 10,000 km. All lines must carry 25 GW. Each of the double-circuit 500 kV lines from Eraring Power Station to Kemps Creek can transmit 3,250 MW, so let’s say we would need 8 parallel lines for 25 GW plus one extra as an emergency spare.

322 Comments

I’m aware of two broad approaches to solar thermal. One involves the focusing of sunlight using mirrors or lenses. The other is the solar chimney, which relies on temperature differentials at the top and the bottom of a very large chimney and has little to do with direct sunlight (although obviously the sun drives the atmospherics). I don’t know the exact facts but I am led to believe that the latter is only modestly affected by cloud cover, and in fact it continues to produce substantial amounts of power at night even without any dedicated storage infrastructure, or using quite passive storage via water-filled containers.
Can you inform me as to which version of solar thermal you are referring to in this article? You are correct. There are actually about four main categories of solar thermal. They are described in the NEEDS analysis, which is referenced in the “Solar Power Realities – Addendum” paper. The NEEDS analysis looks at the various options and selected the solar trough as the reference technology for detailed costings. They explain the reasons for the selection. How well do the solar towers and other meteorological reactors compare with conventional factories for electrical energy production?

• By their description it is evident that Power Stations with Meteorological Reactors (Solar Chimneys and Energy Towers) will be very big electrical production units, which will produce a guaranteed electric power profile year round. Thus they are comparable to conventional Power Plants (that use coal, oil, gas or nuclear fuels) and so can replace them. But as they are located in desert or semi-desert areas, far away from consumption locations (big cities or industrial plants), they need very good interconnection of electricity grids, and this is already being done progressively for all the other renewable energies: wind, sun, OTEC… (Have a look, for instance, at the Desertec concept on http://www.desertec.org). Solar thermal power plants have been in use commercially at Kramer Junction in California since 1985. New solar thermal power plants with a total capacity of more than 2000 MW are at the planning stage, under construction, or already in operation.

• Other renewable power plants (wind, solar concentrator, solar PVs, et al.) only produce when weather and meteorological conditions are optimum (enough wind but not too strong; for PVs, sunny days with few clouds and no production during the night) and thus are only electrical energy production units of non-guaranteed power output, and cannot replace the conventional Power Plants. Solar chimneys can!
• Due to thermal storage, Solar Updraft Chimney Power Stations can operate 24 h per day, 365 days per year, with their daily energy production following the day’s average solar irradiation. The daily power production profile is very close to the usual demand profile, and an aperture (or closure) mechanism allows them to produce more (or less) at on-peak (or off-peak) consumption hours.

• Electric power cannot be stored up and saved. During the hours at night and on the weekends when demand for electric power decreases, regular fuel-consuming power companies actually lose money because they cannot just slow down or stop the generators during these times. It is not feasible because powering down the turbines and then getting them back up to speed during the peak hours, even if it could be done within eight hours, would be more costly than letting them run. On the contrary, heat can be stored up and saved in special water-containing reservoirs or tanks under the greenhouse of the solar chimney power plants, and electrical output can be adapted to peak power demand.

• The only other renewable power plant having a similar behaviour to a Meteorological Reactor Power Plant is the hydroelectric power plant. Their similarity is far deeper, as water can be stored upstream and used for on-peak demand. Water can also be stored in a second reservoir downstream, and pumped back upstream when electricity from nuclear plants is much cheaper (off-peak demand). Conversion yield is good.

• The optimum range of power rating for the Solar Chimney Power Stations, due to their large dimensions, is 50 MW (Ciudad Real project in Spain), 200 MW (Buronga, New South Wales project in Australia), and 400 MW (GreenTower South African project in the Namib desert, Namibia). This range of power (50 – 400 MW) seems to be also optimum for Floating Solar Chimneys and Energy Towers.
• For the appropriate places of installation, these Meteorological Reactor Power Stations can annually produce electrical energy of 150GWh to 600GWh respectively.

The material in your post #4 appears to be copied from a promotional brochure. I’d suggest you study the NEEDS report as a first step. Then you’ll be in a better position to consider all the options. Of course, you’d also need to get a good understanding of the nuclear option, because that is the least-cost option by a long way. An option with no new transmission might be thin-film PV with local storage, either a fridge-sized lead-acid battery at home or sodium-sulphur at substations. If dollar-a-watt predictions are true, an average house roof could generate in the expected daily range of 10–50 kWh for $50k, and 20 kWh of local storage might cost $5k. The household would have to carefully manage their winter needs, perhaps using fuel heating. Assuming we’re headed to 10 million households, that’s $550 billion, still more expensive than 25 GW of nuclear at $5 a watt. The underlying factor is not the need for storage so much as the need to greatly overbuild for winter generation. This system may have an economic life expectancy of perhaps 30 years, so it will need to be replaced at least once during the life of a nuclear plant. So the costs should be doubled to have a fair comparison with a nuclear plant. It is not linear. To make sense, you have to discount future costs/revenues – in particular here, revenues – to reflect interest. So years 30–60 of a nuclear reactor’s life are worth far less than years 0–30 – it is not double the economic value. See for instance table 6.D in the MIT ‘Update on the Cost of Nuclear Power’ working paper for a stark illustration of what this financial effect does. You would be absolutely correct if the comparison were being done on the basis of Levelised Cost of Electricity (LCOE). But the comparisons are simple, and are of just the capital costs.
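The discounting point can be illustrated numerically. The 8% real discount rate below is an assumed figure for illustration, not one taken from the MIT paper:

```python
r = 0.08          # assumed real discount rate (illustrative)
revenue = 1.0     # normalised annual revenue

def present_value(start_year, end_year):
    # discounted sum of a constant annual revenue stream
    return sum(revenue / (1 + r) ** t for t in range(start_year, end_year))

first_30 = present_value(0, 30)    # ~12.2 in normalised units
second_30 = present_value(30, 60)  # ~1.2: roughly a tenth of the first 30 years
print(first_30, second_30, second_30 / first_30)
```

At this rate the second 30 years of operation contribute only about 10% of the present value of the first 30, which is why doubling the solar capital cost overstates the effect of its shorter life.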
By the way, although the paper mentions the need to double the capital cost to take into account the shorter life of the solar power station, this extra cost is not included in the comparison. It would need to be included in an LCOE analysis, as you quite rightly point out. Credit Suisse published a pretty big study at the beginning of the year on the comparative costs of some of the likeliest alternatives. They mentioned the big factor for nuke was the level of regulatory compliance that would be imposed. We estimate the costs of nuclear power to be $61.87 per MWh. Capital costs per kW are difficult to come by, but recent data from the Keystone Center estimates a capital cost in the range of $2,950 to $4,000 per kW (2007), and FPL estimates a cost of $8,000 per kW for its Turkey Point project. Therefore, we assume $6,000/kW in our base case. We note, however, that if capital costs are on the low end of our estimates, the LCOE of power is only $35/MWh, which would be the lowest-cost energy available. Any new nuclear plant would likely be built far from the energy demand; therefore transmission infrastructure investment would likely be required. The significant benefit of nuclear power is that there are no carbon emissions and the power is highly reliable, suitable for baseload generation. The WACC of nuclear projects tends to be lower due to the high-debt capital structure and loan collateral – utilities would not proceed with a nuclear build-out without federal loan guarantees. Nuclear power often appears to be the easy solution to growing energy demands and climate concerns, but public opposition is a serious obstacle. As better options are developed for safe storage or reprocessing of used rods, we believe we will eventually start to see new nuclear power plants. jc, as I understand it, the FPL costs for Turkey Point are higher because they are escalated costs, rather than overnight.
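A toy levelised-cost calculation shows why the capital-cost range matters so much. The WACC, lifetime and capacity factor below are illustrative assumptions, not the Credit Suisse study’s inputs, and only the capital component of LCOE is computed (no O&M or fuel):

```python
def lcoe_capital(capex_per_kw, wacc=0.10, life=40, cf=0.90):
    # capital recovery factor annualises the overnight cost over the plant life
    crf = wacc * (1 + wacc) ** life / ((1 + wacc) ** life - 1)
    annual_cost = capex_per_kw * crf   # $/kW-year
    mwh_per_kw = 8.76 * cf             # MWh generated per kW per year
    return annual_cost / mwh_per_kw    # $/MWh, capital component only

for capex in (2950, 6000, 8000):
    print(capex, round(lcoe_capital(capex)))
```

With these assumptions the low-end Keystone figure lands near the study’s $35/MWh, while the FPL figure more than doubles it, so nearly all of the disagreement over nuclear LCOE traces back to the overnight capital estimate.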
It is quite right, according to my reading and contacts, that regulatory uncertainty is the big issue right now for nuclear builds. As one respected contact said: “As for price predictions, it’s not that hard to predict what they should cost, since ABWRs have already been built [in Asia]. That completely disregards how much Americans are being told they’ll cost, due to the lack of assurance that once they’re ordered they’ll be allowed to be built without repeated construction shutdowns, etc. Until utility companies feel confident that they’ll be able to build them and get them online expeditiously, it cannot reasonably be said that the competitive model is fully operational when it comes to nuclear power, anywhere in the States. I’d be glad to let the market decide if the court system didn’t let every zealot with a sign shut down a multi-billion dollar construction project. And believe me, if you build it, they will come. How to get beyond that? I’m not a lawyer, so I don’t know if it would be legal, but if there was a way to fashion legislation to allow construction to continue even through pending litigation as long as the builder has all the permits in a row, then I believe you’d see a lot of plants start going up.” Peter – thanks. I found the following in that document and it basically answers my question. “Due to the uncertain perspectives of this technology, the absence of a reference project, and therefore the lack of cost and material data, the solar updraft tower is not considered furthermore in this study.” In short, your discussion of solar thermal excludes consideration of the solar updraft tower. In which case I find any conclusion that “solar is very expensive” to be quite unsurprising. Of course, the solar updraft tower, if ever built on a commercial basis, may not change that conclusion, but I suspect it might (although I also have little doubt that coal and nuclear would still win on cost).
TerjeP, I should do it more justice than this (perhaps in the future), but briefly, the solar chimney (updraft tower) stuff is nonsense – a 200 MWe yield from a tower that is taller than the Burj Dubai? You’ve got to be kidding me. It’s so utterly fantastic, it’s not even worth crunching the numbers on. I have no doubt that such a power plant has a significant commercialisation risk and so the cost of capital for the first plant will be high. However, this really only applies to the first plant, and beyond that the construction costs and operational performance become the main factors. “I have no doubt that such a power plant has a significant commercialisation risk and so the cost of capital for the first plant will be high. However this really only applies to the first plant and beyond that the construction costs and operational performance become the main factors.” Yeah. Operational performance. That’s the whole point, isn’t it. 200MW from a project that size is crap. There’s no point even discussing it. JC @ 11 writes: Any new nuclear plant would likely be built far from the energy demand, therefore transmission infrastructure investment would likely be required. There is no reason other than politics to site nuclear plants far from where the power is used. You can build PRISM reactors in the middle of a city. The reactors themselves are subterranean, and the generation infrastructure could be likewise, or could reside in regular-looking industrial buildings. A power plant with several PRISMs and a recycling facility would appear no more conspicuous than a small to mid-size industrial park. Even Gen II plants were often sited close to the areas of demand. See Indian Point just upriver from New York City, Prairie Island just 40 miles from St. Paul/Minneapolis, and there are numerous other examples. This is a non-issue, all the more so with IFRs. It’s another tremendous advantage they have over wind and solar.
The power-in-your-basement option looks neat if you already have gas-fired central heating in your basement and it needs an upgrade. However, not many people in Australia have or need central heating. The overall efficiency of the system seems to rely on the fact that a major by-product of electricity generation is heat. Within its niche (which could be quite big in Europe and North America) it seems like a clever bit of kit. The location debate is important, because the cost is important. The lower we can make the cost of electricity, the faster low-emissions generation technologies will replace fossil fuels. Also, the lower the cost of electricity, the faster electricity will displace oil for land transport. Oil used in land transport represents about 1/3 of our emissions. Electricity may power land transport directly (e.g. batteries) or it may produce synthetic fuels (hydrogen or other possibilities). Either way, the lower the cost of electricity the better, for all reasons. So I do not want to see the nuclear power plants located far from the demand centres. I want them close. Combined heat and power (CHP) could be added to steam cycle, gas turbine and combined cycle as gas generation options. However, since Gorgon LNG sale contracts recently have been $30bn + $50bn + $70bn, Australia might be lucky to have any gas left. I guess it helps pay for imported gadgets. Australia needs a long-term policy on gas priorities: ammonia production, peak electrical generation, CNG as a petrol/diesel replacement, and domestic use including CHP should it become popular. LNG exports would be last priority. Given the green chic of Australia’s politicians, gas-fired generation will probably expand several times over before nuclear is considered. Based on the paper I cited above, my quick back-of-the-envelope calculation for solar updraft towers is as follows.
From figure 10 in the paper, the output of the 200MW solar updraft tower at 6:30pm in winter is about 50MW (the output is still pretty steady at that level through the night). As such we would need a lot of towers to meet a peak of 33GW. Number of Towers = 33000 / 50 = 165 The capital cost of each tower is estimated in Table 3 to be 0.606 billion euro per tower. So total capital cost would be: Total capital cost = 0.606 x 165 = 99.99 billion euro. Converting to Aussie dollars we have a figure of about A$170 billion. Given the size of each tower they would need to be situated in remote areas. So there would be additional costs associated with transmission. If we take Peter’s figure for solar thermal transmission then we need an extra A$180 billion. So the total capital cost of powering the NEM using only solar updraft towers is, by my calculation, around $350 billion. Which is about three times the price of nuclear as calculated by Peter, but still a heck of a lot cheaper than the other solar options. TerjeP, reading over that document, it’s certainly a fascinating technology and worth looking at a bit harder than I’d first thought. The output of the Spanish prototype was tiny (50 kW peak), so it’s difficult to know how realistic their non-linear scaling estimates for taller towers are. The 50 kW tower yielded 44 MWh over the course of a year, which gives a capacity factor of 10%, which isn’t all that great — that means you’d need ~50 x 1 km tall (7km diameter at base) 200 MW peak towers to equate to a 1 GW nuclear power station. Their simulations (Fig 10) with water-based thermal storage look much better than this figure, so it’s a matter of how much credence you put in the technical data of the demonstration plant vs simulations of the potential operational performance of larger plants. As to cost, my points above are relevant (it depends on ultimate real-world performance), but it’s also difficult to cost out anything like this when structures of this size have never been built.
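Both back-of-envelope figures in this exchange can be checked in a few lines. Note that 33,000/50 is 660 towers, not 165 — the slip a later comment acknowledges, which lifts the total to roughly $860 billion. The euro-to-AUD rate used is the one implied by the comment’s own conversion:

```python
# Tower count and capital cost, using the paper's figures as quoted in the thread
peak_demand_mw = 33_000
winter_output_mw = 50            # per 200 MW tower at 6:30pm in winter (Fig 10)
towers = peak_demand_mw / winter_output_mw     # 660, not 165
capex_eur_bn = towers * 0.606                  # Table 3: 0.606 bn euro per tower
eur_to_aud = 1.7                 # rate implied by "100 bn euro -> about A$170 bn"
total_aud_bn = capex_eur_bn * eur_to_aud + 180 # + transmission (Peter's figure)

# Capacity factor of the 50 kW Spanish prototype
cf = 44 / (0.05 * 8760)          # 44 MWh/yr from 50 kW peak, about 10%
towers_per_gw = 1000 / (200 * cf)              # about 50 towers per average GW

print(round(towers), round(total_aud_bn), round(cf, 2), round(towers_per_gw))
```

With the corrected tower count the total comes to about A$860 billion, matching the revised figure quoted in the follow-up comment, and the prototype’s ~10% capacity factor reproduces the ~50-towers-per-GW estimate.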
So I’ll reserve judgement, but will follow any developments of this alternative solar tech with interest. Luke – I blame the envelope. Thanks for fixing my maths. Still, at $860 billion it is a lot cheaper than the other solar options. Barry – I pretty much agree with everything in your latest comment. It is a technology that is worth watching, but it entails a lot of unknowns. In particular it depends on their simulations being correct. I would have thought, though, that the basic physics isn’t that complex, and there is a lot of experience in the scaling of aircraft aerodynamics and the like. Still, there is nothing quite like real-world data. Peter, your figure of about $5 billion for Tantangara/Blowering pumped storage of 9,000MW is slightly higher than what I had been estimating, but I was considering mainly much shorter pipelines (for example, Blowering/Talbingo increased Tumut 3 capacity to 6,000MW). It would seem that expanding the Snowy pumped hydro to 15GW capacity and TAS hydro to 4.4GW (by adding 2GW of return reversible turbines), for a total of 20.15GW including the other 0.75GW already in use, is a realistic storage option for nuclear and renewable energy. Your study of transmission costs is disappointing. The theory behind the ‘wind blowing somewhere’ idea is NOT to have the entire wind capacity moved from one side of the continent to the other. For example, WA would have 20% of the wind capacity (SA, TAS, VIC and NSW about the same, with a small amount in QLD), so on the observation that wind dispersed over an area the size of a state will at most generate 75% of capacity, WA would only ever produce 15% of capacity (9GW not 25GW), and some of this would be used locally (3GW), so at most 6GW would be exported east (even less with CAES), but not to Sydney: to Pt Augusta, with perhaps another 1-2GW moved to Adelaide. Sydney and Melbourne would get most power from pumped storage (moving much shorter distances).
When high winds exist in NSW and VIC, energy would be returned to Snowy, with 2-3GW to WA (if there is no wind in WA, which is most unlikely considering the 2,000km of good wind coastline). Your statement that 10,000km would have to carry 25GW totally misunderstands how grids work. Feeder lines will only have the capacity of the solar and wind farms, and none of these would be anything like 25GW. The major transmission links would be Snowy to Sydney, Snowy to Melbourne, Melbourne to Tasmania and Pt Augusta to Perth. We already have a large grid in SE Australia, but it would have to be increased. OCGT/CCGT and nuclear will probably be sited at existing coal-fired power stations using existing transmission lines. “The 50 kW tower yielded 44 MWh over the course of a year, which gives a capacity factor of 10%, which isn’t all that great — that means you’d need ~50 x 1 km tall (7km diameter at base) 200 MW peak towers to equate to a 1 GW nuclear power station.” The two things that always distract people with this technology are the size of the thing and the low solar efficiency. However, neither matters that much. What matters in the final analysis is cost and the output profile. The only reason that solar efficiency is so important in PV is that associated casing and mounting costs are such a big proportion of the final cost. A smaller cell for the same power output has less add-on costs. But of course PV has a lousy output profile. Moonlight just ain’t that bright. Using fuel cells instead of an engine nearly doubles the fuel-to-electricity efficiency, and more than doubles the ratio of electricity output to heat. The heat output from the fuel cell system is a reasonable match to domestic hot water (not heating) needs, so it makes sense in most places, not just Northern Europe in winter. TerjeP, I’m not talking about efficiency, I’m talking about capacity factor relative to peak performance. This is useful for working out redundancy and the number required to build for a given average delivery.
As Peter Lang has so clearly pointed out, minimal capacity is also useful to know. Which, while factually correct, is irrelevant to your main point and seemed to stand out as a veiled criticism. Perhaps I was still feeling a bit prickly due to some earlier comments made here. I did understand your main point and I do agree. Capacity factor is in fact the thing that makes me think the solar updraft tower would probably be superior to the alternative solar options. TerjeP, my broader point was that 50 of these structures, equating to 1 gigawatt average capacity, would have a footprint on a landscape of ~2,000 km². In addition, 50 x 1 km high spires would also pose a potential aviation hazard. The point is not that these shouldn’t or can’t be built, but it does illustrate the size of the engineering challenge (even if it is, fundamentally, just glass and steel). The land cost issue does not seem to be overly significant. And the glass canopy would be several metres off the ground, so you could grow food on the land also. Obviously it is going to be a windy place to farm, but low-profile plants are not going to care and the wind speed would be tolerable. Essentially it is a big warm, wet and windy glasshouse that you can drive around in on a tractor. I can’t see the towers being a problem for aviation. Their location will be well mapped. And they can be lit at night. And they would be fat things that are hard not to see. I doubt the aviation issue is a challenge. Whether people like the look of them is an aesthetic issue that is hard to answer. However, nuclear has aesthetic issues also, relating to how people feel about nuclear. Personally I like big man-made structures. I’ve always quite liked the look of high-voltage transmission lines. I suspect that people would like them as much or as little as they like wind farms. However, solar updraft towers wouldn’t hog prime coastal locations in the way wind farms do.
I’d say let’s build one just to satisfy my aesthetic tastes and then go nuclear for the rest of our electricity needs. I don’t really have a problem with the land or air footprint of solar updraft towers when these exist on low-value land. I don’t imagine too many aircraft will be flying low over the desert, and if they are, an installation that size will stick out like the proverbial [fill in your metaphor]. Build them 2km high for all I care, assuming it is cost-effective and technically feasible to do so. The real problem is the cost, both of construction and of connection to the grid. If current nuclear is about $3,000 per installed kW, then $300bn worth of non-nuclear needs to get you about 100GW of output of similar quality to the nuclear to break even. OK, you can throw in some allowance for higher running costs (labour, site management, uranium/thorium, public liability), but even so, if it only gets you 5% of that it’s not really in the game. Re nuclear aesthetics, I like the low-angle aerial shots of the peloton in the Tour de France passing by a reactor. The overall impression is of health and harmony. On the other hand, coal stations have tar, heavy metals, uncontained radioactivity, smoke and smells. They are the Dark Satanic Mills of the modern era. Suppose, for example, you have a small group of agricultural villages not connected to a grid, but which could benefit from solar panels, an anaerobic digester, and perhaps a small-scale 200kW wind turbine with a you-beaut DIY pumped hydro, for not very much, built in not very long. Maybe the whole thing could cost $200k or less. A nuclear plant isn’t going to scale down to that setting very well, and it’s not as if you could build one in three months either, leaving aside connecting it to a reliable grid most of the time. @Terje that was a better image than I could find. I gather there are several nuclear power stations in France’s Loire Valley, which prides itself on fine food and wine.
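The break-even test mentioned a few sentences up reduces to one line of arithmetic:

```python
nuclear_capex_per_kw = 3_000   # $/kW, the figure quoted in the comment
budget = 300e9                 # $300bn of non-nuclear spend

breakeven_gw = budget / nuclear_capex_per_kw / 1e6  # kW -> GW
print(breakeven_gw)  # 100 GW of nuclear-equivalent output for the same spend
```

Any alternative that delivers well under 100GW of comparable-quality output for the same $300bn fails the test, which is the comment’s point about a technology that only "gets you 5% of that".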
I note the use of cooling towers despite abundant river water for direct heat exchange. An excellent analysis of why solar can’t possibly power civilization. If only we had the water and geologic formations to make pumped-storage hydro dams and wash the solar panels every 10-20 days. It’s really wind that has proven to be more efficient and cost-effective, but even wind isn’t where it needs to be. If you’re looking for a compact, timely read that completely summarizes and explains the energy issues the world faces, you may be interested in my new book “the nuclear economy,” which just became available. All of the alternative energies are discussed, as well as peak oil, climate change, energy transitions, and 4th generation nuclear power. Look up the “Potential for Building Integrated Photovoltaics” report. The IEA estimated that half of Australia’s electricity needs could be provided by 10%-efficient building-mounted PV; i.e. you could provide a significant fraction of Australia’s electricity with zero land-use impact. If PV doesn’t come down to a competitive price, the 50% penetration argument is moot, let alone the limit position argument. Starting with coal, Terje’s pic shows all that is wrong with coal: huge emissions – specifically, in the pic, heat – being allowed without consequence. We all know about the toxic emissions and the ash. Why are these incredibly indolent corporations allowed to waste so much heat, and why is it easier to pass the costs on to the customer than to use CHP and/or Rankine-cycle energy recovery? How is it that these corporations can threaten to close down or go offshore rather than spend money on plant which will save them money and reduce emissions? PV costs are usually taken over 15 years, which is nonsense because the cells alone are guaranteed for 25 years. The ongoing costs for solar are minimal, whereas nuclear requires all sorts of ongoing costs for mining, enrichment, reprocessing, waste storage, decommissioning and insurance.
Nowhere have I seen a reliable assessment. How can you plan without one? I am definitely in favour of solar, and definitely in favour of nuclear over coal, but most of the cost analyses I have seen so far on nuclear are people pushing their barrow, with rubbery figures being bent to the max. What you need to be doing is pinning the government down to an energy plan. Obviously they haven’t got one, and they need to be seriously embarrassed by this: Australia’s energy security, etc. If you can force them into one, then you can make submissions, influence policy, etc. My opinion is that both of the major parties are drunk on coal and fully intend to obfuscate its problems with crap like CCS, huge handouts and weak targets. They rightly reason that solar power and storage technologies will evolve enormously over the next 20 years. If they can suck the public into going with coal for a bit longer while building a lot of renewables to deal with the extra load of the electrification of transport, some other mug government can deal with nuclear power. They don’t care about nuclear. There’s more money in coal. Rather than a campaign for nuclear, we need a campaign against coal. Instead of always defending nuclear against ignorance, we should be attacking coal for greed, indolence, energy wastage, environmental vandalism, acid rain, mercury in our food, government handouts without accountability, fugitive methane emissions and medical problems. Expose the true cost of doing business with coal and get them to pay for it. Thank you for a fascinating and sobering series of articles. You, Peter and Ted have persuaded me that renewables can’t supply the current (never mind BAU-projected increases in) energy requirements of the developed world on their own without vast and unrealistic expenditure of money, time and effort. The numbers seem pretty clear.
I’m sure that when recognition of the CO2 and energy supply problems reaches a critical mass, and the political will and money start to flow on the required scale, economic forces will do the rest and the nuclear option will indeed be widely deployed. Our current society functions on the basis of large amounts of instantly available energy, and without a major and disruptive reshaping of the way we live – which, incidentally, is what most greens seem to want, and may go some way to explain their attitude to nuclear power – sources of power with high energy densities are going to be necessary. But I’m a little uncomfortable with the impression I often get from reading this site: that nuclear power is the only viable FF alternative, and that it should be pursued vigorously and as soon as possible, to the exclusion of all other options (and wind/solar in particular). Many articles and discussions seem to circle around this idea. As a layman, it’s difficult to know what to make of it – that viewpoint may well be true, but for me there are too many unknown unknowns. How about a broadening of the discussion to consider other pertinent issues? Otherwise, this blog risks becoming a nuclear advocacy site with an occasional bit of climate science commentary thrown in. These are the sorts of questions I have in mind (apologies if they’ve been discussed previously on the site, but not much is showing up with a basic search): What about the other potentially non- (or low-) CO2-emitting high-energy-density option on the table, with a few hundred years left in it – coal with CCS? What role can gas play in reducing CO2 emissions, at least in the short term while we transition to nukes? What about Ted Trainer’s idea of ‘depowering society’ to the extent that renewables can meet energy demand? (I can see many problems with this, but would love to see a critique on the site.
More generally, articles exploring the demand side of the problem seem to be thin on the ground.) Accepting that renewables can’t supply the developed world’s energy needs in their entirety, do they have a role at all? (in smaller isolated communities, in the developing world, etc.) How do smart grids work – how much can be done with transmission systems/distributed storage/demand management, etc., to increase the number of viable options on the table? Campaigning against coal is basically campaigning against ourselves every time we turn on a light or use an appliance. There really isn’t much point in agitating against coal. We need to stop pointing fingers at people suggesting they’re somehow evil, and fast-track a move to nuke energy. It would give us the immense energy supply we’ll need going forward in a clean, cheap, reliable way. Renewables could be part of the suite, as that should ultimately depend on the market. However, one thing is certain going forward: we need immense supplies of energy, and nuke power is able to fulfill our needs. Jc, I don’t agree with your reasoning. We are all pretty much stuck with our shonky supermarket duopoly, but campaigning against their poor pricing behaviour helps to keep them less shonky and inspires people to look for alternatives. On that subject, why is it easier for them to pass on to consumers the extra costs of their refrigeration than to put doors on, like they do with the freezers? The average coal power station is only 35% efficient; Combined Heat and Power is up to 90%. CHP would more than halve emissions or more than double coal’s power output, but nothing’s going to make them use it. As I said, the government is comfortable with coal, and the general population is more comfortable with coal than with nuclear, but they don’t know how bloody evil coal is! My position is that first and foremost we need to power down and depopulate.
Without this aim, vast amounts of cheap power will only enable us to go further into overshoot, robbing from future generations and ensuring a catastrophic cull of species in the natural world first, then humans. That’s my kids and grandkids we’re handing a miserable existence to. I am one of the most extreme and radical advocates of the natural environment you’re ever likely to meet. I advocate the return of most of the land and sea currently devoted to agriculture and aquaculture/fishing to managed wilderness. I advocate a sharp sequestration of the majority of the natural ecosystem of this planet from casual human influence. I advocate devoting a considerable portion of economic output to the task of ensuring a flourishing biosphere under the management of humanity. Recognising that these noble goals can only be met by a civilisation with a vastly expanded resource base, I advocate a crash program of research into, and implementation of, nuclear power technology, genetically modified foodstuffs, artificial food, completely enclosed self-sustaining artificial environments, large-scale geoengineering, and space colonisation. Population reduction and powerdown, even if such counter-instinctual goals could be achieved (at whatever cost of despair), would leave us helpless to prevent the drift of the climate system and biosphere into whichever state it will evolve given the damage already done. Turning our backs on the situation and committing racial suicide will not help. finrod #55, that was a pretty stupid comment. Try again in the morning when you’re sober. You got the number wrong for a start; then “sooner stick with burning coal” than what?; and “evil bastard”, besides being completely untrue, what does that sort of offensiveness achieve except to diminish yourself?
SG, I'm sure it must come as an awful shock to you that after you've posted the carefully crafted thoughts you've been inspired to by pseudo-environmentalist literature, every word of it ringing with the guilt-laden mindset of our less secular ancestors, anyone would have the temerity to challenge your conclusion that we wicked humans had better depart the stage of natural history or else… or at least draw ourselves closer to the passive environmental role of other animals. This is what your path amounts to, and it will indeed lead to racial suicide if followed. Suicide, and ecocide by neglect, as we will have cast away any ability to actively influence the course of climatic events. Matt #53: "But I'm a little uncomfortable with the impression I often get from reading this site – that nuclear power is the only viable FF alternative and that it should be pursued vigorously and as soon as possible, to the exclusion of all other options (and wind/solar in particular)." It is my conclusion, from all of this, that nuclear power IS the only viable FF alternative. I am vitally interested in supporting real solutions that permit a rapid transition away from fossil fuels, especially coal (oil will, at least in part, take care of itself). If the conclusion is that wind/solar cannot meaningfully facilitate this transition, why bother to promote them? Now, I should make one thing quite clear. I am not AGAINST renewable energy. If folks want to build them, go for it! If they can find investors, great! Indeed, I'm no NIMBY, and would be happy to have a conga line of huge turbines gracing the hills behind my home, just as I'd be happy to have a brand spanking new nuclear power station in my suburb. But why should I promote something I have come to consider, on a scientific and economic basis, to be a non-solution to the energy and climate crisis? That doesn't make sense to me. To your questions: 1. Coal with CCS – doomed to failure. Why?
Because the only thing that is going to be embraced with sufficient vigour, on a global scale, is an energy technology that has the favourable characteristics of coal, but is cheaper than coal. CCS, by virtue of the fact that it is coal + extra costs (capture, compression, sequestration), axiomatically fails this litmus test. It is therefore of no interest, and those who promote it can only do so on the basis of simultaneously promoting such a large carbon price that (a) the developing world is highly unlikely to ever impose it, and (b) if they do, CCS won't be competitive with nuclear. CCS is a non-solution to the climate and energy crises. 2. Natural gas has no role in baseload generation. It is a high-carbon fossil fuel that releases 500 to 700 kg of CO2 per MWh. If it is used in peaking power only (say at 10% capacity factor), then it is only a tiny piece in the puzzle, because we must displace the coal. If it is used to displace the coal baseload, then it is a counterproductive 'solution' because it is still high carbon (despite what the Romms of this world will have you believe) and is in shorter supply than coal anyway. Gas is a non-solution to the climate and energy crises. 3. The developing world lives in Trainer's power-down society already, and they are going to do everything possible to get the hell out of it. The developed world will fight tooth and nail, and will burn the planet to a soot-laden crisp, rather than embrace Trainer's simpler way. Power down is a non-solution to the climate and energy crises. 4. It is nice to imagine that renewables will have a niche role in the future. But actually, will they? They don't have any meaningful role now, when pitted in competition with fossil fuels, so why will that be different when pitted fairly against a nuclear-powered world?
I don't know the answer, and I frankly don't care, because even if renewable energy can manage to maintain various niche energy supply roles in the future, it won't meet most of the current or future power demand. So, niche applications or not, renewables are peripheral to the big picture because they are a non-solution to the climate and energy crises. 5. Smart grids will provide better energy supply and demand management. Fine, great, that will help irrespective of what source the energy comes from (nuclear, gas, coal, renewables, whatever). Smarter grids are inevitable and welcome. But they are not some white knight that will miraculously allow renewable energy to achieve any significant penetration into meeting world energy demand in the future. Smart grids are sensible, but they are not a solution to the climate and energy crises. To some, the above may sound rather dogmatic. To me, it's the emergent property of trying my damnedest to be ruthlessly pragmatic about the energy problem. I have no barrow to push, I don't get anything out of it, other than that I want this problem fixed. I don't earn a red cent if nuclear turns out to be the primary solution. I don't win by renewables failing. The bottom line is this: if this website is looking more and more like a nuclear advocacy site, then you ought to consider why. It might just be because I've come to the conclusion that nuclear power is the only realistic solution to this problem, and that's why I'm ever more stridently advocating it. This is a 'game' we cannot afford to lose, and the longer we dither about with ultimately worthless solutions, the closer we come to endgame, with no pawn left to move to the back row and queen. Jc, I don't agree with your reasoning. We are all pretty much stuck with our shonky supermarket duopoly, but campaigning against their poor pricing behaviour helps to keep them less shonky and inspires people to look for alternatives.
Salient: Not to digress, but they aren't making super-profits, as you assert. Coles sold itself because it wasn't profitable, while Woolies is, though not spectacularly so. The competition watchdog looked into pricing, competition etc. and found nothing alarming in the last inquiry. Negative aspects, according to the inquiry, are more about "nimbyism" and town planning laws stifling competition. In other words, things aren't always as they appear. On that subject, why is it easier for them to pass on to consumers the extra costs of their refrigeration than it is to put doors on like they do with the freezers? Dunno. Perhaps it's to do with attempting to provide a good customer experience as they see it. Windows and doors etc. are really quite visually obstructive, I think. The average coal power station is only 35% efficient, Combined Heat and Power is up to 90%. CHP would more than halve emissions or more than double coal's power output but nothing's going to make them use it. I'm not sure that would be as it appears. If you're telling me that they could improve their efficiency with a straight-to-the-bottom-line positive hit of 35% and haven't moved on it, then they are really dumb. I don't believe Origin, AGL or other operators are dumb, so there must be more to it. Don't forget that you may get 35% more efficiency, but you also need to figure out whether the renovation strategy is cost effective and accretive to the bottom line. In other words, you don't want to be spending (magnified example) $1 billion for a $3.5 million gain, as the return wouldn't make it economic. You need to figure the cost of capital and the expected return. Potential "engineering efficiency" doesn't always mean it would be profitable. In other words, don't confuse "engineering efficiency" with "economic efficiency"; they are two different things, or rather may not arrive at the same conclusion.
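Jc's distinction between engineering and economic efficiency can be made concrete with a quick net-present-value check. This is a minimal sketch using the magnified figures from the comment ($1 billion capex, $3.5 million/year gain) and an assumed 10% cost of capital, not real project data:

```python
# NPV test of an efficiency retrofit: the upgrade is only economic if the
# discounted stream of savings exceeds the upfront capital cost.
# All figures are hypothetical, taken from the magnified example above.

def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at t=0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

capex = 1_000e6          # $1 billion upfront (magnified example)
annual_saving = 3.5e6    # $3.5 million/year bottom-line gain
rate = 0.10              # assumed 10% cost of capital
years = 30               # assumed plant life

flows = [-capex] + [annual_saving] * years
result = npv(rate, flows)
print(f"NPV: ${result/1e6:,.0f} million")
```

Even over 30 years the discounted savings come nowhere near the capex, so the "engineering efficiency" gain is an economic loser, which is exactly Jc's point.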
As I said, the government is comfortable with coal, and the general population is more comfortable with coal than with nuclear but they don’t know how bloody evil coal is! Polls don’t show that. Polls show people’s heightened concerns with AGW. You shouldn’t think of coal as “evil”. It’s given us a great of economic utility and provided us with an industrial civilization. What we realize now is that it comes with a cost and the cost is that it’s increasingly likely to be screwing up the atmosphere especially with giga countries moving towards joining the rich world. This means we need to get loads of energy from elsewhere and nuke power is increasingly likely the best alternative. Perhaps it isn’t, however it should be in the suite of alternatives so the markets can determine the optimum choice or choices. My position is that first and foremost we need to power down and depopulate. That will come, possibly mid century. China’s population for instance is a demographic time bomb or rather a good thing in your eyes. Chinese demographics show that by mid century China’s population will fall off a cliff- literally fall of a cliff and become a nation of old geezers- and by 2100 it could be half what it is now. We also find the rich world’s population will be heading in the same direction. Without this aim, vast amounts of cheap power will only enable us to go further into overshoot, robbing from future generations and ensuring a catastrophic cull of species in the natural world first, then humans. That’s my kids and grandkids we’re handing a miserable existence to. Why take such a stasist view of things though? The technology curve is actually curling itself up exponentially. The world will be an entirely different place in 50 years time. In 100 years, technology could make it unrecognizable and if the tech curve continues, which it seems to be, the world in 2100 will look like 1800 to us. There’s no reason to be so pessimistic. 
Have you seen recent films of car shows around the world? Large numbers of electric cars or hybrids are making their way into the market very soon. GM recently introduced a demo hybrid that can do 230 miles a gallon. Take stock of things, as there's no reason to be so pessimistic. We'll get there in the end. Humans tend to bumble around, but we generally end up making OK-ish decisions most times. One thing worth noting in digging through Peter's numbers is that even if we invented a technology that could store significant amounts of electrical energy at zero cost, it wouldn't on the face of it change the conclusion. Barry #61: Great summary. I haven't been contributing much as this blog has become deeper and more of an engineering rather than a science blog (not that there is a real distinction between the two), but what is increasingly obvious is the HUGE gap between the level of detail on this blog and the level of detail in mainstream media. Politicians, media and green groups are still stuck trading cliches. Hopefully, there are channels of communication that will enable the detail of this blog to get through to the people who advise politicians … which hopefully includes you. Politicians need to actually lead and not make poll-driven policy, because, particularly in Australia, poll-driven policy on energy sources will be simply wrong. The business lobby is going in to bat for nuclear power, e.g. http://www.news.com.au/adelaidenow/story/0,27574,26060433-2682,00.html and their logic seems sound. However, they muddy the waters by seeing nuclear as an agent of economic growth associated with increased population and consumption. Few high-profile groups seem to be saying 'let's have nuclear power and a steady state economy'. I think the reality in the next few years is that it will be difficult to hold the line on the economy, let alone grow it. The temptation will be to make do with existing coal plant and sneak in a few more mid-sized gas plants.
A gaggle of wind and solar installations will be put up basically for show. Many in the public will content themselves with thinking we can adapt to AGW or that renewables, carbon sinks or conservation will get us out of trouble. Until they lose their job, that is. Some kind of widely perceived crisis will be needed to instigate the first nuclear plant. That would depend very much on where the power was. If the power in question was near a grid point, and the costs of the harvesting technology were low (wind is fairly cheap), then it would make wind or similar very competitive. I think it's helpful to be explicit about where you, and your blog, are coming from. Unfortunately, for those of us yet to fully work through the issues ourselves, an advocacy blog is less useful than a science commentary or 'open discussion' blog. But the numbers are what's important, and I wouldn't be surprised if I end up agreeing with most of your conclusions (though I would take issue with some of your assumptions about the developing world). Jc #62, I haven't time now to check this, but I'm pretty sure that CapEx for efficiency improvements under current corporate culture must show a payback in under 10 years. A very short-term view IMHO. This would have to change now if the govt's committed to coal for another 30 yrs or more. I agree with you on the historical benefits of coal, and I'm sure the world could live with a few highly efficient CHP stations, but I have no trouble demonising coal as it is currently being used. You said "That would depend very much on where the power was. If the power in question was near a grid point, and the costs of the harvesting technology were low (wind is fairly cheap) then it would make wind or similar very competitive." This statement is totally wrong. Wind is nowhere near competitive even if transmission were free. Wind provides low-value electricity at very high cost. It is low value because it is highly variable and not controllable.
Consider this question: what price do you think a utility would be prepared to pay for wind power if it had the option to buy coal-fired power for $35/MWh instead? Would it be prepared to pay $10/MWh for wind power? The answer to the question 'what would a buyer be prepared to pay for wind power in an open market' depends on many factors. One important one is the cost of the system enhancements needed to manage the intermittency of wind power on the network. This is a substantial cost. You said "wind is fairly cheap". Wind power is not cheap. It has to be mandated to force the distributors to buy it. If they do not buy enough, they pay a fine which is more than the cost of the power they were required by regulation to buy. Wind power is subsidised by more than twice its cost. Given that wind power saves very little GHG emissions (refer to the "Wind emissions and costs" thread), I suspect wind power is actually very near to zero value. It may be negative if all the externalities were properly internalised. SG @ 55: My position is that first and foremost we need to power down and depopulate. Without this aim, vast amounts of cheap power will only enable us to go further into overshoot, robbing from future generations and ensuring a catastrophic cull of species in the natural world first, then humans. That's my kids and grandkids we're handing a miserable existence to. And who, pray tell, is supposed to quit having kids to achieve the depopulation you promote? Don't you see the blind irony in talking about your kids and grandkids in the same paragraph? Here is a CSIRO study, GREENHOUSE GAS SEQUESTRATION BY ALGAE – ENERGY AND GREENHOUSE GAS LIFE CYCLE STUDIES http://www.csiro.au/files/files/poit.pdf suggesting that production of biodiesel (and by inference, biomethane) might be competitive with the fossil equivalents. Elsewhere I've seen suggestions that raising energy crops to make biomethane ought to be cost competitive.
The problem is, as I see it, providing enough fresh water. The reverse osmosis needed to produce fresh water from sea water, and also the pumping, needs only interruptible power; wind might do. Certainly worthy of further consideration. Peter – Fran was not discussing zero-cost transmission. She was responding to my comment regarding the impact of zero-cost electricity storage. As such I would not dismiss her comment too quickly. Obviously we will never have zero-cost electricity storage. However, emerging technologies such as those that Eestor is rumoured to be working on are worth watching, although probably more as a mobile energy store than as a stationary one. David – growing fuel diverts productive land away from growing food. A bad idea in my book. It's true that the solar updraft tower I promoted earlier also takes a lot of land, but it does not stop the land also being used for agriculture, and neither does it have to sit on productive land. John – I can see arguments for zero or negative growth in our ecological footprint. However, why you would set lower economic growth as an objective is beyond me. We should aim to both reduce our ecological footprint and increase economic growth. Jc #62: I haven't time now to check this but I'm pretty sure that CapEx for efficiency improvements under current corporate culture must show a payback in under 10 years. Which is a 10% simple return. It would be interesting to see if this is a gross return before taking away expenses etc. Look, Salient, I'm very sceptical of stories like this that simply sound too good to be true, as they usually are. Put ourselves in a rational position: if you were the CEO of AGL or Origin and someone came to you and said they had an engineering method that could add 30% to the bottom line, why wouldn't it be introduced? A 30% accretion to the bottom line would be manna from heaven.
Tom Blees #70: C'mon Tom, get with the thread; I've already said back at #59 that without the baby bonus and immigration, we would be depopulating. No one HAS to "quit having kids". People CHOOSE not to, and we need to empower women in the developing world and bring them out of poverty so that they can CHOOSE to quit having kids also. As for your show of ignorance of my personal situation, it doesn't do you any credit. I have one biological child, and the other three come from my current wife's previous marriage, something I had no say in, but am happy to call them my kids. No blind irony there. Terje, I think to get 'growth' with reduced emissions we need a less materialistic measure of wellbeing than GDP. Essentially more stuff means more energy input. I don't have the data handy, but China's boom circa 2002-2007 was accompanied by world-record coal use. Could they have done it without coal? In the near term we need to quickly replace coal and petroleum dependence with low-carbon alternatives. This is necessary even without climate change, since oil output peaked in 2008 and coal will peak around 2030. Transport needs to be electrified, such as light rail and plug-in cars. All but two State capital cities will have desalination plants with a substantial power requirement. The ageing population will need extra thermal comfort to cope with severe cold snaps and heatwaves; see my link on another thread to ETSA's prognosis. There will be regional food crises due to water problems and input costs. Thus we will need more energy to provide the goods and services we already take for granted. To do this I believe that personal mobility, electricity on demand and even our exotic diets will be compromised. In short, for most people things will get worse, not better. Jc #76: the problem, I think, is in the failure to account for future price rises due to a diminishing resource, something which they have contributed to greatly in wasting a lot of energy.
I am not an economist, but there is probably a name for this. It's the same wherever there is energy wasted that could be harvested. That wasted energy is contributing to an ever-rising price on a finite resource. I am sure it could never be an exact science, but corporations need to start thinking further ahead, prompted by suitable government incentives, and factor future resource prices into the payback. There are millions of results for 'CHP generating efficiency'; it's not rocket science, so it's just flawed accounting that has hindered uptake. I don't want to labour the point. Look, if there were efficiency gains of 30%, with straight bottom-line gains of the same or even less, any CEO would dive for it faster than the speed of light. Bottom-line earnings changes would immediately work through to the stock price, and if these guys have stock options it would motivate them from a personal perspective. (And everyone has an IQ of 180 when it comes to money.) :-) Here: Company X has normalized ongoing earnings of $500 million, trades on a price-earnings multiple (PE) of 14 (average for the ASX 200 at present), giving it a market capitalization of $7 billion, and its stock price is $5.00. A direct 30% bottom-line improvement would have the following consequences (you could assume the PE stays the same, as the 30% efficiency gains are recurring): earnings rise to $650 million; at a PE of 14 the market cap rises to $9.1 billion; and the stock price would rise to $6.50. This isn't something even the stupidest CEO in the world would pass up. John: Terje I think to get 'growth' with reduced emissions we need a less materialistic measure of wellbeing than GDP. Why, John, when nuke power suggests we can have our yellowcake :-) and eat it too. Essentially more stuff means more energy input. I don't have the data handy but China's boom circa 2002-2007 was accompanied by world record coal use. Could they have done it without coal? They could have, but possibly not as cheaply.
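Jc's Company X example above is easy to reproduce as a back-of-envelope calculation. A minimal sketch; all figures are the hypothetical ones from the comment, and the key assumption is that the PE multiple and share count stay constant:

```python
# Effect of a recurring 30% earnings improvement on market cap and
# share price, assuming a constant price-earnings multiple and share count.
# All figures are Jc's hypothetical Company X numbers, not real data.

pe = 14                  # price-earnings multiple (assumed constant)
earnings = 500e6         # normalized ongoing earnings, $
share_price = 5.00       # current share price, $

market_cap = pe * earnings                # $7.0 billion
new_earnings = earnings * 1.30            # $650 million
new_market_cap = pe * new_earnings        # $9.1 billion
new_share_price = share_price * 1.30      # share count unchanged, so +30%

print(f"Market cap: ${market_cap/1e9:.1f}b -> ${new_market_cap/1e9:.1f}b")
print(f"Share price: ${share_price:.2f} -> ${new_share_price:.2f}")
```

With the multiple held fixed, the stock price simply scales with earnings, which is why a recurring bottom-line gain flows straight through to the share price in this model.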
The price of coal has moved from about US$10-15 a tonne in the early part of the decade to about US$80-100 now. At one stage coal presented them with a compelling choice; however, it doesn't so much any more, which is why they are beginning to build reactors. John – you can have higher GDP without having more "stuff". And if we recycle a greater proportion of what currently goes to landfill, we can even have more stuff whilst reducing our ecological footprint, especially so if energy is cheap and plentiful. GDP may not be the right measure of wellbeing, but wanting to see GDP fall isn't the right objective either. I'm afraid these comments aren't particularly timely, as I've not been keeping up to date. TerjeP: You started the thread by bringing up the subject of solar chimneys. If one is going to consider the principle underlying this approach to energy generation, do you not think that you may get more bang for your buck with atmospheric vortex engines than with vast chimneys? Neither approach can be considered in any way mature, but the vortex engine, if it really worked at large scale, would surely be cheaper to build. I also accept that neither technology is likely to be superior to the nuclear option. Finrod and Salient Green appear to be taking opposite extremes on the subject of population overshoot. Finrod is offering my grandchildren the prospect of confinement in controlled-environment cities or space colonies, while Salient Green would seem to prefer them to live in Third World conditions with a probably less than 50% chance of reaching puberty. Neither prospect strikes me as particularly desirable. My own position, FWIW, is that we must strive towards a policy of zero population growth and, after that, a slow decline to half or less of our current levels.
However, the age profile of the world's population is such that we cannot reach this goal quickly without a monumental increase in the death rate (which it would be quite immoral to plan for, but which might nevertheless happen if we don't get our energy policy right). Without catastrophe, there is no way to stop population reaching 9 billion plus. This will require plentiful energy with high ERoEI. Given this, and more efficient use of such energy, it might even be possible for economic growth to continue and for the third world to catch up with the richer nations without the living standards of the populations of the latter having to diminish too far, or at all. However, BAU is not an option. My huge concern is that many government spokesmen and economic commentators seem primarily focussed on economic growth while ignoring energy and climate constraints. Furthermore, some economists are encouraging higher birth rates or higher levels of immigration to counter the problems of ageing populations. It seems to me essential that rich societies find a way through the demographic transition without recourse to the production or import of more people. In the UK, we have a growing underclass of unemployable young who survive on welfare and rely on immigrants to do the work. In no way can this be deemed sustainable. I would be interested in the reactions of some of the self-professed left-wing commentators on this site to my remarks. I feel that left-wing governments are just as responsible for getting us into our current mess as are the multinational corporations that they love to hate. It is true that the former may have the more selfless motives. However, the road to hell is paved with good intentions. Are we compelled to act in the way we do because we are basically ruled by our animal drives, as are all other species, namely to perpetuate our genes in a selfish manner?
Alternatively, does the fact that we are unique in the animal kingdom in having consciousness allow us the possibility of an escape route from self-destruction? I guess we'll soon find out. "Starting with coal, terje's pic shows all that is wrong with coal: huge emissions – specifically in the pic, heat – being allowed without consequence. We all know about the toxic emissions and the ash. Why are these incredibly indolent corporations allowed to waste so much heat, and why is it easier to pass the costs on to the customer than use CHP and/or Rankine cycle energy recovery? How is it that these corporations can threaten to close down or go offshore rather than spend money on plant which will save them money and reduce emissions?" The second law of thermodynamics is not a mere suggestion, but cold, harsh reality. The maximum efficiency at which a heat engine can operate is (Thot - Tcold)/Thot, where temperatures are absolute (e.g. Kelvin scale); if it were any other way, you could build a perpetual motion machine that needed no fuel to produce infinite amounts of electricity once you got it started. Modern coal plants operate at ~40% efficiency using supercritical steam at ~820 kelvin under enormous pressure. If the rejected heat is at room temperature, this particular coal plant could at most be ~63% efficient. Given that no one has invented a Carnot-cycle heat engine that is practical in the real world, this coal plant is pretty damn good at 40%. The steam you see billowing out of the cooling tower is not particularly hot. Water is sprayed into the cooling tower to evaporate and chill the cold side of the heat engine. In order to extract what little usable energy is left, you would need a cold reservoir at or very close to room temperature capable of accepting ~2 GW of low-grade heat. This would be an enormous expense for very little gain. A CHP coal plant is problematic for all kinds of reasons.
Firstly, there's the need to have a sufficient number of potential customers, which means the coal plant must be sited near a city, otherwise you end up throwing away nearly all the heat anyway. Secondly, you have to lower the efficiency of the coal plant, because the cold reservoir of the CHP system is steam under significant pressure; since this is far hotter than the cooling tower, you need to burn more coal to generate as much electricity. Thirdly, in most places demand is very irregular and most heat would still be rejected; quite a bit is needed in winter for space heating and only a little for water heating in summer. "The huge PV array announced for China is quoted at $3b/GW." That's outrageously expensive. The capacity factor for solar PV is ~20% for the very best places on the planet, compared to a typical capacity factor of 70% for coal and 90% for nuclear. 1 GW of solar produces as much power on average as ~280 MW of coal or ~220 MW of nuclear. Building 2 GW of solar in Inner Mongolia also implies very long transmission lines, which you have carefully omitted from the $3b/GW cost estimate. The transmission problem is compounded by the fact that you're only using these transmission lines very infrequently due to the intermittent nature of solar. If the suckage stopped here, it would be bad enough; but the project will not even be finished until 2019 (what was that about nuclear plants being too slow to construct to make a difference?), and if solar power is ever to replace baseload power you need to overbuild the system to deal with winter and weather as well as provide a significant storage system. That's lovely, dear, but it has no relevance whatsoever. China is building AP-1000 reactors at an expected cost of $1400/kW (and they expect it to drop) with Chinese labour and under a Chinese regulatory environment (which, unlike those of western countries, is not designed to deliberately add cost and risk to nuclear power).
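Soylent's two calculations, the Carnot limit and the capacity-factor comparison, can be checked in a few lines. The temperatures and capacity factors are the ones quoted in the comment; taking "room temperature" as ~300 K is an assumption:

```python
# Carnot limit for a heat engine: eta_max = (T_hot - T_cold) / T_hot,
# with temperatures in kelvin. Figures are those quoted in the comment.
t_hot, t_cold = 820.0, 300.0            # supercritical steam vs ~room temp
carnot = (t_hot - t_cold) / t_hot
print(f"Carnot limit: {carnot:.0%}")    # ~63%, so 40% actual is respectable

# Capacity-factor-adjusted comparison: average output = nameplate * CF.
solar_avg = 1000 * 0.20                 # 1 GW solar at 20% CF -> 200 MW avg
coal_equiv = solar_avg / 0.70           # nameplate coal giving same average
nuclear_equiv = solar_avg / 0.90        # nameplate nuclear giving same average
print(f"1 GW solar ~ {coal_equiv:.0f} MW coal ~ {nuclear_equiv:.0f} MW nuclear")
```

The second half reproduces the ~280 MW coal / ~220 MW nuclear equivalence in the comment: nameplate capacity only tells you the peak, and it is the capacity factor that sets how much energy actually gets delivered.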
"My position is that first and foremost we need to power down and depopulate. Without this aim, vast amounts of cheap power will only enable us to go further into overshoot, robbing from future generations and ensuring a catastrophic cull of species in the natural world first, then humans." This kind of casual evil is the worst kind. I bet you don't even realize what kind of monster you are. Does anyone have reliable studies on hand that give good, solid reasons why our current population is too high and a much lower one is optimum? I'm not referring to the commonly known ones, such as that we're using up the world's resources etc. Why is lower optimum? In lots of ways I think consumption is comparable to an instinct. One of the most effective ways to get around instincts is to cheat them. I'll use an analogy to explain. People don't have a baby drive per se; they have a sex drive. If you can give them an acceptable way to have lots of sex without having lots of babies, they'll take it. If you just tell them to stop having sex, the sex drive will win out and all you'll end up with is more babies. In a similar way, people don't have a CO2 emissions drive; they have a consumption drive. If you give them an acceptable way to consume lots of energy without emitting CO2, they'll take it. If you just tell them to stop consuming energy, the consumption drive will win out and all you'll end up with is more CO2 emissions. ;) 'My position is that first and foremost we need to power down and depopulate.' A lot of research has been done through the last 100 years on the latter problem. Many different technologies have been employed, at many different scales, under many different regulatory regimes and governance models. I can report that they were all completely successful. We know what depopulation looks like, and it's not fun.
I agree with Salient Green, though: we do need to depopulate, but under a powered-up condition, not powered down, so that it can happen by choice and long-range planning, rather than being forced upon us through deeply unpleasant exigency. Soylent #84: it really is getting tiresome responding to dickheads who don't read the thread properly. Re-read #78 for a response to your incredibly offensive, let alone brainless and totally unexplained, remark: "This kind of casual evil is the worst kind. I bet you don't even realize what kind of monster you are". This is typical of the hysterical, racist, growth-fetishist nonsense which always spews forth from those whose fortunes or religious beliefs hang from the obscene principle of 'growth is good'. Clearly, I have stepped on some toes on this blog, which is supposed to be about climate and energy but is being used by a few Cornucopians to further their delusional and destructive plans for continual 'Growth'. As for the rest of your post, and I have already posted this, Google 'CHP Energy Efficiency' and you will see what can be achieved. Yes, siting is important, but the Europeans have always been ahead of us, and a lot smarter, and they are embracing CHP for future energy needs. My post on solar and nuclear costs was purely to demonstrate the incredible disparity in pricing. You completely missed the point that I am reservedly in favour of nuclear power, but the costings are simply unbelievable. Your figure of $1400/kW was completely unsubstantiated and only adds to the uncertainty of the costs of nuclear power. You are completely correct saying solar power in Mongolia will require significant infrastructure. Who said it wouldn't? Nuclear power will also require considerable infrastructure. More. Much more. What I was saying is, let's compare apples with apples, over the long term.
Can you put a price on Peace of Mind or Set and Forget which seems to be a big part of the enormous expansion of Solar PV around the world? Jc asks for reliable studies on optimum population size and why lower is optimum. I would have thought that there is no definitive answer to optimum size. It will depend, to an extent, on individual perspectives. However, there must be an upper bound. Exponential growth is, by definition, unsustainable. My personal view as to why lower is better is based upon the very high proportion of net primary productivity that our species has co-opted to the detriment of other species. I find it depressing, for example, that the declining global population of wild dogs is only 5000. As a vet and gundog trainer, I can empathise with wild dogs. However, others may take a more anthropocentric point of view and not worry about other species unless their survival has importance for that of mankind. Finrod, on the other hand, appears to believe that we can both increase biodiversity and biomass of other species while maintaining or increasing our own numbers. I suppose, in theory and given unlimited cheap energy, this might be possible for a time. However, it is my personal view that our lives would then become so artificial as not to be worth living. In other words, there exists a range of views, none of which is necessarily wrong per se. Surely, however, most humans will wish to reproduce and it is imperative for our species survival that we live sustainably. It seems to me that it would be easier to achieve these goals with a stable population of less than 6, and certainly less than 9 billion. It’s the transition period that will provide the real challenge. This may or may not prove surmountable. Douglas Wise #83 said “Salient Green would seem to prefer them to live in Third World conditions with a probably less than 50% chance of reaching puberty” Are you sure you’re not Douglas Dumb? 
How on earth did you arrive at that ridiculous conclusion? Our houses, transportation, businesses, industry and power generation waste huge amounts of energy. We can still have a modern society with far less energy wastage.

The link below shows very clearly why economic growth is good, electricity is good, and therefore the cheaper electricity is the better for humanity. You can see, as an example, that the more electricity we use the lower is the infant mortality. Conclusion: if we want to reduce population growth (and save the planet) the more electricity we use the better, so the cheaper electricity is the better!!:

I’m fairly thick skinned and you didn’t unduly upset me. Nevertheless, thank you for your retraction in #92. I may have misrepresented your viewpoint when suggesting that you seemed to be advocating that my grandchildren live in Third World conditions with less than 50% chance of reaching puberty. However, you appear to believe that power down and renewables will provide a sufficient solution. It might well be possible for rich nations to become much more efficient in their use of energy and allow their populations to sustain reasonable lifestyles if the balance of power remains as is. However, our energy is currently being gained at a higher price (falling ERoEI) and ERoEIs will fall further with peaking oil and coal and, certainly, with the introduction of renewables. Simultaneously, we are facing a growing population and competition from developing nations striving to bring their living standards closer to our own. I am writing as a UK citizen living on an overpopulated island with few and diminishing natural resources and governed by those who seem intent on exacerbating matters. I am sure your intentions are not to cause my grandchildren unnecessary anguish. It is merely that I think your prescription will inadvertently bring it about.
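The "falling ERoEI" point above can be made concrete with a minimal sketch of the net-energy arithmetic. This is my own illustration, not a calculation from the thread, and all the ERoEI values are hypothetical examples:

```python
# Sketch only: fraction of gross energy output left for society after
# paying the energy cost of obtaining it. Example ERoEI values are
# illustrative, not sourced from the discussion.

def net_energy_fraction(eroei):
    """1 unit of energy invested returns `eroei` units gross; the net
    share is what remains after the energy reinvestment is paid back."""
    return 1.0 - 1.0 / eroei

# The net share falls slowly at first, then off a cliff:
for eroei in (30, 15, 10, 5, 2):
    print(f"ERoEI {eroei:>2}: {net_energy_fraction(eroei):.1%} net")
```

The non-linearity is the point: halving ERoEI from 30 to 15 barely matters, but below about 5 the net share collapses quickly.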
I could not argue my point of view better than John Morgan did in #88, namely depopulation in a powered-up condition. I agree with you that powering-up and making no other changes will obviously be unsustainable. We have already seen the effects of the Green Revolution – more food leading to more people leading to more starving people.

SG… no one wants your world of energy starvation. Clearly this is not the trend. People like air conditioning, some sort of television, having a refrigerator, lights, that sort of thing. People understand they live longer and suffer less this way. So… nations… *every nation*… every people, broadly speaking, need more energy because there is actually *not enough of it*, and certainly those that have it use it inefficiently (like burning fossil fuel for AUTOmobiles) and often waste it. But the overall trend, as it has been throughout every single advance in human history, is for more, denser energy, not diffuse, less energy. So the argument then is how to accomplish this with less greenhouse gas emissions, less carbon micro-particulate, better distribution and at far more abundant rates than we have now? I see nuclear as simply the *only* way to go. Secondly, your point about $1500/kW nuclear. You don’t ‘want’ to believe it, or you factually know this isn’t the case? We have discussed on this blog many times before how the Chinese are doing *just that* with the AP1000 from Westinghouse. Twice that price is CHEAP. And no carbon.

It probably won’t be necessary to have completely closed and sealed habitats for humans on this planet (although if it ever does become necessary it would be really good to be confident we know how to do it). It may be prudent to do water recycling and have artificial food technology. I don’t see my proposal as advocating ‘confinement’ any more than current policy, which restricts allowable human activities in national parks.
If we can return the farmlands to managed wilderness, there’ll be scope for allotting large tracts of land to human recreational purposes (including leading a quite rustic life if one desires it) while still expanding the land set aside for biological diversity far beyond anything practical today. Not many people in Australia regard the rules against cutting down trees in national parks for firewood as being an insufferable imposition on their rights. I’m just advocating that this principle be somewhat extended. There are a lot of people in eastern Africa who do regard laws against gathering firewood from national parks as such an imposition though, for the very good reason that they have no other source of fuel. The single most effective strategy to prevent deforestation in such areas would be a program of electrification so people have an alternative. That’s what I’m talking about… providing people with as many alternatives as possible, so our survival and that of the natural world doesn’t have to be an either/or situation. Genocide advocates such as Salient Green might occasionally point to demographic trends and claim that they don’t need to implement mass-starvation or some more direct form of extermination to accomplish their program, but the fact is that the kind of demographic transition SG is talking about doesn’t ever happen until after a society has gone through modernisation and transition to high energy usage. SG would presumably oppose such a process. The idea that we can get through this through ‘energy efficiency and conservation’ is delusional in the extreme. What’s going to happen if we need to launch a major geoengineering effort requiring great amounts of power to reverse a tipping-point crisis? We need a robust energy source to deal with these contingencies.

Well, the biggest problem of species destruction in the developing world is: renewables. Mostly in the guise of charcoal production by burning down forests wherever they exist.
Human pressure on existing rain forest brought on by both economic collapse and… oddly… agricultural ‘renewable’ biofuels like palm oil and sugar cane has led to a huge destruction of habitat. A nuclear economy would be able to eliminate most wars for fossil fuels and most if not all of these detrimental renewable industries. Food is for people, not cars! At any rate, while all sorts of renewable projects get financing and play from every developed and developing country, the fight for fossil fuels rages on totally uninhibited by renewables. Political alliances between renewable and fossil interests are the bottom line of the day. A night doesn’t go by now on US network and cable TV without ads from BP, Mobil and the Gas and Oil Assn about the great virtues of “Solar, wind and natural gas; our vast resources in ‘clean coal’,” etc.

Unfortunately the economic measure of GDP makes no sense; if one inefficiently wastes energy, that makes the GDP go up. But energy efficiency is one of the strongest, easiest ways to help control even further AGW.

The whole idea of baseload demand is spurious. If it weren’t for off-peak pricing, demand from 9pm-6am would be an even smaller fraction of daytime and early evening demand. The current pricing scheme, and the demand it generates, reflects the rigidity of a coal-based generation system that (in the terms used here) requires a lot of redundancy at night to be able to meet peak demands during the day. The analysis starts from the presumption that we should try to meet the same demand pattern with the same price structure as we have at present. Not surprisingly, it comes to the conclusion that we should adopt the generation technology most similar in its output pattern to coal, namely nuclear. A shift to solar and wind will require new pricing structures which (just as the present system does for coal) make renewable electricity cheap when it is plentiful and expensive when it is scarce.
Once this is taken into account, the analysis above is entirely invalid. There are other problems with the assumptions, which need a reality check. If this analysis were applicable in the real world, the pattern of new generation investment in the US (big growth in wind, a fair bit of solar, almost no interest in nuclear even with substantial subsidies) would be radically different.

Can you give us an example of how the new renewables pricing structure will produce the cost mechanism ensuring that all industrial activities needed to sustain the power system are provided with what they need? Can we run the smelters with renewable power coming down the grid? Can we provide enough power (electric, or synthesised chemical fuel) to run the mines? Can we achieve replacement rate?

JQ – I think your point is valid but only up to a point. You can institutionalise certain shifts in power consumption from daytime to night (or the other way), however dealing with downturns in supply, such as what happens when solar PV is subject to cloud, is less easy to tackle. And in any case Peter Lang based his peaking requirement on 6:30pm, not 9pm-6am.

finrod #96, I think you are probably just a liar, but I am prepared to give you a chance to be genuinely mistaken if you can read the definition of Genocide, http://en.wikipedia.org/wiki/Genocide , and explain to me how freely choosing not to have kids, which is what I am advocating, can have you accusing me of genocide. Your previous hysterical accusations of ‘racial suicide’ peg you as a racist. If you were in any way sensible about the subject, you would see that the races most in peril are so because of overpopulation, such as in parts of Africa, and jungle tribes in South America and Indonesia.

As an example, the pricing structure would have high prices for electricity on winter evenings and lower daytime prices, more or less the opposite of what we have now.
That means that the activities that currently use off-peak power because it is cheap (both domestic hot water systems and industries that operate night shifts) would have a strong incentive not to do so. Home heating would shift to systems based on stored heat rather than instant heat. Of course this would involve change. But consumption patterns change all the time in response to changing prices. And, it’s important to note that the discussion here is based on an all-renewable system which is decades away. In the transition, which will involve continuation of the long-standing movement from coal to gas, most of the peak-demand problems raised here are relatively trivial, since gas (low capital costs, high operating cost, easily turned on and off) is ideally suited to dealing with peaks in net demand.

Conceivably with a constant output grid every home and business could have a large battery. They could use their fixed inflow in real time, save some for later, buy some more or sell. I’d do it if batteries were cheap enough. I guess aluminium smelters would also, except that electricity via batteries costs an extra 10c per kWh. However aluminium smelters feel they are entitled to pay just 2c per kWh, which is one reason we need cheap baseload. Energy price increases need to be gradual enough to give us time to adapt and invest.

“finrod #96, I think you are probably just a liar, but I am prepared to give you a chance to be genuinely mistaken if you can read the definition of Genocide, http://en.wikipedia.org/wiki/Genocide , and explain to me how freely choosing not to have kids, which is what I am advocating, can have you accusing me of genocide.”

SG, it’s not your advocacy of birth control which inspired me to peg you as a genocide advocate, it’s your ‘powerdown’ policy. This lunacy will inevitably cause billions of deaths, direct and indirect, if implemented. You may, however, have a point concerning terminology.
The definition of genocide given in the Wikipedia article you linked to is as follows: “Genocide is the deliberate and systematic destruction, in whole or in part, of an ethnic, racial, religious, or national group.” This definition seems implicitly limited to the mass-murder and diminution of particular subsets of the human race, rather than the human race as a whole. What you are advocating has a broader, more cosmopolitan murderous application, so we arguably need a new term to cover it. Cosmocide? I’m up for suggestions. More from Salient: “Your previous hysterical accusations of ‘racial suicide’ peg you as a racist. If you were in any way sensible about the subject, you would see that the races most in peril are so because of overpopulation, such as in parts of Africa, and jungle tribes in South America and Indonesia.” The race referred to in my ‘racial suicide’ remark is the human race… but if you want to bring up racism, the homicidal impact of the policies you advocate would indeed fall most heavily upon the non-European peoples of the earth. I see you rather in the mould of a British Empire aristocratic elitist, casually disposing of the fates of brown-skinned peoples, secure in the knowledge that you can count on the carefully cultivated racism of the lower orders to shield you from too much criticism from those who figure out what you’re up to. I have late news for you. The world has moved on, and the divisions between first and third world people which you are counting on to dehumanise the great masses which would be the inevitable victims of your policy are dissolving.

Finrod #105 said: “SG, it’s not your advocacy of birth control which inspired me to peg you as a genocide advocate, it’s your ‘powerdown’ policy. This lunacy will inevitably cause billions of deaths, direct and indirect, if implemented.” That statement gives new meaning to the word ‘hysterical’. Please, show us some more of your ignorance by telling what you think ‘powerdown’ means.
I suspect this will explain how you erroneously come to the conclusion that it would cause billions of deaths.

SG @ #105: “That statement gives new meaning to the word ‘hysterical’. Please, show us some more of your ignorance by telling what you think ‘powerdown’ means. I suspect this will explain how you erroneously come to the conclusion that it would cause billions of deaths.” You’re the one trying to sell this lemon, SG. It’s up to you to define your terms and convince us it’s a good idea. Unless your definition of ‘powerdown’ allows for an actual increase in power production, though, then the conclusions I have drawn certainly stand.

“My position is that first and foremost we need to power down and depopulate. Without this aim, vast amounts of cheap power will only enable us to go further into overshoot, robbing from future generations and ensuring a catastrophic cull of species in the natural world first, then humans. That’s my kids and grandkids we’re handing a miserable existence to.”

This is SG’s position. It means less energy and fewer people, is Malthusian and, while he doesn’t state it, people usually think of places like Africa when making statements like this. “Vast amounts of cheap power” IS what makes population control, family planning, contraceptives and sex education possible. It’s what gives incentives to farmers and others to have smaller families. It is vast amounts of cheap abundant power that *allows* us to use our natural resources more intelligently, more efficiently and more for human needs, not less. “Powering down” actually means MORE wars, more poverty, and fewer human resources from which we can draw the next Hawkings, Einsteins and Weinbergs. Genocidal or not, it’s a reactionary future of barbarism that SG is advocating, even if he thinks the opposite will result. We should get back to the thread in question.
I say this because there is not one nation, group of people, or proposal being discussed by any constituency that rhymes with SG’s dystopian future.

This is for 9GW peak power, for 3 hours per day, from 6 hours pumping per day. Of course, if we pump for longer, can extract a higher pumping rate than I have assumed, or if we produce less power, then we can generate for more hours per day. The cost per unit power is A$790/kW. This is still a preliminary estimate. I am still firming up numbers. The estimate I am doing will never be better than +/-25%. For comparison, I have interpreted the Electricity Supply Association’s chart, http://electricity.ehclients.com/images/uploads/capital.gif , to say pumped-hydro costs per unit power are in the range US$500/kW to US$1500/kW. So the costs for Tantangara-Blowering are in the middle of that range. That is to be expected because we are using existing reservoirs, so no dams or reservoirs have to be built. On the other hand, we’d have to bore three tunnels, each 12.7m diameter and 53km long. There is a lot more involved of course. This length of tunnels is unusual for pumped hydro schemes.

Peter, a long forgotten question… what is the efficiency loss for power into pumped storage vs power back again? The largest or second largest pumped storage facility in the US is the Helms Pumped Storage facility in California, built in conjunction with Diablo Canyon NPP to absorb off-peak baseload from the plant. These are two isolated reservoirs that have no river input to speak of. I believe if you run the upper reservoir dry, it’s 1800MW for almost 2 weeks straight. I raise this because renewable advocates often get a bit peeved when it is suggested that every single storage scheme, from batteries to pumped storage to molten salt, is far better applied to nuclear energy than renewables. Just a thought.
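The pumped hydro figures quoted above can be sanity-checked with a back-of-envelope sketch. The capacity, hours and A$790/kW unit cost are the thread's own numbers; the 95% turbine and 80% pump efficiencies used for the round-trip estimate are assumed ballpark values of mine, not project data:

```python
# Back-of-envelope check on the Tantangara-Blowering figures quoted above,
# plus a rough answer to the round-trip efficiency question.

CAPACITY_GW = 9.0             # peak output quoted in the thread
HOURS_PER_DAY = 3.0           # full-power generation per day
UNIT_COST_AUD_PER_KW = 790.0  # preliminary estimate quoted above

project_cost_b = CAPACITY_GW * 1e6 * UNIT_COST_AUD_PER_KW / 1e9
daily_energy_gwh = CAPACITY_GW * HOURS_PER_DAY
round_trip = 0.95 * 0.80      # turbine eff * pump eff (assumed values)

print(f"project cost ~ A${project_cost_b:.1f} billion")
print(f"peak energy  ~ {daily_energy_gwh:.0f} GWh/day")
print(f"round trip   ~ {round_trip:.0%}")
```

On these assumptions the project comes out around A$7 billion for 27 GWh of peaking energy per day, with a round-trip efficiency in the mid-70s percent.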
:)

finrod #107, just as I thought: a cascade of aggressive, insulting bluster based on zero knowledge of the subject, apart from that which you dreamed up yourself. You have zero credibility. If you really want to know what power down means, and I don’t believe you do, then educate yourself. I’ve wasted enough time on you. Ditto David Walters.

I am using 95% efficiency for generation and 80% efficiency for pumping. Those figures are reasonable ballpark figures to use. However, the pumps at Tumut 3 pump at a flow rate only slightly more than half the flow rate that is used for peak generation. Hence 6 hours pumping for 3 hours generation at full power. The power required by the pumps would be 6.4GW. It’s important to note that this power needs to be constant for several hours – wind won’t blow water up 900m. It would take 18 days of pumping for 6 hours per day at full power to fill Tantangara’s active capacity.

That Severance chap suggests the round trip efficiency for pumped hydro at one site is 78%. If $5/W is the backstop capital cost for nuclear then I suggest all pumped hydro that comes in under that should be developed. An incentive would be to get a renewable credit under the 45,000 GWh target even if most of the pumping effort could be attributed to coal power. A CO2 cap like the one we were supposed to have back in July should prevent abuse of pumped hydro RECs.

SG @ #111: “If you really want to know what power down means, and I don’t believe you do, then educate yourself.” So you refuse to define one of the principal concepts of your policy. Can’t say I blame you. Given what ‘powerdown’ must necessarily entail in accordance with your “cheap power is bad” dogma, you know it’s going to be shot down in flames.

The 78% round trip efficiency looks about right. However, the tunnel/shaft length is probably less than 5% of the length of the tunnel required to join Tantangara to Blowering at their deep ends. You lost me in the second paragraph.
Remember that nuclear provides power 24 hours per day. The pumped storage is for peak power; it would provide power for 3 hours per day (at full power). So you cannot compare the two types of generation on a purely power basis. This project would be excellent in combination with nuclear. This new cost figure for 9GW of peak power reduces the cost of the nuclear option from $120 billion to $106 billion (refer to the article at the top of this thread). Regarding incentives and RECs, we should be rid of them. All they do is add cost and reduce economic efficiency.

[I know you were responding to John N. but…] pumped storage and nuclear will be built incrementally, even if Australia adopts a “Chinese Nuclear Steroid” approach and goes all out. Thus, there will be a need to overbuild for nuclear as well, assuming Oz builds out to peak load. But even if it doesn’t, a 2 to 4 week fuel outage, rotated throughout a fleet of 16 or so LWRs (you came up with a gross national GW load, but not one based on quantity of reactors, unless I missed it), is going to require at least two reactors’ worth of power (one for powering when a unit is down for fuelling, and one for when another has a hiccup and trips). Thus, pumped storage can play this role if there is enough of it, to mitigate the needed 2-unit overbuild… assuming, of course, there IS a serious national grid, etc.

peter #109, A much more cost effective storage option would be to install one tunnel between Blowering and Tantangara (3,000MW) and a similar sized tunnel from Talbingo to Eucumbene (3,000MW) and additional turbine capacity at Tumut 3 (to 4500MW) and a small return pump from Blowering to Jounama. This would give 11,500MW capacity with a 5 day storage of 1,070 GWh (Tant 150, Euc/Talb 480, Talb/Blowering 240). Together with other dam flows of 500GWh/5 days, you could have, for $6.7 billion, >1500 GWh available over a 5 day period.
Using the data you provided for the PV farm at Queanbeyan and the wind data of 11 farms from NEM, this would cover the lowest 5 day solar (24GWh instead of av 72GWh/day) and 5 day lowest wind period (160GWh instead of average 480GWh/day) IF they occurred on the same 5 days, with the use of the present 4,000MW of OCGT existing in eastern Australia. Thus OCGT would be used to generate at <0.10 capacity factor, so accounting for just 1.6% of power production. That's assuming that solar power in northern Australia would perform as poorly as the Queanbeyan site and receive no advantage from solar power available in WA after sunset at Queanbeyan or more cloud-free days during June and July. We should not need much imagination to see that even dispersed PV solar can do considerably better than one farm at one poor winter location.

The scenario described at the top of this thread is based on the NEM’s demand in July 2007. July was the month that experienced the highest peak demand (33GW), highest baseload (20GW) and highest average demand (25GW). Nuclear, without energy storage (and no fossil fuel generation), would cost $132 billion for the 33GW capacity needed to meet the peak demand without pumped hydro. With 8GW of pumped hydro, the system (nuclear and pumped hydro) would cost $106 billion, a saving of $26 billion. Nuclear and pumped hydro capacity would be perfectly suited to Australia’s situation. 25GW of nuclear would meet the average demand and provide an excess of 5GW to pump and store the excess energy generated during the times when the demand is at baseload levels. The pumped hydro would generate up to 9GW of additional power during the periods of peak demand. This explains why France has nearly the cheapest electricity in the EU, exports large amounts of electricity to most of the remainder of the EU, and enables the European networks to absorb the intermittent energy that is being generated by their highly subsidised and mandated renewable energy programs.
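The $26 billion saving above can be reproduced from the thread's own numbers. The A$4,000/kW nuclear unit cost is implied by $132 billion for 33GW; the A$790/kW pumped hydro figure is the estimate quoted earlier in the thread. This is a sketch of the arithmetic, not an independent estimate:

```python
# Reproducing the nuclear-only vs nuclear-plus-pumped-hydro comparison
# using only the figures stated in the thread.

NUCLEAR_PER_KW = 132e9 / 33e6   # implied unit cost: A$4,000/kW
PUMPED_PER_KW = 790.0           # Tantangara-Blowering estimate, A$/kW

# Option A: 33 GW of nuclear to cover the July 2007 peak directly.
nuclear_only = 33e6 * NUCLEAR_PER_KW / 1e9

# Option B: 25 GW of nuclear (average demand) plus 8 GW of pumped hydro.
with_storage = (25e6 * NUCLEAR_PER_KW + 8e6 * PUMPED_PER_KW) / 1e9

print(f"nuclear only:     A${nuclear_only:.0f} billion")
print(f"nuclear + hydro:  A${with_storage:.0f} billion")
print(f"saving:           A${nuclear_only - with_storage:.0f} billion")
```

The storage option comes to roughly A$106 billion against A$132 billion, matching the saving of about A$26 billion claimed above; the economics rest entirely on pumped hydro capacity being about a fifth the price of nuclear capacity per kW.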
I simply do not understand your figures. I am not sure if you have done the calculations or are simply throwing numbers around. They do not make sense to me. I’m still trying to work out some of what you were saying in a much earlier post on this thread. I haven’t given up on it. For example, in post #118 you say “and a similar sized tunnel from Talbingo to Eucumbene (3,000MW)”. But that statement is not correct. The same size tunnel would generate only 2,000MW, not 3,000MW. The reason is that the elevation difference is 600m, not 900m.

Regarding incremental build, as Neil Howes points out, there are many possible pumped hydro sites. The most economic will be built first. I started looking at Tantangara-Blowering because of the high head and large storage capacity in each reservoir. If we wanted to we could build that scheme with one tunnel at a time instead of three tunnels all at once. Or we could make smaller tunnels. However, the mobilisation costs for the 12.7m diameter tunnel boring machine are high. The tunnels make up about half the cost of the project. So it makes sense to bore the three tunnels while the TBM is here. By the way, this scheme has sufficient storage in the smaller reservoir to handle eighteen of these 9GW pumped storage schemes, although we would never do that for a variety of reasons. But you could expand it incrementally for a long time. Regarding the need for extra reactors for redundancy and to allow for refuelling, I agree. The papers intentionally did not go to this level of detail. I stated in one of the papers that the redundancy was excluded in the simple analysis I was conducting. The need for redundancy actually turns out to be much greater for the solar thermal option (option 2) than for nuclear.

It’s more realistic to pursue solar with some vigour when the nuclear power is in place. One day some outfit may agree to maintain a section of road so long as they can draw solar power from it.
Heliostats may spring up in the desert, powering the circular sprinklers that water circular patches of crops, like in the deep tropical agriculture of Malaysia. Wind power might be used for ammonia production, which can be carried out intermittently. These things take time, and it’s not plausible that solar power could provide the energy for the industrial manufacturing that could put up the solar power plants. So it’s not anything one expects instant results from. It’s just very imprudent not to start sweeping away the obstructions to nuclear. We don’t need another enquiry. We know how the enquiries end up. They wind up with an outcome that guarantees inaction. But inaction doesn’t get the power bills to drop. It doesn’t get us reindustrialising. Since we know what the outcome of the enquiries is, it is clear that there is no need for another one. To have a big and growing nuclear industry is a really exciting prospect. Thousands and thousands of very meaningful jobs for intelligent people to get involved with. That’s a good thing even if it were only to draw them away from causing trouble.

Peter #120, I have tried to do the calculations correctly. There is already a tunnel through Tumut 1 and Tumut 2, so that would add to the total pumped storage capacity, with slower pumping just via the new tunnel. Also, extra flow from Eucumbene to Talbingo allows extra flow through Tumut 3 and some storage flexibility in the active storage at Talbingo. I thought you had said the Tantangara to Blowering head was 600m. I was calculating a flow rate of 0.75ML/sec to give 3,000MW at 600m.

“I would have thought that there is no definitive answer to optimum size. It will depend, to an extent, on individual perspectives. However, there must be an upper bound. Exponential growth is, by definition, unsustainable.” Thanks for your thoughtful response. Look, the only way humans seem to limit population growth is when they join the list of the wealthy.
So if you want to see long term permanent reductions in population without coercion, we should strive to see everyone maintain a high living standard. Here’s my prediction: within 30 years countries will be vigorously competing with each other to attract young immigrants in order to anchor their failing social security systems.

You said: “There is already a tunnel through Tumut 1 and Tumut 2 so that would add to the total pumped storage capacity.” There are no pumps in T1 and T2. These power stations cannot be converted into pumped hydro schemes (e.g. no downstream reservoir; even if there were, the inlet tunnels from upstream are at the wrong levels for pumping. The tailrace is not designed for pumping even if a downstream dam were built. Downstream dams for T1 and T2, even if built, would have minuscule storage. The power stations are underground, so virtually impossible to modify without taking the whole Tumut generating capacity out of production for perhaps 2 to 3 years.) It is absolutely a no-go option. Let’s put this to bed now.

You say: “… that would add to the total pumped storage capacity, with a slower pumping just via the new tunnel. Also extra flow from Eucumbene to Talbingo allows extra flow through Tumut 3 and some storage flexibility in the active storage at Talbingo.” Neil, we’ve discussed this repeatedly. I don’t understand what you are getting at with pumping from Talbingo to Eucumbene. Have you done the calculations? Why would we want to pump water out of Talbingo before it passes through T3? Talbingo should be maintained as near to full capacity as practicable to maximise the head, and therefore the power output per m3 of water used. Talbingo is kept a bit below full supply level to catch the water released through T1 and T2 and to hold the small amount of water pumped up at night by T3.
The water is released from Eucumbene and through T1, T2 and T3 in a controlled manner to maximise the power per m3 and also to meet other downstream needs for the water. There is no intention to use Talbingo for storage other than what I said above. That is what Blowering is for. I suspect Talbingo would never be allowed to fill to the point where it wastes water (ie spill it over the spillway) except by accident. If you want to improve the pumped-storage capacity of T3, I would suspect the best way would be to build a dam downstream from Jounama. There appears to be a suitable site which, from the maps, looks to have just about as good a profile as Jounama. If a dam was built at that site, it would increase the downstream storage for T3 by about a factor of 3. If you want to try again to explain what you are thinking, could you please lay out the calculations and explanations line by line so I can follow them. Have you costed your ideas? Have you allowed for the fact that the pumping is slower than the flow rate of peak power generation? Have you allowed for the fact that more power is needed to pump than to generate, and the pumping is against a higher head? Regarding the elevations of the reservoirs, I thought I gave you all the figures in a previous post. Just for now, I confirm: use 900m for Tantangara-Blowering and 600m for Talbingo-Eucumbene. We can get the pumped storage capacity we need. However, the problem is getting people to understand that wind and solar are simply not viable. They are draining our wealth for no good reason. That is the problem we face. That is the purpose of these papers – to explain the facts. It seems many people just don’t want to know. They are ignoring what is so blatantly obvious to anyone who is at all numerate.

SG: “If you really want to know what power down means, and I don’t believe you do, then educate yourself. I’ve wasted enough time on you. Ditto David Walters.” I know what it means.
It means higher birth rates and even higher mortality rates. It means resource depletion (recycling is only practical with cheap energy); it means total deforestation as people fan out and do slash-and-burn agriculture on every last square inch of forest. It means untold suffering from which society may never recover. Re #124 Jc I hope you are correct to assume that increasing affluence (if attainable) will automatically reduce fertility rates with no need for coercion. However, I would urge you to consider the writings of Dr Abernethy on this subject. She appears somewhat less sanguine. (Google Abernethy and demographic transition) Re #128 Peter Lang. You state that the nuclear option is so superior to renewable options that this should be obvious to anyone who is at all numerate. Would that this were so. As a lay reader of this and other blogs, I have gradually arrived at the conclusion that, if anything can save us, it is a rapid transition to nuclear energy. You appear to think that opposition to nuclear power comes only from those who don't want to know the facts. You are no doubt aware that the great majority of those who correspond on the RealClimate and Climate Progress blogs are opposed to a nuclear solution, and by no means all of them are innumerate. Their purported objections (unconvincing to me) relate to cost, time to deployment, sustainability and safety. I would conclude that you have done a much better job with your negative arguments in demonstrating why renewables are unsuitable for baseload power than you have in deploying pro-nuclear arguments sufficient to change the minds of antis. It may be that we will have to await the deployment of the AP1000s in China before there is sufficient consensus, but time seems to be of the essence. Meanwhile, keep up the good work. I wish you every success. I took a quick look at your suggested site. It really doesn't seem at variance with the comment I made. Here's the thing….
people in poor countries tend to use large numbers of children as a social security net and cheap labor. Rich world people don't. In fact, kids in the rich world are a bloody expensive "hobby", and most people can't have many expensive hobbies :-). You are no doubt aware that the great majority of those who correspond on the RealClimate and Climate Progress blogs are opposed to a nuclear solution and by no means all of them are innumerate. That's true. However, I also think there is an ideological posture to this too. Some people who are obviously numerate may also desire a different world to the one we have or are heading toward. There are plenty of intelligent people who would prefer a less technologically complex world. Virginia Postrel wrote a book titled "The Future and Its Enemies". She took the view that stasism comes from both the right and the left and that the right/left dictum based on a traditional demarcation no longer holds. She viewed as the enemy what she referred to as the stasists, that is, people who are anti-development and anti-technology. I think to a large extent that is true. I agree that Abernethy isn't totally at odds with your perspective, but she does point out that it isn't quite as straightforward as is sometimes suggested. My own observations relating, for example, to the UK and, to a lesser extent, Africa suggest that increasing prosperity often increases fertility rates. Materially successful Africans that I have encountered tend to have larger than average family sizes. Equally, in the UK, many self-made (not derogatory) millionaire entrepreneurs also have large numbers of children. The UK population is rising quite fast. This was initially due to increased immigration, but the increased reproductive rates of the immigrants have now become the major factor. This might suggest that breeding increases in response to rising aspirations, if only temporarily. I don't know why people cite Africa when they talk about overpopulation.
It has a lower population density than Europe. It has fertile land and an abundance of resources. I presume it is because periodically we see images on TV of people starving in Africa and assume (wrongly in my view) that this starvation is a product of overpopulation, when it actually has more to do with poor governance, poor property rights and oversized state sectors. Or perhaps it is because Africa still has some amazing wild animals that human populations are encroaching on – wild animals the equivalent of which were driven to extinction in Europe long ago. Thanks for your response. I know you are already busy but wondered whether you could answer a few questions relating to the possible benefits of stranded renewables. Suppose that renewables are always more trouble than they are worth in the provision of grid power. I can go along with that and can also accept that it is more important to consider ways of powering the grid with emissions-free fuel than to waste time looking at peripheral issues. However, it is these peripheral issues that I am now asking about. Under what circumstances can stranded renewables (with little or no storage facilities) provide utility and cost competitiveness? I am a biologist, not an engineer. As such, I am fairly clueless as to how industrial or synthetic processes can operate with an intermittent and unreliable energy source. I can see that a plastic extrusion plant might gum up big time if the sun went behind a cloud or the wind stopped blowing, but this degree of wisdom doesn't get me far in any rational decision-making process. Can stranded renewables be used to synthesise transport fuels or to desalinate water? In the Third World, where there may be very poorly distributed grids, would stranded renewables not be of use? Would you still argue that the installation of grids, powered by nuclear batteries, would work out cheaper? Do household solar thermal roofs in Northern Europe make economic sense?
I suspect that you may say no because they cause unpredictability for grid operators when they unexpectedly underperform. In short, can you see any use for renewables at all? If so, what do you think their best uses are? You ask why people discuss Africa when they talk about overpopulation. I would have thought that the following might have something to do with it: 1) The continent with the highest birth rate. 2) The UN prediction that only 25% of the continent's population will be able to feed itself from its own agricultural production by 2025. 3) Falling fresh water reserves. You asked: "In short, can you see any use for renewables at all? If so, what do you think their best uses are?" Here is my short answer, off the top of my head. I'd say as follows: Yes. But only where they are economic without subsidies or being mandated by governments. There should be no mandatory renewable energy targets. There is a role for solar and wind power at remote sites. We should fund R&D and contribute to demonstration projects, but in an unbiased way, with the awarding of funds made on the basis of projected return on investment. There is a role for solar and wind power in remote communities and in developing countries, but it is a very small role. It has to be very highly subsidised. It is far cheaper to use diesel. Few can afford to waste their scarce resources on renewable energy. Certainly not the developing world. They should be the last to get off fossil fuels. In fact, we need to help them to get onto electricity as fast as possible, even if they have to use fossil fuels to do so. The sooner they can get onto electricity, the sooner they will be able to afford to get off fossil fuels.
There will be no bypassing the fossil fuels step via renewable energy (hydro excluded, where it is available). Others discussing population growth rate, fertility rate, life expectancy, literacy, education and other UN Human Development Program statistics may also be interested in playing with the link given in post #93, if you don't already use it. Many thanks for your prompt and concise answer. I have no reason to doubt the validity of your comments. All a bit depressing, though. It makes it all the more necessary to bet the farm on the success of nuclear, given that you have ruled out all other practical options. Pity that few, if any, politicians or their advisors are prepared to come off the fence and fully commit to a nuclear strategy. Actually, pity is an understatement. They can't, Zachery. Both parties are frightened stiff of being the first to come out and openly support the policy of including nukes in the suite of choices after the ETS. Labor won't move as it has to worry about losing primary votes to the Greens, and the Libs won't overtly run with a pro-nuke policy as they can't unless there is strong bipartisan support from Labor. I always thought the initial move has to come from the ALP anyway. The crying shame is that I can't imagine any of the heavyweight ministers not quietly supporting nukes anyway, other than, say, those heavily tied to the union movement. Nuke reactors would basically mean far less employment in that sector, as reactors essentially run themselves and would employ nearly all their front-line people from engineering disciplines, I would guess. Nuke energy is actually very highly capital intensive, which means the labor content required to produce energy greatly diminishes. That's not the way to the union movement's heart, obviously. I'm not giving a political opinion here, it's just as I see things, as I vote LDP wherever possible anyway.
Funnily enough, the obvious direction for a first world, highly developed nation such as ours is to move, or rather allow movement, towards capital-intensive industries rather than favoring labor-intensive sectors, as that is where higher incomes are. Renewables such as wind and solar are not highly capital intensive, by the way, as that sector requires a hell of a lot of maintenance. Population. Gawwwddd… what a god-awful discussion. The 'brass tacks' are harder to decipher. #Population growth in Africa from the emerging middle class usually takes a generation or two to even out. This is true in the UK, also brought up, as growth among 2nd-generation immigrants is more or less the same as among those of English/Welsh/Scottish nationality. Newly arrived immigrants carry over reproductive traditions. In Africa it is not so simple to state that growth doesn't slow down with wealth. What you see in Africa is continued fertility rates *among tribally organized societies*, not in urban areas. "Wealth" is not just "money" and "income"; it is a whole host of social ladders and support that does not require large numbers of children. In the teeming slums of India and Cairo, population growth *inevitably* goes down even among the poorest of the poor… with no "income" increase. Thus it is as much a function of urbanization as it is one of income. #Secondly, the idea that we "need less people" is simply utopia (or dystopia, depending on whether you take the Pol Pot approach to population control). Do we want to go down that path? Do we really even want to discuss this? #Thirdly, yes, there are all sorts of religious issues as well. Italy and Poland, both 99% Catholic countries, will have continued higher-than-European growth rates because of the influence of the Church.
So, a form of secularization is needed as well, but this comes *naturally* as people gain access to things like the internet, sex education, family planning, urban society, etc – all a function of wealth creation, all a function of more available energy becoming ubiquitous. #Back to Africa. The commentator is 100% correct: starvation and environmental disaster is almost always "Africa" in the public mind. This is the result of the media. Problems ARE there, but in fact they have almost zero to do with population density and everything to do with the legacy of colonialism, imperialism, tribalism, etc. If you look at an image of Africa at night, you'll see exactly why the term "Dark" continent is so appropriate. All these countries are searching for better means to electrify their societies, provide fresh drinking water and redistribute water resources. Africa has more water available than any continent but South America. But it's not in the right places. That's where Gen IV, high-temp reactors come in. We could build them along the coasts of northern Africa to provide drinking water and power. What, pray tell, is wrong with that vision? Life can be good for MORE people. You do this by making the population wealthier, not fewer in number. Nuclear reactors employ more people per MW than coal does at the level of the plant. Far more people are employed, however, in the whole supply train for coal: from mine to plant. Nuclear actually employs a lot of union members, probably slightly over half: from operators, to mechanics, to communication and control technicians, to radiation technicians and health employees. But engineering is very high, as jc notes. There are almost no transportation costs associated with fuel or waste to and from nuclear plants.
But if you look at the building of nuclear power plants, and assume an ongoing nuclear energy development program from components to raw materials to construction of the reactors (Gen III reactors, that is), then I would bet there are FAR more people employed in nuclear as a whole than in coal. The Liberals might lose some votes if they went nuclear. I don't expect the Labor party would. And then the Liberals would be reduced to feebly tagging along. You cannot make decisions on the basis of how many people you think might be employed, David. That's one rabbit that you don't want to chase, since it makes it sound like you are perversely going for the high-cost option. It's cost-effectiveness that must be the criterion. Alfred, I agree about jobs. I was merely stating what I believe to be the case. Actually, fewer workers in any system is a case FOR that system, not one against it. It speaks to efficiency as measured in labor-power sold to the employer for a given MW of output. I didn't raise this; I believe JC did. The issues as I see them are: I think there will be no nuclear decision in Australia for another five years unless there is a crisis. Even the decision due next year on the Olympic Dam expansion will probably degenerate into a lengthy squabble. If a first reactor site was announced, the same crowd who invaded Hazelwood power station yesterday would no doubt make trouble. (Apparently lignite is to be exported – whoopee!) The easiest thing for politicians to do is impose the lightest of carbon penalties as a token gesture. Meanwhile, mid-sized gas generators will regularly come online without fanfare and a few wind farms will line the routes of Sunday afternoon drives. Pollies happy, greenies happy. Shame about the astronomical electricity prices, though. As you know, David Walters, you and I agree on this, though I'd make phasing out crude-oil-for-transport as important an objective as phasing out coal-for-energy.
The environmental and social footprint of resort to crude oil is at least as bad in practice as that of coal, and arguably worse. And since you mention it above, DW, I do disagree with the general thrust of your remarks on population. It is clear that we will need to taper, stabilise and ultimately reduce population sharply over the next 150 years if biodiversity is to be maintained and humanity is going to acquire substantial margin for adaptation to those parts of climate change we can't foreclose. I'd like to think that come 2160 population would be on the low side of 5 billion and continuing to edge lower each decade. Regarding population, and the benefits of electricity, I'll just mention this link again because it seems some contributors are not actually aware of the statistics. Gapminder. This is a lovely package that pulls UN data and charts it. You can press 'Play' and it runs through the data as a video, so you can see how the statistics change over time. You can select what data you want to display on the two axes and what countries you want included. You can pick log or linear for the axes. Here is an example that shows that the more electricity we use, the lower the infant mortality. Conclusion: if we want to save the planet, the more electricity we use the better, so the cheaper electricity is the better. Peter #125: I will make a last attempt to explain the calculations of the maximum storage capacity of the Snowy using 120-140 km of tunnels (>12 m bore, as you suggest for Tantangara/Blowering). A flow of 1 ML/sec (3,600 ML/h) delivers 970 MW of power dropping 100 m in height. Thus one tunnel dropping 900 m from Tantangara to Blowering will use about 0.33 ML/sec (1,200 ML/h) to generate 3,000 MW of power, and an active storage of 140,000 ML will allow 116 h of production, or 116 × 3 = 348 GWh total storage. Eucumbene has up to 4,800,000 ML and Blowering 1,600,000 ML potential (not sure of the active storage, but assuming Blowering can store 1,000,000 ML with suitable booster pumps).
If we assume we keep 140,000 ML capacity for Tantangara, we have an unused potential of 860,000 ML in Blowering. In no way was I suggesting the existing Tumut 1&2 be used to pump back to Eucumbene; rather, adding a separate >12 m tunnel between Talbingo and Eucumbene, capable of 0.33 ML/sec generating flow with slower-rate return pumping, plus a small 6 km pumping system from Blowering to Jounama (1 ML/sec, 10-30 m head), would allow water to flow in both directions between Blowering and Eucumbene. I think the present Tumut 1&2 flow rates are 0.24 ML/sec (theoretically 1,500 MW, but at lower efficiency only 1,200 MW). The new tunnel would allow 0.33 × 600 = 2,000 MW for a total generating CAPACITY of 3,200 MW, plus Tumut 3 (1,500 MW), for a total capacity of 4,700 MW. How much energy can be stored? At full operation, total flow into Talbingo will be 0.57 ML/sec and outflow to Tumut 3 will be 1.1 ML/sec, so the Talbingo active storage (160,000 ML) will be drained at the rate of 0.63 ML/sec (2,300 ML/h), or 70 h at 4.7 GW, or 330 GWh. After this, Tumut 3 would have to reduce output to 750 MW, and another 700,000 ML could flow from Eucumbene to Blowering at 0.57 ML/sec to generate about 3,950 MW for another 320 h, or about 1,200 GWh. Thus the total Tantangara and Eucumbene system would generate up to 7,700 MW with a total of (348 + 330 + 1,200) = 1,880 GWh of storage. Adding another 2,300 MW of reversible turbines at Tumut 3 would give a short-term output of 10,000 MW, a 3-day output of 7,700 MW and a much longer output (weeks) at 3,950 MW. Based on your cost of $6.7 billion for the 9 GW Tantangara project, this would be a similar cost, or $4 million/GWh of total storage, or $8 million/GWh for 5-day storage.
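The arithmetic above can be checked with a short script. The 970 MW per ML/sec per 100 m figure corresponds to P = ρgQH at roughly 99% conversion efficiency; I treat it here simply as the comment's own rule of thumb rather than an engineering value, and reproduce the headline numbers from it (a sketch, not a design calculation):

```python
# Sketch of the storage arithmetic in the comment above, using its own
# rule of thumb: 1 ML/sec falling 100 m delivers ~970 MW.
MW_PER_MLS_PER_100M = 970.0

def power_mw(flow_ml_s, head_m):
    """Generating power for a given flow (ML/sec) and head (m)."""
    return MW_PER_MLS_PER_100M * flow_ml_s * head_m / 100.0

# Tantangara -> Blowering: 0.33 ML/sec through a 900 m head
p_tb = power_mw(0.33, 900)        # ~2,880 MW, rounded to 3,000 MW in the comment
# Active storage of 140,000 ML drawn down at ~1,200 ML/h gives the running time
hours_tb = 140_000 / 1_200        # ~117 h; comment uses 116 h
storage_gwh = 3.0 * hours_tb      # ~350 GWh; comment quotes 348 GWh

# Proposed Talbingo -> Eucumbene tunnel: 0.33 ML/sec over a 600 m head
p_te = power_mw(0.33, 600)        # ~1,920 MW, rounded to 2,000 MW in the comment

print(f"Tantangara-Blowering: {p_tb:.0f} MW for {hours_tb:.0f} h = {storage_gwh:.0f} GWh")
print(f"Talbingo-Eucumbene tunnel: {p_te:.0f} MW")
```

The rounding from ~2,880 MW to 3,000 MW and ~1,920 MW to 2,000 MW is the comment's own; the totals (348 + 330 + 1,200 ≈ 1,880 GWh) follow once those rounded figures are accepted.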
Together with the other 4,000 MW of non-pumped hydro power and 740 MW of existing pumped storage, an additional 2,000 MW of turbines added at existing hydro, and the existing 4,000 MW of OCGT (NEM only), we would be able to get through any combined low-solar (assuming av. 8,000 MW peak) and low-wind period (assuming 24,000 MW average), using very small amounts of NG (1-2% of present CO2 emissions). The 10% over-capacity in wind would be mainly used to replace pumped hydro losses. Most transmission additions would be Sydney and Melbourne to the Snowy (if the solar was PV), 3,250 MW from Perth to Pt Augusta, and an increased Bass-Link (400 km). On the 'pearls of wisdom to swine' principle, I will not respond directly to the childish goading of finrod and soylent, but some here with a bit of class may still be wondering about 'power down', especially if all you know of the principle is the hysterical disinformation provided by those two orcs. No-one in their right mind would ever suggest that the third world power down, and you know who that poignant little fact is directed at. However, we can't bring the third world up to our standard of consumption, and it's not just energy constraints. Anyone who thinks this planet can support 8 billion people at first world consumption rates, and all it would take is lots of cheap energy, is truly incapable of rational thought. It is the first world that must be subjected to 'power down'. Far from causing billions of deaths, as a couple of loonies have stridently asserted, the process will ensure that billions will NOT die off. All it involves is living with less consumption: being careful about things like food miles, waste and excessive use of chemicals, localising, and using passive heating and cooling. There's much more if anyone's interested; they can go to Ted Trainer's site http://www.permaculturevisions.com/TedTrainerssite.html or google him for articles he has written.
Fran, the 'thrust of my argument' about population is based on what the real factors and effects of population growth are as they relate to production (food, power, land, etc etc). Things are not always as they seem. There are whole areas of the Philippines, to cite one example, that have returned to jungle and forest after millennia of human occupation, because the distortion of the Filipino economy now has over 50% of that population living in urban areas. Distortion brought on by globalization in Haiti has had the opposite effect, and the human pressure on remaining forests has virtually eliminated trees from that country (in this case the substitution of trees/charcoal for propane/butane gas and an increase in goat herding). I've avoided, and will continue to avoid, discussing here the issue of whether population 'control' is a "target", that is, a good thing or a bad thing. My point is that it is inevitably a function of the mode of production and always has been, regulated by poverty, urbanization, access to technology, etc etc. That is, it's an effect of these factors, not the primary cause of the problem. I said all that to say this: for a discussion of serious family planning (and I'm FOR that), the huge social distortions created by what is called "economics" in developing countries are going to have to change. As someone pointed out, no one is saying Germany or the UK is "too dense"; if they are, I've never, ever heard this expressed in the media. Population pressures can only be discussed as part of serious family planning in a democratic society. This hardly exists, with globalization and the religion of free trade and "let the market decide" being the modus operandi in the world today. Until that changes, 'reducing' population growth to some arbitrary number is like arguing whether we should grow grapes or blueberries in our controlled greenhouse on Mars when we set up a colony there.
I'm not sure, Fran, what it is you disagree with me on about the use of fossil fuels for transportation. I'm against 'em, as you are, yes? No-one in their right mind would ever suggest that the third world power down and you know who that poignant little fact is directed at. No? You failed, it seems, to distinguish between first world and third… thus what is one to think? I looked at the cute village life in the link you provided, SG. Cute. How are the 20 million people in and around London supposed to get on with their lives when living like actors in Sherwood Forest? Seriously… this is a 'catholic' solution… that is, in the literal meaning of forced universality: you have to have total social buy-in to such a utopia for it to work. Cities just go… bye-bye? Again, rhetoric and hyperbole aside, your vision of the world living in pastoral villages requires 100% buy-in and the rejection of every social norm I can think of [like, I WANT to live in SoHo in London…]. Who is to enforce this wonderful new life the website promotes? BTW… a 5 MW LFTR would be *ideal* to power the example villages. Just a thought. David PS… to see how remote a possibility this is, Engels' "The Origin of the Family, Private Property and the State" would be worth a read, to show we're not going in the direction you want us to. Western electorates will never power down willingly. Western governments would have to apply coercion to achieve such an objective. No political party legitimately seeking power would ever go that far, as they would be swept out of office for a generation. Despite a form of ETS being in operation in Europe for nearly a decade, no Euro country has seen a secular drop in power demand or supply. The obvious corollary to a permanent drop in power demand is a corresponding permanent drop in GDP.
Every single western government at the moment is expending billions of dollars to avoid a deep recession and get economies moving again, so a drop in living standards is not even close to becoming a realistic policy anywhere in the world. Given that, the best thing to do is to meet demand for cheap and abundant energy in the way that is least damaging to the atmosphere and allow living standards to continue rising. I'm not sure, Fran, what it is you disagree with me on about the use of fossil fuels for transportation. I'm against 'em, as you are, yes? I suppose I was inferring that when you said "phasing out coal, as priority No. 1" you meant that this was a greater priority than phasing out liquid fossil fuels … On the broader question of population, I'd be for measures that would have the effect of reducing population in the longer run through attrition, as fertility falls to an average of fewer than 1.0 child per woman, but I wouldn't be for setting specific targets. On the broader question of population, I'd be for measures that would have the effect of reducing population in the longer run through attrition, as fertility falls to an average of fewer than 1.0 child per woman, but I wouldn't be for setting specific targets. You don't have a problem with that in Western countries. So how would you implement that around the world, and in places where a male is the child of choice, without creating all sorts of social dislocation, such as in China where there are an estimated 300 million excess males in the younger generations? Yes, I think coal is priority No. 1. Coal is the biggest stationary source of GHG and kills hundreds of thousands of people every year directly. It's true more people die in wars over liquid petroleum, but those deaths are not as intrinsic to it as coal's are. But I'd concede they are of equal malevolent value. The difference, however, is that coal burning is a highly centralized utility form of power; liquid fuels are just the opposite.
The social investment by *individuals* in their cars is paramount. NNadir on the Daily Kos always refers to it as the "CarCULTure". True enough. All that for this: I think it will be, from every angle – socially, politically, technologically – a lot easier to phase out coal than liquid fuels. Now… if you were to WRITE SOMETHING :) here, in a completely different thread, on, say, biodiesel and synfuels, I'm all for it. But I don't consider the liquid fuel problem to be in any way in "competition" with anything in terms of power generation. It's parallel to the electricity discussion. Alfred, it's all relative. There is no doubt that coal saves millions of lives each year by providing a ready and reliable source of energy and higher standards of living. Yet if you can provide that same level of service via other means that have the same (or better) features as coal (a concentrated store of energy, easily transported, reliable baseload, cheap to supply, able to operate at large energy scales yet be compact and housed close to demand centres, etc.), yet don't suffer from the damaging effects of coal pollution (both direct and indirect – however you might weigh those relative risks), then you save many additional lives. I'm quite certain that David was talking about the additionality that an energy source like nuclear power provides. All that for this: I think it will be, from every angle – socially, politically, technologically – a lot easier to phase out coal than liquid fuels. Doubtless, and the technological challenges are considerable. I strongly believe the key to this lies less in new technology and more in reconfiguring the design of major population centres, so as to make cheap, efficient, effective, high-quality mass transport (much or all of which could be put onto an electric grid) available to nearly everyone.
If we design suburbs properly, everyone should be able to walk, use a bike or a local bus to do pretty much everything they need, and get their shopping delivered, or at worst use a small EV to do it. So strategy number 1 is to make it possible for people to largely give up their cars and use grid-powered vehicles. Then your biofuels only have to shoulder the load for vehicles for which grid power would not be feasible. Since this would be the minority demand, producing biofuels at the scale necessary from algae would be feasible. @the deathmonger who calls him/herself Salient Green: "No-one in their right mind would ever suggest that the third world power down and you know who that poignant little fact is directed at. However we can't bring the third world up to our standard of consumption, and it's not just energy constraints. Anyone who thinks this planet can support 8 billion people at first world consumption rates, and all it would take is lots of cheap energy, is truly incapable of rational thought." This planet is stocked with resources of energy and matter vast beyond your woeful intellectual grasp. There is enough extractable uranium and thorium in the earth's crust to support a human population far larger than the current level until the death of the sun. There are no fundamental shortages of any significant material resource. All it will take to survive in great high-tech style is some engineering skill, good management, and the will to rally these resources to that cause. In the end, people will respond to a positive message with much greater enthusiasm than to all your talk of limits, restrictions and 'powerdown', with all that it implies – which you are not willing to describe explicitly (and for very good reason), but which I and others will not let you ignore. So strategy number 1 is to make it possible for people to largely give up their cars and use grid-powered vehicles. It's not so simple, Fran.
Decades upon decades of poor planning decisions and rank NIMBY policies by various state governments have made it extremely difficult to give up cars without causing great hardship for a lot of people. Some of the burbs, in fact most of the burbs, in Australia and the US force people to buy cars in order to commute to work and do other sundry tasks. The fact is that people don't really go joy riding in cars on Sundays any more. Cars are used to help with modern standards of living: commuting to work, taking kids to school, and shopping. Most people live too far away from public transport points to even contemplate public transport; otherwise life would be very difficult. Public transport like rail is particularly useful for carrying large numbers of people in straight lines, as in Tokyo, or for taking people from Midtown Manhattan to the Downtown area. However, in Australia work commuting is extremely diffuse, with people traveling in all sorts of directions to get to work these days. Allan Moran from the CIS (?) published a study which showed that only 15% of work commutes these days are down a straight line, as the majority of people no longer commute from burb to CBD. Most in fact travel from burb to burb in all sorts of directions. This makes the public transport option very difficult given the way our cities are planned. One way to counter such problems would be to remove height restrictions in the cities and attempt to create Manhattan-like living, which incidentally is actually very green, as cars are really more of a nuisance in Manhattan than a necessity. I lived there for 15 years, and it was only after being there for 5 years that I bought a car – one I rarely used and ended up hating, writing a monthly cheque for garaging fees for something that was of little use to me. Here's a great piece from City Journal, a free-market, NYC-based think-tank publication, that talks about this and about why bad planning decisions in places like California make the country less green.
It talks about how housing in Texas costs about $200,000 to $250,000 for an average home, while the same home costs about $500,000 in California once it’s loaded up with all sorts of planning restrictions. The weather in parts of California is very conducive to living without much heating or a/c for most of the year, while Texas has shocking weather in comparison. “Green Cities, Brown Suburbs: to save the planet, build more skyscrapers, especially in California.”

David Walters #157: “You failed it seems to distinguish between first world and third…thus what is one to think?”

One would think that the third world uses little and sometimes no power, that I did mention raising the third world out of poverty, and that Barry has recently referred to Ted Trainer, which should have raised a fair bit of awareness. No matter, people make erroneous assumptions all the time; it’s more about their behavior based on those assumptions, and that’s what I take issue with. Anyway, I correctly pegged you as a class above the others.

The 5MW LFTR sounds very elegant. I am a bit of a fan of them. The trouble is, by the time they could realistically be mass produced, solar PV and lithium or other storage technology will be much more advanced.

On your PS, I realise it is a remote possibility but firmly believe it is the right thing to do. I think there is much more likelihood of our business and political leaders taking us into resource and environmental crisis, from which position the first world will be too self-involved to care a whit about possibly billions dying in the third world. It will not stop me from increasing awareness of the issues.

Alfred Nock #162 said, on coal killing hundreds of thousands: “No it saves tens of millions of people every year David. Now what can you possibly be talking about?”

That had to be irony, right? If not, let’s put it realistically.
The ENERGY from burning coal enables millions of lives to be saved, but the EMISSIONS from burning coal cause hundreds of thousands of deaths, probably many millions of health disorders in humans, and untold damage to the natural world.

I very substantially agree with your perspective — increasing urban densities is a key strategy in reducing the energy cost of providing urban people with the services they need. I don’t think it all has to be high rise — if by high rise we are talking more than about six stories — and I think there is scope to have a mix of densities, not excluding villas … But 30 people per ha is too low — something like 100 is closer to the mark.

I think there are things we could do in the interim. Since people have cars and there is existing road infrastructure, it would make sense to build large car parks (capacity 3 x 5,000 = 15,000 vehicles) at or just before major choke points and service these with buses to the city centre. In Sydney, for example, this would seriously unclutter the motorways, allowing those for whom the service was not useful a free run. In the longer run this would encourage car pooling. You could put retailing and housing into these buildings for extra utility, and even have wind/solar PV on the top and plug-in recharge facilities in them.

A second thing I’d do is change the basis on which cars are put on the road.
I’d reduce the registration charge to a nominal fee and abolish fuel taxes, but charge everyone a distance-based fee based on:

a) how much CO2 (assumed pro-rata at $100 per tonne) and other pollutants came from the tailpipe, with a credit for lifecycle offsets from properly benchmarked biofuels
b) the traffic volumes where they were driving at the time they were driving (a GPS device would be installed to track this)
c) their accident/road compliance/driver competence profiles
d) the tare of their vehicle

As to the design of suburbs, I’d have them designed like a peer-to-peer bus network diagram — so that each suburb would be like a node off a major connecting road (MCR). There would be just two ways in and out (one at each end of the suburb and only one connecting to the MCR), and to pass through the non-MCR connector you’d need a local tollway-style tag. This would stop rat runs but allow local flexibility to go to adjoining suburbs by car. Streets would carry only local traffic and everyone else would be forced onto the MCRs or mass transit. Of course, on foot or by bicycle you’d be able to move freely past bollards and gates, through parks etc …

Option 1, Talbingo-Blowering, is clearly the best option. Option 4, Tumut 3 Expansion, is the least attractive. Option 2 is preferred to Option 3. The options are in order of preference. I suspect the best program would be to proceed with Option 1 first. Option 2 could be built at a later date. Neither of these options interferes with or compromises the existing T1, T2 and T3 development. They can all run in parallel. T3 Expansion could be added at a later date; however, I suspect there would be other more attractive options by then. I do not believe Eucumbene-Talbingo would ever be viable. It would share the limited storage capacity of Talbingo with T3. This would compromise the efficient and flexible operation of T3 (T3 is currently our biggest pumped storage scheme and was always one of the most efficient of the Snowy generation assets).
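Returning to the road-charge scheme in points a) to d) above, it amounts to a simple per-trip calculation. The sketch below is purely illustrative: the only figure taken from the comment is the $100/tonne CO2 price; every rate, name and parameter here is a made-up assumption for demonstration.

```python
# Hypothetical sketch of the distance-based road charge described above.
# Only the $100/tonne CO2 price comes from the comment; all other
# rates and parameter names are illustrative assumptions.

CO2_PRICE_PER_TONNE = 100.0  # $/tonne CO2, pro-rata as stated in point a)

def trip_charge(distance_km, co2_g_per_km, congestion_rate_per_km,
                risk_multiplier, tare_tonnes, tare_rate_per_tonne_km=0.002):
    """Return a per-trip road-use charge in dollars.

    a) emissions component: tailpipe CO2 priced at $100/tonne
    b) congestion component: per-km rate set by traffic volume at trip time
    c) driver profile: multiplier from accident/compliance record
    d) tare component: heavier vehicles pay more per km
    """
    emissions = distance_km * co2_g_per_km / 1e6 * CO2_PRICE_PER_TONNE
    congestion = distance_km * congestion_rate_per_km
    tare = distance_km * tare_tonnes * tare_rate_per_tonne_km
    return (emissions + congestion + tare) * risk_multiplier

# 20 km commute, 180 g/km car, 5c/km peak congestion rate,
# clean driving record (multiplier 1.0), 1.4 t vehicle
charge = trip_charge(20, 180, 0.05, 1.0, 1.4)
```

Under these assumed rates the emissions component is small relative to the congestion component, which fits the scheme's intent of pricing peak-hour road use rather than just fuel.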
I’ve inserted my responses within your text.

[NH] I will make a last attempt to explain the calculations of the maximum storage capacity of the Snowy using 120-140km of tunnels (>12m bore as you suggest for Tantangara/Blowering).

[PL] 130km of tunnels (with steel lining and surge shafts in similar proportion by length as Tantangara-Blowering) would cost $4.4 billion. This cost does not include pumping or generating stations. The cost would be higher if the average length of the tunnels is shorter, which it would be.

[NH] A flow of 1ML/sec delivers 970MW of power (3,600ML/h) dropping 100m in height.

[PL] A flow of 1,000m3/s dropping 100m delivers 981MW excluding efficiency losses, or 932MW at 95% efficiency (and excluding head loss due to tunnel friction, which depends on tunnel diameter, length and the roughness of the tunnel surface).

[NH] Thus one 900m tunnel from Tantangara to Blowering will use about 0.33ML/sec (1,200ML/h) to generate 3,000MW of power, and an active storage of 140,000ML will allow 116h of production, or 116×3 = 348GWh total storage.

[PL] The design, calculations and cost use the same flow rate as Tumut 3, that is 1,133m3/s. Flow for one tunnel would be 377m3/s for 3,000MW. Tantangara would have storage for 58h of generation at peak power. It would take 111h to fill by pumping.

[NH] In no way was I suggesting the existing Tumut 1 & 2 be used to pump back to Eucumbene, but adding a separate >12m tunnel between Talbingo and Eucumbene capable of 0.33ML/sec generating power and a slower-rate return pumping, plus a small 6km pumping system from Blowering to Jounama (1ML/sec, 10-30m head), would allow water to flow in both directions between Blowering and Eucumbene.

[PL] A Talbingo-Eucumbene tunnel, with generating and pump station, would cost $2.3 billion (very roughly). It would generate 6GW. Flow rate (m3/s): generating = 377; pumping = 200.
[PL] The pumping system from Blowering to Jounama would be 20km (not 6km, because it needs to draw from the deep end of Blowering). Hydraulic head is 86m from Blowering MOL to Jounama MSL (not 10m to 30m). The flow rate of pumping from Blowering for the 1,500MW new T3 (the smaller option), at half the pumping rate of the new T3, would be 300m3/s. The flow rate of pumping from Blowering for the 3,000MW new T3 (the larger option), at half the pumping rate of the new T3, would be 600m3/s.

[PL] We’d need to build a new dam downstream from Jounama dam to make this work. The new dam would approximately triple to quadruple the active storage capacity of Jounama Reservoir. Rough cost estimate: $100 million.

[PL] Rough cost for a T3 power increase of 1,500MW = $1.9 billion. For an increase of 3,000MW = $3.6 billion.

[NH] The new tunnel would allow 0.33 x 600 = 2,000MW for a total generating CAPACITY of 3,200MW, plus Tumut 3 (1,500MW) for a total capacity of 4,700MW.

[PL] The Talbingo-Eucumbene tunnel could generate 2,000MW + the T1 + T2 generating CAPACITY of 2,600MW, plus Tumut 3 (1,500MW), for a total capacity of 4,100MW.

Your last two paragraphs remind me of the expression “you can’t make a silk purse out of a sow’s ear”. It looks to me as if you are prepared to advocate to the Australian and state governments that they should commit to a wind power system that depends on using all the stored hydro energy in the country just to get us through three days of low wind and sunshine. What happens when a second event occurs within a few days? It should be plain as day by now that wind and solar are simply not viable. They are not economic. They are not low cost. A while ago the wind power advocates were arguing that ‘the wind is always blowing somewhere’. I get the impression from your previous posts that you now argue that ‘the wind is always blowing everywhere’. Neil, your figures simply do not add up.
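The corrected flow and power figures in the [PL] notes above follow from the standard hydro relation P = Q x rho x g x h. A quick check (the 95% efficiency and the 150m head are figures used elsewhere in this thread; the function name is mine):

```python
# Hydraulic power from flow and head: P = Q * rho * g * h.
RHO = 1000.0   # density of water, kg/m3
G = 9.81       # acceleration due to gravity, m/s2

def power_mw(flow_m3s, head_m, efficiency=1.0):
    """Power in MW for a given flow (m3/s) and head (m)."""
    return flow_m3s * RHO * G * head_m * efficiency / 1e6

# 1,000 m3/s dropping 100 m: ~981 MW gross, ~932 MW at 95% efficiency,
# matching the corrected figures quoted above.
gross_mw = power_mw(1000, 100)
net_mw = power_mw(1000, 100, 0.95)

# Volume of water per kWh at a 150 m head (the height used in the
# "Solar Power Realities" paper): E = V*rho*g*h => V = 3.6e6 J / (rho*g*h)
volume_per_kwh_m3 = 3.6e6 / (RHO * G * 150)   # ~2.45 m3 per kWh
```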
You do not have 33GW of generating capacity to meet peak demand when the wind is not blowing and the sun is not shining. I’d also add that it is not acceptable to draw down hydro storage that your wind generators did not fill. This storage must be maintained for emergency use and grid stabilisation. The power you can draw on is only what you’ve stored by pumping. Face it, wind is simply not going to work. However, all is not lost, because there is a far better option. All we have to do is get past the irrational hang-ups.

@Alfred: I think coal was one of the most important progressive technological developments in human energy history. It was, and is, vitally important. There is probably nothing coal does that gas can’t do better among the fossil fuels, except the production of coke for the steel industry. But as Barry noted, the accumulated facts surrounding coal show it to be detrimental when *other* superior energy sources are available, like nuclear. As coal is the largest stationary source of GHG emissions, phasing out coal (and other fossil fuels) needs to be priority No. 1 for climate and energy activists.

@Fran: I too see the future as grid-based autos and, possibly, biofuels. The other issue is to give people incentives to use public transportation (like making it free, for example). But in the US only 6% of the population uses public transportation, so we have to make it more available, obviously. There is also another major hindrance to getting people out of their cars in the US, and that is suburbia. Most of the US population lives in somewhat diffuse, largely suburban residential neighborhoods. I do, for example, living outside of SF. For me to get to work, I’d have to take a BART train and then a street car. It takes 1 hour and 15 minutes door to door. In my truck it takes 14 minutes. Wanna guess what I do?
This is true for many people, and it will take generations of change to make the US population of 300 million more friendly to mass transportation.

Perhaps we should put the reservoirs you are suggesting on top of the solar towers :) To assist you to understand what you are suggesting, and so you can do some of your own calculations, below I give the formulae to calculate the volume of water and the height from upper reservoir to lower reservoir needed to get 1kWh of energy, and the flow rate needed to get 1kW of power. If you haven’t already, you might like to read the “Solar Power Realities” paper. It shows the area that would need to be inundated, at 150m height above the lower reservoir, to provide our energy demand for a day. There is also a problem with putting sea water in reservoirs on land: how do we prevent infiltration of salt water into the ground water? Love your ideas, but much of what you are suggesting is already very well understood. A great background on how to do some simple calculations yourself is provided by David MacKay in his book “Sustainable Energy – without the hot air”. You can access the whole book from the blog roll list at the top left of any of the BNC web pages. Here are the formulae:

Power = flow rate x density of water x acceleration due to gravity x hydraulic head (height)
Power (W) = flow rate (m3/s) x 1000kg/m3 x 9.81m/s2 x head (m); divide by 1,000 for kW

“Most transmission additions would be Sydney and Melbourne to Snowy (if solar was PV) and 3,250MW from Perth to Pt Augusta and an increased Bass-Link (400km).”

I don’t agree with this statement. More on this below. Also, in post #35 you said: Your study of transmission costs is disappointing. The theory behind the ‘wind blowing somewhere’ idea IS NOT to have the entire wind capacity moved from one side of the continent to the other.
For example, WA would have 20% of the wind capacity (SA, TAS, VIC and NSW about the same, with a small amount in QLD), so on the observation that wind dispersed over the size of a state will at most generate 75% of capacity, WA would only ever produce 15% of total capacity (9GW, not 25GW), and some of this would be used locally (3GW), so at most 6GW would be exported east (even less with CAES), but not to Sydney: to Pt Augusta, with perhaps another 1-2GW moved to Adelaide. Sydney and Melbourne would get most power from pumped storage (moving much shorter distances). When high winds exist in NSW and VIC, energy would be returned to the Snowy, with 2-3GW to WA (if no wind in WA, which is most unlikely considering the 2,000km of good wind coastline). Your statement that 10,000km of lines would have to carry 25GW totally misunderstands how grids work. Feeder lines will only have the capacity of the solar and wind farms, and none of these would be anything like 25GW. The major transmission links would be Snowy to Sydney, Snowy to Melbourne, Melbourne to Tasmania and Pt Augusta to Perth. We already have a large grid in SE Australia, but it would have to be increased. OCGT/CCGT and nuclear will probably be sited at existing coal fired power stations using existing transmission lines.

I disagree with most of this. The statements are correct for a grid supplied by reliable generators, like fossil fuel, nuclear and hydro, but not for intermittent generators like wind and solar. To make this easier for our readers to follow, let’s consider a scenario with wind power for generation and pumped hydro storage in the Snowy Mountains for energy storage. The wind farms do not have on-site energy storage. The wind farms are distributed along the south coast from Perth to Melbourne. We can have several days of very low levels of generation. Occasionally there is no generation at all. At other times one or more areas may be generating at near maximum output.
Regarding sizing the transmission lines: if the wind power advocates want to be able to include, in their average capacity factors and average power outputs, the full power output of a wind farm, then the transmission line must be sized to carry the full capacity of the wind farm, not just its average output. Similarly for a region of wind farms. The transmission lines must be able to carry the full capacity of all the wind farms if we want access to all the power when the wind farms are generating at full power. If we ever need all the power that Western Australia’s wind farms can generate, we must size the transmission system to carry all that power.

With intermittent generators we can have the storage at the generator (e.g. chemical energy storage), or centrally located (e.g. pumped hydro), or a mixture. For the case where the storage is located at the generator (such as with solar thermal) and there is sufficient storage for the power station to provide continuous power on demand throughout the year (even through several days of overcast conditions), the transmission line will be sized to carry the peak power that would be demanded from that power station. The transmission lines must be able to carry that power to the demand centres. For the case where the storage is centrally located (e.g. pumped hydro), the transmission line will be sized to carry the peak power output that would be supplied by any region of wind farms. The main transmission lines would run from the generators to the central storage site. The enhancements to the grid from the pumped storage sites to the demand centres would be less significant (relatively).

The transmission system requirements to support intermittent renewable energy generators will be very costly. The paper attached to the top of this thread shows that the cost of the transmission system for the solar thermal option would be greater than the total cost of the nuclear option.
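The sizing rule above can be shown with a toy example. The farm capacities and the 33% capacity factor here are made-up illustrative numbers, not figures from the paper:

```python
# Transmission sizing for a region of wind farms: to get credit for the
# region's full output, the line must carry the sum of the nameplate
# capacities, not the long-run average output.
farm_capacities_mw = [300, 450, 250]   # hypothetical wind farms in one region
capacity_factor = 0.33                 # assumed long-run average output fraction

line_capacity_mw = sum(farm_capacities_mw)                     # 1,000 MW line
average_output_mw = sum(farm_capacities_mw) * capacity_factor  # only 330 MW

# A line sized at the 330 MW average would spill most of the region's
# output whenever the farms approached full power, so the averages
# quoted for wind would no longer be achievable.
```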
I certainly agree that the problem won’t be easy — price signals on both fuel and motor vehicle usage will be needed, along with coextensive measures to relocate people in such a way that high quality services can be supplied cost-effectively. In my own case, I spend an average of 55 minutes each way by car in preference to a walking plus public transport journey that would take about 70 minutes (I do carpool, though). If what I outlined above were in place, my carpool journey time would probably fall to about 35 minutes and the public transport journey to not much more (maybe 40 minutes).

Peter Lang (178) — I don’t know enough about Australia to work out actual estimates. Here we have considerable hydro, with winter weather doing the pumping. The hydro provides backup for the wind being installed under a tax incentive plan. BPA has already stated that they cannot provide backup for more than 20% of the Pacific Northwest grid; that amount of wind is projected to be reached in 2025 CE. I’ve seen a photograph of some of the Nullarbor coastline and the plain beyond. Other than the distance to consumption, maybe that would work as a place to locate sea water reservoirs. As for soaking into the ground, there are several methods to rather inexpensively keep that from happening.

People have an infinite number of possible suggestions that all look great until they are costed. There is no point at all in chasing many of these suggestions. The bedrock under the Nullarbor plain is limestone. It is cavernous. The cost of sealing a reservoir is totally prohibitive. It is clear from your suggestions that you have no appreciation of the volumes of water involved, or of the area that would be required to be inundated. Have a look at the Solar Power Realities paper. This will give you some perspective. David, intermittent renewables are totally uneconomic, and less environmentally benign than nuclear. So why do you keep pushing them?
Peter Lang (182) — Wind is apparently the choice here in the Pacific Northwest. There is a paper indicating that once all the costs, actually all of them, are included for the historical record in the USA, nuclear has cost around $0.25–0.30 per kWh. So that does not look so economic to me. Perhaps in the future nuclear will be cost effective, but so far it does not seem so to me.

Peter #175, 179, thank you for some of the corrections to the flow rates of Tumut 1 and 2 and the power outputs. I am not sure why you cannot envision Eucumbene to Talbingo and Talbingo to Jounama/Blowering acting as one system, with Talbingo providing a buffer for short-term increased power outputs similar to what is available now. You seem to now agree that we could store rather large amounts of energy (several days’ supply) in the Snowy with the existing dams, which was my point. The issue of meeting peak demand for 1-6 hours is separate from providing 1,200GWh to cover widespread cloud/low wind conditions. The former is an issue of capacity (GW), the latter of storage energy (GWh).

You are still missing the issue of dispersed wind/solar farms and the claimed need for 10,000km of 25GW transmission lines. Take the case of Perth having a 3GW capacity wind farm to the south, a 3GW capacity solar farm to the north, and a 3GW transmission line east to Adelaide and on to the Snowy. Because Perth and Adelaide (with 3GW of local wind farms) consume about 3GW at peak, the 6GW of wind farms and 3GW of solar are never going to require more than 3GW of transmission capacity from Perth to Adelaide. Adelaide is linked to Melbourne and on to Tasmania hydro, and Melbourne is linked to the Snowy. In the case where wind farms are generating at maximum in WA and SA (about 75% of 6GW), the maximum load would be <3GW from Perth to Adelaide and 65% of total power consumption (i.e. it will use about 8GW of the 12GW storage capacity).
Major energy flows do not have to move from one end of the grid to the other, just minimum energy flows; for wind this would be about 10% of capacity, less for solar unless all of the solar was in one location. A similar grid would be highly desirable for nuclear power. For example, if Perth had 3x1GW reactors there would be a small chance that one of the three would have an unscheduled outage while a second was on scheduled shutdown, so 2GW from the east coast would make sense. The other alternative is to keep 2-3GW of OCGT capacity on standby, the same solution that would be used to provide insurance against continent-wide cloud cover and continent-wide low wind occurring on the same day.

As someone who has always been keen on pumped storage, and who was especially keen on seaboard pumped storage (since you save yourself the cost of a lower reservoir), I’m sympathetic to your argument here. Yet the cost of the lower reservoir is only one of the challenges. Fairly obviously you need lots of head pressure, so the ideal location will have topography at high elevation close to the shoreline. It’s also going to have to have quite a bit of scope to be modified to accommodate a very substantial volume of water, which implies that it is structurally very sound and has a large, fairly flat area (or one that could be made so). Of course you don’t want this place to be a long way from the demand for power or a grid point, otherwise transmission costs become a factor, and ideally you’d want to be close enough to have it do desal cost-effectively, since then you can spread the cost to water users. This tends to narrow your options sharply. Consider also the quantity of concrete and steel you’re going to need to retain the volume of water you have in mind. There’s a huge built energy cost right there. Storing, for argument’s sake, 0.1 teralitres (100GL) of water would be roughly 100 million tonnes.
Assuming you think you can contain 1 cubic metre of water securely with 0.25 cubic metres of reinforced concrete, your major cost will be the 25 million cubic metres of concrete, each cubic metre of which weighs about 2,500kg. The topography is going to have to be very strong indeed. I don’t know how much this would cost to build, but I’m guessing $100 per tonne wouldn’t be high, and might well be low. And of course you haven’t bought any pumps or turbines or other equipment yet. Assuming a head pressure of 100m, there’s 27.2GWh of storage — a little more than one hour of Australia’s average power.

I’ll address your points one at a time. It’s too difficult doing it in one large post.

“I am not sure why you cannot envision Eucumbene to Talbingo and Talbingo to Jounama/Blowering acting as one system with Talbingo providing buffer for short term increased power outputs similar to what is available now.”

Likewise, I am not sure why you cannot see that it is the least economic of the four options and that, in addition, it imposes constraints on and reduces the efficiency of the existing assets, as I have explained. I have not attempted to cost the loss, but I suspect it is substantial.

“You seem to now agree that we could store rather large amounts of energy (several days supply) in the Snowy with the existing dams, which was my point.”

That is a misrepresentation of my position. I agree that we can from a pure physics perspective. But my position relates to the cost-effectiveness of the proposals. I agree there is substantial untapped energy storage in existing structures; however, I am not sure how much is viable to develop. I also believe the requirements for storing energy from intermittent generators are very different from storing for reliable generators, which pump at a constant rate through the hours of the night when baseload is less than average daily demand. In part this issue relates to the transmission, where we are poles apart (so to speak).
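The seaboard reservoir arithmetic can be checked directly. The sketch below takes the stored volume as 1e8 m3 (about 100 million tonnes of water, which is the volume consistent with the 27.2GWh figure at 100m head) and applies the stated 0.25 m3 of concrete per m3 of water:

```python
# Check of the seaboard pumped-storage figures: 1e8 m3 of sea water
# (about 100 million tonnes) retained behind a wall, with 100 m of head.
RHO = 1000.0   # density of water, kg/m3
G = 9.81       # acceleration due to gravity, m/s2

volume_m3 = 1e8
head_m = 100.0

mass_tonnes = volume_m3 * RHO / 1000.0              # 100 million tonnes of water
energy_gwh = volume_m3 * RHO * G * head_m / 3.6e12  # ~27.25 GWh stored

# 0.25 m3 of reinforced concrete per m3 of water contained:
concrete_m3 = 0.25 * volume_m3                      # 25 million cubic metres
```

At roughly 2,500kg per cubic metre of concrete, that containment is of the order of 60 million tonnes of concrete before any pumps or turbines are bought, which is the scale of the cost point being made above.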
Secondly, you say “we can store rather large amounts of energy”. The active capacity of the reservoirs is not the constraint. The constraint is how much we can pump per day. The economic viability depends largely on the length of the tunnels required to connect the existing reservoirs. The tunnels are the high cost item; they comprise about 50% of the cost of the Tantangara-Blowering facility. Tantangara can store 58 hours of energy at full generation capacity. However, that assumes Tantangara is used for nothing else. It means a lot of the water that Tantangara catches and diverts to Eucumbene would be lost. It would be spilled over the Tantangara spillway and run down the Murrumbidgee. So this loss of water (i.e. energy) should be factored in. I haven’t done that. So your statements are misleading. They are not a correct interpretation of what I said.

I do admit that the power of the Tantangara-Blowering facility did surprise me. That does look to be a potentially viable option, although what I’ve done is a very preliminary, purely desk-top analysis. I have some overseas colleagues checking my calculations and costs. It will be interesting to see what comes back.

By the way, do you have any costs? You mentioned that you do for the Blowering-Jounama and Tumut 3 expansion project. Are you willing to post them here? I’d particularly like to see any costs you have relating to the following:

1. Civil component of a new Tumut 3 power station
2. Headrace excavation or tunnel and inlet structure
3. Penstocks (same as T3)
4. Turbines (same as T3)
5. Generators (same as T3)
6. Six pumps (same as the three in T3)
7. Tailrace excavation
8. Pumps for Blowering to Jounama
9. Pipes for Blowering to Jounama
10. New dam downstream from Jounama Dam

“The issue of meeting peak demand for 1-6 hours is separate from providing 1,200GWh due to widespread cloud/low wind conditions. The former is an issue of capacity (GW), the latter storage energy (GWh).”

I agree.
The point I was making about power is that, for the scenario I have analysed (i.e. the NEM demand in 2007, and no fossil fuels), we need the generation capacity to meet peak demand. I also added that we cannot rob the energy stored in the Snowy, because it is required for the maintenance of grid stability and for emergencies. The Snowy is constrained by the amount of water entering its dams. Recently the Snowy’s capacity factor was 14% for a year, because of the lack of water inflow. So we cannot rob that water to try to make wind and solar power look viable. Wind and solar power need to stand on their own. So I am adding a new constraint to my scenario: the intermittent generators can draw what they have stored, but no more.

If we need to add very large amounts of storage capacity (as we would for intermittent renewables), then Eucumbene-Blowering (tripled) would be the way to go. On the other hand, Tantangara-Blowering would be more than sufficient to allow nuclear to provide the total NEM demand (2007), as laid out in the paper “Solar Power Realities – Addendum” and summarised in the overview at the top of this thread. To support intermittent renewables, we need 33GW of power and 1,350GWh of energy storage (for three days). To support nuclear, we need 8GW of power and about 50GWh of energy storage. Quite a difference! And that storage required for renewables is on top of the far higher generation costs and the far higher transmission costs.

This is my reply to the last part of your post #184. I hope this clarifies the issue, although I suspect we are some distance apart on this, in part due to the different scenarios we are analysing. I think you want to consider the scenario of a potential position and generation mix in 2030. What I’ve been doing, and to keep consistency with the other papers I’d prefer to stick with it for now, is to consider the technologies available now that could provide the NEM’s 2007 demand without burning fossil fuels.
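Expressed as hours of discharge at full power, the two storage requirements quoted above compare as follows (simple ratios of the figures stated in the text; the variable names are mine):

```python
# Storage needed to back intermittent renewables vs nuclear, using the
# figures stated above for the fossil-free 2007 NEM scenario.
renewables_gw, renewables_storage_gwh = 33, 1350
nuclear_gw, nuclear_storage_gwh = 8, 50

renewables_hours = renewables_storage_gwh / renewables_gw  # ~41 h at full power
nuclear_hours = nuclear_storage_gwh / nuclear_gw           # ~6 h at full power
storage_ratio = renewables_storage_gwh / nuclear_storage_gwh  # 27x the energy
```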
So that, if we really want to make the changes quickly, we could, and we’d have some idea of the cost of the options. Having said that, below is my response to the last part of your post #184.

“You are still missing the issue of wind/solar farms dispersed and the need for 10,000km of 25GW transmission lines. Take the case of Perth having a 3GW capacity wind farm to the South and a 3GW capacity solar farm to the North and a 3GW transmission line to the East to Adelaide and on to the Snowy. Because Perth and Adelaide (with 3GW local wind farms) consume about 3GW at peak, the 6GW of wind farms and 3GW solar are never going to require more than 3GW transmission capacity from Perth to Adelaide.”

The premise is false. You are not looking at the problem correctly. Following is the way to analyse it. The situation is that there is zero or near zero wind over the wind farms in eastern Australia. The only place with wind is SW Western Australia. We are dealing with the wind farms at the moment; leave the solar power stations out of it, since they are totally uneconomic. The average demand in the eastern states is 25GW. We will store energy in pumped-hydro storage when demand is less than 25GW and release energy from pumped-hydro storage when demand is more than 25GW. So we need transmission lines with 25GW capacity. By the way, this assumes that all the wind farms have their own on-site storage, and that this storage is sufficient to allow them to provide enough power to meet the 25GW demand at all times. If the wind farms do not have their own on-site storage, the transmission line needs even more than 25GW capacity.

“Adelaide is linked to Melbourne and on to Tasmania hydro and Melbourne linked to the Snowy.”

These links are totally inadequate. They can’t even handle the transient flows we have on a relatively stable, fossil fuel powered system, let alone on a fully wind powered system. The two interconnections from South Australia to Victoria are 200MW and 250MW.
They would have to be increased to 25GW capacity (less SA demand) to transmit the power from WA.

“In the case where wind farms are generating maximum at WA, SA (about 75% of 6GW) the maximum load would be <3GW for Perth to Adelaide and 65% of total power consumption (ie will use about 8GW of the 12GW storage capacity).”

I don’t follow this bit. Anyway, the scenario we are considering is the case where the only power is coming from WA, not from SA.

“Major energy flows do not have to move from one end of the grid to the other, just minimum energy flows, for wind this would be about 10% of capacity, less for solar unless all of the solar was in one location.”

The scenario is that we have a demand of 25GW in the eastern states and the only wind farms generating are in WA. So we need to transmit 25GW.

“A similar grid would be highly desirable for nuclear power, for example if Perth had 3×1GW reactors there would be a small chance that 1 of the 3 would have an unscheduled outage, while a second was on scheduled shutdown so 2GW from the E coast would make sense.”

Transmission from the eastern states is one option to provide the necessary redundancy. There are other options; for example, five 600MW units instead of three 1GW units. It depends on which is the least cost. The transmission lines need a redundant line also.

“The other alternative is to keep 2-3GW OCGT capacity on standby, the same solution that would be used to provide insurance against continental wide cloud cover and continental wide low wind occurring on the same day.”

We’d need 25GW of OCGT back-up for wind (less the hydro generating capacity and less the transmission capacity from WA)? The wind and solar power outages are frequent. The sort of scenario you paint for the nuclear outages would be rare. We do have to have sufficient back-up to cover for them, but it is not the same situation as with wind, where it is a frequent occurrence. Anyway, it is quite likely that Australia would not adopt large nuclear units.
To facilitate the change from coal to nuclear, smaller power reactors that are more closely matched to our coal-fired units may be better. The nuclear/grid issues were worked out long ago. The management and capital cost issues of the grid where the supply is from nuclear power are totally insignificant compared with the problem of trying to manage intermittent renewables. Option 1, Talbingo-Blowering, is clearly the best option. Option 4, Tumut 3 Expansion, is the least attractive. Option 2 is preferred to Option 3. The options are in order of preference. I suspect the best program would be to proceed with Option 1 first. Option 2 could be built at a later date. Options 1 and 2 would not interfere with or compromise (much) the existing T1, T2 and T3 development. They can all run in parallel. Option 4, T3 Expansion and pump from Blowering, could be added at a later date. However, I suspect there would be other more attractive options. I do not believe Eucumbene-Talbingo would be viable. It would be sharing the limited storage capacity of Talbingo with T3. This would compromise the efficient and flexible operation of T3 (T3 is currently our biggest pumped-storage scheme and was always one of the most efficient of the Snowy generation assets). The main constraint on Tumut 3 is the insufficient downstream storage. This problem would be exacerbated by the proposed extension. I suspect the new dam would be virtually mandatory for this option to be considered. Peter #191, It’s a valid point to have a theoretical simulation of power demand in 2007, but it should consider the whole of Australia. The reality is that it’s going to take 20-30 years to replace all coal-fired power, so saying we have to have a solution now that uses no FF is a bit restrictive. It would make more sense to compare coal replaced by CCGT with other options such as all nuclear or all wind or mixes of 2 or more.
To elaborate: the situation of just wind power replacing FF-generated electricity would need ×3 (25GW NEM and 2.5GW WA and considerable off-grid NG power generation, for example LNG, the goldfields mines, alumina refining). For simplicity let’s say this is 28GW average (85GW capacity) for wind. QLD would have just a few % and TAS up to 15%, with WA, SA, VIC and NSW each about 20% of this capacity (17GW in WA). How much transmission capacity is needed from WA to eastern Australia? Clearly not 25GW. The wind regions of WA cover 3,000km, so the maximum output would be considerably less than the 75% output of the 13 NEM farms. Let’s say 70% of capacity 99% of the time with a small power shed (5% of output 1% of the time), or 11.9GW maximum. But WA uses about 2.5-4GW, so the maximum available for export would be 9.4GW. Since WA has limited pumped storage, they may want 3GW CAES available to ensure that an HVDC link to SA would be used to move up to 6.4GW to SA. This is about 8% of the capacity of the entire grid. One region never has to move 25GW; of the 6.4GW, 2-3GW would be used in SA and the other 3-4GW would go to other cities if no other wind is available, or go to pumped storage in the Snowy or TAS if other regions had adequate wind. SA, VIC and NSW have more options if they are the only high-wind regions; most would be used locally, with the surplus (9-10GW) going to other regions, so SA would be exporting energy to WA and VIC and NSW, and these regions would also be drawing on storage. For short-term power (GW) the size of storage is not relevant. For storage capacity there is no reason why this cannot be replaced in weeks. Data from 13 wind farms shows that there are long periods of wind power higher than average where pumping could be used, and only short periods of little or no power; for example, 1 July to 13 September has a one-day (8/7) and a 3-day (15,16,17/7) low wind period separated by 6 days, and then 13 good wind days before the next low wind day (30/7).
That’s without considering any wind power from northern NSW or from WA. Pumping would take 1.5h to restore the water used for every 1GWh/h generated (for example, Tumut 3 has 3 turbines that use 80% of output in pumping at 80% efficiency = 64%). The other point about pumped storage is that it would always operate from the grid, which is usually stable power. I am not sure why you think the grid would be unstable? It’s a valid point to have a theoretical simulation of power demand in 2007, but it should consider the whole of Australia. The reality is that it’s going to take 20-30 years to replace all coal-fired power, so saying we have to have a solution now that uses no FF is a bit restrictive. It would make more sense to compare coal replaced by CCGT with other options such as all nuclear or all wind or mixes of 2 or more. There are an infinite number of alternative ways to do these analyses, and an infinite number of alternative approaches we could propose we “should” do. You seem to be missing the main point of the exercise. The main point was to show the economic viability, or lack thereof, of the intermittent renewable energy technologies to provide us with low-emissions electricity generation. The central point of the exercise would become less clear and less obvious to most people the more complicated we make the analysis. Also, the main point would get lost if we attempt to look into the future and try to guess about what might be. As we look into the future the main point gets missed as we argue about: what technologies might be available; what the costs might be in the future; what the total demand and the demand profile might be; what the emissions might be; and a host of other ‘maybes’. You and I don’t even agree, within orders of magnitude, as to what transmission capacity is needed to transmit solar power from the deserts to the demand centres. And all this is using currently available technologies and their current costs.
What chance would we have of making any headway if we were attempting to guess what might be in the future? To reinforce this point, consider the number of alternative options that have been proposed on this blog site as to what I should have considered instead of what I did. Here are a few: solar thermal chimney; chemical storage; CAES; pumped storage using windmills pumping water onto lined reservoirs on the Nullarbor Plain; smart grid; bio-gas. If we look into the future, the options are endless. We’d be buried in arguing about assumptions and minutiae and get nowhere. The whole point would be buried. I sometimes wonder if that is, perhaps, the aim of some of the blogs. The point of the exercise was to keep the analysis sufficiently simple that most people could check the calculations themselves. There are many, many sophisticated analyses being done and published all the time, but most people’s eyes glaze over. They do not understand the assumptions nor the inputs, and so cannot check them. If people want to see the outputs of the sophisticated modelling forecasts, there is seemingly no end of them. To elaborate: the situation of just wind power replacing FF-generated electricity would need ×3 (25GW NEM and 2.5GW WA and considerable off-grid NG power generation, for example LNG, the goldfields mines, alumina refining). For simplicity let’s say this is 28GW average (85GW capacity) for wind. QLD would have just a few % and TAS up to 15%, with WA, SA, VIC and NSW each about 20% of this capacity (17GW in WA). How much transmission capacity is needed from WA to eastern Australia? Clearly not 25GW. The wind regions of WA cover 3,000km, so the maximum output would be considerably less than the 75% output of the 13 NEM farms. Let’s say 70% of capacity 99% of the time with a small power shed (5% of output 1% of the time), or 11.9GW maximum. But WA uses about 2.5-4GW, so the maximum available for export would be 9.4GW.
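As a quick check of the export arithmetic just above (all inputs are the figures quoted; the 2.5GW is the low end of the 2.5-4GW WA demand range):

```python
# Checking the WA wind export figures quoted above.
wa_capacity_gw = 17.0    # WA's assumed share of an 85 GW wind fleet
peak_output_frac = 0.70  # "70% of capacity 99% of the time"
wa_demand_gw = 2.5       # low end of WA's own 2.5-4 GW demand

max_output_gw = wa_capacity_gw * peak_output_frac
exportable_gw = max_output_gw - wa_demand_gw
print(f"Maximum WA output: {max_output_gw:.1f} GW")
print(f"Maximum available for export: {exportable_gw:.1f} GW")
```

This reproduces the 11.9GW and 9.4GW figures in the comment; the dispute in the thread is over the scenario assumptions, not this arithmetic.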
Since WA has limited pumped storage, they may want 3GW CAES available to ensure that an HVDC link to SA would be used to move up to 6.4GW to SA. This is about 8% of the capacity of the entire grid. One region never has to move 25GW; of the 6.4GW, 2-3GW would be used in SA and the other 3-4GW would go to other cities if no other wind is available, or go to pumped storage in the Snowy or TAS if other regions had adequate wind. SA, VIC and NSW have more options if they are the only high-wind regions; most would be used locally, with the surplus (9-10GW) going to other regions, so SA would be exporting energy to WA and VIC and NSW, and these regions would also be drawing on storage. For short-term power (GW) the size of storage is not relevant. For storage capacity there is no reason why this cannot be replaced in weeks. Data from 13 wind farms shows that there are long periods of wind power higher than average where pumping could be used, and only short periods of little or no power; for example, 1 July to 13 September has a one-day (8/7) and a 3-day (15,16,17/7) low wind period separated by 6 days, and then 13 good wind days before the next low wind day (30/7). That’s without considering any wind power from northern NSW or from WA. Pumping would take 1.5h to restore the water used for every 1GWh/h generated (for example, Tumut 3 has 3 turbines that use 80% of output in pumping at 80% efficiency = 64%). The other point about pumped storage is that it would always operate from the grid, which is usually stable power. I am not sure why you think the grid would be unstable? Sorry Neil, I do not agree with this. I think we have discussed it repeatedly. I am not keen to go around the buoy all over again. I believe the papers, and the subsequent discussions on this thread, address your points. In short, you are still using averages to hide the problem of the intermittency of wind.
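The Tumut 3 pumping figures quoted above can be sanity-checked. My reading (an assumption, as the original wording is ambiguous) is 80% efficiency on the pumping leg times 80% on the generating leg, with pumping power roughly equal to generating power:

```python
# Pumped-hydro round trip, as read from the figures quoted above.
pump_eff = 0.80  # efficiency of the pumping leg (assumed)
gen_eff = 0.80   # efficiency of the generating leg (assumed)
round_trip = pump_eff * gen_eff
print(f"Round-trip efficiency: {round_trip:.0%}")

# Electrical energy needed to restore the water behind 1 GWh generated:
restore_gwh = 1.0 / round_trip
print(f"Pumping energy per GWh generated: {restore_gwh:.2f} GWh")
# At pumping power equal to generating power, that is ~1.6 h of
# pumping per hour of generation, close to the 1.5 h quoted.
```
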
There are periods where there is no, or little, wind over SE Australia (see the chart in the “Wind and carbon emissions – Peter Lang Responds” thread; it highlights the irregular output from wind). So we either have no generation or perhaps a contribution from WA. Since we need to supply power to exactly meet demand at all times, the balance of the power has to come from energy storage. When there is no wind power we need to draw 33GW of power from energy storage. You say you can recharge the energy storage quickly. To do that you need transmission capacity from every wind farm, for each wind farm’s total capacity! Without that, the maximum capacity you can have is limited by the transmission. The cost for what you propose would be much higher than for the scenario used in the analysis described in the introduction to this thread. Also, we have to have reliable steady power to pump. Therefore, much of the wind power that is available when the wind is blowing couldn’t be used; it would be wasted. I hope you will focus on the total system and the costs of a total system that can meet all the constraints and requirements. On a separate point, could you please say if you have some cost figures you are using for your estimates for the Tumut 3 enhancement you propose, and are you prepared to share them (see the end of my post #187)? Peter, In the last table within the section “Appendix – Cost Calculations for Solar Thermal,” under “Cost for 25GW baseload power, through…”, it shows a dramatically reduced Collector Field cost ($1487B vs. $8583B) only because of a disproportionately small increase in storage capacity. Could this be right, and would scaling up the storage further reduce the overall cost? Thanks, Bunion Peter, I am finding this too detailed for me to follow, but may I venture this remark: you are winning by an impressive margin. The question will arise in many minds, though: how robust is this margin?
If non-intermittent renewables (biogas etc.) are incorporated; CCS gas and coal are allowed, in reasonable amounts; maybe even non-CCS gas and coal (why not? Under a proper international deal, we’ll be paying others to save the planet — nothing wrong with that); more demand-side management, if feasible; and the whole mixture optimized — does nuclear still win, and by how much? Thank you for this post. Your suggestion of expanding the scope is noted. I’ll answer that in another post. Here are a few, quick, off-the-top-of-the-head comments: 1. The most prospective non-hydro resources are wind, solar PV and solar thermal. The solar options are 20 to 40 times higher cost than nuclear. That means they are totally out of contention. Not worth any further consideration. Wind power with gas back-up saves very little GHG emissions and requires the full capital cost of the gas generation system PLUS the full capital cost of the wind generators, PLUS massive extra expenditure on the grid and distribution systems. If, instead of gas-fired back-up, we use energy storage – either centralised (eg pumped hydro) or at the generators (eg chemical storage, perhaps CAES on the Nullarbor) – we will have very high energy storage costs and very high transmission costs. In summary, wind power provides low-value energy at high cost and saves little GHG emissions. All it does is save some fuel. It’s a dud. So the most prospective non-hydro renewable technologies are all uneconomic by very large margins. 2. I don’t believe CCS has any real prospects of succeeding at the scale required. I expect there will be many demonstration projects around the world because they are the political “in thing”. Just as wind and solar are. Let’s not waste time debating CCS. 3. “More demand-side management”. Yes, of course. That is always important. It was known to be important in the early 1990s and was an important part of ABARE’s modelling for the Ecologically Sustainable Development (ESD) policies.
The idea of ‘smart grids’ was a hot topic back then (under different names). The smart meters, which are starting to roll out nearly 20 years later, were an important recommendation from those days. This gives some idea of how long it takes to actually implement these sorts of ideas. I was involved in all that ESD stuff back in the early 1990s. I recall the strongly held views of certain groups pushing that we could achieve most of the ‘Toronto Targets’* by implementing efficiency improvements and demand-side management. ABARE said “give us the numbers and we’ll include your proposals in the models”. The proponents couldn’t give figures. Despite this, ABARE did its best to model the suggestions. ABARE did a lot of good modelling (see Dr Barry Jones et al). But the forecasts that were based on long-term trends, and their projections of economic growth, were the ones that were correct. This is what ABARE believed would be the case. As ABARE and other more pragmatic and rational groups argued at the time, it is easy to say what we could do to improve efficiency in the existing systems (known at that time as “no-regrets” measures), but what we cannot foresee is the new technologies that will increase the demand for electricity. * Toronto Targets – “Australia will reduce its CO2 emissions to 20% below 1988 levels by 2005 …” (subject to a caveat that said: as long as business is not disadvantaged). Unfortunately, the government of the day had a policy that nuclear energy was banned and was not to be mentioned in reports by the bureaucracy. We seem to be in much the same position now as we were in 1990. It is amazing to me to see how so much of what was proposed in those days is being repeated again now. Many of the blogs on the BNC web site from the renewable energy, smart grid, DSM and efficiency-improvement enthusiasts are very similar to what was being said in the early 1990s. We are going around the same loop, 20 years later. 4.
Alexi, I’ve kept your best suggestion until last. You said: “…maybe even non-CCS gas and coal (why not? Under proper international deal, we’ll be paying others to save the planet — nothing wrong with that);” This really is the key suggestion. And this is what I would like world policy and Australia’s policy to be. We want an international free trade agreement that includes greenhouse gas emissions. It would be managed by the WTO. This would be the least-cost way to reduce the world’s greenhouse gas emissions. Everyone knows that. The economic modelling for the IPCC says it clearly, and Stern and Garnaut say it too. The problem is the politics. If we did go this route, as you suggest, it would generally be a lower cost option for Australia to contribute to other countries reducing their emissions than to massively and suddenly cut our emissions – initially. This is true despite the fact that Australia is nearly the highest GHG emitter per capita. The reason it is true is that some other countries’ industry is less efficient than ours (although that is changing rapidly). Still, we do have to get the African and other developing nations through the hump onto electricity first and then into reducing their emissions. So it would be best, from a world emissions perspective, for Australia to buy permits (freely traded internationally) until it gets to the point where it is cheaper for us to clean up our own act. Of course there will be a lot we can and must do all the time; I’m not denying that. I’m just saying the best way for the world to cut GHG emissions is the way that is most economically efficient. Great suggestions, Alexi. Thanks for the opportunity to get outside of the nuclear/renewables/transmission box. But, having had a little peek at the outside world, I probably should get back in my box now. Wind available 50% of the time at 4 cents/kWh; lifetime 20 years.
CCGT available 100% of the time at a variable cost (varying cost of gas) but assumed to average 9 cents/kWh, including carbon offsets purchased; lifetime 20 years at 100%. Combining these provides power at an average of 6.5 cents per kWh with only half of the carbon dioxide to be offset, this for 20 years. The CCGT is then paid off, so the cost of running it drops dramatically, and it can still run 50% of the time for another 20 years before it has to be refurbished/replaced. I’d like to say some more in response to this comment of Neil Howes’ (#194): “Peter #191, It’s a valid point to have a theoretical simulation of power demand in 2007, but it should consider the whole of Australia. The reality is that it’s going to take 20-30 years to replace all coal-fired power, so saying we have to have a solution now that uses no FF is a bit restrictive. It would make more sense to compare coal replaced by CCGT with other options such as all nuclear or all wind or mixes of 2 or more.” The reasons I used the scenario described in the papers (2007 NEM demand, current technologies and their current costs) for the simple analyses I’ve done so far are: 1. to keep it simple (so non-specialists can follow the assumptions and calculations); 2. to minimise the opportunity for distracting arguments about minutiae; that is, to head off, to the extent possible, the virtually unlimited number of likely arguments about the assumptions regarding future demand, demand profile, technology options available, which will be the most prospective, and the capital cost of each technology at some time in the future; 3. to allow us to make use of available, current, detailed data; 4. I chose to use the NEM demand, rather than whole-of-Australia demand, because we do not have the detailed demand and supply data for the whole of Australia. We can get the 5-minute generation and demand data across all the NEM and for all the individual generators – even for most of the wind generators.
There is no such data freely available for Western Australia (that I am aware of). 5. Importantly, as I commented in post #200, I believe we are in a similar position now as we were in about 1991 regarding the technology options, the costs, the government policies and the politics. So it is informative to consider what Australia’s electricity generation mix might have been in 2009 if our political leaders (with bi-partisan support) had endorsed nuclear power in 1991 and taken a bipartisan, pro-nuclear policy to the 1993 election. This is where we could be now: a. greenhouse gas emissions some 20% lower than they are; b. 5GW of nuclear power operating (one reactor in each of the mainland states), 5GW coming online about now, and another 5GW under construction and coming on line over the next 5 years. So, by 2015 we would have 15GW of nuclear generation, and 20GW or more by 2020 if we wanted it. c. I do not believe it is irrelevant to look back like this at what could have been. Because, from my perspective, we are in a similar position now as we were in about 1992 and about to repeat the same mistake we made back then. We are now a year at most from the next federal election. The government seems intent on going to that election with an anti-nuclear policy. In 1992 we were in a similar position. The opposition’s policy was to allow nuclear as an option. The Government used that position as an effective divisive tactic to help it win the election. Nuclear was off the agenda for the next 14 years, and is now off the agenda again. I see a very similar situation right now. I can foresee another long delay. d. Instead of some 95% of electricity-generation-related research effort in our universities, CSIRO and others, and modelling by ABARE, ACIL-Tasman, MMA and many other modelling consultancies, being dedicated to renewable energy, they would have been mostly working on nuclear energy. So we’ve had 20 years of research with low return on investment.
What a waste of our resources! I cannot do the sort of modelling analysis you are suggesting. But many others are churning out modelling exercises all the time and applying a wide variety of assumptions. I am intending to do a (relatively) simple projection of what we could achieve by 2030 in terms of CO2 emissions and cost. I intend to remove existing coal-fired power stations as they reach 40 years of age, and replace these and provide extra capacity to meet demand with these options: CCGT; wind + OCGT + pumped-hydro storage; nuclear + pumped-hydro storage. I will work on current capital costs for the technologies. The figures will be at 5-year increments from 2010. Pat Swords is one of the engineers of the first Irish revolution, the one that turned his country into the Nº1 European performer. Now he tells us, in a few chosen words and visuals, how the Irish miracle is being disengineered into chaos and poverty. “But many others are churning out modelling exercises all the time and applying a wide variety of assumptions.” — would you recommend any particular one to look at? I am looking for ammunition against the Green argument that a mix of technologies will tackle intermittency easily. I look forward to hearing more on the real-world experience of working with the simple cycle GTs and CCGTs. What is the real-world practicality of using CCGTs to back up fluctuating wind power? I received a report a few days ago of the actual rate of change of wind power output being experienced for the total of all the wind farms on the NEM in August. The maximum rates of change were: up = 100MW/5min, down = 115MW/5min. The ramp-up rate exceeded 50MW/5min 13 times in August. The ramp-down rate exceeded 50MW/5min 9 times in August. First, we have to ask how the ISO uses GTs now.
For the most part, both OCGTs and CCGTs are integrated into a grid that is largely conventional thermal units, many with load-changing capabilities and fairly good predictability of what the load will be throughout the day. This means there is a huge “elasticity” of generation and, the bigger the grid, the more elasticity. Now, most GTs are ‘baseloaded’. This means the opposite of what the jargon means for the grid: they get turned on (either for peak, or because some expected load didn’t arrive for a variety of reasons) and go to their ‘load limit’. This is essentially what they were built for. The CCGT plays this role also but…has better load-changing capabilities because there are, basically, two power plants in one: a GT and a steam turbine, the latter with governor valves that can respond to load. But more importantly, a unit such as the wildly popular GE Frame 7 uses a remarkable controller called a Mark V (now Mark VI) which can actually regulate the firing of the GT to control the steam turbine for a specific total MW target…and do so VERY fast. The big issue with these suckers is that they are limited in how *low* they can go without tripping off line. Always tricky, even with a Mark VI. When the CCGTs were *conceived and designed* they were done so as highly efficient *peaking* generators that had a secondary role of multi-hour, even multi-day, *baseload* generators. OCGTs were never conceived of as load changers at all, even though they can do it. In the industry, efficiency is not measured as a percentage. It’s measured in *heat rate*. The heat rate of 99% of all simple cycle GTs is very, very bad. 10,000 (Btu/kWh) is a number that is very common. This is the same as my 40-year-old conventional, crappy, gas thermal unit. From what I remember, the HR of a brand new simple cycle GE Frame 7 is about 9,200 (this needs to be referenced, for sure). This also sucks.
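Heat rate is just fuel energy in (Btu) per kWh of electricity out, so it converts to a thermal efficiency by dividing it into 3,412 Btu/kWh, the energy content of one kilowatt-hour. A quick sketch using the heat rates mentioned in this discussion (the CCGT figure is from the 5,000s-7,000s range given in the same comments):

```python
# Converting heat rate (Btu of fuel per kWh of electricity) to
# thermal efficiency. 3412 Btu is the energy content of 1 kWh.
def efficiency_from_heat_rate(hr_btu_per_kwh: float) -> float:
    return 3412.0 / hr_btu_per_kwh

for label, hr in [("typical simple-cycle GT", 10_000),
                  ("new GE Frame 7, simple cycle (as quoted)", 9_200),
                  ("CCGT, good case", 6_000)]:
    print(f"{label}: heat rate {hr} -> {efficiency_from_heat_rate(hr):.1%}")
```

This is why a heat rate of 10,000 "sucks": it corresponds to roughly one-third efficiency, while a good CCGT is well over 50%.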
What sucks more is when it goes down on load, say, from its 172MW (at sea level) down to its minimum at about 110MW. The heat rate starts going up to about 12,000 or higher. In other words, running a simple cycle unit down on load is really, really expensive. I believe this is true of the most advanced GT out there, the LMS100 from GE, which is designed to only run in simple cycle mode at a very efficient heat rate (8,000 I think). It’s not being marketed as a load changer. So…if you have bunches of CCGTs running, the more elastic load changing, generally, you have. Can a *lot* of OCGTs and CCGTs handle the wild fluctuation of rapidly changing wind? Yes. The operating word is “lots”. This means that despite the generally low heat rate of CCGTs (5,000s to 7,000s) and their ability to follow load, prodigious amounts of natural gas will be burned, uneconomically, to accommodate the wind’s eclectic and temperamental output. Thank you for this reply. It is very interesting and informative. It’s really great to receive comments from people who have worked at the ‘coal face’. There are many others contributing on the BNC web site too. It’s great. You have enlightened me with your post. I am surprised by what you say about the relative suitability of OCGT and CCGT for load following. I do also note your very important last paragraph. Hi Peter, well, I don’t have documentation with me, but I don’t understand the numbers they present. To wit: • The coal generator is a base load plant that runs all the time. It has a cost structure of high capital costs and low fuel costs. I agree in general here. • The CCGT is an intermediate generator. Compared to the base load generator it has lower capital costs but higher fuel costs. This is what I was saying about its initial design and intent, its “marketing” so to speak. It is however increasingly used AS a baseload generator, but it can be easily taken off line at night. So it’s highly flexible.
But its heat rate is as good as or better than *any* other thermal unit’s, which in some cases, depending on the price of gas, can make it *lower* cost than coal. Rarely, but true. • The OCGT is a peaking generator that is optimum for low capacity factor usage. Yes, it’s a peaker and is in line with what I noted. But its “capacity factor” is…well, it’s not a good term to use. This is where industry jargon is much better and more appropriate: its *availability* by definition needs to be 100% for it to function as a peaker. The real-world capacity factor, that is, what it actually runs *as determined by the load*, may be low, but that is irrelevant. Its function is different from a base load plant’s. Further down the page is this statement: • OCGT has the lowest average cost at operating capacity factors of less than 14%; • CCGT the lowest average cost for operating capacity factors between 14% and 55%. Probably true…I’m not sure how they parse these numbers, but ideally the ISO pays, via rate increases for the operator of the OCGT, for ONLY *availability* and nothing else. They are also paying for all fuel costs as well. This means the *less* it runs the better off everyone is, because it implies a better scheduling of base load facilities, outages, etc. It means all nuclear is running, hydro available, gas and coal thermal units online, etc. This is why it’s important for people to stop thinking of all MWs as equal; they are not. I am particularly resentful of some renewable advocates who think willy-nilly to keep these plants running or available, or as a permanent part of the mix, as if there are zero costs or the costs are incidental. They are not. As California’s own usage has shown, natural gas production for electricity generation is going up, and going up every year, because of the wide-scale, ISO-approved use of both OCGTs and CCGTs. Generally, the CCGTs are used, as I noted above and in my previous comment, *as* baseloaded plants, running 24/7 if gas prices are, as they are now, low.
As this huge, rapidly growing sector of the energy market (MUCH bigger than wind or solar, I might add) expands, these assets become “obligatory run” units. Because the renewables’ single-digit percentage of the system’s ‘capacity’ goes to double digits, we have to pay for more and more of these ‘cheap’ GTs…but because of the unreliability of the renewables (still, to this day, NO industrial storage for renewables, including pumped storage), MORE and MORE gas is burned. The gas companies LOVE this. For every MW of renewables, they get to build 2 to 3 MW of NG plants. What’s not to like? I’ve just heard something from a solar/wind presentation that sounded unbelievable. Basically, the presenter said that if we changed all power plants to nuclear then the water used to cool them would raise the temperature of the oceans by 1 to 2 degrees and cause similar problems to those of global warming. A person next to me remarked that coal plants are cooled by water the same way nuclear plants are, so why haven’t we heard anything about this problem with the hot water that comes from them? Is there anything to these concerns? It is indeed unbelievable that this rubbish is being presented as fact. The comparison to coal plant heating is quite correct. The effect is real, but LOCAL. The river just downstream of the plant will be a little warmer than it should be if direct cooling is used. If cooling towers are used, water temperatures are not affected, but water is used (evaporated), so there is less of it in the river. The GLOBAL effect is undetectable because energy releases from power plants are so small compared to the solar energy absorbed by the earth. The solar promoters are right that there is a huge quantity of solar energy available, but neglect how difficult it is to collect this dilute resource, compared to the much smaller but very concentrated energy resources of fossil and nuclear fuels. Peter, I will go over them this weekend when I have more time.
They require a serious look-see. I’m not an economist…at all…but I know some general things about the issue from my experience. Some of this stuff should be looked at by our friends on Kirk Sorensen’s blog as well, at energyfromthorium.com, for feedback. It is possible to use dry cooling towers. These are available and have been installed in several, indeed many, locations. I suspect these are a bit more expensive initially, but obviously only heat up the air. Regarding the rotating reserve, around here these reserve units are sent signals from the grid operator every two seconds: power up a little, power down a little. In this way the reserve units are always ready to go online in case of need. Peter, Thanks for all the details in #200. All of it edifying, and the historical bit is fun. But as the answer to my question, not fully convincing. The question was whether nuclear still wins if the renewable mix is optimized; and if yes, by how large a margin. I accept you aren’t doing the modelling required for a thorough optimization. Still, there may be something you could do. A quick robustness analysis would be, for example, to take a case for wind-with-backup and add a little solar to it. What happens? Etc. etc. That may seem like too much work. So maybe take someone else’s optimized case for renewables and compare it to your case for nuclear? (Now that’s a crazy idea.) Thank you for the leads in #208. A quick hop through them was unavailing, but I’ll look more thoroughly later. It came up today at a brown bag lunch presentation on “green jobs” at the City of Sunnyvale campus. Silicon Valley is a region with a large number of solar PV companies. I don’t have the name of the speaker on hand. Can you give me more specifics on what “orders of magnitude” means so I can have an explanation with numbers to counter this false claim?
In the last table within the section “Appendix – Cost Calculations for Solar Thermal,” under “Cost for 25GW baseload power, through…”, it shows a dramatically reduced Collector Field cost ($1487B vs. $8583B) only because of a disproportionately small increase in storage capacity. Could this be right, and would scaling up the storage further reduce the overall cost? Good question. Someone is checking. I believe the calculations are correct, but it is a fictitious scenario because solar thermal does not yet have the capability for even 1 day of storage, let alone 3 days or 5 days. The collector field capacity required is calculated from the capacity factor. The capacity factor rises over longer periods (see the paper “Solar Power Realities” for more details on this – click on the link at the top of this thread). For 1 day, the capacity factor used in the calculation is 0.75%, for 3 days it is 1.56% and for 5 days it is 4.33% (these are based on the actual capacity factors at the Queanbeyan Solar Farm; see the “Solar Power Realities” paper). So less collector field capacity greatly reduces the cost, because the collector field is by far the largest cost item. Yes, if we could have more storage, the costs would be reduced substantially. Again, I refer you to the “Solar Power Realities” paper for more on this. That paper shows that the minimum cost using pumped hydro is for the case with 30 days of storage (of course, no one has this amount of storage potential, so again it is a theoretical calculation). However, if we used NAS batteries, the least cost would be with 5 days of storage. That is because the batteries are much more costly than the pumped hydro. The real point of all this is that solar is totally uneconomic. It is not even worth considering. 
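A minimal sketch of the relationship described in this answer – the collector field capacity is the baseload target divided by the worst-period capacity factor, and the field cost scales in proportion – using only the figures quoted above (25 GW baseload target and the Queanbeyan-derived capacity factors):

```python
# Collector field capacity needed for a given baseload target, from the
# worst-period capacity factors quoted above (Queanbeyan Solar Farm data).
BASELOAD_GW = 25.0

capacity_factors = {1: 0.0075, 3: 0.0156, 5: 0.0433}  # days of storage -> CF

for days, cf in sorted(capacity_factors.items()):
    collector_gw = BASELOAD_GW / cf  # nameplate collector capacity required
    print(f"{days} day(s) of storage: CF = {cf:.2%}, "
          f"collector field ~ {collector_gw:,.0f} GW")
```

The 3-day case comes out at roughly 1,600 GW of collector capacity, which matches the 64-times overbuild figure quoted later in this thread.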
The comparison to meet the same demand (our 2007 demand) would be nuclear = $120 billion, solar PV with pumped hydro = $2,800 billion, solar PV with NAS batteries = $4,600 billion, solar thermal = can’t be done at any cost! Mark, my pleasure. And, it turns out, I’ve stretched it. Looking at current production of electricity, I am almost right: “orders of magnitude”. Looking at potential production when the whole world is developed, and producing and consuming energy to the American or Australian standard, and assuming nuclear power as the source for ALL energy, it begins to look like a bit of a concern. I compute it at 0.014 deg. C for the current electricity production, but 0.23 deg. C for the future prosperous world. Here are the estimates. First, look at the heat system. In global warming analysis, they worry about things on the order of 1 Watt per square metre. Doubling of CO2 is thought to cause 4 Watts per square metre (BEFORE any feedbacks, including water vapour: just take the atmosphere as it is and enrich it with CO2). The current imbalance is thought to be 1.5 W/m2. Earth is estimated to respond to 4 Watts per sq. m, from CO2 doubling, with 3 degrees C of warming – eventually, when it has been given the time to heat up, and when most (but maybe not all – this is a complex issue) feedbacks have been allowed to play out. But let me do a hypothetical 10 kW first. Were 10 kW continuously produced per person, that’s 120,000 W per 1,000,000 m2, or 0.12 W/m2. That’s 30+ times smaller than the estimated 4 W/m2 from a CO2 doubling. Comparing to the 3 degrees of warming CO2 should cause, we get 3/30 = 0.1 degrees C. If this much electric energy was produced the way it is now in nuclear plants, three times more – 23 kW – of raw thermal energy would be produced (15 kW of it wasted as heat). So with the whole world consuming energy as Aussies do now, that would be 23 kW of raw thermal energy production per person: 2.3 times more than the hypothetical 10 kW above. So 0.1×2.3 = 0.23 degrees C. 
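A quick re-run of that arithmetic as a sketch, using the same round inputs as the comment (10 kW per person at the stated 12 people per km², 4 W/m² and 3 °C for a CO2 doubling, 23 kW of raw thermal output per person). Unrounded, it gives 0.09 °C and 0.21 °C, which the comment rounds to 0.1 °C and 0.23 °C:

```python
# Waste-heat "forcing" from continuous per-person power production,
# scaled against the forcing and warming attributed to a CO2 doubling.
CO2_DOUBLING_FORCING = 4.0   # W/m^2 (before feedbacks)
CO2_DOUBLING_WARMING = 3.0   # deg C at equilibrium

area_per_person = 1_000_000 / 12          # m^2, i.e. 12 people per km^2

flux_10kw = 10_000 / area_per_person      # 0.12 W/m^2
warming_10kw = CO2_DOUBLING_WARMING * flux_10kw / CO2_DOUBLING_FORCING

flux_23kw = 23_000 / area_per_person      # raw thermal (nuclear) case
warming_23kw = CO2_DOUBLING_WARMING * flux_23kw / CO2_DOUBLING_FORCING

print(f"10 kW/person: {flux_10kw:.2f} W/m^2 -> {warming_10kw:.2f} C")
print(f"23 kW/person: {flux_23kw:.3f} W/m^2 -> {warming_23kw:.2f} C")
```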
Not a reason to worry too much – global warming is far worse – but not one to ignore either. If we got lots and lots of power from nuclear fission or fusion, wouldn’t this contribute to global warming, because of all the extra energy being released into the environment? That’s a fun question. And because we’ve carefully expressed everything in this book in a single set of units, it’s quite easy to answer. First, let’s recap the key numbers about global energy balance from p20: the average solar power absorbed by atmosphere, land, and oceans is 238 W/m2; doubling the atmospheric CO2 concentration would effectively increase the net heating by 4 W/m2. This 1.7% increase in heating is believed to be bad news for climate. Variations in solar power during the 11-year solar cycle have a range of 0.25 W/m2. So now let’s assume that in 100 years or so, the world population is 10 billion, and everyone is living at a European standard of living, using 125 kWh per day derived from fossil sources, from nuclear power, or from mined geothermal power. The area of the earth per person would be 51,000 m2. Dividing the power per person by the area per person, we find that the extra power contributed by human energy use would be 0.1 W/m2. That’s one fortieth of the 4 W/m2 that we’re currently fretting about, and a little smaller than the 0.25 W/m2 effect of solar variations. So yes, under these assumptions, human power production would just show up as a contributor to global climate change. By email, George Stanford said this: “Approx. global population: 7E9. Average solar power hitting the earth’s surface at ground level = 1 kW / m^2 x pi x (6400 km)^2 = 1.3E14 kW. That’s 18.4 MW per person from the sun. – – – – – – In 2007, the U.S. used 101 quads of energy = 101 x 2.93E11 kWh = 3.0E13 kWh, for an average power usage of 3.4E9 kW. Pop. of US = ~3.0E8. Thus average power consumption per person = 3.4E9/3.0E8 = 11 kW. 
– – – – – – Thus if the whole world used energy at the per capita rate of the U.S., that would be adding 11 / 18,400 = 0.06% to the total energy input to the biosphere. (BTW, that’s about 6 times the rate at which geothermal energy reaches the surface.)” Now, based on our best estimate of climate sensitivity, you get 0.75C per W/m2 of forcing, so MacKay’s estimate of 0.1 W/m2 would predict a warming of 0.075C, which is a bit smaller than Alexei’s estimate — but that’s only for fast-feedback sensitivity, so you might want to double it for equilibrium, which gives 0.15C. Wow, and thank you very much. Let me see if I can say this a bit more simply. If by magic, say, we could instantaneously get rid of every single source of man-made CO2 emissions from power generation and replace that with nuclear, then we trade off a rise of whole degrees in global temperature for a rise of 1 to 3 tenths of a degree. However, there is NO concern if we heat the globe up 1 to 3 tenths of a degree. So it’s a non-issue. Peter, sorry we’re using your thread for this discussion. Hopefully you aren’t cross with us. Barry, thanks, correction taken. Long-term sensitivity could well be double, i.e. 6 degrees per CO2 doubling. It is prudent to double my numbers. Mark, my numbers apparently agree with MacKay’s. His case exactly matches my “hypothetical” – he takes twice the population but half the power production. You should double my numbers, though, to be prudent, as Barry reminded us. And, do not dismiss too readily a 0.1–0.3 degree C temperature rise. Not if combined with temperature rise from other sources. “Non-issue” it is not. But you’re right that it is dwarfed by the CO2 danger. Why in the world do they have that one single wind turbine sitting there next to all 8 of the Pickering reactors? What on Earth is it supposed to accomplish? Is it supposed to be some kind of marketing tool? 
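George Stanford’s back-of-envelope figures and the fast-feedback warming estimate can be reproduced directly. This is a sketch only; every input below is one quoted in the comments above, not independent data:

```python
import math

POPULATION = 7e9
EARTH_RADIUS_M = 6400e3

# Solar power intercepted at ~1 kW/m^2 over the earth's cross-section.
solar_total_kw = 1.0 * math.pi * EARTH_RADIUS_M**2      # ~1.3e14 kW
solar_per_person_mw = solar_total_kw / POPULATION / 1e3  # ~18.4 MW

# US 2007: 101 quads of primary energy, population ~3.0e8, 8766 h/year.
us_energy_kwh = 101 * 2.93e11
us_per_capita_kw = us_energy_kwh / 8766 / 3.0e8          # ~11 kW

fraction = us_per_capita_kw / (solar_per_person_mw * 1e3)  # ~0.06%

# Fast-feedback sensitivity of 0.75 C per W/m^2 applied to 0.1 W/m^2.
warming_c = 0.75 * 0.1

print(f"solar per person: {solar_per_person_mw:.1f} MW")
print(f"US per capita: {us_per_capita_kw:.1f} kW, fraction: {fraction:.2%}")
print(f"fast-feedback warming: {warming_c:.3f} C")
```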
Besides the great cost involved in providing 24/7 electricity with solar, I understand that solar power has quite a bit larger CO2 emissions than nuclear. Would solar-power-related CO2 emissions be as problematic to global warming as the hot water from nuclear power plants? Mark, I have never looked into that, and do not know which factors must be reckoned with. I could look up some numbers and make some estimates, but I could easily miss important factors. Like this one: do we have to emit CO2 while making solar panels? Maybe not. Even if CO2 must be produced, it could be sequestered. CCS is a big expense for coal power; but for solar-panel making, my gut feeling is, it should be affordable. It uses a probabilistic approach. I am not impressed with their p10, p50 and p90 values for the future generating technologies. They look to me to be clearly biased against nuclear and pro-renewables. That would make sense given the strong representation of renewables researchers overseeing the study. However, it may lead you to some of the other studies. The NEEDS report (link provided above) explains that the present state of the art is about 7.5 hours of storage with trough technology, which is their selection of the most prospective solar thermal technology. They project that 16-hour storage will be achieved by 2020. However, we need 18 hours just to get through one night in winter. We’d need at least 3 days of storage to allow solar to be considered as a baseload generator. So the position is that no matter how much money we throw at it, we just do not have the technology yet. Besides the great cost involved in providing 24/7 electricity with solar, I understand that solar power has quite a bit larger CO2 emissions than nuclear. Would solar-power-related CO2 emissions be as problematic to global warming as the hot water from nuclear power plants? 
For the non-fossil-fuel-burning technologies, the CO2 emissions come from the mining, processing, milling, manufacturing, construction, decommissioning, waste disposal and the transport between all these steps. Most of the emissions come from the processes related to steel and concrete, and the emissions are roughly proportional to the mass of these materials per MWh of energy generated over the life of the plant. There is much more material involved per MWh for renewables than for nuclear. So higher emissions from renewables. Also recall that solar and wind require a massive overbuild to be able to produce the energy we need during cloudy and low-wind weather. Furthermore, nuclear power stations have an economic life in the order of three times that of renewable technologies. Put it all together and you find that the solar thermal power station emits about twenty times more than nuclear, about 1/3 as much as a coal-fired plant and a little less than a CCGT plant. Nuclear power plants also have some emissions from the uranium enrichment process. As this is due to electricity use, it is negligible when the electricity is generated by nuclear power. However, it often shows up as a significant component in many studies using electricity generated by fossil fuels. In this case it is still less than the contribution from construction. Further to my post #236 in answer to your question in post #231, links to the pdf articles are included in the article at the top of this thread; these will give more information and should answer some of your questions. I think you are being a tad naughty in your attribution of emissions. The only fair way to speak of emissions is as a relationship between output of power and CO2e. The fact that solar thermal and wind don’t have equivalent CF to nuclear is relevant to the quality of the power, but not the CO2 footprint, so you can’t include overbuild assumptions. I also don’t see where you get your life-of-plant calculations. 
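The proportionality being argued here can be sketched crudely: embodied emissions per MWh scale with the capacity that must be built and divide by the lifetime energy delivered. All inputs below are illustrative placeholders – in particular, real material intensity per GW differs greatly between technologies, which is why this toy calculation does not reproduce the ~20× figure in the comment:

```python
def embodied_t_per_mwh(material_t_per_gw, capacity_gw, avg_output_gw, life_years):
    """Construction-material emissions (t CO2-eq) per MWh delivered."""
    total_embodied_t = material_t_per_gw * capacity_gw
    lifetime_mwh = avg_output_gw * 1000 * 8766 * life_years  # 8766 h/year
    return total_embodied_t / lifetime_mwh

# Same 25 GW average output; solar assumed to need a 64x overbuild
# (1,600 GW peak, per a later comment) and to last one third as long.
# The material intensity (1e5 t/GW) is an arbitrary common constant.
nuclear = embodied_t_per_mwh(1e5, 25, 25, 60)
solar = embodied_t_per_mwh(1e5, 1600, 25, 20)
print(f"solar/nuclear embodied-emissions ratio: {solar / nuclear:.0f}x")
```

With equal material intensity the ratio is simply overbuild × life ratio = 64 × 3 = 192; that the life-cycle studies quoted arrive at a smaller multiple presumably reflects the technologies’ very different per-GW material and process inputs.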
Since no commercial solar thermal plants are in operation, AFAIK, we can’t say they will only last 20 years, and although it may well be wise to upgrade wind farms if better materials and technology for harvesting arise in the future, there’s no reason to suppose a wind farm can’t last 60 years. Even if you have to change some of the gears or rotor parts, that’s not the same as building an entirely new plant — more like replacing components in a nuclear plant. Thank you for your comment. There are some good points to get my teeth into in this post. I think you are being a tad naughty in your attribution of emissions. Maybe. Let’s see. The only fair way to speak of emissions is as a relationship between output of power and CO2e. I’d say the only fair way to compare emissions from different technologies is on a properly comparable basis. One such fair basis is to compare GHG emissions per unit energy (e.g. t CO2-eq/MWh) over the full life cycle (note: not a fuel-cycle analysis, which is often used and is biased towards renewables – watch out for that one). Another, better way is on an equivalent energy value basis. This is because a MWh of energy from a wind farm is not of the same value as a MWh of energy from a baseload plant, or a peaking plant. The energy from the wind farm is almost valueless. No one would buy it if they weren’t mandated to do so. The fact that solar thermal and wind don’t have equivalent CF to nuclear is relevant to the quality of the power, but not the CO2 footprint, so you can’t include overbuild assumptions. Not true. Consider the solar power station. The emissions per MWh calculated by Sydney Uni ISA for the UMPNE report were for a solar plant with a given capacity. They calculated the emissions for all the material and divided that by the MWh the plant was expected to generate over its life. 
So if you need twice or ten times as much installed capacity to get the energy output you need, then you have all that extra GHG emission embedded in the extra materials. The emissions increase in direct proportion to the amount of materials used in the plant. A bigger plant for the same energy output means more emissions per unit energy. I also don’t see where you get your life-of-plant calculations. Since no commercial solar thermal plants are in operation, AFAIK, we can’t say they will only last 20 years, and although it may well be wise to upgrade wind farms if better materials and technology for harvesting arise in the future, there’s no reason to suppose a wind farm can’t last 60 years. The life-of-plant calculations come from the NEEDS report. However, they are commonly quoted: usually 20 to 25 years for solar. However, as you say, we do not have evidence for that because none have been around long enough to demonstrate it. I suspect it will turn out to be much shorter than what the optimistic researchers are claiming. Wind farms are already being pulled down, and there are attempts to sell the old, outdated structures and turbines to developing countries. No one is buying. The intention is to replace them with bigger and better wind generators to make better use of the site. Because the new structures are bigger, everything has to be replaced: the foundations have to be much bigger, as do the structure and the transmission lines. It is a complete replacement job. So all the emissions embedded in the original wind farm components and site work have to be divided by a shorter economic life. We now find they were actually much higher per unit energy than estimated originally. The same is the case for solar. It will be out of date long before 20 years and will become uneconomic. Even if you have to change some of the gears or rotor parts, that’s not the same as building an entirely new plant — more like replacing components in a nuclear plant. 
As explained above, wind generation equipment is being totally replaced already. Nuclear plants are upgraded and uprated, but that is not a wholesale replacement of the structure. Thanks Fran. It is good to have the opportunity to answer these questions. Another way to look at it is emissions avoided during power production. I once read an article claiming (from memory): every kilowatt hour produced by wind replaces a kilowatt hour produced by CO2-emitting coal plants. Now, as we have seen, that’s just not true. In simplified terms: due to their intermittent nature, 1GW (nameplate capacity – because that’s what the public is told they produce) of wind/solar cannot replace a 1GW coal power plant; the coal plant stays operational (or is replaced with a new one) and very little CO2 emission is avoided. However, a 1GW nuclear power plant CAN replace the 1GW coal plant, therefore ALL of the emissions from the now-closed coal plant are avoided. (I’ve excluded embodied emissions here – out of my league – but when you consider the renewable option could require the building of wind/solar plants AND a new coal plant, the ‘one out, one in’ nuclear option has got to be better on that count too.) You could say then that the failure of wind/solar power to replace CO2-emitting power sources, GW (nameplate) for GW, means they have high indirect emissions associated with them that nuclear power does not. Put it all together and you find that the solar thermal power station emits about twenty times more than nuclear, about 1/3 as much as a coal fired plant and a little less than a CCGT plant. Fran is correct that this statement needs more explanation. I was referring to the 1,600GW of solar thermal capacity needed to produce 25GW of baseload power throughout the year. That is an overbuild of 64 times. This means 64 times as much steel, concrete, transport etc. for this plant as for just 25GW of peak capacity. 
The sentence quoted should be restated as follows: “Put it all together and the solar power station with the capacity described in the ‘Solar Power Realities’ paper emits about twenty times more GHG than nuclear, about 1/3 as much as a coal fired plant and a little less than a CCGT plant, per MWh on a life cycle analysis basis.” First off, thanks again to all. Second, has the heating of H2O by nuclear power plants, and the problem it poses to global warming, been adequately addressed? Are there any more constructive thoughts about this? If this became a problem, could reactors be built that would diminish this effect, maybe by using that heat for something else before putting the water back in the water supply? “Second, has the heating of H2O by nuclear power plants and the problem it poses to global warming been adequately addressed?” The heat energy put out by nuclear power plants, or any other kind of thermal plant for that matter, is so minuscule in comparison to the other energy flows through the ocean and atmosphere that this is a non-issue. From my perspective, the effect of the heat energy released by nuclear and by burning fossil fuels (they are roughly the same per unit of electricity generated) is a way-down-in-the-weeds issue. It is about as relevant to climate change as is the ongoing release of natural geothermal energy. They are both so small that they can be ignored in all the analyses we are doing now. We must apply the Pareto Principle (see link) if we are going to make any headway. Mark, re #243: not sure what exactly you mean. Do David B. Benson’s #220 and Luke’s #217 answer at least partially your question? Why specifically H2O heating – are you concerned about H2O evaporation, it being a greenhouse gas? You may want to re-phrase. You requested/suggested some modelling be done. Neil wanted to see the projected CO2-eq emissions and capital expenditure at 2020 and 2030 for the options we’ve been discussing. 
Alexei suggested some sensitivity analyses to consider mixing various proportions of the various technologies. I am going away for about two weeks, so I will not get any of this completed for at least the next three weeks. This report http://www.aciltasman.com.au/images/pdf/419_0035.pdf provides projected unit costs for energy and power, and provides much of the other information needed for detailed modelling. I do not believe some of the unit cost figures are what would actually apply if we were to get serious about implementing low-emissions, low-cost electricity generation. Neil, I’ve started on your suggestion. I tried to keep it simple. But it isn’t. The further I go, the more complicated it gets. For each technology the projected efficiencies, unit costs, and CO2-eq emissions per MWh change over time. The capacity credit for wind power has to change as the proportion of wind power changes. The capital expenditure needs to include the cost of ongoing replacement of existing plant. For the BAU case I needed to include the cost of replacing coal-fired power stations at 40 years of age with new coal at that time, with the applicable projected emissions factors and unit cost. It’s not simple. But I am progressing with it. The pumped hydro paper is being reviewed. I haven’t received feedback yet. I’ve received a reply from one of the people who is checking my draft Pumped Hydro paper. He has checked the calculations and the cost figures (ball park) and calculated revenue. He says I have significantly under-estimated the tunnel costs. He also says the power must be estimated on the minimum head, not the average head. He says as follows: one would have to assume that the available head is between the minimum operating level at Tantangara, MOL = 1,207, and the full supply level at Blowering, FSL = 380, because any operator would have to guarantee 95% reliability for his peaking power. Thus, the gross head for power generation is MOL – FSL = 827 m. 
… P computes to be P = 7,860 MW. I had calculated 8,994 MW from the average head difference and lower friction losses in the tunnels. He also checked my cost estimates and says: “… the construction costs may be closer to $15 billion than the $7 billion you have estimated, which will bring the cost per installed kW back into the range of $2,000/kW, which is about what pumped storage schemes cost these days.” Lastly, he sums up by saying: I do not mean to discourage you, but the capital expenditure for a pumped storage scheme between Tantangara and Blowering seems prohibitive because of the scale of the investment, the high up-front costs and the long period for investors to recover their money. Unfortunately, politicians and banks take a much shorter view of life when it comes to political or financial gains, and it seems to me that your idea, as much as I like hydro, seems to be condemned to the ‘not economical’ basket. The person who has done this check for me has been investigating and building hydro schemes all his life and still is. I believe there is an important message here for Neil Howes and the other readers who are very keen that renewables are implemented. Enthusiasm and belief will not make RE economically viable. We frequently go too far with our beliefs, and force our politicians to make dreadful mistakes. The pumped hydro is not viable, yet renewable advocates want to argue for it in an attempt to make wind and solar appear viable. Solar thermal is not viable, but its advocates want to push for subsidies for it despite the costs. Wind is twice the cost that advocates say it is. All the recent wind farms are costing around $2.2 million/MW to $2.5 million/MW. Thank you, Peter Lang, for all your diligence and hard work in answering the many comments and queries elicited by your excellent posts. I hope you are going on a holiday for your two weeks away – you certainly deserve one! Alexei, I think you are asking me for more than I can do. 
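The reviewer’s power figure follows from the standard hydro relation P = η·ρ·g·Q·H. In the sketch below the 827 m head is the reviewer’s; the flow rate and efficiency are assumptions chosen here only to illustrate the formula (about 1,076 m³/s at 90% net efficiency happens to land near the quoted 7,860 MW – the draft paper’s actual flow rate is not given in the comment):

```python
RHO = 1000.0   # kg/m^3, water density
G = 9.81       # m/s^2

head_m = 1207 - 380   # MOL Tantangara minus FSL Blowering = 827 m
eta = 0.90            # assumed net generating efficiency
q_m3_s = 1076.0       # assumed flow rate through the tunnels

power_mw = eta * RHO * G * q_m3_s * head_m / 1e6
print(f"gross head: {head_m} m, generated power ~ {power_mw:,.0f} MW")
```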
Applying the Pareto Principle, you can see from the papers so far provided: 1. Wind power saves little GHG emissions compared with nuclear; has a very high avoidance cost (>$800/t CO2-eq) compared with nuclear ($22/t CO2-eq); is high cost and generates low-value energy (see previous posts). If you look at the chart near the end of the “Cost and Quantity of Greenhouse Gas Emissions Avoided by Wind Generation” paper, you can see this information. And that is for the nearest to economic of the renewable energy technologies. The others are worse. 2. Solar power (both PV and thermal) is totally uneconomic compared with nuclear. They are 20 to 40 times higher cost than nuclear to produce the equivalent output. The “Solar Power Realities” and the “Solar Power Realities – Addendum” papers show this. So there is little to be gained by mixing and optimising technologies that are uneconomic by a factor of 20 to 40 and have higher emissions. I believe the information for the comparison you want is available in the papers already posted on the BNC web site. We know that there is value in having about 8GW of pumped hydro combined with nuclear. That reduces the cost of the nuclear option by about 10% compared with nuclear only. 3. Transmission costs, alone, to support renewable energy are far higher than the total cost of the nuclear option. The cost of transmission for the renewables is presented in the article at the top of this thread. It shows that just the trunk transmission lines for solar thermal in the deserts and for wind farms located along the south coast of Australia ($180 billion) cost more than the whole nuclear option ($120 billion). And that is just for the trunk lines. The whole transmission system upgrade needed to handle renewables would probably be twice the cost of the trunk lines. I’d argue the information you are asking for is already available. It is a matter of getting to understand it. 
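A minimal sketch of the avoidance-cost metric used in point 1 – dollars spent per tonne of CO2-eq actually avoided, relative to the fossil baseline. The inputs are illustrative placeholders chosen to land near the quoted $22/t and $800/t, not figures taken from the paper:

```python
def avoidance_cost(extra_cost_per_mwh, t_co2_avoided_per_mwh):
    """$ per tonne CO2-eq avoided, relative to the fossil baseline."""
    return extra_cost_per_mwh / t_co2_avoided_per_mwh

# Illustrative: nuclear displaces nearly all coal emissions for a small
# premium; wind-with-backup avoids little net CO2 because the fossil
# backup keeps running, so each avoided tonne costs far more.
nuclear = avoidance_cost(extra_cost_per_mwh=20.0, t_co2_avoided_per_mwh=0.9)
wind = avoidance_cost(extra_cost_per_mwh=80.0, t_co2_avoided_per_mwh=0.1)
print(f"nuclear ~ ${nuclear:.0f}/t CO2-eq, wind ~ ${wind:.0f}/t CO2-eq")
```

The point of the metric is that a small net emissions saving in the denominator drives the cost per tonne up sharply, even if the cost premium itself is modest.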
We have to be careful not to make so many mixes and matches that we simply confuse everyone. There is one thing that Neil Howes asked for, and I agree it would be helpful. That is, the CO2 emissions and capital expenditure at key intermediate dates on the path to total removal of fossil fuels from electricity generation. Neil asked for these values at 2020 and 2030. I am working on providing them at 5-year intervals from 2010 to 2050. But it will take me some time to complete that. Peter, thank you for the effort and patience… I do not at all want to distract you from that other equally, or more, worthy dimension that you’re going to explore. So the following is not intended as further prodding, but merely information: with your encouragement that “information you’re asking for is already available”, I’ll keep looking. For now, the best unimpeachable comparison that I can make for nuclear-vs-renewables is: nuclear with hydro storage and storage-mandated transmission costs, versus CCS gas and coal, wind, solar, in any proportion between the three; NO storage; NO storage-mandated transmission — comparison being by cost per kWh, assuming all capacity is always used, no intermittency problem. The Cambridge professor David MacKay has proposed that in order to decarbonise Britain entirely by 2050, we must slash energy consumption by 50%, increase renewables (mainly wind) 20-fold – and also build more than 60 new nuclear stations. Note that this is not an either-or strategy: we need every tool we have got to throw at this problem. From http://www.marklynas.org/2009/8/12/nuclear-power-challenging-the-green-party Well, David MacKay’s strategy may well work. The operative term is “slash energy consumption by 50%”. If you built 60 new nuclear stations, however, you wouldn’t need to slash energy consumption by 50%; you could probably increase it. Outside of a serious Pol Pot approach to consumption, these features of energy starvation are, in a way, barbaric and unnecessary. 
The approach to solving climate issues is to figure out what we want to do and develop a serious plan – not one where everyone is an automaton, ready to ‘sacrifice for the good of all’, and we all live in what is essentially a neo-Malthusian world. Why don’t British environmentalists come out and say: here are the major carbon emitters and why – coal, transportation, etc., etc. – and begin to address each one with nuclear or other non-carbon solutions that allow for an *expansion* of energy usage while making things cleaner, greener and more efficient. Alas… And if all of America adopted the same energy efficiency policies that California is now putting in place, the country would never have to build another power plant. From the site whose link you provide. David, this is so wrong it’s hard to know where to start. California adopted energy efficiency programs in the 1970s into the 1980s. What efficiency FAILED to account for was *growth*!!!!! Efficiency brought down some, and held down, overall per-capita increases in energy use. But it can ONLY do that. Once you increase population and increase the *economy*, NOT building plants is *exactly* why we had this huge transfer of wealth under deregulation in 2000/2001!!! If we had built gas plants and/or nuclear plants, there would have been no energy crisis, period (outside of an increase in gas prices, which really started the whole thing). The *reliance* on “efficiency” was a total and absolute disaster for California, and this web site *boasts* about how well it works. My, my. California today is building over 10,000 MWs of CCGTs. So much for “efficiency”. I think MacKay’s modelling was based on assumptions about build times, the patterns of energy usage, and a view of sustainability as what would allow for 1000 years of energy usage at the European level of about 125 kWh per person per day on a world scale. 125 kWh per person per day? Hmmm…. I use about 256 kWh a month. Average US home, no AC but a 50-inch flat screen. 
You sure about that? At any rate, the point, Fran, is that none of what he looks at can work without this “efficiency” model. At the end of the day it cannot, by definition, account for growth. There is simply no getting around that. On a per capita basis, without parsing MacKay’s numbers, there is going to have to be a vast increase in per capita energy use. I see no way around it. I think his world view is flawed. Again, we need to look at our goals, sectionalize them out to achievable ends and work up from there. MacKay is in the Lovins school of ‘negawatts’. I lived through that, as Lovins was writing about how gloriously Governor Brown’s efficiency models were working (and they were, as it happens) and then *poof*. The state grew and that ended that. Efficiency needs to be placed in its proper context. Viewed in military terms, efficiency is but one tactic to use. As is conservation. The strategy, as opposed to tactics, involves the issues of energy growth, economic growth, nuclear and/or renewables, etc. I’d say the only fair way to compare emissions from different technologies is on a properly comparable basis. One such fair basis is to compare GHG emissions per unit energy (e.g. t CO2-eq/MWh) over the full life cycle. Just so, assuming you can get reliable, pertinent data. […] Another better way is on an equivalent energy value basis. This is because a MWh of energy from a wind farm is not the same value as a MWh of energy from a baseload plant, or a peaking plant. The energy from the wind farm is almost valueless. No one would buy it if they weren’t mandated to do so. I disagree, and not only because your statement is too sweeping. It is true, as I noted, that non- or less-despatchable sources are of less value, in much the same way frequent flier miles aren’t as valuable as their redeemable value in notional cash terms. Trying to factor in overbuild to compare like with like, and mapping CO2 from that, simply looks like special pleading. 
It’s more honest to say — sure, the lifecycle analysis of wind is about 5 g per kWh, but when considering feasibility this is not the only or even a decisive consideration. Wind is a poor match for many of our energy usages because it is insufficiently dispatchable and limited by site constraints, which impose ancillary costs such as line connection that don’t apply to more conventional sources. Unless we can do without the utility offered by conventional sources in favour of the utility of intermittent sources, one can really only compare CO2 footprints of things that can operate in lieu of the sources of energy we wish to replace. With this caveat, one can point out that we humans are not merely interested in energy of any quality and quantity, any more than we are interested in water or nutrients or shelter of any quality or quantity. Even those of us who see lowering CO2 emissions as a paramount consideration in energy policy cannot be indifferent to other feasibility considerations. Self-evidently, if each tonne of CO2e avoided/permanently sequestered using wind, for example, costs ten times as much as each tonne of CO2e avoided/permanently sequestered using some other source that has five times the CO2e intensity of wind, then we are, ceteris paribus, still way ahead using the second energy source in preference to wind, because for a given spend we can still double our reduction. And there would be places where resort to wind and PV would be the best solution — small non-grid-connected rural villages, where oncost and build time and the capacity to maintain a solution locally are key considerations, and where on-demand power is not as important as it is in large conurbations and can be met adequately by resort to ADs (anaerobic digesters) with waste biomass as feedstock. The fact that the solution doesn’t scale up isn’t really relevant to its feasibility, unless one wanted to argue that this should be done on a world scale. 
I understand there is some island off the coast of Denmark that has done this — and well done them. I believe we should stay away from overselling nuclear or overstating the constraints on resort to renewables. A candid and compelling case in comparative utility for nuclear over most renewables already exists without putting our thumbs on the scales. David@260 Mackay’s 125 kWh/day figure is total energy use, including transport and a per capita share of commercial/industrial usage, not just domestic electricity consumption. His major efficiency gains are from replacing today’s cars and trucks with electric vehicles and electric mass transit wherever possible, and from replacing gas-fired space/hot water heating with solar thermal (works, just about, even in our climate), and heat pumps. His main aim is to make people aware of the scale of the challenge, so that it becomes obvious to everyone that objecting to windfarms AND nukes AND lifestyle changes is an untenable position. He acknowledges that the most economic solution is just to build lots of nukes, and sets out what it will cost, in money, disrupted landscapes and reduced comfort, if you don’t like that solution. For those who want the facts about the actual wind power output from ALL the wind farms on the NEM, you can now download it in CSV (see link below). The following is an extract from an email that arrived just this morning: (Peter L, as of a couple of days ago, Andrew has now captured the balance of the data from the large windfarms. You will remember that one of your blog contributors noticed that there was a discrepancy between the total installed capacity of Andrew’s set and the listed total installed capacity. The St Halletts 1 & 2, Snowtown, Clement’s Gap and others are separately categorised on the NEMMCO/AEMO site. These are now extracted and listed.) My thanks and congratulations to Andrew Miskelly for achieving this. I wonder why AEMO can’t provide this capability.
In fact, why can’t we mine the data in Gapminder: http://www.gapminder.org/ then click on ‘explore the world’. “A new study by Xi Lu of Harvard University calculates that wind power in the U.S. could potentially generate 16 times the nation’s current electricity production. The study limits potential wind farm locations to rural, nonforested sites (both on land and offshore) with high wind speeds.” from the October 2009 issue of Scientific American, page 28 Do you believe there is any question about the sustainability of nuclear fuel over 1000 years? Do you believe wind, solar or other renewables are more sustainable than nuclear? If so, do some calculations on powering the world with these technologies, calculate the quantities of materials required and where they will come from. Calculate the area of land that would have to be mined and the quantities of earth moved. Do the same for all parts of the process chain. The problem is that RE advocates concern themselves only with the fuel. That is why the comparisons must be on a life cycle analysis basis. Nuclear is far more sustainable over the long term than solar and wind. Crunch the numbers. Energy efficiency is THE core climate solution, Part 1: The biggest low-carbon resource by far This statement is just as wrong now as it was in 1991 to 1993, the last time we had the opportunity to implement policies to build nuclear, and let it slip away. This belief was pushed then, accepted by the government and has proved to be wrong. ABARE’s modelling at the time, and many other pragmatic voices, said it was wrong, but the voices like yours won the day. We lost 20 years then, and if this voice wins again we may lose another 20 years.
There are some very important issues regarding effects on local climate from wind farms mentioned in the conclusion of this paper which your quote omits: —- “The potential impact of major wind electricity development on the circulation of the atmosphere has been investigated in a number of recent studies (22, 23). Those studies suggest that high levels of wind development as contemplated here could result in significant changes in atmospheric circulation even in regions remote from locations where the turbines are deployed.” “In ramping up exploitation of wind resources in the future it will be important to consider the changes in wind resources that might result from the deployment of a large number of turbines, in addition to changes that might arise as a result of human-induced climate change, to more reliably predict the economic return expected from a specific deployment of turbines.” —- The effect on local climate, particularly for farmers hosting turbines and their neighbouring farms, is a significant issue that must be researched before there is any further widespread deployment of industrial scale wind energy developments. The fact is that industrial scale wind energy still requires a significant amount of research (environmental / ecological / health etc.) to understand the negative impacts of deployment. For some more links regarding local climate effects see my recent post #187 on Wind and carbon emissions – Peter Lang responds. For some comments from IPCC regarding industrial scale wind energy research requirements see post #154 on the same page. For some important research, in addition to Peter Lang’s, regarding CO2 emissions / geographic diversity effects see my posts #141 & #144 on the same page: Peter, you forgot to provide a link to Andrew Miskelly’s wind data in CSV format. Bryen, the other thing David B’s statement ignores is whether it is practical to harness this energy. I have no doubt there is huge wind and wave potential on top of solar.
Indeed, the earth receives vastly more solar energy each year than humans require. That is not the problem — the problem is in economically harvesting, storing and redistributing it as useful electricity, as the recent posts in this blog have repeatedly and patiently tried to point out. Do you believe there is any question about the sustainability of nuclear fuel over 1000 years? Mackay in his discussion considers uranium used in LWRs, assuming only RARs for uranium and not including resort to ocean-based uranium. Unsurprisingly, the LWR based on RARs is not sustainable for 1000 years at current usage. Of course we will take what we need, so this doesn’t settle the matter. FBRs, IFRs, thorium and, if necessary, seawater recovery will all be followed in preference to going without, so my answer is yeah but no. (ack: Little Britain) And no, I don’t believe such renewables (even in concert with energy-usage avoidance and efficiency) as are currently available offer a ubiquitous and maintainable low environmental footprint solution, or one on these criteria as feasible as resort to nuclear power. In some settings though, they surely do, though this is very much an exception rather than a rule. OTOH, in concert with nuclear some renewables (e.g. 2nd gen biofuels) would be more sustainable than they are now. Thanks for the reminder about the RAE study. Not to dismiss it in any way, but it is a bit old now. Nonetheless, it did set the stage. The RAE did not have access to real live operational data as we do, but is excellent backup evidence. Real, live, operational data? Have a look at what Andrew has been up to – you’ll have to query the database with your own set of dates. Warning – ask for about a month of data at any one query. The amount there is enormous. The link is: http://www.landscapeguardians.org.au/data/aemo/ Bryen, AEMO does not provide access to its data in a way that anyone of normal IQ can use.
My comment about Gapminder is in the hope that someone might work out how to mine the AEMO data so it can be accessed and displayed in Gapminder. Phew!! Only had time to skim the incredibly rich conversation you’ve all been having. Have been in the Flinders Ranges for the last 2 weeks. I’m sure other countries have had similar arguments/discussions in years gone by and they’ve obviously come down on the side of nuclear as their best chance of having a cost-competitive and adequate future energy supply. That’s why 33 countries are already producing 16% of the world’s energy total and a further 20 countries are building reactors now. Can’t we in Australia curtail our debate and follow the example of all of these countries in the not too distant future? We are far enough behind already in securing a clean green base load energy supply. The alternative, as you all know, is to keep burning filthy coal. We need to phase out coal over coming decades and phase in nuclear. Those panicked by the thought of that should not be too worried even if they have coal shares. We can still keep mining the stuff and use it for fertilizers, pharmaceuticals, liquid fuels etc. We just need to stop burning the confounded stuff for power, clean or otherwise. Had nuclear power not been so vilified by the likes of Nader, Toynbee and Caldicott over the last 30 years, world nuclear power would probably be at 30%+ and we wouldn’t need the economy-crippling ETS that we currently face. And, what price any meaningful agreement at Copenhagen?? Rudd’s already written that off, as indeed he should. Could I ask all of you to write to Rudd, your local member, Opposition parliamentarians etc and TELL them to get their heads out of the sand, and to start using our world’s biggest uranium reserves and world’s best waste disposal site [both in South Australia] for our own and the planet’s good?
We need a bit of vision from our leaders here and for them to start worrying about the next generation and not the next election. I regard Rudd/Wong as very poor on climate change issues even putting aside the exclusion of nuclear power from the discussion. Garrett is probably as useless an Environment Minister as there has ever been. I now think that wind power is likely to be a bit player, most suited for interruptible power usages, but also just to energize the grid somewhat; around here, about 20% of total supply because we have lots of hydro to back it up. Similarly for solar PV when the price comes down in a decade or so. I also favor using biomethane in oxy-fuel CCGT with CCS to begin removing some of the excess CO2 for sequestration. Creating the pure oxygen could be powered by wind, with storage tanks, in some locations. The idea of connecting PV, ST or wind directly to the grid is a nonstarter. It just injects too many potential problems: brownouts, blackouts, surges etc. The only possibility of reasonable utilization is buffering the low-energy renewable output through storage. Use the panels, mirrors or windmills to charge up the batteries, heat salt, or pump air or water directly, and then release the energy into the grid. This is the only predictable and consistent way to provide base load power, but I’m sure it will be very expensive. I believe you are correct. Intermittent renewables must have on-site energy storage, and sufficient energy storage so the power station (wind, solar, wave power, etc) can provide reliable power, on demand, with the same reliability as fossil fuel, nuclear and hydro-electric generators. As you say, the cost of such a system would be very high. For example, to meet the NEM’s demand with nuclear (plus 8GW of pumped hydro energy storage) the capital cost would be about $120 billion. To do the same with solar PV and on-site chemical storage would be about $4.6 trillion.
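The scale of the on-site storage problem behind figures like these can be sketched with simple arithmetic. The per-kWh battery price below is an assumed ballpark for NaS batteries, not a figure quoted in the thread:

```python
# Hedged sketch: energy storage needed per GW of firm supply from an
# intermittent source. Battery price is an illustrative assumption.
demand_gw = 1.0

night_storage_gwh = demand_gw * 18         # one 18-hour winter night: 18 GWh
overcast_storage_gwh = demand_gw * 3 * 24  # 3 overcast winter days: 72 GWh

nas_cost_per_kwh = 500.0  # $/kWh installed, assumed ballpark for NaS batteries
overcast_cost = overcast_storage_gwh * 1e6 * nas_cost_per_kwh  # dollars
```

Even at this assumed price, 72 GWh of batteries per GW of firm capacity comes to roughly $36 billion before buying any generators at all, which is why the on-site-storage options come out so expensive.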
To do the same with solar thermal is currently not physically possible and not likely to be for decades. I’ve just been looking at the Wivenhoe pumped hydro scheme near Brisbane. It pumps for 7 hours to provide 5 hours of generation. It pumps from about midnight to about 6 am and meets peak demand during the day and evening. It is on standby for the remainder of the day, about 12 hours, spinning and ready to provide almost instant power whenever needed. The power generated must be sold at at least 4 times the cost of the power used for pumping. The relevance of all this is that pumped hydro is a perfect match for coal and nuclear generation, but not for intermittent renewables – there is no way that the pumps can be turned on and off to make use of the intermittent power, the power provided by the wind farms is far too expensive, and, fatally, there is no way that pumped hydro can store the amount of energy that would be needed to make intermittent renewables reliable. I’m still on holidays and will work on my undertaking for Alexei and Neil Howes when I get back home. That assignment is to show the total capital expenditure, CO2 emissions, CO2 avoidance cost, and other stats, at 5-year intervals from 2005 to 2050, for six scenarios. The six scenarios are:
1. Business as usual (energy demand as per ABARE projections).
Scenarios 2 to 6 reduce coal fired generation by 2GW per year from 2012, with the supply discrepancy provided by:
2. CCGT
3. CCGT to 2020, nuclear added at 1 GW per year to 2030 then at 2GW per year, with the remaining discrepancy filled by CCGT
4. Wind and gas, where gas is 50% CCGT and 50% OCGT
5. Wind and pumped hydro
6. Wind and on-site storage (with NaS batteries)
The NEEDS report (see link in the article at the top of the thread) reviewed the solar thermal technologies, selected the most prospective (solar trough) and analysed it further. NEEDS projected that 16 hours of energy storage may be feasible by 2020.
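The Wivenhoe arithmetic above can be turned into a rough breakeven sketch. The round-trip efficiency is inferred from the 7-hours-pumping / 5-hours-generating cycle on the assumption of similar pump and turbine ratings; the buy price and fixed-cost recovery figures are assumptions chosen only to illustrate the mechanics:

```python
# Hedged sketch of pumped-hydro breakeven pricing (illustrative figures).
pump_hours = 7.0
gen_hours = 5.0
round_trip_eff = gen_hours / pump_hours  # ~0.71, assuming equal MW ratings

buy_price = 30.0    # $/MWh for off-peak pumping energy (assumed)
fixed_costs = 78.0  # $/MWh generated, capital + O&M recovery (assumed)

# Each MWh generated consumes 1/round_trip_eff MWh of purchased energy.
energy_cost = buy_price / round_trip_eff    # $42 per MWh generated
breakeven_sell = energy_cost + fixed_costs  # $120 per MWh generated
price_multiple = breakeven_sell / buy_price # 4x the off-peak buy price
```

With these assumed numbers the plant must sell at about 4 times its pumping price to break even, consistent with the figure quoted above. It also shows why a pumped-hydro operator wants cheap, dependable off-peak energy for the pumping leg, which an intermittent source cannot guarantee on schedule.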
We need 18 hours of energy storage to get through one night in winter, and at least 3 days to enable intermittent generators to supply baseload power through overcast periods in winter. There are literally thousands of possible options being investigated. None are even close to being commercially viable. The solar thermal option is more than 20 times the cost of nuclear to provide our power needs. It is not worth the time and effort to investigate it further at this stage. If someone can provide cost figures from competitive bids and/or from commercial, operating solar thermal power stations that can provide baseload power throughout the winter months, including through extended overcast periods, I’ll be pleased to include it in the simple analyses I am doing. Hi Peter, the BZE team are about to release their 200-page Zero Carbon Australia (ZCA) plan in May. While there will be other interesting facts about transport and building sectors, I guess this blog is mainly about baseload power supply. For their energy mix they’ve chosen to model today’s wind and solar thermal (but are open to other forms as they commercialise). From their PDF pages 9 and following they discuss a 60% solar thermal (with biogas backup) and 40% wind mix. So again, no one technology does the work alone. They count the 40% wind penetration as ‘baseload’. Have you modelled biogas backup for the longer 3-day periods? From the above it seems you want the solar thermal technology to do it all on its own, and that isn’t the model the renewables proponents are proposing. They readily admit there will be weather challenges, but rather than build 10 times the power plants they need, they simply switch to a gas backup. Mate: I’m not very technical, but even I am left wondering if some of your article above is a straw-man debunking strategies none of the renewables guys are proposing? I don’t have time for that.
I’d rather hear what is actually possible according to the technologies actually proposed by either side, not reductio ad absurdum arguments that straw-man the other’s position. E.g.: You guys don’t propose digging expensive 5-mile-deep tunnels clad in platinum to store the nuclear waste forever, as you NEED that waste as fuel to burn it! But I’m sure I’ve heard Dr Caldicott interview people proposing something as ridiculous to deal with nuclear waste, and I’m left grinding my teeth and shouting at my iPod, “But they’re going to USE the waste you silly Moo!” So if Peter is right on nuclear at only $4 billion / GW capacity AND if BZE are right on a 60% solar thermal (with biogas backup) and 40% wind grid, then nuclear still wins as far as price is concerned. My “Black Swan” comment for the day? What is politically feasible. $300 billion won’t destroy Australia’s economy. Over 10 years it is only $30 billion a year. (Political diversion: Dr Mark Drummond’s PhD calculated that we’d save about $50 billion a year in duplication if we abolished state governments and only had one Parliament for Australia, not 8. Interestingly both Bob Hawke and John Howard recently agreed that this would have been a preferable model for Australia). I don’t have time for that. I’d rather hear what is actually possible according to the technologies actually proposed by either side, not reductio ad absurdum arguments that straw-man the other’s position. That is painfully obvious to all. You have no time for the grunt-work of dissecting the elements of each new ‘renewables’ scheme put forward by the same bunch of scammers who disappointed you the last time to see if it’s going to hold water, but all the time in the world to trawl the net for such schemes to run to others with and herald whatever it is this time as the coming of the Heavenly Kingdom. Errr, no. I just happen to be fairly busy lately and am limited in how much reading time I get, so I listen to podcasts.
I also just happened to be listening to the BZE podcast yesterday (while helping the in-laws get ready to move), and the podcast was all about their upcoming plan release in May. So I knew where the site is, and quickly found their summary PDF and the pertinent pages. If BNC had a podcast I’d listen to that as well. (One day I hope you’ll get bored of attacking my motivation and straw-manning my character). You have no time for the grunt-work of dissecting the elements of each new ‘renewables’ scheme put forward by the same bunch of scammers who disappointed you the last time to see if it’s going to hold water. Well, I’m limited technically, but after a fair bit of reading back in my earlier peaknik days I developed a checklist of questions I try to ask about alternative energy (to oil mainly). It’s not great, but I was just trying to formulate an easy checklist to help other non-technical peakniks explain why no substitutes for oil could do the job with the liquid fuels infrastructure we currently have. From the above it seems you want the solar thermal technology to do it all on its own, and that isn’t the model the renewables proponents are proposing. They readily admit there will be weather challenges, but rather than build 10 times the power plants they need, they simply switch to a gas backup. … Mate: I’m not very technical, but even I am left wondering if some of your article above is a straw-man debunking strategies none of the renewables guys are proposing? No, it is not a strawman. It is a ‘limit analysis’ so you can see through the fog of the renewable advocates’ argument that when one renewable doesn’t work we turn to another. First we need to know what is the cost of each renewable on its own. Then we need to combine them to find the total cost. This paper looks at the solar renewable as a limit position. The previous papers looked at wind. You need to understand the process and follow through the series of articles.
It is a ‘limit analysis’ so you can see through the fog of the renewable advocates’ argument that when one renewable doesn’t work we turn to another. I don’t see how debunking something no-one ever proposed helps clarify the situation. When the solar thermal shuts down, they propose that the evening wind (at a certain average cents/hour) will probably take over for a while, heat from the liquid salt backup thermal storage can be quickly despatched as necessary throughout the night, and if we have some freak week across the continent, we’ll dig into our compressed biogas tanks a bit. These are all known technologies. Critiquing a completely unrealistic, exaggerated strawman of the renewables plans does as much for the credibility of these arguments as Dr Caldicott does for her anti-nuclear cause. I’m amazed at the obfuscation from both sides. Eclipsenow, if you don’t understand the concept of defining the boundaries, I can’t help you. If you want to understand, you do need to put a bit of time into reading the actual articles, rather than just arguing about the comments posted here. You asked for some references a day or so ago. I provided some. You said you’d bookmarked them to read in the future. Apparently you haven’t yet, and now you’re onto raising another issue. I get the impression you are more interested in chucking firecrackers than in trying to understand. Sorry mate but you’re the one avoiding the issues. Maybe you need to actually review an actual renewables plan, and not debunk nonsense that no-one is proposing. I have bookmarked the links you referred to, but in amongst a career-change, running our design studio, and helping my in-laws sort through all their ‘stuff’ I don’t have much time for reading… but can fit in listening to podcasts while I attend to some of this stuff. If you have a podcast or 2 for me to listen to, I could check that out.
As I already said in another thread, Stanford University have some interesting talks on nuclear that I’ll be catching up on while packing ‘stuff’. (If ever anyone needed a reminder that Western civilisation consumes too much unnecessary junk, try helping your in-laws prune back for a small retirement village apartment. It’s a real education). PS: “Defining the boundaries” is unnecessary as the BZE team are well aware of them. Their team involves dozens of engineers and energy experts who have drawn up their 200-page plan for release in May. They are aware of the boundaries, and have worked around them… and costed them, and say they have a plan for $300 billion. You say you have a nuclear plan much cheaper, but I’d love to see the plans for storing the really long term waste and what the economics of that is. I’d love to hear the Amory Lovins characters have a debate over the actual nuclear costings, and what areas I might have forgotten to check. (I’m still getting over the fact that there still is long-term waste with Gen4 reactors. I was so sold on the idea, from multiple online articles about Gen4, that there was no long term waste and the misunderstanding that it would all be pretty much safe within 500 years). If BNC and BZE were to duke it out via a series of podcast debates, then that might be educational for all involved. “The truth will out”. I’m still getting over the fact that there still is long-term waste with Gen4 reactors. I was so sold on the idea, from multiple online articles about Gen4, that there was no long term waste and the misunderstanding that it would all be pretty much safe within 500 years. For goodness sake, I wonder why one tries to explain anything to you. You are the most frustrating commenter on this blog, bar none. You’re apparently not listening and not willing to critically evaluate even basic scientific explanations. Some advice — try to think on these matters and to evaluate data in a rational manner.
Try the Socratic method and start asking yourself some questions. How ‘hot’ is IFR fuel after 500 years? What does a long half-life mean? If I hold a lump of uranium in my hand, what will happen? And so on. If you can’t do this, then Finrod is most certainly right – you’re playing us for suckers and never had any intention of taking a considered and rational view on nuclear power issues. Barry, I do listen (when it’s explained in English) and have changed my blog accordingly. Now over on the Life time of energy in your hand thread where the waste issue came up, there were quite a few interesting posts, some of which I kind of understood, and some of which were fairly technical and required a general science degree, and maybe even something more specific to nuclear interests, to truly understand. As a layperson with an arts and welfare background I am very interested in the bottom line for society, and have dumped many of my earlier objections to nuclear power which I now see as rather cliché. So the fact that I don’t get some of the more technical explanations as to why certain types of waste might be dangerous and others are not is not really my fault, but the responsibility to communicate this clearly lies with the communicator. Some commenters at BNC occasionally act as high priests initiated into the arcane arts, looking down their noses at those who aren’t. But if you wish to communicate to non-technical activists like myself and have the nuclear power debate move forward, then maybe answering those questions in an intelligible manner for the uninitiated might help. I’m still getting over the fact that there still is long-term waste with Gen4 reactors. I was so sold on the idea, from multiple online articles about Gen4, that there was no long term waste and the misunderstanding that it would all be pretty much safe within 500 years. We’ll find uses for that small portion of uber long-lived FPs.
I wonder if it couldn’t be mixed in with paint or structural material to provide a radiation hormesis effect as a public health measure, much as fluoride is added to drinking water. Woah, I thought it was a joke, but there’s even a wiki. “Consensus reports by the United States National Research Council and the National Council on Radiation Protection and Measurements and the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) have upheld that insufficient human data on radiation hormesis exists to supplant the Linear no-threshold model (LNT). Therefore, the LNT continues to be the model generally used by regulatory agencies for human radiation exposure.” My son recently produced a sales catalogue of which he was very proud. On reading it, I became incandescent at his description of fluorescent lights as “flourescent”. I suppose that I’m going the way of incandescent lights – my age and concern over correct spelling are making me obsolete. My son was indignant at having his mistake pointed out to him and blamed his computer for having a defective spell checker. eclipsenow, the LNT model is what’s commonly called a null hypothesis. It doesn’t need any evidence, whereas the hormesis hypothesis must accumulate sufficient evidence to overturn this null. It has a fair amount already, whereas the LNT still has none. But it needs to keep building that body of work. Not fair, but the way some folks like to frame statistics (I prefer multi-model inference with no pre-conceived null). I received HDs for my sociology essays, and could see how sociological surveys were weighted one way or the other from the values implicit in the ‘leading questions’ put to the public, but when it came to statistical analysis of the results… I left that to the maths gurus. So, as this is not really on the topic, I might just pass on the ‘multi-model inference’ statistical modelling if that’s ok.
(I know it will come as a huge shock to you, but I’m just being honest as to how completely I’m not wired in that direction.) ;-) eclipsenow – When life began on Earth almost 4 billion years ago, background radiation levels were five times higher than those we experience today. Life adjusted well, as it did to all other forms of energy to which it was exposed – heat, light, electromagnetic. This adjustment took two forms. The first suggests that exposure to low doses of radiation actually stimulates repair mechanisms that protect organisms from disease and may actually be essential for life. The second involves the development of the biochemical systems that protect organisms against the noxious effects of ionizing radiation. One thing life did not apparently do was to evolve an organ that can detect radiation. This lack of a radiation sense points to the fact that living organisms have no need to detect such a low risk phenomenon. Indeed, ionizing radiation only seems exotic and mysterious to some people because it was not discovered until relatively recently, unlike light and heat, say. It is nevertheless nothing more than another form of energy. The perceived distinction has serious negative consequences but has no scientific basis. However, for statistical reasons the LNT cannot be falsified, and so the precautionary principle has been adopted at an unacceptable societal cost. Barry, I’d argue that the LNT is not the null hypothesis. The null hypothesis is that low-level radiation is harmless. All studies that I am aware of are reasonably consistent with this. The exceptions favour hormesis, which asserts that low-level radiation provides some health benefits. This has been demonstrated in some projects like the nuclear shipyard study. LNT for low-level radiation has never been demonstrated as far as I know. Joffan – The definitive proof of the LNT model is to disprove that a risk-free threshold exists and to disprove a quadratic risk/exposure function.
This is the LNT null hypothesis. Threshold is a concept borrowed from toxicology, in which a human being can accept a certain amount of a potentially toxic substance up to a certain dose without harm, and then after a “threshold” dose, harm occurs. “Linear” simply means that for a given increment of additional dose, a fixed amount of additional increased risk occurs. A broad look at the available data demonstrates that there appear to be certain levels of radiation exposure that confer no harm to human beings, but then at some point the risk of cancer rises precipitously. In other words, there appears to be a finite threshold, and beyond that threshold there appears to be an increased risk for cancer according to a nonlinear quadratic function. Therefore, the null hypothesis to the LNT model remains yet to be disproved. Note that this is essentially a Catch-22 situation, because the hypothesis is poorly formed, since there is no stated lower bound at all. It is, however, not necessary to prove or disprove the LNT null hypothesis if the hormesis null hypothesis can be disproved, and that IS possible. I have three hypotheses for exposure to radiation levels that are consistent in magnitude with natural background levels:
1: Increasing benefit
2: No effect
3: Increasing harm
Which of these should I select as my null hypothesis? It seems obvious to me that hypothesis #2 is the correct choice. The data is consistent with this, so this should be the basis for any further action. If I use the same three hypotheses for radiation in the range of 100-1000 times natural background, I would still select #2 as my null hypothesis, but now the data would disprove it and support hypothesis #3, so that becomes the basis for future action. Joffan – There is logic, and then there is politics – science is not exempt. The ‘official’ null hypothesis for LNT is the one I stated in the first paragraph of my previous comment.
It’s official, because it is the only one that can be set looking at the LNT in isolation. This is where the politics comes in. Any rational examination of the problem would reject the whole damned hypothesis as ill-formed, and strike another one similar to the one you stated. However the radiation health sector, for any number of reasons, (none of them logical or scientific) cannot do this. @27 April 2010 at 8.09 Said Mate: I’m not very technical, but even I am left wondering if some of your article above is a straw-man debunking strategies none of the renewables guys are proposing? 28 April 2010 at 8.54 Said I don’t see how debunking something no-one ever proposed helps clarify the situation. … Critiquing a completely unrealistic, exaggerated strawman of the renewables plans does as much for the credibility of these arguments as Dr Caldicott does for her anti-nuclear cause. I’m amazed at the obfuscation from both sides. @ 28 April 2010 at 9.47 Said: Sorry mate but you’re the one avoiding the issues. Maybe you need to actually review an actual renewables plan, and not debunk nonsense that no-one is proposing. @ 28 April 2010 at 12.57 Said: Some commenter at BNC occasionally act as high level priests initiated into the arcane arts and snubbing their noses at those who aren’t. But if you wish to communicate to non-technical activists like myself and have the nuclear power debate move forward, then maybe answering those questions in an intelligible manner for the uninitiated might help. The issues you are raising have been discussed at length in the comments on these threads. I note you’ve bookmarked the paper but haven’t yet read it. I’ve responded to your comments and question, but understand that my explanation may not have made sense to you. I’ll make another attempt to answer your question below. 
If this is not sufficient, can I persuade you to read the article, and the preceding articles that it builds on, and perhaps also follow the discussion on those threads, as they cover the points you are raising.

The reason for the limit analysis – that is, looking at just solar power rather than a mix of renewable energy generators – in the first instance is so we can get an understanding of the mistakes and misinformation being propagated by the solar power advocates. One of the most important mistakes is doing calculations on the basis of the average capacity factor over a year. Using an average capacity factor instead of the minimum capacity factor underestimates the cost by a huge amount. Here is the explanation, in layman's language. The average capacity factors from an actual solar farm are: annual = 13%; the 3 months of winter = 9.6%; the worst days in winter = 0.75%; at night = 0%.

The "Solar Power Realities" paper considered the option of all power being generated by solar power, using energy storage to supply the electricity when the sun is not shining. No one is suggesting this is a scheme that would be built (other than advocates like David Mills), but it is a way to look at the real costs of solar. You can downscale from providing all electricity to providing just 1 GW or 1 MW or whatever you like; the principles apply generally. The principle is that you cannot use average capacity factors. You must look at how you will provide the power when the solar plant is generating at its minimum capacity factor. As I mentioned, the "Solar Power Realities" paper looked at the situation with solar generators and energy storage. It considered two storage options: pumped hydro and NaS batteries. NaS batteries are the least-cost battery option at the moment. The "Emission Cuts Realities" paper considers a simple mix of renewable energy technologies together with gas back-up for wind power.
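To make the average-versus-minimum point concrete, here is a minimal sketch (my own illustration, not from the papers) using the capacity factors quoted above for an actual solar farm. Sizing a solar-only system on the annual average understates, many times over, the nameplate capacity you would need to meet demand on the worst winter day:

```python
# Illustrative sketch: why sizing a solar-only system on the *average*
# capacity factor understates the plant required. The capacity factors
# are those quoted in the comment above for an actual solar farm.

DEMAND_GW = 1.0          # power that must be deliverable on demand
CF_ANNUAL_AVG = 0.13     # average capacity factor over a year
CF_WORST_DAY = 0.0075    # capacity factor on the worst days in winter

def nameplate_needed(demand_gw, capacity_factor):
    """Nameplate capacity required to meet demand at a given capacity factor."""
    return demand_gw / capacity_factor

avg_sized = nameplate_needed(DEMAND_GW, CF_ANNUAL_AVG)    # ~7.7 GW
worst_sized = nameplate_needed(DEMAND_GW, CF_WORST_DAY)   # ~133 GW

print(f"Sized on the annual average:  {avg_sized:.1f} GW of panels")
print(f"Sized on the worst winter day: {worst_sized:.0f} GW of panels")
print(f"Understatement factor: {worst_sized / avg_sized:.0f}x")
```

Since capital cost scales roughly with nameplate capacity, the same factor carries straight through to the cost estimate (and this still ignores nights, when storage must carry the whole load).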
Lastly, let's consider, in a really simple way for clarity, the situation with a mix of renewables to provide our power needs. We must remember that the power must be provided at the instant we need it. Let's say we need to deliver 1 GW of power on demand (just to keep this simple). Let's start with 1 GW of solar PV. The capital cost is around $10 billion. We find we have no power at night and almost no power at some times on some days (heavily overcast). So we need to add something else to provide the 1 GW of power when it is demanded. So we add 1 GW of wind power. The capital cost is about $2.6 billion. But we find the sun isn't shining and the wind isn't blowing. So we add 1 GW of wave power. I don't remember the capital cost, but let's say $10 billion. But then we have times when the sun isn't shining, the wind isn't blowing and the sea swell is small. We are now up to $22.6 billion.

To link all these dispersed generation systems, we need a massively expensive electricity grid, and we still don't have dispatchable power (power that can be supplied when the user demands it). So we have to add either energy storage, fossil fuel back-up, or dispatchable generators like biomass, geothermal or nuclear. Biomass is expensive, requires enormous land area and has its own environmental problems. The type of geothermal energy that Australia is attempting to develop has not been developed anywhere in the world yet. It may or may not eventuate as a commercial proposition. The world has been working on it for nearly 40 years and has not advanced much in that time. There are still no commercial power stations anywhere in the world.

So why not simply skip all this nonsense and go straight to nuclear? The capital cost of the 1 GW would be around $4 billion with all the impediments to nuclear remaining in place, or perhaps around $2 to $2.5 billion if the imposts were removed and we had a genuine level playing field for electricity supply.
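The back-of-envelope tally above can be sketched in a few lines (all figures are the comment's own rough numbers, per GW of on-demand power; the wave figure is a guess in the original):

```python
# Tally of the rough capital costs quoted in the comment above,
# in $ billion per GW delivered on demand. These are the commenter's
# own illustrative figures, not engineering estimates.

renewable_mix = {
    "solar PV": 10.0,
    "wind": 2.6,
    "wave": 10.0,   # a guess in the original comment
}
nuclear = 4.0       # with current impediments in place ($2-2.5B without)

mix_total = sum(renewable_mix.values())
print(f"Renewable mix so far: ${mix_total:.1f} billion")  # $22.6 billion
print(f"Nuclear alternative:  ${nuclear:.1f} billion")
print(f"Ratio: {mix_total / nuclear:.1f}x")
```

And the $22.6 billion still buys no dispatchable power: storage, back-up or the grid build-out would come on top of it, which is the comment's point.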
Given that nuclear is about 10 to 100 times safer than our current electricity generating system, and is far more environmentally benign than any other (including wind and solar), why don't we just cut through all the irrational arguments and go straight to nuclear – preferably by removing all the impediments to it?

I have to laugh at the pathetic attempt by the Old Greens to find some way, any way, to avoid nuclear power. They are no longer even bothering to mount their usual attacks against nuclear energy, so thoroughly have those tired arguments been debunked. But they will not give up, and desperately hope their renewable dreams can still be shown to be superior, even as they begin to see the truth. Do you know what I think? They are afraid of nuclear energy because its acceptance will show everyone the magnitude of their error. They know that their followers will realize that they have been backing the wrong side, and as always in these cases will turn on their leaders like a pack of dogs.

Suppose you are building a house. You have a variety of construction materials to choose from – timber, brick, steel beams, glass, tile, etc. You obviously expect to use a mix of these materials. But you can't begin to design that mix unless you understand the characteristics of the individual materials. How strong are they? How much do you need? How much do they cost? Peter is trying to build an energy system. On his design palette, he has fossil fuels, wind, solar, hydro, nuclear. But he can't design with these elements unless he understands their individual characteristics. How much power can they provide? How reliable are they? How much do you need? How much will it cost? And, in this case, how much CO2 will they produce? To understand his design elements, Peter has done the equivalent of designing a glass house to understand the limits of using glass as a building material. He's done the same with wood and steel.
These design exercises have probed the qualities and limits of the design elements. He has then followed up with a further design exercise in which he builds from various combinations of materials and compares the different structures in terms of strength, cost, build time, and waste. By analysing each renewable technology individually, he's also thrown light on the characteristics of an integrated system. Unfortunately, the wind and solar components turn out to be the equivalent of wet cardboard and cured ham, and he's found that if you build a house out of these materials, you're still going to need just about as much brick and steel as a normal house if you want it to stay standing, even if you use a combination of ham and cardboard.

If it all pans out the way you say, DV8, I might join you in that. If the objections to nuclear proliferation and waste are dealt with as easily as some on this list imagine, I'm all for it. (IF).

@ Peter Lang, thanks for that. Let's just say at this stage I'm very sympathetic to nuclear power. One last exercise. I'm not saying the following is costed and competitive with today's nuclear, but I'd question the synergies you suggest. Why 100% wind + gas backup? The papers coming out at the moment suggest building enough wind to be around 40% of the grid as baseload, with solar thermal operating on biogas backup. The thermal turbines on the solar plant are already there. Just turn on the biogas taps, cook up the steam, and the plant keeps operating. That avoids building a whole new biogas plant and turbine, which would otherwise be necessary in the 100% wind + biogas system you have suggested above. (If the biogas actually comes from biochar, it's a carbon-negative system as well.) Sure, after the growing seasons you'd probably have to brew up one heck of a lot of biogas for storage, but that storage would probably not have to make up 100% of the storage we use.
Don't forget the V2G cars are coming that can charge whenever the wind is blowing, and then sell back when the grid demands it. If we use Better Place battery-swap systems, the batteries come gratis from Better Place… they have included the batteries in the price per km of their public charging points and battery-swap charges (which are already almost half the price of oil). As my car sticker says, "My next car will run on the wind". (Free Better Place propaganda sticker… if you want them to go nuclear, have a chat with Shai Agassi and I'll put one of those on my car instead. My focus is Better Place and Australian independence from oil. I like the wind idea, but not if it really is distracting from the debate we NEED to have on nuclear). Lastly, some are saying wind is cheaper than coal IF we don't have to cost a backup system. Say we have a baseload nuclear capacity with wind power mainly charging our cars. Could that be economically competitive?

This is going on and on and on, and you simply are not getting any of it. Can I beg you to have a go at answering your own questions? Just do a bit of thinking, and perhaps a bit of research for yourself. "If each house becomes a generator of solar and wind power there are minimal transmission costs!" Just a completely unrealistic use of resources. Where's the warp drive? OR FUSION REACTORS – NOT Fission?
Archive for July 1st, 2010

China's Purchasing Managers' Index (PMI) fell to 52.1 in June from 53.9 in May, reports the BBC, but the figures suggested the [manufacturing] sector was still expanding rather than contracting. The report attributes the – relative – slowdown to government efforts to cool the property market and to curb bank lending. The central government insisted on larger down-payments on new homes and made it harder for investors to buy several homes. The BBC also quotes observers as saying that the faltering global recovery was affecting China's output.

Xinhua explained early in June that the PMI is one of the leading economic indicators. Simply put, when the number is above 50, the economy is in a state of expansion; in the opposite case, the number says that the economy is contracting; and the higher the number, the faster the economy is expanding (简单来说,若该数据高于50%,反映经济正处于扩张;反之,则说明经济衰退;而数据越高,则说明经济扩张速度越快). May's manufacturing PMI was down to 53.9 per cent from April's 55.7 per cent, writes Xinhua. Looking at the individual indices, comparing May and April, ten*) indices had dropped – particularly new orders from customers (新订单指数, from 59.3 to 54.8). The only exception among a total of eleven indices was the finished-products inventories index, which had actually risen, writes Xinhua, and it reassures its readers by quoting HSBC China's (汇丰中国) chief economist Qu Hongbin (屈宏斌) as saying that the manufacturing PMI indicated the effectiveness of the [government's] austerity measures, which alleviated the risk of overheating. Besides, in another article, also of early June, Xinhua wrote that most of the recent drop in the PMI was seasonal (季节性). When adjusted for seasonal influences, there was no obvious downward momentum.
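The Xinhua rule of thumb above can be sketched as a toy classifier (my own illustration; the month labels and the 50-point boom/bust convention are as described in the post):

```python
# A toy reading of PMI numbers, following the Xinhua explanation above:
# above 50 signals expansion, below 50 contraction, and the distance
# from 50 indicates the pace.

def read_pmi(value):
    """Classify a PMI reading relative to the 50-point boom/bust line."""
    if value > 50:
        return "expansion"
    if value < 50:
        return "contraction"
    return "flat"

readings = {"2010-04": 55.7, "2010-05": 53.9, "2010-06": 52.1}
for month, pmi in sorted(readings.items()):
    print(month, pmi, read_pmi(pmi))
# All three months read "expansion": the June fall to 52.1 is a
# slowdown in the pace of expansion, not a contraction.
```

This is why the BBC could report a falling index while still describing the sector as expanding.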
The economy would maintain a rather fast growth rate, and if there was a slight drop in growth numbers, and a [comparatively] strong one in PMI, this only showed that imported inflation pressure was easing (在扣除季节性影响之后,回落势头并不明显,预计未来经济将继续保持较快增长,但增长水平或将略有下降,这其中,新购入价格指数大幅下降,表明输入型通胀压力得到有效缓解). One news agency, two victorious but somewhat contradictory messages: so, how effective are the government's measures to keep economic growth sufficiently cool?

Inflation, even if not imported any more, edged higher in May, exceeding the official target of 3 percent for the year, amid some initial signs that investment in the world's major developing economy has slowed. The People's Bank of China (PBOC) puts it this way: China's economy is very likely to maintain steady and rapid growth in 2010, with more positive factors than last year boosting the economy, but the nation's economy still faces a complex domestic and international situation.

If the Chinese government's expectations are as vague as their macroeconomic tools, they can't possibly be caught on the wrong foot. The CCP rules China on many different levels, and the cadres' tools amount both to taking some advice from economists and to taking very different governmental views – from one central and many local perspectives – on what would be desirable goals in terms of growth numbers and their composition, and on what kind of results should be expected from the tools applied respectively. But while no results may come unexpectedly under these conditions, they can be rather undesirable – both for the central and the provincial governments. While the central government's budget looks fairly balanced, except for the past year or two, the provincial governments' finances are a completely different story. The Chinese stimulus programs were decided in Beijing, at least nominally.
But the implementation was arguably a provincial affair, with the provinces, or more specifically the provincial-government-owned investment companies, generating the flow of money – and incurring the corresponding public debt. In March, Northwestern University's Victor Shih told journalists in Beijing that as of November 2008, some 8,000 local investment companies had taken loans of at least 11 trillion – seven times the total revenues of local governments. The central government was facing a choice between quickly issuing restrictions on the flow of money, or letting bad loans and inflation spread. "China's leadership may prefer to let inflation rise, and to continue to make the banks lend money. They may not even wish to allow a healthy contraction, before determining the next generation of party leaders." If the foreign reserves Beijing has accumulated during the past decades are to play a role in recapitalizing China's banks, there is certainly no open talk about it yet.

Interestingly, party and state chairman Hu Jintao (胡锦涛), during the G20 summit in Toronto last week, seemed to join Timothy Geithner, Lawrence Summers, and other US economists and politicians in urging a cautious – if any – exit from the stimulus programs launched since the beginning of the global financial crisis in 2008: "We must act in a cautious and appropriate way concerning the timing, pace and intensity of an exit from the economic stimulus packages and consolidate the momentum of recovery of the world economy", The Herald Sun quotes Hu.

The Economic Cooperation Framework Agreement signed by Taiwan and China in Chongqing yesterday is a serious threat, Singapore's Beijing-leaning United Morning News (联合早报) quotes South Korean media and experts.
The Korea International Trade Association (KITA) released an "After-ECFA Response Program" on June 29, pointing out that tariff reductions on more than 500 Taiwanese products, among them machinery, petrochemicals, and automotive spare parts with a value of about twelve billion US dollars, were a big blow to South Korean exporters. Apparently in cooperation with the Korea Institute for International Economic Policy (KIEP), the response program finds that among the twenty top products exported to China by Taiwanese and South Korean companies – liquid crystal displays, petrochemicals, semiconductors, and office equipment – fourteen items rank high both in Taiwan's and South Korea's exports to China. The preferential treatment of Taiwanese products would immediately weaken South Korean competitiveness, United Morning News quotes KITA.

East Asia Daily (this name apparently refers to donga ilbo, a South Korean paper which also runs an English, a Japanese and a Chinese language edition) is quoted by United Morning News as commenting that, facing Taiwan taking away the Chinese market, South Korea should sign a free-trade agreement with China. Also, as relations with Taiwan had been distant since South Korea's establishment of diplomatic relations with China in 1992, South Korea should, by improving relations with Taiwan, seek a common approach with Taiwan to enter the Chinese market.

Taiwan News writes that it should come as no surprise that the country most impacted by changes in cross-strait relations is Japan, which is seriously concerned that any excessive "leaning to one side" by Taiwan toward the PRC will tilt the balance of power in East Asia in Beijing's favor.
[…] In particular, Japanese analysts are concerned that the reversal of the previous administration of the Taiwan-centric Democratic Progressive Party's pro-Japan and anti-PRC stance toward the restored KMT government's adoption of a "pro-China and anti-Japan" stance could have serious implications for Japan's substantive interests in the Taiwan Strait and may add weight to the "China factor" in Tokyo's policy-making regarding Taiwan.

Even if Chen should be misquoted here, this statement certainly reflects Beijing's position. And it may reflect an irreversible trend of Taiwan moving into China's orbit. But this isn't only up to China. So far, Japan's, America's, and probably everyone's main concern seems to have been not to displease Beijing. ECFA should be read as a signal that letting Taiwan down would come at a price as well. Standing by some moral principles will be costly. But in the end, the costs of mere opportunism would be much greater.
30 September 2009

Saint Jerome, whose memorial we celebrate today, is perhaps best known for his translation of the Bible into Latin (the Vulgate) and for his celebrated line, "Ignorance of the Scriptures is ignorance of Christ." Jerome was a prolific letter writer, full of many great insights and quips. Here are just a few I have selected for you:

"Man's nature is such that truth tastes bitter and pleasant vices are esteemed" (Letter XL).

"Indeed it is dangerous to pass sentence on another's servant, and to speak evil of the upright is a thing not lightly to be excused" (Letter XLV).

"I often discoursed on the Scriptures to the best of my ability: study brought about familiarity, familiarity friendship, friendship confidence" (Letter XLV).

"...people are more ready to believe a tale which, though false, they hear with pleasure, and urge others to invent it if they have not done so already" (Letter XLV).

"Our opinion of you is like your opinion of us, and each in turn thinks the other insane" (Letter XLV).

"Let them know us [clergy] as comforters in their sorrows rather than as guests in their days of prosperity" (Letter LII).

"Change your love of necklaces and jewels and silk dresses to a desire for scriptural knowledge" (Letter LIV).

"The face is the mirror of the mind, and eyes without speaking confess the secrets of the heart" (Letter LIV).

"I groaned to hear his tale, and by silence expressed far more than I could with words" (Letter CXVII).

"Marriage is a raft for the shipwrecked, a remedy that may at least cure a bad beginning" (Letter CXVII).

"Nothing is happier than the Christian, for to him is promised the kingdom of heaven: nothing is more toil-worn, for every day he goes in danger of his life. Nothing is stronger than he is, for he triumphs over the devil: nothing is weaker, for he is conquered by the flesh" (Letter CXXV).
"If the merchants of this world undergo such pains to arrive at doubtful and passing riches, and after seeking them in the midst of dangers keep them at the risk of their lives, what should not Christ's merchant do who sells all he has to buy the pearl of great price, and with his whole substance buys a field that he may find therein a treasure which neither thief can dig up nor robber carry away" (Letter CXXV)?

There is a particular passage for which I am looking. If I find it, I will post it for you later tonight.

28 September 2009

Just a few moments ago one of my former students and soccer players tagged me on Facebook in a note containing an article he wrote for the student newspaper, The Bulldog Bark. He wrote:

I was heading back to my study hall, after going to put some homework away, when I decided I'd see if Fr. Daren was around to just hang out and talk with him. A few steps later I stopped, realizing that Fr. Daren was gone. No more random hellos in the hallway, no more chess club games, no more soccer practice with him as assistant coach! It might not have happened to you yet, but you'll probably "realize" that Papa D has left sometime soon, like when you see someone drinking a can of Dr. Pepper, go to eat at Buffalo Wild Wings, or when you go to church. No one wanted to give him up, but he had to go. Even though he's gone, the time he spent with us was the greatest! He was a great priest, and an even better friend. If you're ever feeling the "Papa D blues", there's always texting, phone calls, Facebook, and Virden is only 2 hours away if you have time to visit. We know he'll do well in Virden, and hopefully he'll come back to visit Effingham very often! Keep him in your prayers and send him a text to say good luck!

I don't know what to say. I do miss those kids. I suppose that's a fourth gift the Lord sent my way today.

Those insightful Dominicans have an excellent post on one of my favorite virtues: eutrapelia.
It's one of the reasons I still sneak away for soccer games, as I hope to do again tomorrow, and maybe even on Thursday, too. If you haven't read Hugo Rahner's book Man at Play: Or Did You Ever Practice Eutrapelia, now's a good time to do so.

This evening I had the pleasure of convoking again the former Parish Finance Council. After requesting their continued service - and receiving affirmative responses - I will be happy to reappoint each of the members (in the morning). I mention this because the Lord has sent two great gifts my way today. The first came in the morning in the form of an envelope from the Office of the Master of the Liturgical Celebrations of the Supreme Pontiff. In it was my ticket to help with the distribution of Holy Communion during the Mass of canonization of Blessed Damien of Molokai. Strange as it may seem, the meeting of the finance council is the second of these two great gifts. It is a group of people whose wisdom and guidance I will seek readily, and they have - in only one meeting - demonstrated their effectiveness. They are eager to help and have offered what is - in my judgment - very good counsel. And now I will close the day with a third good gift: a bowl of fresh pineapple.

When I was in Effingham one of the friars gave me a holy card that I recently rediscovered. It tells the brief story of the Servant of God Simon Van Ackeren, O.F.M., who entered the Franciscan Order in Teutopolis and died in Effingham:

The seventh child in a family of twelve children, Lawrence Van Ackeren was born at Humphrey, Nebraska, on February 17, 1918. Even as a boy he stood out by reason of his spirit of prayer and his love of our Lord in the Blessed Sacrament. After completing the grade school, he wanted to go to the Franciscan preparatory seminary at Oak Brook, Illinois; but he had such a hard time with his studies that he was told to finish high school first. In September, 1936, he was admitted to the preparatory seminary and joined the fourth-year students.
But by Christmas he realized that he did not have sufficient talent to pursue the required studies for the priesthood, and he applied for admission as a Franciscan lay brother. Toward the end of January, 1937, he was sent to St. Joseph Theological Seminary, Teutopolis, Illinois, and was invested as a Third Order Brother about a month later, receiving the name of Brother Simon. His ankle started to bother him about a year after he arrived at Teutopolis, and he began to walk with a slight limp. Soon afterwards, the limb became too painful and he could scarcely walk. He was taken to St. Anthony Hospital in nearby Effingham and received treatments for a month, but his ankle failed to respond. He returned to the seminary on crutches, and was permitted to make his profession as a Third Order brother on March 4, 1938. The next day he left for St. Louis to consult a specialist. After three weeks he came back, his ankle in a cast. The verdict was tuberculosis of the bone. Soon his general health began to fail. On the last day of April he went to the hospital in Effingham. There the doctors found that he had galloping consumption and gave him only a short time to live. Brother Simon's condition quickly grew worse, and he was anointed on the sixth day after his arrival at the hospital. The next few days his strength failed rapidly. About ten o'clock on the night of May 10, while the sister on night duty was with him, his innocent soul winged its way to heaven. Though he was only a Third Order brother for little more than a year, Brother Simon has gained a greater reputation as a saint and intercessor in heaven than any other deceased member of the Franciscan Province of the Sacred Heart. During his illness and suffering no one heard an impatient word escape his lips; and he never ceased praying. His sunny smile never wore off. 
His greatness consisted in doing the little things well - doing them with extraordinary and always cheerful willingness, fidelity, charity, patience, and piety. "Being made perfect in a short space, he fulfilled a long time" (Wisdom 4, 13). As it was Brother Simon's delight to help others in life, so he has continued to help others in a remarkable manner also after his death. Innumerable favors have been reported and attributed to his intercession. Strangely enough, Brother Simon is gaining a growing reputation as a missionaries' broker and a helper in financial difficulties. Favors are reported also from sick persons who have gained health or alleviation from ill health through a novena made in his honor. If you have a special prayer request, why not ask Brother Simon for the help of his prayers? I will ask his assistance this morning for a student concerned about taking his religion test today.

There is a novena asking Brother Simon's intercession:

O Lord, in these days wherein souls are hungering for pleasure and devoured by greed, and refuse to renounce themselves to take up your Cross and follow you, you have raised in our midst Brother Simon, who during his short life kept his eyes on your passion and, responding to your call, gave himself to you. Touched with this excess of charity and spirit of renunciation in a world of ingratitude, you have deigned, O Lord, apparently as a sign of approval, to make him a champion of your Cross. We beseech you, O Lord, to make known the power of intercession reserved to your servant by hearing the prayers we are saying in union with his, and to grant us not only the petition of this novena, but also the grace to follow you, who are the Way, the Truth, and the Life. Amen.

After this prayer, you are to pray five Our Fathers, five Hail Marys, and five Glory Bes, in honor of the Five Holy Wounds. And, while you are at it, please offer the prayer for his beatification:

O Jesus, you love the meek and humble of heart.
Hear the prayers we offer you in honor of your humble servant, Brother Simon. Approve the cause of his beatification. Through his merits and intercession may we receive the favors we seek. We ask this in your name, Christ our Lord. Amen.

I woke this morning right about three o'clock to some buzzing or ringing sound that I could not immediately identify. I thought it might be the telephone in the bedroom I am in (which apparently wasn't plugged in until about seven o'clock yesterday evening). That was not it. Then I thought it might be a smoke alarm somewhere in the house. That was not it, either. When I returned upstairs, I could tell the sound was coming only from the area of my bedroom, by the door and, after some pondering and checking of electronic things, inside the wall. It is apparently a cricket that is afraid neither of pounding on the wall nor of the television. I am at a loss as to what to do. It is so loud that if I moved to a guest room I would still hear the cricket loud and clear. Now I know why Saint Francis of Assisi told the cricket to stop singing. I tried; it is not listening to my request.

27 September 2009

Looking around the rectory I have realized that I am in drastic need of a new mop and broom, or Swiffer wet or dry, or some other such item to be utilized in the cleaning of the kitchen and bathroom floors. What do you most recommend? I'd like something simple but that also does a good and thorough job. If that means a regular mop and bucket, so be it.

25 September 2009

The Diocese of Springfield in Illinois was created in 1853 from the territory of the former Diocese of Alton, which was formerly the Diocese of Quincy. The See City was transferred with the changing modes of transportation; the coming of the railroad made Springfield a city easy to reach without relying on the mighty Mississippi River, on whose banks sit the cities of Quincy and Alton.
All of this is a set-up for the news of a truly profound moment in the life of the Church of Springfield in Illinois to take place tomorrow in the former Cathedral of the Diocese of Alton, the church of Saints Peter and Paul, when Steven Thompson will be ordained to the Order of Deacons at the hands of His Excellency, the Most Reverend Victor Balke, Bishop of Crookston and a son of the Diocese. I ask your prayers for Steven and for the Diocese as Bishop Balke ordains him for service in the Church, with a view to Steven's ordination to the Priesthood of Jesus Christ. I know Steven to be a good man of prayer and look forward to ministering with him in the years ahead. His quality was reaffirmed for me at the recent clergy convocation when he and I spent one of the sessions in an excellent conversation. He will be a good and holy deacon and priest, with the help of your prayers. I regret that I will be unable to be present for his ordination; I will have to preside at a burial here in Virden at the time of his ordination.

I cannot say that the past few days have been busy per se, but they have been full. I have spent them largely cleaning out the drawers and closets in the house, sorting through various papers to see what is important and what is not, learning about the parish finances and unpacking (which seems to be least on the list). I have met several very friendly and helpful parishioners this week who have very generously offered to help me in whatever way they can. I would like to take them up on their offers, but am not quite sure what to have them do.

I am delighted to have a Deacon who lives in the parish but is assigned to another parish north of here. He attends daily Mass here since he works in town and has been a great help to me; it's also very nice to have a deacon at daily Mass. He has served on the finance council in the past and helped me considerably Monday morning by running through various financial matters.
I am happy to say that I have a very capable and efficient secretary. She is new to the post so we are learning together. In many ways this is a blessing since we can organize the office together so both of us know where we put things. She works part-time as things in the parish are rather quiet. We have only 169 families here in Virden and only 127 families in Girard. The combined number of actual parishioners - at least according to the books - is 669. Wednesday morning the current and former secretaries and I sat down for a couple of hours to go through several things in the parish, from files to finances. I felt much more comfortable after our meeting.

We also have a very faithful, dedicated and thorough sacristan at the parish who sets up for the Masses and various liturgical celebrations. He has worked as the sacristan for many years and knows just about all there is to know. Tuesday morning he took me on a lengthy tour of the parish complex and told me more information than I could retain. I will have to ask him several questions later on.

Several years ago the parish began a period of twenty-four hours of Eucharistic Adoration following Mass on Wednesdays. I could not be more pleased to have this in the parish, especially considering our smaller size. I am confident that it will bring many rich blessings from the Lord.

I have also made it back to Effingham this week for two soccer games, both of which, I am happy to say, the boys won. When I went to the game yesterday I also stopped in for a haircut. The woman I go to does a great job and I am not sure I want to try another person. A good haircut is not always easy to find.

Life in Virden has certainly been an adjustment. The city has - according to the population sign when you enter town (which seems a bit high) - 3,500 citizens. The bank closes daily at 3:00 p.m. (which caught me quite by surprise Wednesday afternoon when I went to have my name put on the parish accounts).
There is no grocery store (though I hear one is being planned). I am not sure if we have a dry cleaners yet. We do not have a McDonald's (which does not bother me much), but we do have a Dairy Queen and a Star Hardee's. In terms of shopping, we have a Family Dollar (or a Dollar General, I do not remember which). Springfield is just twenty-five minutes north and that drive somehow does not now seem very long. Being from Quincy, where anything more than a seven-minute drive is long, this feels very strange to say, but it is true. I suppose I have adapted well to my new surroundings. Virden is a quiet town and peaceful and I do not believe it has a stop light. Less than ten percent of the population has a bachelor's degree and fewer than three percent have advanced degrees. Here in Virden we are 150% more likely to have a tornado than the rest of the country, but we are 97% less likely to experience an earthquake than the rest of the country. Today I intend to finish settling into my office (I finally found my printer cable last night) and then work on the kitchen. Sometime today I will have to go to Springfield to pick up several supplies for the secretary and I also have to shop for groceries sometime. 21 September 2009 During the celebration of the Holy Mass, the Reverend Monsignor Carl A. Kemme, Administrator of the Diocese of Springfield in Illinois and a good friend, publicly installed me as Pastor of Sacred Heart parish in Virden and of St. Patrick parish in Girard. Many positive comments have been received on the simple but profound ceremony. I am deeply grateful for Msgr. Kemme's support, encouragement and prayers. His presence yesterday was a great help to me and to the parishioners. My favorite part of the ceremony was placing my hand on the Book of the Gospels as I made the Oath of Fidelity. I'm happy to say the parishioners seem sincerely happy that I have been sent to them. 
They have been most gracious and welcoming and I think we will grow well together in love of the Lord Jesus Christ. Apparently, the talk around town - even among some who aren't Catholic - is that the lights in the rectory are on. That's a good sign; my presence is noted, even if I sometimes leave town and forget to turn off a light or two :) Today has been a good, productive and informative day. After Mass I cleaned the sacristy a bit. After unpacking a bit more in the house, I met with a parishioner and member of the finance council to talk through the parish finances. That meeting was very good and helpful. Soon I'll convoke the pastoral council and set to work on what needs to be done, both temporal and spiritual. 18 September 2009 On Sunday Sacred Heart parish in Virden and St. Patrick parish in Girard will welcome the Diocesan Administrator, the Reverend Monsignor Carl A. Kemme, a native of Shumway. Although I took canonical possession of my parishes this past September 15th, the memorial of Our Lady of Sorrows, he will install me as Pastor in a public way. Monsignor Kemme will introduce me to the parish (though they've already briefly met me) and will hand me the keys to the church (I think). I believe Bishop Lucas' letter of appointment will be read, after which I will make the profession of faith: I, Daren J. Zehnle, with firm faith believe and profess everything that is contained in the symbol of faith: namely, I believe in one God, the Father, the Almighty, maker of heaven and earth, of all that is seen and unseen. I believe in one Lord, Jesus Christ, the only Son of God, eternally begotten of the Father, God from God, Light from Light, true God from true God, begotten not made, one in Being with the Father. Through him all things were made. For us men and for our salvation he came down from heaven: by the power of the Holy Spirit, he was born of the Virgin Mary, and became man. 
For our sake he was crucified under Pontius Pilate; he suffered, died and was buried. On the third day he rose again in fulfillment of the Scriptures; he ascended into heaven and is seated at the right hand of the Father. He will come again in glory to judge the living and the dead, and his kingdom will have no end. I believe in the Holy Spirit, the Lord, the giver of life, who proceeds from the Father and the Son. With the Father and the Son he is worshipped and glorified. He has spoken through the prophets. I believe in one, holy, catholic and apostolic Church. I acknowledge one baptism for the forgiveness of sins. I look for the resurrection of the dead, and the life of the world to come. Amen. With firm faith I believe as well everything contained in God's word, written or handed down in tradition and proposed by the Church - whether in solemn judgment or in the ordinary and universal magisterium - as divinely revealed and calling for faith. I also firmly accept and hold each and every thing that is proposed by that same Church definitively with regard to teaching concerning faith or morals. What is more, I adhere with religious submission of will and intellect to the teachings which either the Roman Pontiff or the college of bishops enunciate when they exercise the authentic magisterium, even if they proclaim those teachings in an act that is not definitive. I will proclaim the Gospel and Monsignor Kemme will preach, explaining the office of a pastor and the meaning of the rites. After his homily, I will renew the promises I made on the day I was ordained. Monsignor Kemme will address certain questions to me, to which I will respond affirmatively: My dear brother, in the presence of the people whom you are about to receive into your care, I ask you to renew the promises you made at your ordination. 
Are you resolved that under the guidance of the Holy Spirit you will without fail live up to your responsibility to be the faithful co-worker of the order of bishops in shepherding the flock of the Lord? R/. I am. Are you resolved that in praise of God and for the sanctification of the Christian people you will celebrate the mysteries of Christ devoutly and faithfully, and in accord with the tradition of the Church? R/. I am. Are you resolved that in preaching the Gospel and teaching the Catholic faith you will worthily and wisely fulfill the ministry of God's word? R/. I am. Are you resolved that you will bind yourself ever more closely to Christ, the high priest who for us offered himself to the Father as a spotless victim, and that with Christ you will consecrate yourself to God for the salvation of your brothers and sisters? R/. I am. Do you promise respect and obedience to [the Diocesan Bishop and his successors]? R/. I do. May God who has begun this good work in you bring it to fulfillment. After I renew my promises, Monsignor Kemme may lead me around the church to the principal locations of the church: the chair, the tabernacle, the baptismal font and the confessional. At some point I will also place my hands upon the Book of the Gospels and renew the Oath of Fidelity: I, Daren J. Zehnle, on assuming the office of pastor of Sacred Heart Parish, Virden, Illinois, and of Saint Patrick Parish, Girard, Illinois, promise that I shall always preserve communion with the Catholic Church whether in the words I speak or in the way I act. With great care and fidelity I shall carry out the responsibilities by which I am bound in relation both to the universal church and to the particular church in which I am called to exercise my service according to the requirements of the law. In carrying out my charge, which is committed to me in the name of the church, I shall preserve the deposit of faith in its entirety, hand it on faithfully and make it shine forth. 
As a result, whatsoever teachings are contrary I shall shun. I shall follow and foster the common discipline of the whole church and shall look after the observance of all ecclesiastical laws, especially those which are contained in the Code of Canon Law. With Christian obedience I shall associate myself with what is expressed by the holy shepherds as authentic doctors and teachers of the faith or established by them as the church's rulers. And I shall faithfully assist diocesan bishops so that apostolic activity, to be exercised by the mandate and in the name of the church, is carried out in the communion of the same church. May God help me in this way and the holy Gospels of God which I touch with my hands. Monsignor Kemme will be present at the 8:15 a.m. Mass in Virden and at the 10:00 a.m. Mass in Girard for this rite. I know some of my new parishioners are readers of this blog; would any of you like to take pictures for me? 17 September 2009 As you can probably tell by the recent lack of posts, this past week has been a busy blur. After soccer practice Friday evening about thirty of the high school students came to the rectory to help load my personal effects onto the trailers and into the trucks that would take me to Virden the next day. It took about three hours to get everything loaded, most likely simply because there were too many of us there to coordinate things well and I hadn't planned on so many (you'd think I might've learned after Monday's turnout...). Typically in such situations you end up, as it were, with too many chiefs and not enough Indians; we had too few of either but plenty of jesters. Toward the end it became a bit hectic, but I'm glad so many came to help. Saturday morning I celebrated my last Mass at St. Anthony's as Parochial Vicar. After Mass, one of the parishioners gave me a farewell gift of delicious chocolate chip cookies. That morning the soccer team played against Mater Dei high school at Bulldog Field. 
The boys played the best game I've seen them play even though they lost 1-3. The boys came onto the field wearing wristbands made from athletic tape with an overlapping D and Z on them in imitation of my initials, which they found on a wax seal when helping to pack my things. I was very touched and knew then they intended to play that game for me. I was humbled and proud. Just before the game, they presented me with a ball bearing each of their signatures and jersey numbers. It was a most fitting gift. After the game we hopped into the vehicles and made our way to Virden after a mostly uneventful drive. Twelve students accompanied me and three others met us in Virden that evening. It took only one hour to unload the trucks and trailers, much to my surprise. I concelebrated Mass with Father Sperl at 5:30 and joined the parishioners afterwards for a potluck dinner to thank Father Sperl for his ministry over twenty-six years and to welcome me. The parishioners welcomed me very warmly and were very hospitable to the students who came to help me move. It was a good and relaxing evening and I look forward to meeting more of the parishioners in the coming days and weeks. The students and I returned to the rectory and continued unpacking for a bit before retiring for the night. I celebrated Mass at 8:15 Sunday morning and then continued unpacking. That day happened to be the parish's annual fried chicken dinner so my helpers and I attended the dinner and enjoyed a delicious meal. 
I was really impressed with the food and the organization of the event. After more unpacking we left for Effingham so they could finish their homework. The priests of the Diocese gathered in Effingham this week for their annual convocation, which concluded this morning. I am now in the rectory in Virden and have spent a good part of the evening unpacking my library; I still have a way to go. In the morning I have to make an unplanned and quick return to Effingham to pick up some dry cleaning I forgot to pick up before I left this afternoon. I'll also pack up some Christmas decorations that I intended to pick up Sunday afternoon. In the afternoon I'll meet with my secretary and see where our conversation leads. Saturday I will be in Springfield teaching a class on the Creed for the lay ministry formation program and Sunday morning the Diocesan Administrator will come to install me as pastor of these two parishes. Hopefully next week will be a bit slower than this week. This past Monday afternoon the high school students did an excellent job packing up most of my things and placing them in the garage in preparation for the loading of the trucks that will take place this evening following soccer practice (and maybe a quick bite to eat). I still have a few things to finish packing this morning, but all is well underway. My furniture (a bedroom set, two bookcases, two chairs and a couple of side tables) will be loaded this evening, as well. In the morning I will celebrate my final Mass here as the Parochial Vicar. The soccer team has a match here later in the morning. After the game, we will hop in the trucks and make our way to Virden. Several of the students will be accompanying me to help unload the trucks and unpack the boxes to help me get settled in as quickly as possible; I'm one of those sorts that does not function well with clutter and needs to be at least somewhat settled in. 
Yesterday and today I feel rather in a daze, with much to do but uncertain what should be done first and when. My mind and emotions are mixed on this last full day here in this parish. With the recent and delightful news that Fr. Leo Patalinghug of Grace Before Meals, whom I had the pleasure of meeting at World Youth Day 2008 in Sydney and who very kindly links to my blog, defeated the Iron Chef Bobby Flay in a recent episode of the Food Network's Throwdown with Bobby Flay, I can't help but offer a small reflection on food and the presence of God. Wednesday afternoon I was making a batch of the Roman speciality, sauce all'amatriciana - very simple and wonderfully delicious - for the soccer team's pasta night that evening. I had been in the kitchen for some time preparing the sauce when our housekeeper came into the kitchen and said, "It smells delicious in here!" I was a bit struck by her words because at the time I did not notice the smell; I had simply grown accustomed to it and noticed it no longer. To remedy this I went upstairs for a few moments and when I returned to the kitchen the delicious scent could not be missed. She was right. Pancetta, garlic, tomatoes, salt and pepper: what's not to like? Prayer and our recognition of the presence of God in our lives is often like this. Sometimes we grow "used" to God's presence, we grow "used" to the "routine" of prayer and do not notice its effects, until we step outside of God's presence or stop praying. Then, once we enter back in, we realize what we had all the while but did not notice. Let each of us, then, not stay out of the kitchen, but hop right in. 07 September 2009 As the chaos of the past week and a half comes - thankfully - to an end, the chaos of the next few days begins. I am glad to report that I have been able to rest the past couple of days and have happily slept through the last two nights, something I hadn't done in about a week. 
After celebrating a funeral Mass late this morning, I set to work for an afternoon of packing my belongings in preparation for my move to Virden this Saturday. About fifteen of the high school students came to help. I was a bit surprised by the number of them, and very grateful, especially considering most of them stayed for the four hours I had planned to use for packing. We started off really well and kept basically organized, but there were only about six of them at first. Some of them I put to work in my office and packed it they did. They packed more of it than I intended, but all is well and it will save me work later. The others I set to work in my library and we now have that nearly finished. As the afternoon moved along and more students came to help, we became more and more disorganized. I really did not expect so many helpers all at the same time; I expected them to be coming and going throughout the day. I felt rather overwhelmed as several of them would ask me at the same time what else they could pack or sort. Their willingness to help either shows their affection for me, or their readiness to be rid of me ;) I still have to pack my electronic equipment (television, stereo, computer, etc.) and clothes, and a few other odds and ends that I'm not quite sure how to pack. I also have a "junk drawer" or two to sort through (don't we all?) and a closet to go through that has things in it I'm not sure I've set eyes on since I arrived here four years ago. All of the packed boxes have been moved to the garage and are ready to be loaded onto the trucks and trailers Friday afternoon. I'm amazed at the generosity of the students. Of all things they could have been doing on a beautiful afternoon free of school, they chose to spend it helping me pack. I'm also amazed at the speed with which they can work, when they put their mind to it. Tomorrow morning I will drive to Springfield to concelebrate the funeral Mass for my Pastor's father. 
Afterwards I will return to Effingham for a bit of packing and a soccer game. I'm not sure how much blogging will be done during the remainder of the week. If I don't post much, know that it is because I'm packing and saying farewell and not because I'm abandoning the blog. 05 September 2009 ...or just what the Dr ordered (a little tribute to Rocky and Bullwinkle). Last night around eighty high school students turned out for the Bring Your Own Dr Pepper party thrown as a farewell bash. I had more fun last night than I've had all week. It was a much needed night. I don't believe I've ever seen quite as many empty Dr Pepper cans in one place before and I've certainly never played Apples to Apples with so many people all at once. The party began a little after 7:00 p.m. and ended about 10:30 p.m.; many of the students had sporting events early this morning and I couldn't have stayed up any longer if I tried. Those three hours reminded me that even though there is much on my plate, as it were, right now, I can only do one thing at a time. Over the course of the week I failed to carve out time simply for fun, though I daresay I've no idea how I could have fit it in anywhere if I had tried. Pliny the Younger once said: As in my life, so in my studies, I consider it most fitting for a true man to mingle a mild and cheerful spirit with my more serious mood, so that seriousness should not fall away into melancholy nor jest into mere license. Guided by this principle, I now and then interrupt my more serious work with jollity and play. I neglected jollity and play and began to feel quite overwhelmed. It is a lesson I hope I don't soon forget. So, I'm living life right now one Mass at a time. I've celebrated two Masses already today and will celebrate one more in a couple of hours. Tomorrow I will celebrate three Masses and Monday it looks as though I will celebrate two Masses, as well as on Tuesday (though I'm not concerned about Tuesday). 
If you've done the math, you'll see that over the span of three days (Saturday, Sunday and Monday), I will celebrate eight Masses (one Saturday and one Monday Mass, two funerals, three Sunday Masses and one memorial Mass on Sunday night). We announce with profound regret the death Friday, September 3, 2009, in Effingham, Illinois of: Mr. Leo Joseph Enlow, Sr. Father of the Reverend Monsignor Leo Enlow, pastor of St. Anthony of Padua Parish, Effingham, Illinois. Visitation will be at Staab Funeral Home, 1109 South 5th Street, Springfield, Illinois, on Monday, September 7, 2009 from 4:00 p.m. - 7:00 p.m. A prayer service will be at 3:30 p.m. The Concelebrated Mass of Christian Burial will be at Blessed Sacrament Church, 1725 South Walnut Street, Springfield, Illinois, on Tuesday, September 8, 2009 at 11:00 a.m. Priests wishing to concelebrate should be present by 10:30 a.m. 04 September 2009 This morning my day began at 6:40 a.m. with the ringing of the telephone. "Good morning, St. Anthony's," I answered rather groggily, fearing news of yet another funeral. The voice on the other end asked, "Is anyone in the office yet?" "No," I answered, with no small trace of irritation and frustration - and even a bit of anger - in my voice. "It isn't even seven o'clock." "Okay," came the reply, and the caller hung up. Thus I expected nothing less than an absolutely miserable day, and the first few hours of the day did not disappoint this expectation, leaving me to wonder what else could go wrong in one day. But thanks be to God the day improved remarkably once I made my First Friday visits. One of the women I visit is a dear friend and a tremendous woman of faith and prayer. As we said our goodbyes she reached to the table by her chair and handed me a small prayer book. As she did so she told me she wanted me to have it and opened it to a page with two signatures. When I read the signatures I was left quite speechless and unbelievably grateful. 
The first of the signatures reads, "+ John Cardinal Cody, Archbishop of Chicago." The second reads, "All for Jesus M Teresa, MC" Many years back she attended a conference at which Blessed Teresa of Calcutta spoke. She asked the then-living saint to sign her prayerbook and she did. The day can now only get better. I will soon try to finish the first of three funeral homilies and then have the oil changed in my car before soccer practice. After soccer practice some of the players and I will go out for dinner before a long awaited party. When I first announced my transfer, one of the senior soccer players told me he was going to have a BYODP (Bring Your Own Dr Pepper) party for me at his house and we would play Apples to Apples and other such games and have a great time. His parents consented and he is hosting it tonight; many of the high school students plan to attend as a farewell bash. It should be a hoot. 03 September 2009 This afternoon about 3:10 p.m. my pastor's father fell asleep in Christ surrounded by his children. I was privileged to have returned to the rectory just before he died and was with him and his family at the end. He was a good man, filled with much love and humor, and will be very much missed. This morning I learned of the unexpected death of another of our parishioners, who will likely be buried on Monday or Tuesday. She was one of those I visited on First Fridays. Please keep her, and her family, in your prayers. I told the Pastor of her death and mentioned that I thought next week would be as bad as this week. He replied, "It can't be." I hope he's right. I'm not sure who the patron saint is of overly stressed and exhausted clergy, but I suspect Saint John Vianney will certainly be happy to intercede for us. Please ask him to pray for us. Pray that we remain calm and are filled with the strength to see to our duties and to provide pastoral care for our people. 
Pray that my pastor can be of comfort to his siblings as they grieve for their father, who remains with us this morning. Pray that I will be able to pack my belongings by Friday afternoon. Throughout the course of this month, please remember these intentions of the Holy Father Pope Benedict XVI: General: That the word of God may be better known, welcomed and lived as the source of freedom and joy. Mission: That Christians in Laos, Cambodia and Myanmar, who often meet with great difficulties, may not be discouraged from announcing the Gospel to their brothers, trusting in the strength of the Holy Spirit. Lord Jesus, present in the Most Blessed Sacrament, and living perpetually among us through Your Priests, grant that the words of Your Priests may be only Your words, that their gestures be only Your gestures, and that their lives be a true reflection of Your life. Grant that they may be men who speak to God on behalf of His people, and speak to His people of God. Grant that they be courageous in service, serving the Church as she asks to be served. Grant that they may be men who witness to eternity in our time, travelling on the paths of history in Your steps, and doing good for all. Grant that they may be faithful to their commitments, zealous in their vocation and mission, clear mirrors of their own identity, and living the joy of the gift they have received. We pray that Your Holy Mother, Mary, present throughout Your life, may be ever present in the life of Your Priests. Amen. 02 September 2009 Sunday afternoon the parish hosted an open house for me to allow the parishioners to wish me well before things became too chaotic (for which I am very grateful now). The above picture was taken before the last 9:15 a.m. Sunday Mass I celebrated in the parish. The lector and I must have just exchanged a humorous comment when the picture was taken. It was a very touching afternoon, filled with many tears and much laughter. I will miss this parish immensely. 
You can see a few pictures from the reception here. With the preaching of my farewell homily at two Masses prior to the farewell reception, the day was very emotional and quite exhausting. The day ended with dinner with two families followed by the beginning of my packing. Three of the high school students came by to help begin packing my library. After an hour and a half of packing, we were almost halfway through my books. Another group of students and I worked a bit more on the books last night and tonight, and we are now almost finished with my books; I think we have only the ones in my office left. The books are requiring a lot more boxes than I anticipated. The students have proven both good company and good helpers, and for their generous help I am very grateful. As all of this was happening, the health of my pastor's father continued to decline. Today he was placed in hospice care and is with us in the rectory, together with my pastor's immediate family, who have been with us for several days now. Please keep his father, and all of them, in your prayers. His family is a delight and it is always good to have them here, even in these difficult days. I taught my pastor's classes at the high school Monday and Tuesday and had a funeral yesterday, as well. He has a funeral tomorrow and I have another funeral on Saturday while he has two weddings that day. I wish that there were more I could do to ease his schedule, but I'm not sure what else I can do, since I have my own pressing duties to attend to. You might be able to guess that these past few days have been filled with much chaos, as the exceptional circumstances are tended to at the same time as the usual daily work. Consequently, we are both very tired and, at least for me, a bit stressed. I've slept precious little the past three nights and have had only a few moments of "down time" each day. I'm in need of a holiday, but cannot possibly take one. Please keep me also in your prayers. 
I leave the parish next week Saturday and hope to be able to find time to pack up my belongings before then... At the rate things are going that seems less and less likely, but it must be done. On a happier note, a much awaited package arrived for me on Monday from the land of rainbows: Last night we had our first non-tournament soccer match of the season. I'm delighted to say that the Vandalia Vandals fell to the St. Anthony Bulldogs on Bulldog Field. Well done, boys! Our coach wrote up the following for the local newspaper: The St. Anthony High School Soccer Team hosted the Vandalia High School Vandals in Varsity and Junior Varsity soccer matches. In the Varsity game the Bulldogs opened scoring when Riley Westendorf passed to Doug Field who then passed it back to Westendorf who placed an outside shot into the corner of the goal. St. Anthony led 1 - 0 until about the 15:00 minute mark when Doug Field placed a direct free kick past the waiting Vandalia goal keeper. The free kick was awarded after a Vandalia player pushed off a St. Anthony player, and the goal gave St. Anthony a 2 - 0 lead. Riley Westendorf scored again on a cross from Michael Kabbes when the ball deflected off of a Vandalia player. Vandalia then posted their first goal when a St. Anthony player accidentally stopped the ball with his hand inside their penalty box. When a team commits a foul or hand-ball inside their own penalty box, a penalty kick is awarded to the opposing team. The ball was placed 12 yards from the goal and it was one vs one, Codey Norris (Vandalia) vs Gary Hanner (St. Anthony). Codey Norris shot a driving shot to Gary Hanner's right side which was too much for the senior goal keeper. St. Anthony later added three more goals from John Kay, Michael Kabbes, and Aaron Wall to end the match St. Anthony 6, Vandalia 1. The Junior Varsity Match was an evenly matched bout between the same schools. St. 
Anthony was able to squeak out another victory for the JV squad, led by sophomore Ryan Willenborg and junior Hayden Esker. St. Anthony's next match is this Thursday at 4:30 (Varsity) and 6:00 (Junior Varsity) against East Richland. Come out and support the 2009 St. Anthony Soccer Team. Please, Lord, a saint! About Me Father Daren J. Zehnle, J.C.L., K.C.H.S., a priest of the Diocese of Springfield in Illinois, serves as Pastor of St. Augustine Parish, Ashland; Director of the Office for Divine Worship and the Catechumenate; Adjutant Judicial Vicar; and as Diocesan Judge for the Diocesan Tribunal.
###### Summary box - Communities are often poorly involved in the planning and implementation of interventions, yet their commitment is fundamental to control outbreaks in all the phases. - African countries are responding to the COVID-19 pandemic with measures such as restrictions of movement of people, home confinements and states of emergency such as total or partial lockdowns. - But structural challenges and vulnerabilities of health systems and the well-being of people challenge the acceptance of and compliance with this package of measures. - Lessons learnt from responding to Ebola outbreaks in Africa (2014--2016 and 2018--2020) can strengthen community engagement to enhance the community ownership of the COVID-19 pandemic response. - We present 10 lessons learnt from responding to Ebola that African countries should quickly adapt in their response to the COVID-19 pandemic, namely: - involve social scientists early in the response; - mobilise family leaders for surveillance, case detection, contact identification and follow-up and quarantine; - treat contacts with dignity and the empathy they deserve; - communicate laboratory results promptly; - care for the severely ill, while maintaining family connections; - prevent stigmatisation of people and the families of those who recover; - recruit local staff in the response and involve local people to build response structures; - mobilise and involve resistant communities in the response to overcome dissent; - involve grass-roots leaders in the preparation and implementation of response measures; - mobilise media players, including social media networks. ###### Summary box - Health actors, community leaders and communities must co-construct options for COVID-19 response that are acceptable and feasible, and foster commitment of affected communities. 
- This approach calls for an urgent paradigm shift from a predominantly biomedical approach to outbreak response to one that balances biomedical and social science approaches. Introduction {#s1} ============ During public health emergencies, such as the current COVID-19 Public Health Emergency of International Concern, communities are often poorly involved in the planning and implementation of interventions, yet their commitment is fundamental to control outbreaks. African countries are responding to the COVID-19 pandemic with restrictive public health measures such as states of emergency and either total or partial lockdowns. All the countries share similar structural challenges and vulnerabilities, including but not limited to weak health systems and an informal economy, with more than half the population 'making do' or 'getting by day by day' and living from hand to mouth. These vulnerabilities challenge the acceptance of and compliance with the package of restrictive health measures. The structural weakness of health systems in Africa means that few critically ill patients will have access to medical care in intensive care units and the kind of medical technology available in these facilities. Preventing spread of infection is essential. As a result, reduced social interactions and increased physical distancing are a central part of many public health strategies, and this requires co-construction of solutions that are acceptable and feasible, and that foster commitment of affected communities. Lessons learnt from Ebola outbreak response in West Africa and most recently in the Democratic Republic of Congo have demonstrated that the co-construction of sociocultural solutions has fostered commitment of affected communities and has succeeded in enhancing community engagement and ownership of the response. 
Community engagement and co-construction are two complementary notions: the first is the end of a process, and the second is the method or steps to achieve the desired goal. Experiences of community engagement and co-construction during the Ebola response have shown that when communities were involved in problem analysis and the co-construction of solutions, they took ownership of the response interventions and committed to efforts to curb the epidemic. We summarise here 10 successful lessons learnt from Ebola responses that can strengthen community engagement in the fight against the COVID-19 pandemic, specifically with respect to compliance with state of emergency measures, including partial or total lockdowns.

Lesson 1: involve social scientists early in the response {#s2}
=========================================================

During an emergency response, social science experts bring specific expertise in analysing the dynamics of actors and communities engaged in the response in their social, cultural, historical, political and economic contexts.[@R1] In this way, social scientists can build bridges or facilitate dialogue in challenging situations. Further, social scientists can facilitate the co-construction of culturally and epidemiologically appropriate solutions and redefine interventions for increased community ownership. In this way, response measures can account more fully for the human experience and reduce the potential for unintended additional suffering in communities, some of which may be destabilised by fear of disease, death and conflict. There is often a misconception about the homogeneity of communities. Community engagement starts from the premise that community groups are heterogeneous and that the diversity of opinions and sociocultural perspectives must be accounted for if acceptable solutions are to be developed. Epidemics often reawaken old resentments and conflicts within and between communities.
These conflicts can negatively affect the success of public health interventions and hamper their ownership by communities. To find mutually acceptable solutions, responders must account for the unique and varied perspectives of affected communities and be open to finding unarticulated and, at times, unexpected solutions.

Lesson 2: mobilise family leaders for surveillance, early case detection, contact identification and follow-up, and quarantine {#s3}
==============================================================================================================================

Early case detection, contact tracing and contact quarantine require the commitment of families and community leaders; these interventions can themselves be 'violent and destabilizing' and reminiscent of police house arrest. Involving the head of the family, for example, who is the provider and responsible for protecting the family, ensures a quality interlocutor who has the power to mobilise family members. During Ebola, even in situations of extreme reluctance to follow up contacts, it was useful to mobilise a family leader to take on this task with his family. By drawing on his duty to protect, he was able to follow up his family's contacts properly and with the trust of the surveillance teams.

Lesson 3: treat contact persons with dignity and the empathy they deserve {#s4}
=========================================================================

Contacts must be treated with dignity and not as 'contaminating subjects'. Regardless of their place in the social hierarchy, their change of status due to suspicion of disease puts their standing in the family and/or community at risk. It is important to set up a mechanism to facilitate communication between the contacts in quarantine and their family, as well as access to quality psychosocial care provided by experts who speak their language.
Quarantine facilities should be pleasant, ventilated and, if possible, equipped with play areas for small children. Moreover, it is important to ensure that meals for people in quarantine are better than those provided by their families, thus alleviating the traumatic experience of quarantine. Experience from previous epidemics highlights how attending to these aspects is critical to prevent escapes and promote acceptance of quarantine. If resources permit, it is advisable to provide some additional 'treats' such as drinks, chocolates, cookies and balloons for the children of those in quarantine.

Lesson 4: communicate laboratory results promptly to the patient {#s5}
================================================================

The diagnosis of COVID-19, like that of Ebola, requires confirmation by a biological test---RT-PCR (a method of molecular biology)---which takes at least 4 hours to complete. Added to this is the time needed to transmit the results to experts, the authorities and finally the patient. As a result, patients may only learn the result of the test after 24 hours in urban areas, and sometimes longer in rural areas. For the patient and family this waiting period is filled with uncertainty, causing disruption and anxiety. It is strongly recommended to establish a rapid process for communicating the results to doctors in the field, to relieve the anxiety of patients and their families and to initiate protective public health actions very quickly.

Lesson 5: care for the severely ill and maintain family connections {#s6}
===================================================================

COVID-19 gives rise to a spectrum of illness, with around 80% of patients experiencing mild to moderate illness. Those who become severely ill, and who have access, may receive hospital care.
Hospitalisation of patients means transferring them from a familiar environment to a stressful one; the medical and paramedical personnel who provide care are strangers and wear personal protective equipment, such as goggles and surgical masks, and this can reinforce disorientation, anxiety and fear. There are multiple uncertainties facing both patients and their families, not least uncertainty regarding the progression of the disease. Patients and their families need proactive, clear information about the hospital setting and what to expect. The way in which the physical environment is structured communicates a lot to patients and families. Ensuring a toilet is easily available, having dedicated waiting rooms with provision for young children and paying attention to privacy needs are small but important aspects. At an interpersonal level, patient-centred communication can help reduce anxiety and isolation. Getting updates from the patient beyond their clinical condition, encouraging them to get well, smiling behind the protective mask and speaking in the patient's language all contribute to providing reassurance and quality humane care for the hospitalised person. It is also helpful to keep the patient connected with relatives by allowing phone calls and safe visits by a selected family member where feasible.

Lesson 6: prevent stigmatisation of people who recover and their families {#s7}
=========================================================================

Fear of the disease often leads to stigmatisation and 'scapegoating' of patients and their families. Preventing stigma, and acting to counter it, helps reduce the negative effects of the epidemic on social cohesion. The mobilisation of psychologists at the beginning of the epidemic is an effective means of mitigation. The involvement of local authorities and leaders helps protect and support victims of stigma and reassure the community.
In addition, there are endogenous reintegration mechanisms that are important to explore; these mechanisms are very useful outside crises for resolving community disputes and restoring peace and forgiveness. People who have recovered from COVID-19 also need the acceptance of their communities to prevent stigma.

Lesson 7: recruit local staff in the response, including local people to build the structures of the response {#s8}
=============================================================================================================

The management of a response is very resource intensive. For a population, and especially for young people who are facing unemployment and whose socioeconomic demands are not always met, the response can be an opportunity to find jobs and relieve their suffering. During the Ebola outbreak response, partners often recruited young people and women into the response services; for example, youth and women were employed in the neighbourhoods where response structures (treatment units, points of control/points of entry) were built. This helped facilitate community acceptance of these new structures, preventing reluctance, vandalism and violence against the health teams.

Lesson 8: mobilise the most resistant people in the response to overcome dissent {#s9}
================================================================================

Fear and frustration can provoke popular uprisings. However, as in any social movement, there are leaders who direct the hostilities. During Ebola, many uprisings and episodes of reticence and resistance were defused by recruiting these leaders into the response. They were thus able to control their own groups, ensure the security of teams and facilitate access to communities for public health activities. Young people can be involved in monitoring and securing their areas of residence. This helps prevent risk taking, recklessness and vandalism.
Lesson 9: involve grass-roots leaders in the preparation and implementation of response measures, including containment and emergency preparedness {#s10}
==================================================================================================================================================

It is essential to be able to discuss the conditions and operationalisation of restrictive measures with community leaders, so that solutions can be co-constructed with the communities. Involving religious leaders may strengthen spiritual tranquillity and, to some extent, the predisposition to fight the disease as a spiritual battle. This tranquillity is very often sought among the supporters of socioreligious institutions, in localities considered sacred and depositories of mystical powers that can change the course of events through prayers and ritual sacrifices. Failing this, health measures such as a state of emergency and lockdowns can be considered to be in the sole interest of the authorities and political leaders. Some credible and influential community leaders are also very useful in managing rumours, misinformation and accountability in the face of unfulfilled promises by certain actors, which can undermine community engagement.

Lesson 10: mobilise media players and take social networks into account {#s11}
=======================================================================

African populations in general remain closely linked to the traditional media (radio and, to a lesser extent, television). Treating media actors as partners in tackling pandemic challenges allows response actors to properly engage them, with messages disseminated through their channels and appreciated by the communities. Involving the media as partners also provides access to their own social networks, because most people involved in the media are also heavy users of social networks.
Finally, engaging media actors and taking social networks into account enables the activation of the media communication monitoring function, which remains a challenge during public health emergencies.

Conclusion {#s12}
==========

Given the experience of responding to Ebola epidemics in Africa, it is imperative that communities take ownership of the response to COVID-19. Health actors and authorities must co-construct solutions to address COVID-19 with community leaders and communities. However, a 'one size fits all' approach to community engagement is likely to fail. Each community is unique, and engagement must be contextualised to the affected communities of each country. This cooperative engagement with communities calls for an urgent change in the approach to health emergency response. All member states, health authorities and humanitarian actors are urgently called on to move quickly from a dominant biomedical design of public health emergency response to a public health design that balances biomedical paradigms with those of the social sciences.

We thank Dr Nina Gobat and Ms Maria Caterina Ciampi for reviewing the manuscript.

**Handling editor:** Seye Abimbola

**Twitter:** \@AnokoJulienne, \@MR_Belizaire

**Contributors:** JNA, BRB, ABD and BD compiled the Ebola response lessons learnt. ABD, MRB, MK and MHD reviewed the concept. JNA, MYN, ZY, ISF and AT reviewed the concept and tailored it to the COVID-19 response. JNA wrote the first draft and AT extensively reviewed the draft. All authors have reviewed and approved the final manuscript.

**Funding:** The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

**Competing interests:** None declared.

**Patient consent for publication:** Not required.

**Provenance and peer review:** Not commissioned; internally peer reviewed.

**Data availability statement:** No additional data are available.
"Before I go any further, let me state emphatically that I am not out to dissuade anyone from wearing a bike helmet. Although I am about to express my perception that the facts about helmets often are misinterpreted, I believe that helmets confer some obvious safety benefits and that there's a certain wisdom to wearing one."

This does not sound like someone telling you not to wear a helmet, around town or otherwise.

Quote
Folks who don't know better may read his opinion (a former Bicycling editor even) and presume that a helmet is unnecessary because of a few isolated, unproven studies.

They might... if they have poor reading comprehension skills or only read the title.

Quote
Frankly I found his argument mainly being that: bicycle safety is really outside the control of cyclists and because drivers are the root cause, there's no reason to wear a helmet.

That was not my takeaway at all. He discussed how important it is for cyclists to be aware of their surroundings because their own safety is very much within their control; primarily in the form of avoiding accidents through awareness and visibility. I thought the main argument was that the available data does not provide the definitive answers that some people think it does, and that the statistical risk of not wearing a helmet, put into perspective with the risks we face every day, is not as terrifying as some people think.

Quote
His 'statistics' are amazingly isolated. I'm shocked folks would take single casual 'studies' as any kind of proof, particularly with zero causation established. I mean one guy riding a bike with no helmet, helmet, and a wig is a controlled experiment?!?

Agreed. I don't put a lot of weight into some of the studies cited. Abe did a great job of analyzing some of these studies and I wish the author had gone into some of that detail. But in the author's defense, he never presented these studies as "proof" of anything.
Quote
I don't like the article because it's trying to conflate disparate considerations/challenges all the while presuming causation. Yes, infrastructure often needs improvement. So does driver education, behavior, and awareness. Neither of those issues concludes that one should or shouldn't wear a helmet. Unfortunately, a young reader might take away the point (intended or not) that a helmet is useless... or even worse for you. That's a huge disservice. Regardless of drivers, defects, obstructions and debris occur on all roads at random times. Not sure why that 'holds no water'. I'm sure you could look up what the statistics of these occurrences are. And I would wager they are not an exceedingly rare occurrence. Again, ignoring driver impact, a helmet can affect how much damage is transferred to your skull should you crash as a result of NOT making contact with a driver. Why would you choose to not wear one? Because a driver will now hit you??

It holds no water because you took an argument based on statistics and refuted it with "surprises happen". Regardless of whether the data you're making assumptions about here exists, and regardless of whether or not it says what you would wager it to say, "surprises happen" is not helpful. Unfortunately, it is how our brains make decisions when it comes to things like safety.

So the article was based on statistics and yet you admit the statistics were questionable. Which is it? What is your opinion? Are cyclists safer without helmets or with? Why?

Well written article, but damn, having a family and kids and doing an experiment like this to prove a point? Dedicated, but kind of nuts.

Quote
While I used to rip it at 45 miles per hour, now I'm far more cautious—anything over 30 feels a bit dicey. A bike helmet can deceive riders into thinking they have a cloak of invulnerability that isn't actually there, and at least one study has confirmed how riders change their behavior when the hat comes off.

I never thought of myself as a big risk-taking cyclist, but without a helmet I handle certain situations differently. I've never experienced going 45 on a bike, and 30 without a helmet... feck, I would have a hard time wanting to go above 20 mph!

With regards to safety studies, how many accidents where the cyclist picks himself up, dusts himself off and rides away make it into the "studies" showing helmet use doesn't confer benefit one way or the other? How many regular cyclists here with brutal close calls have been contacted and participated in some pollster's questionnaire?
The writer is relying on his personal anecdotes here, and yet there are untold thousands of personal anecdotes where the rider was obviously better off with the helmet.

I'm not sure the helmet is the actual reason we have crappy bike infrastructure, low respect for cyclists, low cycling participation, and worse safety outcomes. Or that getting rid of helmet laws and culture will affect any of these things. In the cities he mentions, cycling is a natural and normal part of the culture, whereas here in many parts of the US, drivers are exposed to cyclists in annoying and awkward ways. If American cities decided to increase population density, make work a rideable distance from home and taxed the shit out of cars, you might end up with a good biking culture.

While anecdotes are not the same as data, here is my anecdote and the lessons I learned. I was cycling downtown through a major intersection on a green light. Driver plows on through the intersection and T-bones me at 50+ km/hr. My head smashes through the windshield, I bounce off and land 30+ feet away. Serious traumatic brain injury. Six weeks in hospital. One month memory loss. Had to relearn how to control one leg. Recovery better than anyone predicted, but some effects remain. Wearing a helmet was the only reason I was not killed or left with a non-recoverable injury.

Lessons: The legal system takes into account helmet use when determining fault and insurance compensation. Even if the driver was 100% at fault for the collision, you may be partly responsible for your injuries if you were not taking all appropriate precautions (i.e. wearing personal protective equipment). This is particularly important where helmet use is mandatory. The argument of helmets vs cycle infrastructure is misleading, as they are not the same approach to risk management.
Like safety in the construction industry or professional motor sports, helmets (personal protective equipment) are the last line of defense if all other risk mitigation measures fail. Larger risk reductions are made by physically removing the interactions between cyclists and hazards (i.e. cars and trucks). In places where these hazards have been almost completely removed (Holland, etc.), the resultant risk to cyclists may be so small that riding without a helmet is reasonable. I don't believe this is the case in North America (yet?).

I am not a proponent of mandatory helmet laws, as they possibly reduce the number of people cycling. They also give ammunition to drivers who are looking to cast all cyclists as law breakers/irresponsible/justified in being run over, etc. I think it is noble if helmet-less cyclists want to sacrifice themselves to prove that cycling is safe. Dead people can lead to better cycling infrastructure. However, because not all hazards are foreseeable and avoidable, it is prudent to wear your personal protective equipment, just like PPE on construction sites or a seat belt while driving. A helmet only has to save your life once to make it worth wearing every time you cycle.
Trade Mark (3)

Often plays characters who derive humor from awkward situations.

Often plays characters who are oblivious or lack self-awareness.

Trivia (30)

Attended and graduated from Denison University in Granville, Ohio (1984).

Has two children: Elisabeth Anne Carell (b. May 2001) and John Carell (b. June 2004).

Married to actress/writer Nancy Carell, whom he met while both were writer/performers with the famed Second City comedy troupe in Chicago, Illinois.

When he attended the premiere for Bruce Almighty (2003), he came to the screening with the impression that his scenes were left on the cutting-room floor. However, his scenes were in the film, and he was pleasantly surprised.

His paternal grandfather, Ernest Caroselli, was an Italian emigrant from Bari, Italy, and his paternal grandmother, Marie G. Egle, was of German ancestry. Steve's maternal grandparents, Zigmund Koch and Frances Victoria Tolosky, were of Polish origin. Steve's father was born under the surname "Caroselli", which he changed to "Carell" before Steve was born.

Was once a reporter for The Daily Show (1996).

Provides the voice of Gary on "The Ambiguously Gay Duo" cartoons on Saturday Night Live (1975).

Originally wanted to be a lawyer, but he reached a question on an application form that said, "Why do you want to be a lawyer?", and he could not think of anything.

Has the rare distinction of being in two movies that opened on the same day in the United States - Anchorman: The Legend of Ron Burgundy (2004) and Sleepover (2004) (July 9, 2004).

Was on three failed sitcoms before he starred on NBC's version of the sitcom The Office (2005).

Worked the overnight shift in a Store 24 in Maynard, Massachusetts, and takes many of his characters from this experience.

Grew up in Newton, Massachusetts.

Editor-in-Chief of his high school newspaper, Newton South's "The Lion's Roar".

Attended Middlesex School in Concord, Massachusetts.

The scene in The 40-Year-Old Virgin
(2005), where Andy has his chest hair removed, required five cameras set up for the shot. It was Carell's real chest hair which was ripped out in the scene. Carell told director Judd Apatow just before shooting the scene: "It has to be real. It won't be as funny if it's mocked up or if it's a special effect. You have to see that this is really happening." The scene had to be done in one shot.

Was a member of Burpee's Seedy Theatrical Company, Denison University's improv-comedy group and the oldest collegiate improv group in the country.

Is one of 115 people invited to join the Academy of Motion Picture Arts and Sciences (AMPAS) in 2007.

He and Jim Carrey were both ice hockey goalies in their childhood.

Worked for a brief period at a post office in Massachusetts, where he delivered mail using his own car since the post office did not have mail carrier vehicles. When he resigned from the position to move to Chicago, for months afterward he continued to find undelivered mail under his car seats.

He was nominated for a 1993 Joseph Jefferson Award for Actor in a Revue for "Truth, Justice, or the American Way", at the Second City Theatre in Chicago, Illinois.

He was nominated for a 1994 Joseph Jefferson Award for Actor in a Revue for "Are You Now or Have You Ever Been Mellow?", at the Second City Theatre in Chicago, Illinois.

Owns and operates the Marshfield Hills General Store in Marshfield, Massachusetts, where he has a summer home.

Received a star on the Hollywood Walk of Fame at 6708 Hollywood Boulevard in Hollywood, California on January 6, 2016.

The first rock concert he ever attended featured Jethro Tull.

Steve Carell references the same quote by Abraham Lincoln in two films. In Dinner for Schmucks, "Our countries are not enemies, they are friends" echoes Lincoln's "We are not enemies, but friends," and in Irresistible, "...and appeal to our better angels", which he attributes to Lincoln yet misquotes from Lincoln's "the better angels of our nature."
Personal Quotes (15)

I have no idea where my pathetic nature comes from. If I thought about it too long, it would depress me.

I think a character in a comedy should not know they're in a comedy.

I don't think of myself as funny - I don't fill up a room with my humor... I would fail miserably as a stand-up comedian. You can't seem to have any sort of inhibition. Or shame. Or absolute horror at your own physical presence.

I know I'm not a woman's fantasy man; I don't have to uphold this image of male beauty, so that's kind of a relief in a way.

When they approached me about who I would want writing Get Smart (2008), I suggested B.J. The episodes that he's written walk the line between intensely funny and slightly offensive. But they always fall on the side of being funny. I also suggested him because I think he's going to be someone I'll be working for someday, and I want to get on his good side now - on his The Office (2005) co-star and co-writer B.J. Novak

[on life since The 40-Year-Old Virgin (2005) made him a movie star.] I have a helluva lot more money than I used to! That's the only perceivable difference. I will definitely be able to send my kids to college now, which was a question before. (2007)

[on playing Maxwell Smart in the upcoming Get Smart (2008)] I am sort of billing it as a comedic "Bourne Identity". [referring to The Bourne Identity (2002)] (2007)

[on being a father] I'm already seeing my daughter's cynical sense of humor and she's six! I bought these shoes, and I'm thinking I'm a cool dad, I'm going to show her my new half-boot shoes. So I said, "What do you think of these?" And she's like, "Mmm no, not liking them." (2007)

(2005, on a pre-acting job) I worked the third shift at a convenience store for a few months. At four in the morning most people are looking for cigarettes, porn or one of those shriveled, angry-looking hot dogs from the rotating grill. One night, though, a woman came in during the wee hours.
She looked a bit distraught as she paid at the counter. She paused for a moment, looked up at me and asked, "Do you think I'm pretty?" As it turned out, she had just walked in on her boyfriend with another woman. We proceeded to have a lengthy conversation about a person's self-worth, fidelity, trust and relationships. And then I treated her to a slushy blue frozen drink.

(2005, on originally wanting to be a lawyer) Being a lawyer just sounded good to me. Kind of like how being a doctor or being an astrophysicist or a microbiologist sounds good. But it took a complete turn when I was filling out my law-school application. I couldn't answer the essay question, which was, Why do you want to be an attorney? I had absolutely no idea. Uh, to make a lot of money and sue people? To be hated based solely on my job title? I couldn't come up with one good reason. That ended my law career rather quickly.

(2005, on performing announcing duties for the video games Outlaw Golf and Outlaw Volleyball) Who wouldn't want to get paid for spending a couple of hours in a sound booth? I went in thinking, Yeah, free money! But it was so much harder than I thought it'd be. There are thousands of possible scenarios in a video game, and you have to do lines for all of them. It was pretty taxing. Then again, it's not like I was chopping down trees or anything. That sounds pretty whiny, doesn't it? "I had to say so many words. It was haaaard! Waaaah!"

[on his character from The Daily Show (1996)] In my mind, he was a guy who had done national news reporting but had fallen from grace somehow and was now relegated to this terrible cable news show and was very bitter about it and thought he was better, but he wasn't.

[on whether he feared being typecast in comedy roles] I've done big commercial movies and little independent movies, and I've played jerks and suicidal Proust scholars, and I feel like I've been really lucky to play all the different types of characters.
So, no, I don't worry about that. If I do get pigeonholed, it's nothing I can really control.

[on his surprise at hearing so much laughter in Foxcatcher (2014)] The way Bennett [Miller] describes the humor is that it's funny until it's not anymore, and if this story didn't have the outcome that it does, it could just be an absurd, ridiculous story. But the fact that it ends up where it does, and that there's this pall that hangs over the entire narrative, changes everything. But some of it is so absurd you can't help but laugh because it seems too strange to be true.

[on male bonding in Foxcatcher (2014)] It's about offering up yourself to vulnerability. I think Bennett presents all these things in a very open way and allows the viewer to draw their own conclusions. He was finding it as we were finding it, and I think that's an extremely exciting aspect of working like this.
569 P.2d 575 (1977) 279 Or. 595 Albert TROUTMAN and Ogden Farms, Inc., a Corporation, Respondents, v. Ralf ERLANDSON, Appellant. Supreme Court of Oregon, Department 1. Argued and Submitted July 7, 1977. Decided September 27, 1977. *576 Robert J. Morgan, Milwaukie, argued the cause for appellant. Ralf H. Erlandson, Milwaukie, filed briefs in pro per. Gerald R. Pullen, Portland, argued the cause and filed the brief for respondents. Before DENECKE, C.J., and HOLMAN,[*] TONGUE and LENT, JJ. TONGUE, Justice. This was an action for contribution. Plaintiffs' complaint alleges that plaintiffs had paid a $44,000 obligation owed jointly by plaintiffs and defendant and that defendant was obligated to "make contribution of one-third of said debt * * * or the sum of $16,500."[1] Defendant's answer included, in addition to a general denial and three affirmative defenses, a counterclaim for $50,000 in damages alleging, among other things, that plaintiffs "know full well that this defendant was not to be responsible for any part" of the $44,000 obligation and were "attempting to use this litigation as a form of coercion" to "cause defendant to be unable to pursue his rights and remedies in protecting his property rights * * *." The case was tried before a jury, which returned a verdict in favor of plaintiffs.[2] Defendant appeals from the resulting judgment. Defendant's principal assignment of error is that the trial court erred in failing to grant defendant's motion for mistrial based upon alleged misconduct by plaintiffs' attorney in asking an improper and prejudicial question during his cross-examination of the defendant. In order to properly decide this contention it is necessary to consider the context in which that question was asked. *577 It appears that the sum of $16,500 demanded by plaintiff as a "contribution" from defendant arose from two promissory notes representing loans to a partnership between defendant, an attorney, and plaintiff Troutman. 
That partnership apparently owed over $1,000,000 in debts and was the subject of a suit filed by plaintiff Troutman against defendant for dissolution and an accounting. Defendant testified on direct examination, in support of his counterclaim for damages, that he had told plaintiff Troutman that he was negotiating with one Dale Fackrell to "come up with $140,000" to pay on the partnership indebtedness; that at that time creditors of the partnership were threatening foreclosure and that if he had been able to obtain the $140,000 he would have been able to "remove" the threat of foreclosure and then "acquire a percentage ownership" in the partnership. Defendant then testified that "[b]y filing this action * * * what Mr. Troutman did was to wipe out my opportunity to find an investor who would come up with $130,000" and that this "business opportunity" was "of value" to him "in excess" of $150,000. In the cross-examination of defendant on his claim that "filing this lawsuit caused you to be unable to secure $140,000 from Mr. Fackrell," plaintiffs' attorney asked the following question: "Now, in truth and fact, sir, is it not true that your own client, Mrs. Castor, sued you in this very courthouse in this last year for fraud, defrauding her, and let me finish my question, sir, if you allow me, and secured $30,000 in punitive damages and $9,000 in general damages against you for defrauding her?" Defendant objected to that question and moved for a mistrial. That objection and motion were then argued in chambers. Plaintiffs' attorney contended that to impeach defendant's claim that the filing of this action "wiped out his opportunity to find an investor for $140,000 * * * we would show that this is not the real truth; that there would be other lawsuits that could affect that ability;" that "Mr. 
Troutman wasn't the only person with lawsuits against him," and that "[i]f I can't bring that in, they [the jury] are going to think that it was only Troutman that prevented you from getting a loan." In response, defendant Erlandson contended that: "He's misstated the facts. Of course, the Castors were never my clients. That lawsuit would take a good deal of explanation and is entirely collateral to this. He injected it strictly to prejudice the jury against me, to bring up a false issue and to deny me the right to a fair trial. He deliberately misstated the facts, saying weren't Castors my clients, and he knows better than that or should know better than that." (Emphasis added). and that: "He's going to force me, your Honor, to go into completely the Castor thing and there's no way I can keep from going into it without further prejudicing myself." The trial court then ruled: "That's a matter of choice for you. I am denying the motion." Plaintiffs' attorney then said: "All right. I will leave it then." Upon resumption of the cross-examination before the jury, plaintiffs' attorney did not repeat the question objected to, but proceeded to ask questions on other matters. Upon completion of the cross-examination defendant Erlandson did not "go into" the "Castor thing," but offered no re-direct testimony and then "rested." He did not call Mr. Fackrell as a witness. In his briefs on this appeal defendant Erlandson charges that: "* * * plaintiffs' counsel knew his statement was erroneous as stated, and pursued the question solely for its highly misleading and prejudicial effect." and that: "He intentionally tainted the jury * * *." Thus, according to defendant, "* * * appellant's first assignment of error concerns two basic and closely related questions: *578 "(1) Whether an attorney may with impunity ask suggestive and highly prejudicial questions, known by him to be erroneous as worded.
"(2) Whether an attorney may examine a witness as to matters normally relevant, but known by the examining witness [attorney?] to be in fact irrelevant. "As stated in appellant's brief, counsel for appellee was fully aware of the Castor case; that the Castors were not clients of appellant, and that recovery was rendered on the theory of failure to disclose, not active fraud. Counsel was also fully aware that Mr. Fackrell, appellant's primary hope for raising $140,000.00, was fully aware of the Castor litigation and was unaffected thereby. "* * * His own explanation of his purpose in asking the question was to suggest other causes for appellant's inability to borrow $140,000.00 (TR 112). The purpose is laudable on the surface, but in view of counsel's knowledge of its actual irrelevancy, as opposed to an abstract situation where counsel has a reasonable belief in a question's relevancy, the question here complained of was asked without justification or excuse, was known to be inconsistent with the trust [truth?], and was designedly misleading."[3] (Emphasis added) Thus, defendant appears to concede that the question asked by plaintiffs' attorney would not have required a mistrial if he had a "reasonable belief" in the "relevancy of the question," but contends that in this case the question was not only improper, but required a mistrial because it was "known" to be "inconsistent with the truth" and was "designedly misleading." These are strong charges to be leveled by one attorney against another, unless supported by the record. The difficulty, however, is that defendant's charges (which are denied by plaintiffs' attorney) are not supported by the record in this case. It may be that the Castors were not clients of defendant Erlandson, but nothing in the record supports his charge that plaintiffs' attorney knew that fact. 
Neither is there anything in the record to support the charge that plaintiffs' attorney knew that the Castor case was not for "active fraud," but for a "failure to adequately disclose." On the contrary, this court may take judicial notice of its recent decision in Castor v. Erlandson, 277 Or. 147, 152-53, 560 P.2d 267 (1977). It appears from that decision that the complaint in that case alleged "defendant [Erlandson] represented [to Castor] that the Jacksons could convey good title but that this was false and defendant knew it was false," as well as an allegation that "defendant had a duty to disclose the full extent of the indebtedness" and that a jury verdict against defendant Erlandson for general and punitive damages totalling $38,500 was affirmed by this court. In State v. Bateham, 94 Or. 524, 186 P. 5 (1919), this court considered a similar problem. In that case defendant called character witnesses who testified that his reputation as a moral, law-abiding man was good. On cross-examination the prosecuting attorney, over defendant's objection, was permitted to ask each witness in substance if he had ever heard that the defendant had taken "improper liberties" similar to that described in the indictment with another little girl, named in the question. Each witness answered in the negative. On appeal defendant contended that this was error because the prosecuting attorney informed the jury by innuendo that defendant was guilty of, or at least charged with, other like crimes. In rejecting that contention, in the absence of some showing that the prosecuting attorney acted in bad faith in asking those questions, this court said (at 530-32, 186 P. at 8): "* * *. Here the moral character of the accused was drawn directly in question. He himself invited inquiry about it *579 by putting in testimony in general terms about his good character. 
Certainly the prosecution legitimately could ask the general cross-interrogatory if the witness had ever heard of the defendant's doing acts of the same kind as that charged. "* * *. "It is quite impossible definitely to fix the boundary between pettifoggery on one hand and proper cross-examination on the other, so as to govern all cases with exactness. It must be left to the discretion of the presiding judge, acting in the light of the circumstances of the case before him, subject to reversal if an abuse of discretion appears. "* * *. "No abuse of the court's prerogative appears. It is urged that the district attorney did not expect an affirmative answer to any such question, but there is nothing in the record by which we can determine that matter. If, in truth, he asked the questions solely for the purpose of intimating to the jury that the defendant was guilty on other charges of like nature, which he could not prove directly and which had no foundation within his knowledge or information, he was guilty of a most contemptible, unprofessional piece of pettifoggery. It would be beneath the dignity of any practicing lawyer, much more of a public prosecutor, and should lead to a reversal. But that situation is not made to appear and the assignment of error on that point must be disregarded." The rule of State v. Bateham, supra, permitting such cross-examination has been subsequently reaffirmed in State v. Harvey, 117 Or. 466, 472, 242 P. 440 (1926); State v. Matson, 120 Or. 666, 671, 253 P. 527 (1927); State v. Shull, 131 Or. 224, 229, 282 P. 237 (1929); State v. Frohnhofer, 134 Or. 378, 383, 293 P. 921 (1930); and State v. Linn, 179 Or. 499, 514, 173 P.2d 305 (1946). It is also the majority rule in other jurisdictions which have considered the matter when such questions are asked in good faith. See Annot., 71 A.L.R. 1504, 1521, 1541-43 (1931); Annot., 47 A.L.R.2d 1258, 1280, 1316-20 (1956). See also McCormick on Evidence 456-58, § 191 (2d Ed. 1972). 
Although the asking of such questions in bad faith may be ground for reversal, it is generally held that the good faith of the cross-examiner is, in the first instance, to be presumed, i.e., that there is a presumption of good faith in such cases. See Annot., 47 A.L.R.2d supra, at 1319. The inherent danger of prejudice in permitting such cross-examination of character witnesses in criminal cases would appear to be at least as great, if not greater, than the danger of prejudice from the asking of the question on cross-examination in this case. In addition, the possible relevance of the question asked in this case would appear at least as great, if not greater, than the relevance of such cross-examination in many criminal cases. Here, to paraphrase Bateham, the question whether the filing of this lawsuit prevented defendant from securing $140,000 from Mr. Fackrell was put "directly in question" by defendant's testimony on direct examination. It follows that plaintiffs' attorney "legitimately could ask" if another lawsuit, and one based on fraud, had also been filed against defendant resulting in a judgment for an even larger sum of money. In such event, the jury could properly infer that such a lawsuit, rather than this lawsuit, was the reason that defendant was unable to get Mr. Fackrell to "put up" the $140,000. As for the possibility of "pettifoggery," as also discussed in Bateham, such as in the possible event that plaintiffs' attorney knew that no such lawsuit had been filed and asked that question in bad faith, it would appear that in this case, as in Bateham "there is nothing in the record by which we can determine that matter." As previously noted, however, it does appear that there was another lawsuit against defendant for fraud which resulted in a judgment against defendant for $38,500. 
Yet defendant, in statements to this court in his briefs, says that the other lawsuit did not involve "active fraud" and that plaintiffs' attorney was "fully aware" of that *580 alleged fact. Defendant also states in his briefs that plaintiffs' attorney was also "fully aware" that Mr. Fackrell was "fully aware of the Castor litigation and was unaffected thereby," despite the fact such statements go outside the record and that no such contentions were made in the trial court. Under these circumstances, it would appear that defendant is not in the best of positions to accuse a fellow attorney of bad faith. In the trial court defendant's primary contention was that "the Castors were never my clients" and that plaintiffs' attorney "knows better than that or should know better than that." If the prejudice to defendant arose from the fact that the plaintiff in the pending action against him was not a client, but was someone other than a client, defendant might well have removed any such prejudice by offering evidence to that effect. When, however, defendant stated to the trial court that "[h]e's going to force me * * * to go completely into the Castor thing," the plaintiffs' attorney stated that he would "leave it there," and did not demand an answer to the question which was the subject of defendant's objection. Defendant then chose not to testify that "the Castors were never my clients" or to attempt any explanation of the pending action for fraud. According to plaintiffs, what defendant was trying to do in the trial court was "to keep from the jury that another lawsuit for fraud was actually pending against him," for the reason that "a judgment of $38,500 for fraud is much more harmful to one's credit than law actions which may never result in judgment." Whether or not that was defendant's actual purpose, the trial judge could reasonably have drawn such an inference under the record in this case.
Under these circumstances, we think it proper to hold, as held in Bateham, that the question of whether plaintiffs' attorney was guilty of the serious charge of bad faith was a matter to be "left to the discretion of the presiding judge, acting in the light of the circumstances before him," and that there was "no abuse of the court's prerogative" in this case. This result is also consistent with the established rule in appeals from the denial of motions for mistrial based upon alleged improper arguments or other statements by counsel in jury cases. That rule, as stated in Kuehl v. Hamilton, 136 Or. 240, 244, 297 P. 1043, 1044 (1931) is that: "Control over the argument of counsel is intrusted largely to the discretion of the trial judge. In Huber v. Miller, 41 Or. 103, 68 P. 400, Mr. Justice Wolverton said: `It is usually however, within the discretion of the trial judge to determine whether counsel transcend the limits of professional duty and propriety in this particular, and the exercise of such discretion is not the subject of review, except where they are permitted to travel out of the record, or to persist in disregarding the admonitions of the trial judge, or to indulge in remarks of a material character so grossly unwarranted and improper as to be clearly injurious to the rights of the party assailed.' "It is unnecessary to add further citations to the numerous ones assembled by Judge Wolverton in support of his above statement. Obviously the judge who presides over the trial, and who becomes familiar with its atmosphere, is best able to determine whether an excursion into a forbidden field is prejudicial, the extent of the injury, if any, and what remedies must be applied to undo the harm." Our decision in Walker v. Penner, 190 Or. 542, 554, 227 P.2d 316 (1951), the only case cited by defendant on this assignment of error, states substantially the same rule, citing Kuehl v. Hamilton, supra, with approval. 
This result is also consistent with the general rule that the right of cross-examination extends not only to any matter stated in the direct examination of a witness, but also to any matter "connected therewith," and that "great latitude" should be allowed in cross-examination to *581 include other matters which tend to limit, explain or qualify them or to rebut or modify any inference arising from facts or matters stated on direct examination.[4] Also, we have held that the scope of permissible cross-examination "rests largely in the discretion of the trial judge."[5] We recognize the danger that the rule of "great latitude" in cross-examination may, on occasion, be abused by lawyers who, out of "pettifoggery" and in bad faith, may ask questions designed to suggest or intimate to the jury matters that are not properly admissible and which may be prejudicial.[6] It may be in such a case that "pettifoggery" and bad faith in the asking of such a question appears on the face of the record or is otherwise obvious, so as to require the granting of a mistrial. This is not such a case, however, in our opinion. It may also be that in some such cases the trial judge should, on motion for mistrial and out of the presence of the jury, question the attorney as to whether he has credible grounds to ask such a question. See McCormick on Evidence 458 n. 79, § 191 (2d ed. 1972). In this case, however, no such request was made by defendant in the trial court and no such contention was made by him in this court. Under all of the circumstances of this case, and for the reasons previously stated, we hold that the trial court did not abuse its discretion in denying defendant's motion for a mistrial.[7] NOTES [*] Holman J., did not participate in this decision. [1] By a second cause of action plaintiffs also sought to recover $1,200 as the reasonable value of furniture "had and received" by defendant from plaintiffs and for which defendant had refused to pay. 
[2] That verdict was for the full amount of the prayer of the complaint and included $16,500 as "damages" and $1,200 for "furniture." [3] Similarly, defendant charges in his brief that: "Mr. Pullen knew * * * that Mr. D. Fackrell knew about the Castor litigation, and that it was the institution of suits by plaintiffs which caused Mr. Fackrell to discontinue negotiations for the $140,000 investment." [4] See ORS 45.570, Ah Doon v. Smith, 25 Or. 89, 93-94, 34 P. 1093 (1893); and Miller v. Lillard, 228 Or. 202, 216, 364 P.2d 776 (1961). [5] Garrett v. Eugene Medical Center, 190 Or. 117, 132, 224 P.2d 563, 569 (1950). See also State v. Sullivan, 230 Or. 136, 142, 368 P.2d 81 (1962). [6] See 3A Wigmore on Evidence (Chadbourn rev. 1970) 920-21, § 988, and 6 Wigmore on Evidence (Chadbourn rev. 1976) 371-75, § 1808. [7] We have also considered defendant's remaining two assignments of error, which were submitted on briefs and without oral arguments. We hold that the trial court did not err in either of those matters.
There are quite a lot of Java EE server implementations out there. There are a bunch of well known ones like JBoss, GlassFish and TomEE, some less known ones like Resin and Liberty, and a couple of obscure ones like JEUS and WebOTX.

One thing to keep in mind is that those implementations are not all completely unique. There are a dozen or so Java EE implementations, but there are most definitely not a dozen JSF implementations (in fact there are only two: Mojarra and MyFaces). Java EE implementations in some way are not entirely unlike Linux distributions; they package together a large amount of existing software, which is glued together via software developed by the distro vendor, and some software is developed directly by that vendor but then also used by other vendors. In Java EE, for example, JBoss develops the CDI implementation Weld and uses that in its Java EE servers, but other vendors like Oracle also use it. The other way around, Oracle develops Mojarra, the aforementioned JSF implementation, and uses it in its servers. JBoss in turn uses Mojarra instead of developing its own JSF implementation.

In this post we'll take a deeper look at which of these "components" the various Java EE servers are using. One source worth looking at to dig up this information is the Oracle Java EE certification page. While this does list some implementations for each server, it's unfortunately highly irregular and incoherent. Some servers will list their JSF implementation, while others don't do this but do list their JPA implementation. It gives one a start, but it's a very incomplete list, and one that differs for each server.

Another way is to download each server and just look at the /lib or /modules directory and see which jar files are present. This works to some degree, but some servers rename jars of well known projects. E.g. Mojarra becomes "glassfish-jsf" in WebLogic. WebSphere does something similar.
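A jar-directory scan like that can be scripted. Below is a minimal sketch in Java; the jar-name fragments in the map are illustrative guesses (only "glassfish-jsf", WebLogic's renamed Mojarra jar, comes from the text above), and a real scan would need a much larger table:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class JarScan {

    // Hypothetical jar-name fragment -> component mapping. "glassfish-jsf" is
    // the renamed Mojarra jar mentioned above; the other entries are examples.
    static final Map<String, String> KNOWN = Map.of(
            "glassfish-jsf", "Mojarra (renamed)",
            "javax.faces", "Mojarra",
            "myfaces", "MyFaces",
            "weld", "Weld",
            "openwebbeans", "OpenWebBeans");

    /** Walks a server's lib/modules directory and reports recognizable jars. */
    static List<String> scan(Path libDir) {
        try (Stream<Path> files = Files.walk(libDir)) {
            return files.filter(p -> p.toString().endsWith(".jar"))
                    .map(p -> p.getFileName().toString())
                    .flatMap(name -> KNOWN.entrySet().stream()
                            .filter(e -> name.contains(e.getKey()))
                            .map(e -> name + " -> " + e.getValue()))
                    .sorted()
                    .collect(Collectors.toList());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        scan(Path.of(args.length > 0 ? args[0] : ".")).forEach(System.out::println);
    }
}
```

Matching on file names obviously can't catch renamed jars whose new name shares nothing with the original, which is exactly why the post falls back to other methods below.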
Wikipedia, vendor product pages and technical presentations sometimes mention some of the implementation libraries, but again only a few implementations are mentioned, if they are mentioned at all. A big exception to this is a post (which I somehow missed when doing my initial research) from Arun Gupta about WildFly 8 (the likely base of a future JBoss EAP 7), which very clearly lists and references nearly all component implementations used by that server.

A last resort is to hunt for several well known interfaces and/or abstract classes in each spec and then check which class implements them in each server. This is fairly easy for specs like JSF, where e.g. FacesContext is clearly implemented by the implementation. For JTA and JCA this is somewhat more difficult, as those specs contain mostly interfaces that are to be implemented by user code. For reference, I used the following types for this last resort method:

Servlet - HttpServletRequest
JSF - FacesContext
CDI - BeanManager
JPA - EntityManager
BV - javax.validation.Configuration, ParameterNameProvider
EJB - SessionContext
JAX-RS - ContextResolver, javax.ws.rs.core.Application
JCA - WorkManager, ConnectionManager, ManagedConnection
JMS - Destination
EL - ELContext, ValueExpression
JTA - TransactionManager
JASPIC - ServerAuthConfig
Mail - MimeMultipart
WebSocket - ServerWebSocketContainer, Encoder
Concurrency - ManagedScheduledExecutorService
Batch - JobContext

Without further ado, here's the matrix of Java EE implementation components used by 10 Java EE servers: (an asterisk behind a component name means the vendor in the given column uses an implementation from another vendor; a plus behind a name means the implementation used to be from the vendor in that column, but the vendor donated it to some external organization)

Looking at the matrix we can see there are mainly 3 big parties creating separate and re-usable Java EE components: Red Hat, Oracle and Apache.
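The "hunt for well known spec types" approach above can also be done programmatically. A minimal sketch follows; JAXP's DocumentBuilderFactory stands in as the spec type here because it resolves outside a full EE server, but on an actual server one would inspect e.g. FacesContext or the BeanManager the same way:

```java
import java.security.CodeSource;
import javax.xml.parsers.DocumentBuilderFactory;

public class ImplProbe {

    /** Name of the concrete class implementing a spec type. */
    static String implOf(Object specInstance) {
        return specInstance.getClass().getName();
    }

    /** Location (jar) a class was loaded from, if the classloader exposes one. */
    static String loadedFrom(Class<?> clazz) {
        CodeSource src = clazz.getProtectionDomain().getCodeSource();
        return src == null ? "(bundled with the runtime)" : src.getLocation().toString();
    }

    public static void main(String[] args) {
        // The JAXP factory lookup resolves to whatever parser implementation
        // the runtime ships (typically a repackaged Xerces in OpenJDK).
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        System.out.println(implOf(factory) + " loaded from " + loadedFrom(factory.getClass()));
    }
}
```

The code-source location is what helps against renamed jars: even when Mojarra ships as "glassfish-jsf", the implementation class names it contains still give the project away.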
Apache is maybe a special case though, as it's an organization hosting tons of projects and not a vendor with a single strategic goal. Next to these big parties there are two smaller ones producing a few components. Of those, OW2 has separate and re-usable implementations of EJB, JMS and JTA, while Resin has its own implementation of CDI. In the case of Resin it looks like it's only semi re-usable though. The implementation has its own name (CanDI) but there isn't really a separate artifact or project page available for it, nor are there really any instructions on how to use CanDI on e.g. Tomcat or Jetty (like Weld has).

Apart from using (well known) open source implementations of components, all servers (both open and closed source) had a couple of unnamed and/or internal implementations. Of these, JASPIC was most frequently implemented by nameless internal code, namely 4 out of 5 times, although the one implementation that was named (PicketBox) isn't really a direct JASPIC implementation but more a security related project that includes the JASPIC implementation classes. JTA and EJB followed closely, with 8 and 7 out of 10 implementations respectively being nameless and internal. Remarkable is that all closed source servers tested had a nameless internal implementation of Servlet. At the other end of the spectrum, in the servers that I looked at there were no nameless internal and no closed source implementations of JSF, JPA, Bean Validation, JAX-RS, JavaMail and JBatch.

It's hard to say what exactly drives the creation of nameless internal components. One explanation may be that J2EE started out having Servlet and EJB as the internal foundation of everything, meaning a server didn't just include EJB, but more or less WAS EJB. In that world it wouldn't make much sense to include a re-usable EJB implementation.
With the rise of open source Java EE components it made more sense to just reuse these, so all newer specs (JSF, JPA, etc.) are preferably reused from open source. One exception to this is JEUS, which despite being in a hurry to be the first certified Java EE 7 implementation still felt the need to create its own implementations of the brand new WebSocket and Concurrency specs. It will be interesting to see what the next crop of Java EE 7 implementations will do with respect to these two specs.

An interesting observation is that WebSphere, which by some people may be seen as the poster child of the closed source and commercial application server, actually uses relatively many open source components, and nearly all of them are from Apache (which may also better explain why IBM sponsored the development of Geronimo for some time). JavaMail for some reason is the exception here: Geronimo has its own implementation of it, but WebSphere uses the Sun/Oracle RI version.

Another interesting observation is that servers don't seem to randomly mix components, but either use the RI components for everything, or use the Apache ones for everything. There's no server that uses, say, JMS from JBoss, JSF from Oracle and JPA from Apache. An exception to the rule is when servers allow alternative components to be configured, or even ship with multiple implementations of the same spec, like JOnAS does.

We do have to realize that a Java EE application server is quite a bit more than just the set of spec components. For one, there's always the integration code that's server specific, but there are also things like the implementation of pools for various resources, the (im)possibility to do fail-over for datasources, (perhaps unfortunately) a number of security modules for LDAP, Database, Kerberos etc., and lower level server functionality like modular kernels (OSGi or otherwise) that dynamically (e.g. JBoss) or statically (e.g. Liberty) load implementation components.
JEUS for instance may look like GlassFish as it uses a fair amount of the same components, but in actuality it's a completely different server at many levels. Finally, note that not all servers were investigated and not all components. Notably the 3 Japanese servers NEC WebOTX, Fujitsu Interstage and Hitachi Cosminexus were not investigated, the reason being they are not exactly trivial to obtain. At the component level things like JAX-RPC, JAX-WS, SAAJ, JNDI etc are not in the matrix. They were mainly omitted to somewhat reduce the research time. I do hope to find some more time at a later stage and add the remaining Java EE servers and some more components. Arjan Tijms
The WNBA is worth it.

A couple weeks ago, I was invited to cover a WNBA game, the Washington Mystics versus the Connecticut Sun. Some people snickered. Some asked why. Some didn't care. And that's fine. This post isn't to convince anyone that the WNBA is great or that it's even better than they think. Plain and simple, the WNBA is worth it. Worth the effort to make sure it works. Worth the support and subsidization of the NBA … although the current level of the NBA's assistance is somewhat mysterious.

WNBA president Donna Orender was recently interviewed by Fortune's Poppy Harlow on CNNMoney.com. When asked if the league gets financial support from the NBA, Orender carefully said, "We are an entity that runs ourselves, but with … I would say we have support from the NBA, but there's always been these rumors that they're writing big checks for us …" Harlow interrupted and implored Orender to clear the record on whether the WNBA stands on its own feet financially. Orender responded, "At the league level, we do. Yes." A bit vague, but certainly indications of progress from Orender.

Over the league's existence, six teams have folded: the Portland Fire (2000-02), Miami Sol (2000-02), Cleveland Rockers (1997-03), Charlotte Sting (1997-06), Sacramento Monarchs (1997-09), and Houston Comets (1997-08). The Comets won the first four championships in league history. The Detroit Shock, winners of three championships in the past seven seasons, most recently in 2008, relocated to Tulsa for the current season.

Orender responded to trouble keeping teams afloat by comparing the WNBA to the struggles of other leagues in their youth, specifically citing the NFL and the NBA. Clearly, however, it's an apples-to-oranges comparison. In this day of new media and high-technology market research, the ability to penetrate markets and pinpoint target audiences is vastly different from trying to grow a sports league over 50 years ago.
Fact is, a growing audience has been hard to come by for the WNBA, as evidenced by the league's attendance history. In its fourteenth year, the league is young, but it's not that young. According to WNBA attendance records on WomensBasketballOnline.com, in the league's first seven seasons the league-wide attendance average was 9,560. In the last six years, the league-wide average has been 7,999. The reported average game attendance for the first four weeks of the 2010 season is 7,198. That's not exactly growth.

"We're watching [attendance numbers] very carefully," said Sheila Johnson, Washington Mystics president & managing partner, when I spoke with her after Ted Leonsis' introductory press conference as majority owner of the Washington Wizards. "I think the league has grown in many ways as far as our fan base, but we're still struggling a little bit with sponsorships."

Johnson, who also serves as vice-chair of Monumental Sports & Entertainment, the ownership group that controls the Wizards, Mystics, Washington Capitals and the Verizon Center, also pointed more toward societal factors as an influence on attendance. "I still don't think society as a whole has really embraced the female athlete as they should," she said in terms of goals to increase the WNBA's fan base. "And so it's something we're constantly working on and we're struggling with, and trying to really get the message out there of the importance of women and sports. Once society can start seeing its strengths, I think it's going to grow."

Cultural factors also come into play. For the most part, professional women basketball players get paid more overseas than they do in America. Thus, women's pro basketball is more embraced in other countries. The WNBA's summertime schedule does allow many players to participate in dual leagues, but that comes with the sacrifices of increased injury risk and shorter careers.
But if the WNBA had a winter schedule, they simply couldn’t compete with foreign women’s leagues or with the NBA. NBA commissioner David Stern projected that his league’s owners would collectively lose $400 million this season. It’s the economy, stupid (and a CBA that allows NBA owners to take “stupid pills,” as Ted Leonsis would say). NBA union head Billy Hunter recently called the $400 million figure “baloney.” Nonetheless, the health of the WNBA is certainly affected by the overall health of professional basketball in the United States, something which will improve in a better economy. Sheila Johnson said jersey sponsorships can be the difference. “We are definitely looking into it,” she said when asked if the Mystics were considering the option. “That marquee sponsorship really does make the difference between our being in the red and being in the black. So we have been constantly talking with major corporations to see if we can get a marquee deal.” Johnson also sees Ted Leonsis’ majority ownership of the Wizards as a plus specifically for the Mystics franchise. “The beauty about what we’re doing here is we’re going to be able to kinda blend the Wizards and the Mystics back together, as far as sponsorship sales, as far as even being able to sell packages.” Six of the 12 current WNBA teams have connections with NBA ownership groups. Still … surely none of this matters to you. It all comes down to the product, right? If you’re a fan of men’s basketball, it’s probably not a two-way street toward fandom of women’s basketball. Same sport, different games … the main discrepancy coming in athleticism. Measured against general human athletic capabilities, men’s basketball is more athletically entertaining. This can’t be argued. Not to say women’s basketball isn’t athletically astounding in its own way, just very small in comparison to a pool of the world’s greatest athletes. But athleticism is not a point that should be focused upon as a quality of the WNBA game anyway.
In fact, it’s somewhat arrogant to consider the merits of the WNBA solely based on the idea that NBA-caliber athleticism spoils you from watching the women’s game. If the game of basketball is all about dunking and athletic feats, then you aren’t appreciating or getting the nuance of the sport, much less why people compete in the first place. So the WNBA is not for you. It doesn’t matter. The WNBA provides an alternative outlet for those passionate about basketball in ways that needn’t matter to everyone. The WNBA is for someone … for young girls to cultivate a love for the game with the tangible goal of playing at the highest professional level in the United States … for promotion of the sport on a worldwide level to both men and women, and across cultures, races, religions and ethnicities. Through all the struggles, criticism and growing pains, the WNBA is worth the effort and worth the presence. For the greater good of the game of basketball, the WNBA is worth it. >>>>>>>>>>>>>>>>>>>>>>>>>>>> After the game I attended, I spoke to a couple of WNBA players, Marissa Coleman and Monique Currie of the Washington Mystics and Tan White of the Connecticut Sun, about the role their league plays in the ambassadorship of the game of basketball, especially for girls. The reason the WNBA is failing has nothing to do with the economy and nothing to do with a bloodlust for athleticism. I enjoy watching basketball on all levels, from the NBA down to recreational leagues for high schoolers. The answer is simple really…the product put out by the WNBA is a terrible product….simple as that. As a basketball purist, I have tried several times to watch the WNBA…only to be disgusted at the terrible basketball being played. Again, it is not the lack of 360 windmill dunks, but a failure to play great basketball at the fundamental level. If the product isn’t great, no one will buy….no matter how well you market the product.
Nick Not that this argument is made here specifically, but it is somewhat common for people to argue that NCAA players/teams exhibit better fundamentals than NBA players, or that WNBAers do the same. However, less athletic does not equal more fundamentally sound. The NBA is not only basketball at its most athletic, but more importantly, it is also basketball at its most fundamentally sound. As you say, the niche the WNBA has is – you can watch women compete. But for most, myself included, they would prefer to watch the BEST compete. And that means the NBA. Greg Are the attendance figures given “paid attendance” or only attendance? The WNBA gives away a lot of tickets. TheOnlyGirl I’ve been searching and reading articles such as this and it’s always men who seem to hate the WNBA. I’ve always thought the NBA is an ego trip, the very reason why average men love it. Because it’s all they’ve got. I grew up in a family who loves basketball, I was never the one to jump up and down when a dude dunks. So what? I personally think it’s pathetic. Here comes the WNBA and I love it more than anything. If it shuts down, I will become the feminist every man would love to fuck but can’t. About TAI Truth About It.net, Washington Wizards Blog, ESPN TrueHoop Network -- Following the D.C. pro basketball franchise since the 90s and covering them in blog form since 2007 -- Opinion, Analysis, Irreverence, Pictures, Video, Interviews, Photoshops, News, Quotes, Shares, and all the pixels about the Washington Wizards you can imagine.
- 70702*l**2 + l*m - 3*l + 3*m - 1 wrt m? 11660*l**2 + l + 3 Find the third derivative of 2222*m**3*v**3 + m**3*v - 4*m**2*v**3 - 2*m**2*v**2 - 1847*m*v**2 - v**3 wrt v. 13332*m**3 - 24*m**2 - 6 Differentiate 3655*f**3*r**3 + 2*f*r**3 + 17851*r**3 wrt f. 10965*f**2*r**3 + 2*r**3 Find the third derivative of 2*p**6 + 336*p**5 + 14*p**4 + p**2 + 347*p + 2 wrt p. 240*p**3 + 20160*p**2 + 336*p What is the second derivative of 1574*h**5 + 4*h**4 - 11*h**2 - 816703*h wrt h? 31480*h**3 + 48*h**2 - 22 Find the third derivative of -59026*g**3 - 33538*g**2 wrt g. -354156 What is the first derivative of -20360*l*z - 4306*l - z wrt z? -20360*l - 1 Find the third derivative of -47157*h**5 + 140*h**2 + 4*h - 6 wrt h. -2829420*h**2 What is the third derivative of -2*w**5 - 15*w**4 + 2023*w**3 - 56976*w**2 + 3? -120*w**2 - 360*w + 12138 What is the third derivative of 3*d**3*n**2 + 24*d**3*n - 198*d**3 - 22*d**2*n**2 + 13*d*n**2 + n**2 + 2 wrt d? 18*n**2 + 144*n - 1188 What is the second derivative of 71*d**3 + 4*d**2*s - 56*d**2 - 3*d*s + 5*d - 2*s + 221 wrt d? 426*d + 8*s - 112 What is the second derivative of 189655*d**3 + 24972*d + 3? 1137930*d What is the second derivative of -26631*t**4 + 4*t**3 - 64731*t - 2 wrt t? -319572*t**2 + 24*t Find the first derivative of -1644*r**2*v - 7*r*v**2 - 3644*v**2 + v wrt r. -3288*r*v - 7*v**2 What is the third derivative of 52*m**5*o + 42*m**5 - 11*m**3 + m**2*o + 2*m*o + 4*m + 97*o wrt m? 3120*m**2*o + 2520*m**2 - 66 Find the third derivative of -619351*n**3 + 1399*n**2 + 322*n + 2 wrt n. -3716106 What is the first derivative of -n**4 - n**3 - 11909*n - 39886? -4*n**3 - 3*n**2 - 11909 What is the derivative of -904034*d*y - 338985*d wrt y? -904034*d What is the first derivative of 1814229*b**4 + 1740053 wrt b? 7256916*b**3 Find the second derivative of 54*s**2 + 10130*s - 1 wrt s. 108 What is the derivative of 15075*u**3 - 3160788? 
45225*u**2 Find the second derivative of 6*c**3*d**3*k + c**3*d*k + 6*c**2*d*k - 1659*c*d**3 + c*d**2*k + 248*c*d*k wrt d. 36*c**3*d*k - 9954*c*d + 2*c*k What is the derivative of -3084*f**2*h**3 + 90*f**2*j - 72*f*h**3 - 2*h**3*j + 3*h**2*j - 88*j wrt j? 90*f**2 - 2*h**3 + 3*h**2 - 88 Find the third derivative of 3349966*a**3 + 93*a**2 + 3*a - 101. 20099796 Find the second derivative of 38499*k**3 + 2*k**2 - 5227*k - 1 wrt k. 230994*k + 4 Find the third derivative of -5139*s**5 - 42*s**3 - 178912*s**2 wrt s. -308340*s**2 - 252 Find the third derivative of 2*h**4 + 350*h**3 - 296755*h**2 wrt h. 48*h + 2100 What is the derivative of 22*n**2 - 828*n - 49823? 44*n - 828 What is the first derivative of 204*j*q*u*z + j*u**2 + 16438*j*u*z + 141*q*u**2*z + 2*u*z wrt q? 204*j*u*z + 141*u**2*z Find the third derivative of 2*x**5 - 107381*x**3 - 1750*x**2 - 234*x wrt x. 120*x**2 - 644286 Find the second derivative of -48098*s**3 + 2310*s. -288588*s What is the third derivative of 6*i**4*r - 12801*i**3 + 2919*i**2 + 8*i*r wrt i? 144*i*r - 76806 Find the second derivative of -f*i**2*z**3 + 4*f*i**2 - 44*f*i*z**2 + 3*f*i*z + f*z**3 - 466*i**2 + 2*i*z**3 + 3*z**3 wrt i. -2*f*z**3 + 8*f - 932 Differentiate -126120*l**3 - l**2 - 544607 with respect to l. -378360*l**2 - 2*l Differentiate 2*g**2 + 20825*g + 31172 wrt g. 4*g + 20825 Differentiate -240*m**2 - 39*m + 48452 wrt m. -480*m - 39 What is the second derivative of -2*i**5*t*w - 2857*i**2*t*w - i**2*t + i*t*w - 2*i*t - 81*i - 3*t wrt i? -40*i**3*t*w - 5714*t*w - 2*t What is the second derivative of -169*f**3*o - 40*f**3 - 2*f**2*o - 3*f*o - f + 13*o - 16 wrt f? -1014*f*o - 240*f - 4*o Differentiate -97*b*d*h**2 + 1382*b*d + 1321*b*h + 195*b + 2*d*h**3 with respect to h. -194*b*d*h + 1321*b + 6*d*h**2 What is the second derivative of 43439*p**3 + 5*p**2 + 54*p - 19008? 260634*p + 10 Find the third derivative of -1626*g**6 + 6*g**3 - 29375*g**2. 
-195120*g**3 + 36 Find the third derivative of 152397*k**4*z**2 - 2*k**4 - 10*k**2*z + 2*k**2 - 94*z**2 - z wrt k. 3657528*k*z**2 - 48*k What is the derivative of -32148*a**4 + 306355? -128592*a**3 What is the derivative of -o**2*u*v - 177767*o**2*v**2 + 2*o*u - 3*u*v**2 + 473*u wrt u? -o**2*v + 2*o - 3*v**2 + 473 Find the second derivative of 20*o**5 + 1326*o**3 - 1415919*o. 400*o**3 + 7956*o What is the first derivative of 2*n*t**2 - 164*n + 9*t**3 - 2*t**2 - 2232*t + 91 wrt t? 4*n*t + 27*t**2 - 4*t - 2232 Differentiate 338*g*i**2*y + 37*g*y + 1967*g + i**4*y + 78*i*y with respect to i. 676*g*i*y + 4*i**3*y + 78*y What is the second derivative of -360351*u**2 - 15664*u + 2 wrt u? -720702 Differentiate -28873*c**4 + 3*c**2 + 176556 wrt c. -115492*c**3 + 6*c Find the second derivative of -244626*p**2 - 375679*p. -489252 Find the second derivative of 190*d**4*t + 492*d**3*t + 2*d*t - 948*d - 4*t wrt d. 2280*d**2*t + 2952*d*t Differentiate 141*c*f**2 - 69*c*f + 1334890*c + f**3 - 2 wrt f. 282*c*f - 69*c + 3*f**2 Find the second derivative of -6*a**4 + 3*a**3 - 193*a**2 + 322078*a. -72*a**2 + 18*a - 386 Differentiate -10602*l**2 + l - 266018. -21204*l + 1 What is the second derivative of 4*i**3*n*q**2 + 63290*i**3*q**2 - i**3 + 2*i**2*n*q - 4*i*n*q + 143*i*n - 2*i*q**2 - 5*q wrt q? 8*i**3*n + 126580*i**3 - 4*i Find the second derivative of -139*n**2*o**3 - 585*n**2*y**2 - 46*n**2 - 2*n*o**3 - 2*n*o*y**3 - n*o*y + 3*o**3*y**2 + o**2*y wrt y. -1170*n**2 - 12*n*o*y + 6*o**3 Differentiate 20*q**3 + 337*q**2 + 12993. 60*q**2 + 674*q What is the second derivative of 4554625*v**5 - 18746*v - 73? 91092500*v**3 Find the first derivative of 8*o**2 + 937*o + 1604189. 16*o + 937 What is the derivative of 83*y**2 + 2*y - 49274? 166*y + 2 Find the third derivative of -62899*g**3*q**2 - 3*g**3*q + g**2*q**2 - 6*g**2*q - 8*g*q**2 + 656 wrt g. -377394*q**2 - 18*q What is the third derivative of 99777*x**6 + 17*x**2 + 70*x? 
11973240*x**3 What is the second derivative of 80058*g**2 + 7288*g - 6 wrt g? 160116 What is the second derivative of -31*p**2*r**3 - 69*p**2*r + 122*p*r**3 - 28*p*r - r wrt r? -186*p**2*r + 732*p*r Find the first derivative of 8*n**4 + 2499*n - 22766 wrt n. 32*n**3 + 2499 What is the first derivative of 51237*k - 35917 wrt k? 51237 Differentiate -2*b*l**2 - 11805*b - l**2 + 27287*l - 3 wrt l. -4*b*l - 2*l + 27287 Find the first derivative of -2*h**2*l**2 - 4371*h**2 - 1018645*h*l**2 - 8*h - 2 wrt l. -4*h**2*l - 2037290*h*l What is the second derivative of -79305*i*t**5 + 2*i*t**4 + 17*i*t + 9*i + t - 231 wrt t? -1586100*i*t**3 + 24*i*t**2 What is the third derivative of 18*g**4 - 605*g**3 + 2645*g**2 - 2*g? 432*g - 3630 Find the first derivative of 38713*m**3*r + 41*m**3*w + 9*m**2*r*w + m*r**2 + 5*m*w - 1785*m - 2*w wrt r. 38713*m**3 + 9*m**2*w + 2*m*r Differentiate -46171*l - 1056021 with respect to l. -46171 What is the derivative of -83*z**2 + 16*z + 7481? -166*z + 16 What is the first derivative of -191343*k*s - 1934090*k - 9*s**2 wrt s? -191343*k - 18*s Find the second derivative of 19*n**5 - 961*n**4 - 7*n**2 + 3245*n - 135. 380*n**3 - 11532*n**2 - 14 What is the derivative of 97939*j**4 - 260811? 391756*j**3 Find the second derivative of 4561229*t**2*x - 18*t*x - 2*x - 917 wrt t. 9122458*x What is the first derivative of -4*n**3 - 27453*n**2 - 272756 wrt n? -12*n**2 - 54906*n Find the third derivative of t**5 - 67*t**4 - 871*t**3 + 3*t**2 + 3703 wrt t. 60*t**2 - 1608*t - 5226 Find the first derivative of -199430*h*o + 191586*h wrt o. -199430*h Differentiate 2*g*n**3*p + g*n*p - 647*g*p**2 + 84*n**3*p + 262*n*p**2 wrt g. 2*n**3*p + n*p - 647*p**2 Find the third derivative of 2*j**3*p**2*w**2 + 3*j**3*p*w**3 - 13*j*p*w - 95*j*w**3 - 86*p**2*w**3 - 107*p**2*w**2 - p*w**2 + p*w wrt w. 18*j**3*p - 570*j - 516*p**2 What is the second derivative of 15997*a**4 + a - 12891 wrt a? 191964*a**2 What is the second derivative of 191617*r**4 + 3*r + 18827 wrt r? 
2299404*r**2 What is the third derivative of -5222*q**4 - 5*q**3 + 200000*q**2 wrt q? -125328*q - 30 Differentiate -2*w**2*x**2 - 52*w**2*x*z + 2*w**2*x - 1109*w*x**2*z + 3*w*x**2 + 3*w*x - 55*x with respect to z. -52*w**2*x - 1109*w*x**2 What is the second derivative of 2*c**4*s + 4*c**3*s + 42*c**2*s - 2*c**2 - 2*c*s - 6314*c + 2 wrt c? 24*c**2*s + 24*c*s + 84*s - 4 Differentiate -i**4 + 25117*i**3*r**2 + 23973*r**2 wrt i. -4*i**3 + 75351*i**2*r**2 What
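The entries above are machine-generated calculus drills, each question followed immediately by its answer. For the single-variable polynomial entries, the answers can be spot-checked without a symbolic algebra library by differentiating a coefficient list; a minimal sketch (`diff_poly` is an illustrative helper, not part of the drill set):

```python
def diff_poly(coeffs):
    """Differentiate a polynomial given as ascending coefficients [c0, c1, c2, ...]."""
    # d/dx of c_i * x**i is i * c_i * x**(i-1), so multiply by the index and shift down.
    return [i * c for i, c in enumerate(coeffs)][1:]

# Spot-check one entry from above:
# "What is the second derivative of 189655*d**3 + 24972*d + 3?" -> 1137930*d
p = [3, 24972, 0, 189655]          # 3 + 24972*d + 189655*d**3
second = diff_poly(diff_poly(p))
print(second)                      # [0, 1137930], i.e. 1137930*d
```

The same check works for any of the one-variable entries; the multivariate ones would need a symbolic package instead.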
y = -5*l - 18. What is the greatest common divisor of l and 396? 6 Suppose 10*p - 2205 = -15*p - 10*p. Calculate the greatest common divisor of p and 3339. 63 Suppose -26*l - 7*l + 996 = -10554. What is the highest common factor of 7525 and l? 175 Suppose -2*c - 960 = -x, 233*x - 3849 = 229*x + 5*c. Calculate the highest common divisor of x and 14. 14 Let h(j) = 7*j**2 + 350*j - 702*j - 4*j**2 + 359*j. Let o be h(-6). What is the highest common factor of 6 and o? 6 Let v(i) = 6. Let k(x) = x + 5. Let t(w) = 4*k(w) - 3*v(w). Let d be t(2). Calculate the highest common factor of d and 130. 10 Suppose -2*d = -5*d + 129. Let t = 144753 + -143936. Calculate the highest common factor of d and t. 43 Let u = 11504 + -7214. What is the greatest common divisor of u and 110? 110 Suppose -2967 = 25*o - 24542. Suppose 0 = -35*v + o + 187. Calculate the greatest common factor of v and 150. 30 Let q(z) = z - 12. Let k be q(15). Suppose -2 - 79 = -k*m. Let x = -1057 + 1111. What is the highest common divisor of m and x? 27 Let a be 3/((-11)/(641927/(-39))). What is the highest common factor of 67 and a? 67 Let s be (-84328)/(-762) - (2 + (-1)/((-3)/(-10))). What is the greatest common divisor of 288 and s? 16 Let z(w) = 2*w - 1. Suppose d = -0 + 1. Let y be z(d). Let x be (-3290)/(-50) - y/(-5). What is the highest common factor of 6 and x? 6 Let g be -16 + 1695/105 - 164/(-28). Suppose l = 5*m + 3, 5*l - 5 - 10 = -2*m. What is the highest common divisor of g and l? 3 Let q(h) = -h + 1. Let u(z) = 32*z - 36. Let l = -40 + 41. Let s(x) = l*u(x) + 3*q(x). Let n be s(5). What is the greatest common divisor of 14 and n? 14 Let f(b) = 2*b**3 + 56*b**2 + 49*b + 39. Let g be f(-27). What is the highest common factor of 6 and g? 6 Suppose 0 = 11*c - 17 - 38. Suppose -12*y + 595 = c*y. Calculate the greatest common divisor of y and 7. 7 Let u be 28/(-12)*3/(-1). Let q be u + (-1 - (-3 + 1))/1. Let x be (-4)/((-7)/(3276/q)). What is the highest common divisor of x and 26? 
26 Let z = 37 + -35. Suppose -5*o - 27 = -z. Let q be (o - -4)/((-1)/35). What is the greatest common divisor of 5 and q? 5 Suppose -6 = r, 3*f + 2*r = 501 - 99. Calculate the greatest common divisor of 13478 and f. 46 Let z(l) = 7*l**3 - 2*l**2 + 3*l + 1. Let j be z(4). Let p(a) = 1440*a - 4307. Let w be p(3). What is the highest common divisor of j and w? 13 Suppose 117 = 6*s + 105. Let r be (-8 + 6)/((-5)/(45/s)). Calculate the greatest common divisor of 117 and r. 9 Let t(z) = -z**3 + 18*z**2 + 41*z + 7. Suppose 0*f = -4*f - 24. Let p be -3 + 1 - 132/f. Let a be t(p). What is the highest common divisor of a and 63? 9 Let t(z) = -838*z**3 + 2*z**2 + 6*z - 10. Let y be t(-2). What is the greatest common divisor of y and 30? 30 Suppose 10*n - 16*n + 36 = 0. Suppose f - 3*p - 65 = 0, -n*f + 9*f = p + 155. What is the highest common factor of f and 25? 25 Let t(a) = 6*a - 39. Let v be t(6). Let l(h) = -7*h - 11. Let b be l(v). Suppose b*i + 4 = 24. Calculate the highest common factor of i and 26. 2 Let q = -30233 + 30259. Calculate the greatest common factor of q and 10517. 13 Let d be (-1596)/(-1330)*(-195)/(-9). Let m(y) = -2*y**3 - 8*y - 4. Suppose 4*f - 3 = -23. Let x be m(f). What is the greatest common factor of x and d? 26 Let x be ((-13)/78)/(1/(-30)). Suppose x*o = 4*m + m - 990, 4*m - 5*o - 792 = 0. What is the greatest common factor of 18 and m? 18 Let l(p) = 31*p**2 - 143*p + 565. Let x be l(5). Calculate the highest common factor of x and 1750. 125 Suppose 0 = 5*z + 2*n - 366, -7 - 61 = -z - 3*n. Let t = z + 179. Calculate the highest common divisor of 11 and t. 11 Suppose 5*w - 138 = -5*j - 13, -4*w - 68 = -3*j. Calculate the greatest common factor of j and 426. 6 Let u(z) = -z**3 - 72*z**2 + 226*z + 87. Let v be u(-75). Calculate the highest common divisor of 524 and v. 4 Let t(l) be the second derivative of 23*l + 0 + 7/2*l**3 + 2*l**2. Let w be t(1). Calculate the greatest common factor of 10 and w. 5 Suppose 0 = -99*p + 105*p - 174. 
Suppose -6*c - p*c = -105. Calculate the highest common factor of 213 and c. 3 Suppose -36*w + 5 = -40*w + 189. Let n be ((-345)/(-5))/(9/66). What is the greatest common divisor of n and w? 46 Suppose 67 = r - 2*n, 0 = -3*r + n + 1527 - 1356. What is the greatest common divisor of r and 913? 11 Let k = 37 - 26. Suppose -21*x = -3936 - 7362. Let q = -450 + x. Calculate the greatest common divisor of k and q. 11 Let z = 12114 - 10266. Calculate the greatest common divisor of 165 and z. 33 Let v(c) = -11*c - 43. Let o(d) = -d**3 - 6*d**2 + 55*d - 8. Let t be o(5). Let l be v(t). Calculate the highest common divisor of 108 and l. 9 Suppose -4*w + b - 395 = 0, 0 = 4*w - 3*b + b + 394. Let q = -11 - w. What is the highest common divisor of q and 1144? 88 Let g be (0 - 1 - -1) + 35. Let y(z) = 3*z**3 + 46*z**2 - 35*z - 13. Let a be y(-16). Calculate the highest common divisor of a and g. 35 Suppose -16 = 4*y + 5*d, -y - d = 3*d + 15. Let v be (y/3)/(-5*2/(-1170)). Let q = v + -27. What is the greatest common divisor of q and 6? 6 Let i be -2*2 + (-3 - -2). Let n be (-3 - -1 - -1)*i. Suppose -2*c + 305 = n*x, 0*c - 4*c + 3*x + 545 = 0. What is the greatest common divisor of 28 and c? 28 Let j(w) = -8 - 4*w**2 + 11*w**2 + 21 - 45*w + 57*w + 14. Let b be j(-5). What is the highest common factor of 2 and b? 2 Suppose 34*b - 31*b + 27 = 0. Let v be -3 + (-184)/(-12) - (-3)/b. Suppose -510 = 2*n - v*n. What is the highest common divisor of 34 and n? 17 Let k(v) = 7*v**2 - 5*v - 6. Suppose 0 = 30*m + 199 - 79. Let n be k(m). What is the highest common factor of 7 and n? 7 Let p = 15423 - 15059. What is the greatest common divisor of 2639 and p? 91 Let x = -40 - -54. Let l(t) = t**2 - 5*t + 58. Let s be l(x). Suppose -i - 172 = -s. What is the highest common divisor of i and 54? 6 Let i(w) = w**3 - 24*w**2 + 53*w + 72. Let g be i(22). Let f be ((-4)/10)/(1/(-75)). Calculate the greatest common divisor of g and f. 30 Let u be 12*-49*((-10)/5 - -3). Let z = 596 + u. 
Calculate the greatest common factor of 296 and z. 8 Let o be ((-5115)/(-77))/((-6)/(-84)). What is the highest common factor of o and 120? 30 Let a = 14 - -4. Let b = -5907 + 5895. Let y = a - b. What is the greatest common factor of 105 and y? 15 Suppose 0 = -4*z - 3*k + 367 - 35, 0 = 5*k - 20. Calculate the highest common factor of 520 and z. 40 Suppose -8*x - 10 + 34 = 0. Suppose -2*h = p + 17 - x, -5*h + 21 = -p. Let u be ((-42)/18)/(p/(-6) - 3). What is the greatest common divisor of 49 and u? 7 Suppose -2239 = -31*m + 5*m - 575. What is the highest common divisor of 216 and m? 8 Suppose -26 = 5*s - 56. Let j be 1 - s - 62*(1 - 2). Calculate the greatest common factor of 3 and j. 3 Let v be (111225 - 18)/19 + (1 - 2). Calculate the highest common divisor of 44 and v. 44 Let p = 4293 - 1683. Calculate the greatest common factor of p and 348. 174 Let g = -122 + 137. Let h = -207 - -112. Let q be 3/(-9) - h/g. Calculate the highest common factor of 72 and q. 6 Suppose 2*x = -4*r + 218 + 226, 2*r + 5*x = 230. Let k = r + -86. What is the greatest common divisor of 348 and k? 12 Let n(j) = j**3 + 6*j**2 - 2. Let k be n(-6). Let s be (k/6)/(1 + (-352)/348). What is the highest common divisor of s and 87? 29 Let p be -2*(-5 + 4)*131. Suppose -4*v - 5*c - 504 = -c, -2*v + 3*c - p = 0. Let j = -118 - v. What is the highest common factor of j and 20? 10 Let t(s) = -9*s**2 - 949*s - 5. Let j be t(-22). Calculate the highest common factor of 83 and j. 83 Suppose -x = -5*d + 41 + 23, -3*d - 3*x + 24 = 0. Let y = -703 - -706. Calculate the greatest common factor of d and y. 3 Suppose -6*m + 20 = -100. Suppose -m = -3*g - 59. Let p(b) = b**3 + 12*b**2 - 13*b + 10. Let t be p(g). Calculate the highest common divisor of t and 60. 10 Suppose 105 - 33 = 4*z. Suppose -202 = -5*w + 68. Let b = w + 72. What is the highest common factor of b and z? 18 Let t = -7909 - -7921. Calculate the highest common factor of t and 176. 4 Suppose -l - 136 = 2*l + 4*c, 5*l + 244 = 2*c. 
Let f = -38 - l. What is the greatest common divisor of f and 60? 10 Suppose 4*x - 72 = 3*t, t - 99 = -3*x - 32. Calculate the greatest common factor of 2037 and x. 21 Let j = -5932 - -5956. What is the highest common divisor of j and 1404? 12 Let x = 42
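The GCD drills above follow the same question/answer pattern, and the arithmetic reduces to a standard-library call once the intermediate value is evaluated. For instance, the entries "Let z = 12114 - 10266. Calculate the greatest common divisor of 165 and z" and "Let p = 4293 - 1683. Calculate the greatest common factor of p and 348" resolve as:

```python
from math import gcd

z = 12114 - 10266   # 1848
print(gcd(165, z))  # 33, matching the listed answer

p = 4293 - 1683     # 2610
print(gcd(p, 348))  # 174, matching the listed answer
```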
/* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ package org.openide.util.lookup; import org.openide.util.lookup.AbstractLookup.Pair; import java.lang.ref.WeakReference; import java.util.*; import java.util.concurrent.Executor; import org.openide.util.Lookup.Item; /** A special content implementation that can be passed to AbstractLookup * and provides methods for registration of instances and lazy instances. * <PRE> * {@link InstanceContent} ic = new {@link InstanceContent#InstanceContent() InstanceContent()}; * {@link Lookup} lookup = new {@link AbstractLookup#AbstractLookup(org.openide.util.lookup.AbstractLookup.Content) AbstractLookup(ic)}; * * ic.{@link #add(java.lang.Object) add(new Object ())}; * ic.{@link #add(java.lang.Object) add(new Dimension (...))}; * * {@link java.awt.Dimension Dimension} theDim = lookup.lookup ({@link java.awt.Dimension Dimension}.class); * </PRE> * * @author Jaroslav Tulach * * @since 1.25 */ public final class InstanceContent extends AbstractLookup.Content { /** * Create a new, empty content. */ public InstanceContent() { } /** Creates a content associated with an executor to handle dispatch * of changes. 
* @param notifyIn the executor to notify changes in * @since 7.16 */ public InstanceContent(Executor notifyIn) { super(notifyIn); } /** Adds an instance to the lookup. If <code>inst</code> already exists * in the lookup (equality is determined by the object's {@link Object#equals(java.lang.Object)} * method) then the new instance replaces the old one * in the lookup, but listener notifications are <i>not</i> delivered in * such a case. * * @param inst instance */ public final void add(Object inst) { addPair(new SimpleItem<Object>(inst)); } /** Adds a convertible instance into the lookup. The <code>inst</code> * argument is just a key, not the actual value to appear in the lookup. * The value will be created on demand, later when it is really needed, * by calling <code>convertor</code> methods. * <p> * This method is useful to delay creation of heavyweight objects. * Instead, just register a lightweight key and a convertor. * <p> * To remove a registered object from the lookup use {@link #remove(java.lang.Object, org.openide.util.lookup.InstanceContent.Convertor)} * with the same arguments. * * @param inst instance * @param conv convertor which postpones the instantiation; * if <code>conv==null</code> then the instance is registered directly. */ public final <T,R> void add(T inst, Convertor<T,R> conv) { addPair(new ConvertingItem<T,R>(inst, conv)); } /** Remove instance. * @param inst instance */ public final void remove(Object inst) { removePair(new SimpleItem<Object>(inst)); } /** Remove instance added with a convertor. * @param inst instance * @param conv convertor; if <code>conv==null</code> it is the same as * remove(Object) */ public final <T,R> void remove(T inst, Convertor<T,R> conv) { removePair(new ConvertingItem<T,R>(inst, conv)); } /** Changes all pairs in the lookup to new values. Converts a collection of * instances to a collection of pairs.
* @param col the collection of (Item) objects * @param conv the convertor to use or null */ public final <T,R> void set(Collection<T> col, Convertor<T,R> conv) { ArrayList<Pair<?>> l = new ArrayList<Pair<?>>(col.size()); Iterator<T> it = col.iterator(); if (conv == null) { while (it.hasNext()) { l.add(new SimpleItem<T>(it.next())); } } else { while (it.hasNext()) { l.add(new ConvertingItem<T,R>(it.next(), conv)); } } setPairs(l); } /** Convertor postpones an instantiation of an object. * @since 1.25 */ public static interface Convertor<T,R> { /** Convert obj to other object. There is no need to implement * cache mechanism. It is provided by * {@link Item#getInstance()} method itself. However the * method can be called more than once because instance is held * just by weak reference. * * @param obj the registered object * @return the object converted from this object */ public R convert(T obj); /** Return type of converted object. Accessible via * {@link Item#getType()} * @param obj the registered object * @return the class that will be produced from this object (class or * superclass of convert (obj)) */ public Class<? extends R> type(T obj); /** Computes the ID of the resulted object. Accessible via * {@link Item#getId()}. * @param obj the registered object * @return the ID for the object */ public String id(T obj); /** The human presentable name for the object. Accessible via * {@link Item#getDisplayName()}. * @param obj the registered object * @return the name representing the object for the user */ public String displayName(T obj); } /** Instance of one item representing an object. */ final static class SimpleItem<T> extends Pair<T> { private T obj; /** Create an item. * @obj object to register */ public SimpleItem(T obj) { if (obj == null) { throw new NullPointerException(); } this.obj = obj; } /** Tests whether this item can produce object * of class c. 
*/ public boolean instanceOf(Class<?> c) { return c.isInstance(obj); } /** Get instance of registered object. If convertor is specified then * method InstanceLookup.Convertor.convertor is used and weak reference * to converted object is saved. * @return the instance of the object. */ public T getInstance() { return obj; } @Override public boolean equals(Object o) { if (o instanceof SimpleItem) { return obj.equals(((SimpleItem) o).obj); } else { return false; } } @Override public int hashCode() { return obj.hashCode(); } /** An identity of the item. * @return string representing the item, that can be used for * persistance purposes to locate the same item next time */ public String getId() { return "IL[" + obj.toString(); // NOI18N } /** Getter for display name of the item. */ public String getDisplayName() { return obj.toString(); } /** Method that can test whether an instance of a class has been created * by this item. * * @param obj the instance * @return if the item has already create an instance and it is the same * as obj. */ @Override protected boolean creatorOf(Object obj) { return obj == null ? null == this.obj : obj.equals(this.obj); } /** The class of this item. * @return the correct class */ @SuppressWarnings("unchecked") public Class<? extends T> getType() { return (Class<? extends T>)obj.getClass(); } } // end of SimpleItem /** Instance of one item registered in the map. */ final static class ConvertingItem<T,R> extends Pair<R> { /** registered object */ private T obj; /** Reference to converted object. */ private WeakReference<R> ref; /** convertor to use */ private Convertor<? super T,R> conv; /** Create an item. * @obj object to register * @conv a convertor, can be <code>null</code>. */ public ConvertingItem(T obj, Convertor<? super T,R> conv) { this.obj = obj; this.conv = conv; } /** Tests whether this item can produce object * of class c. 
*/ public boolean instanceOf(Class<?> c) { return c.isAssignableFrom(getType()); } /** Returns converted object or null if obj has not been converted yet * or reference was cleared by garbage collector. */ private R getConverted() { if (ref == null) { return null; } return ref.get(); } /** Get instance of registered object. If convertor is specified then * method InstanceLookup.Convertor.convertor is used and weak reference * to converted object is saved. * @return the instance of the object. */ public synchronized R getInstance() { R converted = getConverted(); if (converted == null) { converted = conv.convert(obj); ref = new WeakReference<R>(converted); } return converted; } @Override public boolean equals(Object o) { if (o instanceof ConvertingItem) { return obj.equals(((ConvertingItem) o).obj); } else { return false; } } @Override public int hashCode() { return obj.hashCode(); } /** An identity of the item. * @return string representing the item, that can be used for * persistance purposes to locate the same item next time */ public String getId() { return conv.id(obj); } /** Getter for display name of the item. */ public String getDisplayName() { return conv.displayName(obj); } /** Method that can test whether an instance of a class has been created * by this item. * * @param obj the instance * @return if the item has already create an instance and it is the same * as obj. */ protected boolean creatorOf(Object obj) { if (conv == null) { return obj == this.obj; } else { return obj == getConverted(); } } /** The class of this item. * @return the correct class */ @SuppressWarnings("unchecked") public Class<? extends R> getType() { R converted = getConverted(); if (converted == null) { return conv.type(obj); } return (Class<? extends R>)converted.getClass(); } } // end of ConvertingItem }
Hundreds of refugees have returned to live in secret camps in the Calais region in the hope of travelling to the UK, The Independent can reveal, just weeks after the demolition of the 'Jungle' shantytown. There are at least six informal settlements in rural parts of the Nord-Pas-de-Calais region, each housing scores of refugees and migrants, with numbers growing steadily in recent weeks. It comes two months after the closure of the Jungle, which was intended to bring an end to the refugee situation in Calais by destroying the camp and dispersing its residents to reception centres (CAOs) across France — an operation the authorities hailed as a “success”. However, scores of refugees and migrants who were taken on buses to CAO centres have now started making the journey back to the north of France. Many of them are children whose asylum claims were rejected by the Home Office earlier this month, and have decided to make their own way to the UK after experiencing poor living conditions in the French centres. One so-called “secret” camp lies on the edge of a small French village called Norrent-Fontes, around 30 kilometres from the port of Calais. Around 130 refugees currently live in the camp, which has existed since 2008, but the numbers have been rapidly growing in recent weeks, as refugees — particularly minors — have begun leaving the reception centres. A camp in Norrent-Fontes has become home to more than 120 refugees (Sue Clayton) Julien Muller, volunteer for a small French charity called Terre d’Errance which supplies aid at the Norrent-Fontes camp, told The Independent: “There are more and more people coming back. This week there has been several dozen people arrive. I suppose it will grow more in the coming months. “With the UK Government closing down its transfers of underage refugees to the UK, there have been a lot of minors coming back.
"There are people who are clearly underage and clearly have family in the UK, but they have been told that now it’s closed. Now they're coming back to try make their own way. “Adults are also coming back from centres in bigger numbers. Some wanted to stay in France, but they have been waiting for two months and they haven’t even been given the opportunity to apply for asylum. They've given up.” Mr Muller said the camp was one of six dotted around the Nord-Pas-de-Calais region. Sue Clayton, a refugee advocate and professor at Goldsmiths, University of London, discovered the hidden camp earlier this month. Describing it as a “mini-Jungle”, she said the conditions were “dire” and that residents of the informal settlement appeared to be afraid to accept aid for fear of drawing attention to themselves. Professor Clayton said: “It’s a little pocket of woods up a very narrow, single-track lane. You see it through the trees and it's like a mini-Jungle. "The shelters are put together with various bits of wood and tarpaulin — whatever they can grab. There is no support there. They’ve divided themselves up so there’s a men’s section and a women and children’s section. “The authorities will find that more and more of these secret camps will pop up because these people are getting increasingly frustrated. "Many have made the long journey from the centres back to the Calais area which is familiar to them, like a home – or as near to a home as they can make it.” She said the camp is about two kilometres from a lay-by on the highway that leads up to the port of Calais, where lorry drivers often make their last stop before crossing, potentially making it a “trafficker's paradise”. “The inhabitants of the new camp can walk across a couple of fields and there is a lay-by where trucks park overnight – the last stop before they go through the port,” she said. “It’s where the deals are done, well away from the port. It’s a trafficker’s paradise. 
Everyone around this new camp is vulnerable to them.” Shahajhan Khan, a 15-year-old refugee from Pakistan who has been living in one of the centres designated for children (CAOMIE) in Annemasse, a town on the French-Swiss border, along with 19 other child refugees, said he and his friends were planning to leave the centre and return to Calais. The teenager was recently informed that he and most of his friends had been rejected by the Home Office, and said they now had “no other option” but to return to Calais in the hope of “another Jungle”. He added that they were “living like donkeys” in the centre, and provided The Independent with footage of their warehouse-like sleeping area, and photographs of a meal of bread and yoghurt. “They promised us they would take us to the UK but said we had to be patient. At this centre they treat us like donkeys," Shahajhan said. "We are living in a factory and we are eating expired bread. We have waited in these factories without eating properly and now they are saying we can't go. It means we must go back to Calais. Young refugees in a CAO in Annemasse eating yoghurt and bread they were forced to buy because they say the food supplied was inadequate (Shahajhan Khan) “If they weren’t going to take us they should have told us clearly. We left the Jungle on the condition that we would go to the UK. We accepted these conditions just to go to the UK, and now they are saying we have to give up. “I hope you will see another Jungle soon.” The Nord-Pas-de-Calais prefecture rejected reports that there are six informal settlements in the region, and denied there had been an increase in numbers of refugees. 
A spokesperson said: “The migrants in the small camps of the Department of the Pas-de-Calais (three in total) are regularly recorded and we have not observed an increase. “Since the dismantling of the Jungle, no new camp has been established in Calais. 
State Services remain vigilant: the Mobile Coquelles Research Brigade, which was tasked with combating smugglers, was strengthened in September 2016 and police reinforcements are still being mobilised today. “Nonetheless, we observe that migrants continue to be present. Around 200 migrants each week are discovered in heavy goods vehicles during cross-channel controls. “When migrants are found hidden in heavy goods vehicles during cross-channel checks, the state’s response is firm. If it is confirmed that migrants are illegally in national territory, they are placed in detention." A long-time volunteer for refugee charity Calais Migrant Solidarity, who didn't want to be named, predicted that refugees would continue to return in growing numbers. “Migrants used to stay inside the town, but now with the police checks everywhere when they arrive in Calais they quickly go into hiding," he said. "Their plan might be to stay away from the police-controlled areas for now. They will keep quiet and stay in hidden places for a few months and then after the situation will change again, because there will be more and more people coming. The ‘secret camp’ is situated in the countryside, far out from the heavily policed Calais town (Sue Clayton) “The state [the government] claimed the demolition was a success, but the people of Calais know that the people will keep coming. "Until there is a structure to welcome people and give them a right to stay, they will occupy places here."
The congestion on I-94 heading towards St Cloud, particularly on sunny Friday afternoons, can be infuriating. It’s understandable that a lot of people are calling for ‘something to be done.’ The quick reaction of politicians and others is a call to widen the thing: $30 million for another lane. Certainly that is the solution, they think. There are a few concerns with this approach. First, we’ve never yet built our way out of congestion. We add roadway and, within a short time (often just four years, according to some studies), the problem is just as bad as before. Second, widening one section of road often just pushes the problem down the road, literally. We might soon be hearing calls for widening further along 94, expanding 15 through St Cloud, and more. Third, this is another few million dollars we could use elsewhere, like dealing with the all-day, every-day congestion and fatalities in the Twin Cities that impact more individuals and businesses. There’s also a question of safety (hint: with the same number of daily vehicles, which is safer and faster, a 4-lane highway in Europe or a 6-lane highway in the U.S.?). There’s a possible solution that won’t cost $30 million, can provide relief immediately instead of five to seven years from now, won’t require a year or two of construction headaches, and will increase safety instead of decreasing it.

Phantom Jam

Here’s what happens on a typical Friday afternoon on I-94. Someone drives slower than others in the left lane and a few cars pile up behind him waiting to get by. This micro jam of cars is what traffic engineers call a platoon. Platoons like this are an inefficient use of the roadway, and their tightly packed nature, and the driver frustration they provoke, can be quite dangerous. Once the lane blocker at the head of the platoon moves to the right, or everyone behind him gets around him, the platoon moves forward and breaks up. For these drivers it is a bit frustrating, but usually results in only a minor slowdown. 
However, there’s another platoon lined up behind another lane blocker a few minutes behind this one, and another behind that. Over the course of about 20 minutes in the early afternoon, the gaps between these platoons get narrower and narrower until one platoon comes up on the rear of the platoon in front of it and, whammo, you have the beginnings of the day’s phantom traffic jam. And it all started with the left lane blocker who’s now happily 30 minutes ahead of the jam they helped to start. If you are a math or physics minded person, this behavior has been modeled by MIT and others with Poisson distributions and fluid dynamics. For those smart enough to studiously avoid such things, let’s watch a video: Now, assume every car on I-94 is moving at a constant 80 mph (yes, it’s possible in a perfect world, perhaps such as if every car had a radar-based dynamic cruise control to keep it exactly ten feet behind the car in front). One car brakes for just a brief moment, which causes the car behind to brake, and so forth. Each car brakes just a little more than the one in front, so about the 80th car brakes to a stop, and soon there are 50 cars stopped and jammed up behind number 80. This jam of stopped cars will grow and move backwards through traffic, sometimes for miles. Someone blocking the left or middle lane has the same effect. One engineer describes the resulting jam as a shockwave moving backwards from the lane blocker. We may not be able to prevent this completely in really heavy traffic, but we can likely reduce it quite significantly.

Lane Discipline

Lane discipline means using the lanes of a roadway in the most efficient and safe manner possible, and is usually summed up as Keep Right Except To Pass. A huge potential benefit of a four-lane highway such as I-94 isn’t just to double the capacity of a two-lane rural roadway by having two lanes in the same direction instead of just one, but to multiply the capacity much more. 
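The chain of over-braking described above can be sketched in a few lines of Python (an illustrative toy model of my own, not the MIT model; all the numbers are made up):

```python
# Toy model of the braking cascade: each following driver over-brakes
# slightly relative to the car ahead, so a brief tap at the front of the
# line grows into a full stop far back. Values are illustrative only.

def braking_cascade(n_cars=100, v0=80.0, initial_brake=1.0, overreaction=1.0):
    """Return the lowest speed (mph) each car in the line reaches."""
    speeds = [v0] * n_cars
    speeds[0] = v0 - initial_brake              # lead car taps the brakes
    for i in range(1, n_cars):
        # each driver sheds a little more speed than the car ahead did
        speeds[i] = max(0.0, speeds[i - 1] - overreaction)
    return speeds

speeds = braking_cascade()
first_stopped = speeds.index(0.0)
print(f"lead car only slowed to {speeds[0]:.0f} mph, "
      f"yet car #{first_stopped + 1} comes to a complete stop")
```

With these made-up parameters the perturbation grows linearly down the line, so a one-mph tap at the front stops the 80th car, which is the shockwave-moving-backwards behavior described above.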
Allowing faster drivers to quickly move around others and then get back into the flow of traffic allows these drivers to utilize roadway capacity that is otherwise unused. Instead of that driver being in front of you, they are now well down the road and not impacting you. You have more space to the car in front of you and are more comfortable. They are filling an unused gap. And this hasn’t cost anyone anything. Our lack of lane discipline eliminates this benefit. Instead of getting perhaps a threefold increase in capacity by building a four-lane highway in place of a two-lane rural road, we only get about a twofold increase. Lane blocking makes our highways more dangerous and less efficient for everyone. If that first person had not blocked the lane, the people behind him would have proceeded on up the road, leaving the lanes available for those behind them to use. The same goes for the second person blocking the lane, and each one after them. Lane blocking not only creates jams but prolongs them, as when someone at the head of the jam continues to block the left lane instead of moving right and allowing those behind to move forward and get out of the way of those behind them. If you’re sitting in this traffic at the back of the jam, it’s hard to believe that something as simple as lane discipline can work. Consider, though, that most of the cars around you wouldn’t be here now if the cars in front of them were further down the road instead of blocking them, and on and on up to the left lane blockers at the very front. In front of this jam, just a few miles ahead of you, are miles and miles of I-94 with much lighter traffic—a lot of excess capacity. But a few people are preventing others from using this excess capacity. In Europe, where lane discipline is routine, you don’t see the weaving, tailgating, and road rage to the extent we have here in the U.S. 
Their highways are more efficient, people drive faster and at more consistent speeds, and yet they have fewer fatalities and many fewer crashes. Their law enforcement knows the importance of keeping the lanes open for traffic flow and is much quicker to ticket someone for blocking a lane (left or middle) than for speeding. In the U.S., some western states have enforced lane discipline for years; Texas recently began doing so, and New Jersey recently stepped up enforcement and increased fines for blocking the left lane. Minnesota already has a law (169.18(10)) requiring slower vehicles, regardless of their speed or the speed limit, to move right; we just need to begin enforcing it. Many people think this applies only to vehicles moving slower than the speed limit, but that is not the case, and that misunderstanding may be the crux of the problem. Perhaps more important, people need to understand why we have that law and why it’s being enforced. They need to understand that blocking the left lane isn’t just against the law, but that it makes our roadways significantly more dangerous for everyone and needlessly causes traffic jams. Another way to think of it is this: Mind The Gap. If the car behind you is closer than the car in front of you, you likely need to move right and let others utilize the space in the gap in front of you. Tailgating is dangerous. Enraging other drivers is dangerous. Weaving in and out of traffic is dangerous. Inconsistent speeds in the same lane are dangerous. Passing on the right is dangerous. And we effectively encourage all of this by not enforcing lane discipline: Keep Right Except To Pass. We likely have enough road surface; we just need to use it more safely and efficiently. We have nothing to lose and potentially much to gain by enforcing some lane courtesy. If traffic continues to be a significant problem, then we can consider spending millions for extra lanes. 
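The Mind The Gap rule of thumb can be written down as a one-line check (a sketch of my own; the function name and the distances are illustrative, not from any statute):

```python
# "Mind The Gap" as a simple predicate: if the driver behind you is
# closer than the car ahead of you, you are the bottleneck and should
# move right. Distances in feet; the numbers below are illustrative.

def should_move_right(gap_ahead_ft: float, gap_behind_ft: float) -> bool:
    """True if you are likely blocking the lane and should yield right."""
    return gap_behind_ft < gap_ahead_ft

# 400 ft of open road ahead with someone 60 ft off your bumper: yield.
print(should_move_right(400, 60))
# Keeping pace in a tight stream (80 ft ahead, 120 ft behind): stay put.
print(should_move_right(80, 120))
```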
The next time you’re stuck in a “phantom” traffic jam (MIT’s term for those jams where you get to the front only to find nothing there to have obviously caused it), consider this: you are likely in this jam thanks to a lane blocker or two. If people five, ten, or thirty minutes ago hadn’t blocked the left lane, the people in front of you would be further down the road right now instead of sitting in front of you.
1. Field of the Invention

The present invention relates to a method for manufacturing a semiconductor device that enables efficient mass production of high-performance semiconductor devices by eliminating unwanted features (e.g., side lobes) created together with a resist pattern, by thickening the resist pattern, thereby reducing the burden of photomask design and increasing depth of focus.

2. Description of the Related Art

As the packing density of integrated circuits has increased in recent years, so too has the requirement for semiconductor device manufacturing equipment to achieve smaller feature sizes: patterns containing lines whose width is shorter than the wavelength of the exposure light employed in the manufacturing process. Along with this trend, masks that can produce fine, high-resolution patterns have been sought, and phase shift masks such as halftone masks are increasingly used in lithography. Halftone masks are advantageous for improving resolution, but attention must be paid to the fact that they may create side lobes (sub-peaks) around a primary feature. To avoid this, a means of preventing the generation of side lobes has conventionally been provided on the mask. One example of such a means is a method in which the generation of side lobes is prevented by arranging Cr patterns on areas where side lobes are likely to appear. While this method can successfully prevent the generation of side lobes, it requires a tri-tone mask of three layers owing to the Cr patterns arranged over the mask, placing a burden on mask manufacture and mask defect inspection. In response to the demand for finer patterns, sub-resolution assist features (SRAFs), which are not meant to print, are now increasingly placed on the reticle, primarily to increase depth of focus. 
Because SRAFs must not print on the surface of the resist, the assist features are arranged on the reticle at sizes small enough that they do not print. For this reason, depth of focus cannot be further improved simply by arranging relatively large SRAFs, which limits the use of larger SRAFs. Typical pattern layouts for semiconductor devices contain both repetitive patterns of a particular cell layout, as typified by memory devices, and a variety of randomly arranged patterns, as typified by logic LSIs. In many cases the memory cell layout is designed using the values that are most critical in the design rules of the corresponding generation. One of the lithography-associated resolution enhancement technologies for accomplishing the foregoing is the method that uses a phase shift mask, which is mainly used for the formation of critical layers. In this regard, masks for a metal interconnection layer (holes/trenches) are those with fewer features (openings). Here, FIG. 10 shows a schematic view of a reduction projection exposure device, and FIGS. 11A to 11C show sections of general photomasks of various types that are used in the manufacture of a semiconductor device. The reduction projection exposure device shown in FIG. 10 includes an illumination light source 101, an illumination optical system 104 for guiding light from the illumination light source 101 to a reticle 103 (photomask) placed on a reticle stage 102, and a projection optical system, which is a reduction projection lens 105. The illumination optical system 104 includes an elliptic mirror 110, a fly-eye lens 111, and an aperture diaphragm 112. Light from the illumination light source 101 is guided to the reticle 103 through the illumination optical system 104, and the pattern on the reticle 103 is projected onto the resist layer on a wafer through the reduction projection lens 105. 
Note that in exposure devices using an excimer laser as the light source, the elliptic mirror 110 is not provided, and the illumination light source 101 serves as a laser beam source. A mask 120 shown in FIG. 11A is a chrome mask, also referred to as a binary mask, in which a metal masking film 122 such as Cr patterns is formed over a quartz plate 121. Using a reduction projection exposure device like that shown in FIG. 10, a pattern is projected onto a wafer 106 by means of light passing through the Cr-free areas of the mask 120. A mask 130 shown in FIG. 11B is a halftone phase shift mask having semi-transparent metallic thin film patterns 132, made primarily of MoSi or the like, provided over a quartz plate 131. A mask 140 shown in FIG. 11C is a Levenson phase shift mask that is identical to the chrome mask (mask 120) shown in FIG. 11A except that a trench 141 is formed that produces a 180-degree phase shift in light passing through the quartz plate 121. FIG. 12 shows a light intensity distribution obtained when the wafer is exposed using the chrome mask (mask 120) shown in FIG. 11A, and FIG. 13 shows a light intensity distribution obtained when the wafer is exposed using the halftone phase shift mask (mask 130) shown in FIG. 11B. A comparison between the light intensity distributions of FIGS. 12 and 13 reveals the differences in light intensity between the masks. Referring specifically to FIG. 13, relatively small positive peaks are seen on either side of the main positive peak for the feature 133; these small peaks are the essential cause of the side lobes that are specific to halftone phase shift masks. An example of how side lobes are created in the resist pattern will be described below. FIG. 14A is a top view of a mask pattern 150 used in producing a seal ring, which prevents the entry of moisture into freshly prepared LSI chips from the outside. The seal ring is also referred to as a moisture resistance ring. FIG. 
14B is an image view of a resist pattern formed using the mask pattern 150. It is evident that there are side lobes S generated in areas other than the desired pattern 150 of FIG. 14B, which are not present in the mask pattern 150 of FIG. 14A. The presence of side lobes S in the resist upon exposure of the wafer results in resist pattern collapse, printing of the side lobes S after etching, and the like, leading to poor device quality. Thus, there has been a need to perform the resist exposure process while avoiding the creation of such side lobes. Note that similar side-lobe problems occur even when the primary feature has the same shape as the seal ring. Thus, care has been required when attempting to achieve linewidths more than about three times the minimum linewidth of the corresponding generation, although this depends on the mask bias set at the standard exposure dose. FIG. 15A shows an example of a mask layout for a hole pattern, provided with SRAFs, which have come into frequent use. As shown in FIG. 15A, assist features 161 are arranged around a primary feature 160 in such a way that they assist exposure through the primary feature 160. It has been shown that this mask layout can increase depth of focus (DOF). The size and position of the SRAFs are determined in light of the conditions of the resist exposure process; however, it is generally important to ensure that the SRAFs never print. Accordingly, although DOF is nearly proportional to the size of an SRAF, SRAFs must be used in such a way that they never print on the resist, and therefore there is an upper limit on their usable size. If the size of an SRAF exceeds this upper limit, unwanted features 163, derived from the assist features 161, are formed at positions other than the primary feature 160, as shown in FIG. 15B. 
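The sub-peak structure of the kind FIG. 13 illustrates can be reproduced qualitatively with a rough one-dimensional coherent-imaging sketch (the wavelength, NA, background transmission, and feature size below are my own illustrative assumptions, not values from this specification):

```python
import numpy as np

# 1-D sketch of why a halftone (attenuated phase-shift) mask creates
# side lobes: the background transmits ~6% of the intensity with a
# 180-degree phase shift, so its amplitude is negative. All numbers
# here are illustrative assumptions.

wavelength = 193e-9      # ArF exposure wavelength (m), assumed
na = 0.75                # projection-lens numerical aperture, assumed
n = 4096
field = 8e-6             # simulated field width (m)
x = (np.arange(n) - n / 2) * field / n

# Mask amplitude: a clear 200 nm opening over a 6%-intensity,
# phase-shifted background (amplitude -sqrt(0.06)).
t = np.full(n, -np.sqrt(0.06), dtype=complex)
t[np.abs(x) < 100e-9] = 1.0

# Coherent imaging: the lens passes spatial frequencies up to NA/wavelength.
f = np.fft.fftfreq(n, d=field / n)
pupil = np.abs(f) <= na / wavelength
image = np.abs(np.fft.ifft(np.fft.fft(t) * pupil)) ** 2

main = image[np.abs(x) < 100e-9].max()
side = image[np.abs(x) > 200e-9].max()
print(f"main peak intensity: {main:.2f}; strongest side lobe: {side:.2f} "
      f"(nominal background transmission is only 0.06)")
```

Because the phase-shifted background amplitude is negative, the diffraction ringing just outside the feature interferes with it and the local intensity rises well above the nominal 6% background transmission; these secondary peaks are exactly the side lobes that can print in the resist.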
To overcome this problem, the following method has been proposed, for example, which comprises the steps of printing a photomask pattern onto a photosensitive resin film, generating an acid in the photosensitive resin film, forming a crosslinkable material-containing resin film over the photosensitive resin film, and subjecting both resin films to heat treatment to allow the crosslinkable material to undergo crosslinking and form a reaction layer at their interface, whereby printed unwanted features are eliminated from the printed features (see Japanese Patent Application Laid-Open No. 2001-005197). This method is, however, limited to chemically amplified resists, and thus the range of resist materials that can be selected is narrow. In addition, crosslinking reactions in the crosslinkable material-containing resin film are difficult to control. Thus, with this method, unwanted features of varying sizes cannot necessarily be removed successfully regardless of the type of resist material. It is an object of the present invention to provide a method for manufacturing a semiconductor device that enables efficient mass production of high-performance semiconductor devices by eliminating unwanted features (e.g., side lobes) created together with a resist pattern, by thickening the resist pattern, thereby reducing the burden of photomask design and increasing depth of focus.
13 N.J. 79 (1953) 98 A.2d 55 DOUGAL HERR, PLAINTIFF-APPELLANT, v. LOUISETTE HUGON HERR, DEFENDANT-RESPONDENT. The Supreme Court of New Jersey. Argued March 9, 1953. Reargued April 20, 1953. Decided June 22, 1953. *82 Mr. Meyer E. Ruback argued the cause for appellant (Messrs. Lum, Fairlie & Foster, attorneys). Mr. John F. Ryan argued the cause for respondent (Messrs. Ryan & Saros, attorneys). The opinion of the court was delivered by HEHER, J. The appeal is from a judgment dismissing the complaint. *83 The gravamen of the pleaded cause of action is misrepresentation, fraud, deceit and concealment allegedly practiced by defendant whereby plaintiff was induced to make a "marriage settlement" on defendant which included a conveyance, after marriage, vesting in plaintiff and defendant an estate by the entirety in plaintiff's dwelling house in the Borough of Brielle, New Jersey, and also nondelivery of the deed. Judge McLean found that plaintiff had not sustained the onus of proof of fraud. He was of the opinion that plaintiff's "testimony standing alone, met with defendant's denials, does not afford the clear and convincing proofs essential to entitle him to the relief he seeks." We certified plaintiff's appeal on our own motion. The grounds of appeal challenge the findings made in the Superior Court as not in accordance with the evidence, both as to fraud and delivery; and error is also assigned upon the assessment of a counsel fee against plaintiff. 
This is the case made by the complaint: In November 1949, after a brief acquaintance while on vacation in Bermuda, plaintiff, a widower 67 years of age, proposed marriage to defendant, a widow 52 years old, but she demurred "on the ground that such marriage would result in great financial loss and risk to her," representing to plaintiff that she was "the beneficiary for life or until remarriage of the income of a certain trust fund established by the last will and testament of one Gabriel Raphael Hugon of Manchester, England," the deceased father of her deceased husband, amounting in the net to 1,500 pounds sterling per annum, and that by the terms and conditions of the trust "her income would cease entirely and irrevocably if she should remarry," and "she had no property, or income aside from the income from said trust," and "for those reasons she could not afford to remarry without a substantial marriage settlement, adequate to secure her future financial security in lieu of said life annuity." Plaintiff, relying upon the truth of the representations, "promised defendant that if she would accept his offer of marriage he would, in addition to supporting her as his wife, secure her financial future to the extent of his means in the *84 event that he should predecease her by transferring to her his entire estate, such transfers to take effect upon his death and to be made in consideration of and partial reimbursement for her loss of income" under the trust. Defendant accepted plaintiff's proposal of marriage "subject to said proposed provision for her future financial security in the event that plaintiff should predecease her." The marriage was solemnized December 3, 1949, at Westfield, New Jersey. 
Immediately following the marriage, and in reliance upon the truth of the "representations," and to effectuate the "marriage settlement," plaintiff executed "riders to establish defendant as beneficiary of plaintiff's life insurance policies, a last will and testament and (for the purpose of minimizing inheritance taxes) a deed of conveyance" for the lands in suit "intended when delivered to convert plaintiff's estate" in the lands "into an estate by the entirety in plaintiff and defendant." In order that "the transaction might be completed by the delivery of all of the instruments at one time, such time to be arranged after all of them should be executed, plaintiff only executed said deed of conveyance on December 3, 1949 and caused it to be recorded in the Clerk's Office of Monmouth County on December 14, 1949." After recording, the county clerk returned the deed "to plaintiff's attorney as directed"; and the "deed was never delivered to defendant, nor did plaintiff ever intend to make a present delivery of it, nor were any of the other instruments delivered." "On or about December 15, 1949, plaintiff became suspicious of the truthfulness" of defendant's representations "because she refused to send to England for a copy" of the Hugon will, "as she had agreed to do"; and he "thereupon caused an investigation to be made which disclosed" that the deceased Hugon "had not created any trust whatsoever in favor of defendant, by his will or otherwise," and defendant had not been in receipt of an income in any amount from the Hugon family subsequent to the death of Gabriel on October 11, 1939, and "had not been an annuitant under any trust whatsoever." *85 It is averred that had plaintiff known of the falsity of the "representations," he "would not have agreed to make said settlement." 
Plaintiff seeks judgment: (a) voiding the agreement to make the marriage settlement, and (b) decreeing the conveyance to be void or, in the alternative, the rescission of the deed and a reconveyance to plaintiff. The nonexistence of the asserted trust fund or "life annuity" is conceded; the making of the representation is denied. Indeed, defendant asserts in her answer to the complaint that she "refused several proposals of marriage by the plaintiff, without qualification, and finally accepted tentatively, only on condition that her final answer" be deferred until "opportunity" was had the "better to know each other," and it could be determined whether or not plaintiff's children would "object" and "constitute a hindrance to a happy marriage"; and that the execution of the deed "was purely a voluntary act on plaintiff's part, independent of any agreement, commitment or previous arrangement, which deed she never saw, and of which she knew nothing, excepting that it was executed and recorded as plaintiff told her, because, as his wife he felt she was entitled to it as a gift," and she "has no information" respecting the will, but she denies that its "execution * * * was the result of any arrangements, commitment, promise or agreement prior to the marriage." And defendant's testimony accords with the answer. There had been no discussion whatever before the marriage "about any financial arrangement or money or anything else in connection with impending marriage." Shortly after their meeting in Bermuda, she agreed to the marriage, "but on one condition that" she "should meet his children first and they would agree to the marriage." She first "heard" of the deed in question on December 26, 1949, and it was the plaintiff who spoke of it. He said: "I have seen a friend who is very discreet and I want you to have that property and I may give that property to you on both names and also a will." There had been no prior discussion of a will. 
*86 Plaintiff acknowledges that if, "as part of the offer of marriage and uninduced by any fraudulent misrepresentation by the defendant, the proposal had included the promise to make the deed," the deed would be altogether invulnerable; the "sustaining consideration for the deed would be the defendant's performance of her promise to marry." But it is insisted that such is not the situation revealed by the pleadings and the proofs, for under defendant's version of the transaction a promise of a property settlement is "ruled out as a consideration or inducement for the marriage," and the conveyance was a pure gift, induced by love and affection alone, and "not the marriage, for the marriage had already taken place," while the foundation of plaintiff's pleaded cause of action is a promise "collateral to his offer of marriage" induced by defendant's "misrepresentation that her remarriage would result in the loss of an English annuity," to "make good the loss by making a property provision for her"; and that "on the evidence of either side it is clear that no enforceable ante-nuptial agreement or property settlement was made between" the parties. There was no post-nuptial contract, it is conceded, "since the marriage, being past, could not constitute a valid consideration." But it is said that there was a post-nuptial settlement on the wife, and it matters not whether it be deemed a "gift in fulfillment of an earlier promise, though unenforceable, or a present donation unrelated to any antecedent promise," for in either event "any fraud inducing the gift vitiates it." It is urged that the evidence offers no support for the conclusion that "the marriage was the inducement, i.e., the consideration for the deed"; and the critical issue is stated to be "whether the deed (made in fulfillment of an earlier promise, as the plaintiff claims, or as a later gift, as the defendant claims,) was or was not induced by the defendant's misrepresentation." 
The reasoning is that plaintiff's promise to provide for his wife "is altogether distinct from and independent of the engagement to marry," and was designed "to remove an impediment to the making of the contract to *87 marry," and the "sole consideration of the marriage was, concededly, the mutual agreement of the parties to undertake and perform the duties and obligations incident to the marriage, whereas the promise to make provision for the wife was predicated solely upon her surrender of the alleged annuity." But, whatever preceded the marriage, the post-nuptial conveyance was essentially voluntary; and the deed itself, after deliberation upon the choice of words, declared the consideration to be "the sum of one dollar, the marriage between the grantees, and good and valuable consideration, lawful money of the United States, to him in hand paid," and so on, according to the usual formula of receipt and grant; and thus the plaintiff himself, in formal and indubitable terms, established the marriage as the consideration for the conveyance. Whatever its legal effect as consideration related to an executory promise, this was the motive for the conveyance. A parol ante-nuptial agreement for a property settlement in consideration of marriage is within the statute of frauds (R.S. 25:1-5) and unenforceable; marriage is not in itself deemed such part performance of the agreement as will avert the operation of the statute and render it enforceable in equity. Russell v. Russell, 60 N.J. Eq. 282 (Ch. 1900), affirmed 63 N.J. Eq. 282 (E. & A. 1901); Pennsylvania Railroad Co. v. Warren, 69 N.J. Eq. 706 (Ch. 1905); Watkins v. Watkins, 82 N.J. Eq. 483 (Ch. 1913), affirmed 85 N.J. Eq. 217 (E. & A. 1915); Alexander v. Alexander, 96 N.J. Eq. 10 (Ch. 1924); Elmer v. Wellbrook, 110 N.J. Eq. 15 (Ch. 1932). 
An unexecuted parol ante-nuptial promise for a settlement lays no legal duty or obligation on the promisor; and a post-nuptial settlement made pursuant to a parol ante-nuptial promise, followed only by marriage, is "voluntary, in the strongest sense of that term"; marriage is not part performance of the contract, for "if it were, there would be an end of the statute * * * [and] every parol contract followed by marriage would be binding"; carrying into effect the parol contract after marriage, by a deed, "amounts to no more *88 than a voluntary settlement." Manning v. Riley, 52 N.J. Eq. 39 (Ch. 1893). Thus, the conveyance here constituted a voluntary post-nuptial settlement; plaintiff was under no duty or compulsion to make the transfer. The policy of this provision of the statute of frauds is "to render hasty and inconsiderate oral promises, made to induce marriage, without legal force, and thus to give protection against the consequences of rashness and folly." Manning v. Riley, cited supra. Plaintiff was free to make the conveyance, or not to make it, according to his untrammeled judgment; and his conventional declaration of marriage as the consideration imparts character and meaning to the conveyance at variance with the concept of a gift related only to the loss of an annuity. The undoubted design was to make provision for the wife against privation in the event of her husband's prior death; and it would seem to be a matter of indifference whether or not the need had become the greater by the loss of an annuity. Protection against want was the desideratum, and the provision was expressed to be made in consideration of marriage. Such was the intention, and the intention controls. The loss of an annuity did not induce the settlement. An executed post-nuptial gift or settlement is effective inter partes, even though lacking in the consideration essential to an enforceable executory contract. A gift is a transfer without consideration. Frank v. Gaylord, 119 N.J. 
Eq. 427 (Ch. 1936); Cessna v. Adams, 93 N.J. Eq. 276 (Ch. 1921); Austin v. Young, 90 N.J. Eq. 47 (Ch. 1919); Landon v. Hutton, 50 N.J. Eq. 500 (Ch. 1892); Jones v. Clifton, 101 U.S. 225, 25 L.Ed. 908 (1880); Rodgers v. Rodgers, 229 N.Y. 255, 128 N.E. 117, 11 A.L.R. 274 (Ct. App. 1920). The burden of proof of a fraudulent inducement has not been sustained. On this inquiry, the subsequent change in the marital relations, attitudes and motivations are elements to be regarded, as tending to rationalization. The observations of Vice-Chancellor Stevenson are apropos: *89 "When the relations of the man and his wife cease to be harmonious, when divorce or separation comes, the man finds himself disappointed in his expectations, and he very much regrets the disposition of property which he theretofore made. No doubt there are situations of this kind where there is hardship, and some future laws may provide for the readjustment of family settlements in case of divorce. Under our present system of laws the destruction of harmonious and confidential relations between the man and wife, their complete estrangement, and even divorce, create no new equity in favor of the husband with respect to land which he originally donated to his wife when both parties contemplated that their affectionate and confidential relations would endure throughout their lives, and that both would therefore share in the benefits of the donated property." Warren v. Warren, 88 N.J. Eq. 612 (E. & A. 1918). And the proofs establish the essential element of delivery. The conveyance was made to husband and wife; and the circumstance that, after recording, the deed was retained by the husband does not repel the inference otherwise compelling of his intention to make the deed immediately effective as a conveyance of the land. 
Indeed, plaintiff himself revealed in his testimony a design by the conveyance to take the property out of the inheritance tax category; and delivery was essential to the effectuation of that purpose. The essence of delivery is the intent to "perfect the instrument" and thereby make an immediate transfer of the title to the grantee; and the intent may be deducible from the circumstances or the acts or words of the grantor. Ruckman v. Ruckman, 32 N.J. Eq. 259 (Ch. 1880); Jones v. Swayze, 42 N.J.L. 279 (Sup. Ct. 1880); Blachowski v. Blachowski, 135 N.J. Eq. 425 (Ch. 1944). It does not matter that defendant had not seen the deed; her husband held the instrument for her as well as for himself, as tenants by the entirety. Vought's Ex'rs. v. Vought, 50 N.J. Eq. 177 (Ch. 1892); Mower v. Mower, 367 Pa. 325, 80 A. (2d) 856 (Sup. Ct. 1951). The original complaint, filed June 1, 1950, included a count for nullity of the marriage. On June 11, 1951 there was a voluntary dismissal of that count; and an allowance of $2,500 was made to defendant's counsel as for services in *90 a matrimonial action, under Rule 3:54-7(a). There was no award of counsel fees in this non-matrimonial suit. We think that a fee of $1,000 should be assessed against plaintiff; counsel must look to his client for such additional compensation as may be reasonable. The circumstances did not call for the application of Rule 3:54-8. The judgment is accordingly modified and, as so modified, affirmed. For modification — Chief Justice VANDERBILT, and Justices HEHER, BURLING, JACOBS and BRENNAN — 5. Opposed — None.
Pages Saturday, January 17, 2015 Windows 7 32 Bits
If you're caught in a situation where you have lost your Windows 7 32-bit installation disc or broken it by accident, you can always download a copy of your Windows 7 ISO file from Microsoft itself. Many people are not aware of this, and most of the time they end up downloading pirated copies of Windows 7 from various sites online. Use whichever of the methods seems easier for you.
As for the errors, they seem to be an issue with your hard disk. A quick Google search will help you find a solution for them.
I have a new desktop PC that has Windows 7 64-bit. Because of the software on my network I need 32-bit. I downloaded Windows 7 32-bit x86 English, burned it to a DVD, and tried to install it. It will not let me go back to 32-bit. Does anyone know what I can do to get back to a 32-bit system? Any help would be greatly appreciated!
Are these ISOs of Win 8 or Win 7? I understand that Win 7 is the desired installed version, but I was just curious whether one can create an install CD for a prior version of Windows from a later one. I didn't think that was possible, and it would be good information to know.
Hi, I am using this to install on a laptop that has XP. I downloaded the file on that PC and then created a bootable flash drive. I'm waiting for it all to load onto the flash drive now. My question is: do I just plug it into the laptop as-is with XP on it, or do I have to wipe it first? Please help me; I've been messing with this machine for two and a half days now, which is why I decided to put Windows 7 on it.
No, you don't need to wipe anything. As soon as your bootable USB flash drive is ready, you can connect it to your PC and restart. When your PC restarts, go to the BIOS and give the USB flash drive first boot priority, save the settings, and restart.
When your PC boots up again you will be prompted to press a key to start the Windows 7 installation.
Hello Lovejeet, just a quick question. I downloaded Win 7 Pro 64 and burned the .iso to disc. Tried installing it on three different PCs; all three PCs boot successfully, but all three fail mid-install. Tried two different new DVDs, tried writing at 2x speed etc., not working. But when I set up a VM from the .iso it works flawlessly. What could cause it not to work on DVD? It hangs right where it starts to copy the Windows setup files. Everything points to a faulty DVD, but I've burnt three different copies from two machines (thought it might be my DVD writer). I am stumped.
Hey dude, thanks for the reply. Tried the USB method, but my PC just hangs after the POST process, right before it's supposed to boot. I've done some googling, but it seems like I'm the only one having these kinds of issues, so I have to infer that the problem is somewhere on my side. I will tinker a bit with these images and report back when I find the problem. I've found an original Win 7 disc in the meantime, but it's bothering me why I can't get this to work. PS: thanks for uploading these.
Lovejeet, marv is right; it isn't about whether this is official content or not. MD5, or any other checksum, lets you validate that you downloaded the file without losing its integrity. After you download the file, you just compare the MD5 published on the Digital River server with the MD5 of the file on your computer; if they are not the same, then something went wrong with your download. And as I already said, it doesn't have anything to do with Digital River being an official Microsoft partner.
Hi, just take a look at the official Microsoft forums; you'll find the moderators over there offering these links to everyone.
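The checksum comparison described above can be done in a few lines. A minimal sketch: the file name `win7.iso` and the expected hash below are stand-ins (the sketch writes a tiny dummy file so it runs end to end); point them at your actual download and the checksum published alongside the link.

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Compute the MD5 of a file without loading it all into memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in file so the sketch runs end to end; replace with your real ISO
# and the checksum published alongside the download link.
with open("win7.iso", "wb") as f:
    f.write(b"hello")
expected = "5d41402abc4b2a76b9719d911017c592"  # MD5 of the stand-in contents

print("OK" if md5_of("win7.iso") == expected else "MISMATCH")
```

Reading in chunks matters here: a Windows 7 ISO is several gigabytes, and hashing it incrementally keeps memory use flat.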
These files are completely safe, and if you do a Google search you will find these links on many other reputable websites.
Not much; these ISO files are pretty much the same ones, but they are available through different channels. While MSDN is for developers, Digital River on the other hand is for end consumers. You can take a look at the Microsoft forums; most moderators over there provide Windows 7 links hosted on Digital River.
You are right, but I started this thread to share these Windows 7 links. I didn't have any idea this post would become so popular. In any case your comments will really help a lot of our readers; thanks, mate. I will add your link to the post.
All these Windows 7 ISOs come with all the SP1 updates. I can't say how old these are, since Microsoft doesn't provide that information. Rest assured, you will have to download very few updates after you install Windows 7 from these ISOs.
It appears Microsoft has discontinued these Windows 7 ISO files. At the moment there is no confirmation from Microsoft about this; it could also be a temporary problem. In case Microsoft provides updated Windows 7 ISO images, I'll add them here.
It generally depends on your internet connection. You appear to have a stable internet connection, but for people who don't have a good connection, a download manager would be a better choice, since it will allow you to stop and resume the download at any time.
Hello. I am downloading it right now. I will install it on the new computer I am buying tomorrow. Question is, can I download it now, then save it on a flash drive, then install it on the new laptop? If yes, how? Thanks!
Thank you for posting and providing the information. I downloaded what I believed was the correct version, and after installation I get stuck in a bootloader error loop. Some of the errors say there are registry errors. Some are saying there are missing startup components.
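The stop-and-resume behavior a download manager relies on is just byte-range bookkeeping: remember how many bytes are already on disk and ask the server for the rest. A minimal illustration of that logic, simulated against a local byte string rather than a real server (no real URL or HTTP library is used, so nothing here depends on Microsoft's servers):

```python
# Simulate resumable downloading: fetch a "remote" payload in chunks,
# resuming from however many bytes are already saved, the way a download
# manager resumes an interrupted Windows 7 ISO download.
remote = b"0123456789" * 100        # stand-in for the ISO bytes on the server

def fetch_range(start, size=256):
    """Stand-in for an HTTP GET with a 'Range: bytes=start-' header."""
    return remote[start:start + size]

downloaded = bytearray()
downloaded += fetch_range(0)        # first attempt...
# ...connection drops here; later we resume from len(downloaded)
while len(downloaded) < len(remote):
    downloaded += fetch_range(len(downloaded))

print(len(downloaded) == len(remote) and bytes(downloaded) == remote)  # True
```

With a real server the `fetch_range` stand-in would be an HTTP request carrying a `Range` header, which is exactly what lets a manager pick up a multi-gigabyte download where it left off.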
I am using the Windows 64-bit Home Premium in English and attempting to install this on a Samsung N150 Plus netbook that had a Windows 7 Starter OS but has since crashed.
Hello Boca, I won't recommend installing a 64-bit OS on your netbook. I checked the specs and it seems that the 1 GB of RAM on your netbook may hamper its performance. From my experience, I would say the problem is related to your hard disk. You should head over to the Microsoft forums and find out if anyone else has the same problem as yours.
Computer noob here. So, after I download the version I had on my HP laptop and put it on a bootable USB flash drive, I should be able to install it on my Mac using Boot Camp? New to Mac, and I was shocked to find there is no place to insert a CD on this thing.
You must be using a MacBook Air, right? Well, on MBAs you can only install Windows using a bootable USB drive. If you have already created one, go to Spotlight search and type "Boot Camp Assistant", and you can select both "Download the latest Windows support software from Apple" and "Install Windows 7 or later version". You can then partition your drive and insert your USB drive, and you're good to go.
I downloaded and installed the ISO after (ahem) a motherboard replacement (upgraded mobo/chip system); however, I want to register my Microsoft codes from my previous motherboard PC and then install my Ultimate Upgrade code later. But neither code lets me register. (I had had to buy Ultimate Upgrade to get Windows in English, as the Japanese Windows 7 PC did not have an English language option.) When I go to register at Microsoft, it tells me I cannot do that and later gives me a number to call.
But they would not register my genuine codes either (because they had been used on my, ahem, previous motherboard), and they said they would put me through to some technical support, and the phone went dead at their end.
You should be able to use your serial key to activate Windows as long as you are using it on a single PC at a time. Since I am not well acquainted with this problem, a better place to find a solution would be the Microsoft forums.
To get a product key you need to purchase it from Microsoft; there is no free product key. If you have an old Win 7 PC around that you don't use, you can try using its product key, though you may want to deactivate the Windows 7 on that PC first, just to make sure it works.
There are a number of scenarios that require one to use the phone activation method to get their key to activate correctly. These retail images work just fine with OEM keys; you just need to activate the OS via the phone option. It's all automated, and painless.
Note: Since most of these files are above 2 GB in size, we advise you to use a download manager like Free Download Manager to download these Windows 7 ISO images. After downloading these images you can either burn them to a DVD or create a bootable Windows 7 USB flash drive to install Windows 7 on your PC.
Yeah, I have a three-year-old Toshiba Satellite that had Win 7 on it; it had gotten so corrupt that it was bluescreening. I had tried the factory restore (Toshiba doesn't include a restore disc) several times and it did not fix the problem. So I wiped the hard drive clean and installed Linux Mint 16 on it, and I had been using that for about four months without any problems, and it's a good system, but I've been wanting to put Windows back on it, and I wasn't sure if I would have to buy another licence for it. So this is definitely good news. Thanks.
I would always advise against downloading Windows 7 from illegal websites. The main reason is that most of these pirated copies of Windows 7 are modified and have rootkits and spyware hidden in them, which are largely undetectable by most antivirus software. Using a pirated copy of Windows 7 on your computer will expose your private data to cyber criminals, and at the same time you will be unable to obtain major updates for bug fixes and security.
Thanks Lovejeet. The Windows 7 ISO installed perfectly as a VM on my Fedora Workstation using gnome-boxes. My old Asus laptop's activation key was able to successfully activate the OS. I have a few dead laptops with old Windows 7 keys preinstalled on them. It's nice to finally have them back online, rebranded as virtual machines at least. And the Digital River download was extremely fast (a few minutes), at least for me over FiOS.
Lovejeet, wondering if you can possibly help me. I reformatted my hard disk and didn't back anything up. My hard disk has no operating system; it used to have Windows 7 Professional. I have the previous product key, but how can I install it if my hard drive is completely empty? I have a new laptop but want to install Windows 7 on my old drive that is empty: "no operating system found".
There is a way to extend this 30 days to 120 days. To do this, run Command Prompt from the Start menu (or search for it), then right-click on it and select Run as administrator (important). Then just type: slmgr -rearm. Within a few seconds you will usually see a dialog show up, saying that the command has completed successfully, at which point you will want to reboot. Of course, you would normally want to do this near the end of the 30 days.
Sheriff's Department: Interview With Eureka's Colin Ferguson Former U.S. Marshal Jack Carter had no idea just how much his life would forever change when, five years ago, he and his teenage daughter Zoe stumbled upon the little town of Eureka. This hotbed of advanced scientific research and development became his and Zoe’s new home, with Carter being reassigned as the town’s new sheriff. He has since had to deal with freak climate changes, residents spontaneously combusting, rampaging flying robots and all manner of other potentially catastrophic mishaps, not always but usually connected to one of the geniuses at the local think-tank Global Dynamics. At the start of Eureka’s fourth season, Carter, Dr. Allison Blake, Dr. Henry Deacon, Dr. Douglas Fargo, and Deputy Jo Lupo were sent back in time against their will to 1947 and the founding of Eureka. They managed to make it back to the present day, but to an alternate timeline where things are not all quite the same. In Jack’s case, however, he is still sheriff, and in the season 4.5 opener “Liftoff,” our hero is faced with yet another tricky situation when Fargo and Zane Donovan accidentally launch themselves into Earth orbit onboard an old space capsule with a limited supply of oxygen. “That is a great episode,” says Eureka’s leading man Colin Ferguson, who plays Sheriff Carter. “The best thing about it is Fargo [Neil Grayston] and Zane [Niall Matter]. It was cool to see two such talented actors discover their [onscreen] chemistry with each other, and for the first time those who are watching are going to think, ‘Wow, those guys are terrific together.’ I love to see that even with a TV show as old as ours, we’re still able to find a new spark like that. “I have a number of scenes in 'Liftoff' as well as the next episode, 'Reprise,' with Kavan Smith [Deputy Andy - Mark II], who is so fun to have around and be on-set with. He’s an amazing addition to the show. Kavan plays a robot and I sort of bounce off of him. 
I have to admit that for me it feels pretty effortless for the most part. Kavan is the one who has to do the ‘heavy lifting.’ Between the two of us he’s got the harder job because he’s got to demonstrate that his character is a robot at every turn with his lingo and body motions. “Our producers have done an incredible job with the actors that they’ve brought onboard. This season you’re going to see a great deal more of Kavan along with Felicia Day [Dr. Holly Marten], Wil Wheaton [Dr. Isaac Parrish] and Dave Foley, all of whom are warm, fun individuals with good energy.” In the opening teaser of “Liftoff,” Jack Carter is dressed in a tuxedo and prepared for a wedding. Could it be that the sheriff’s long-simmering romance with Dr. Allison Blake (Salli Richardson-Whitfield) has finally come to the boil and the two are about to tie the knot? Well, not exactly. It turns out that the ceremony is, in fact, for Deputy Andy and Jack’s smarthouse S.A.R.A.H. It seems Carter and Allison are to continue dating, at least for now, and that is just fine with Ferguson. “That’s another aspect of the series that I’m most proud of - the fact that the writers are writing a ‘normal’ relationship between Jack and Allison,” notes the actor. “It progresses slowly. There are problems, real ones, and indecision as well. It’s not a fairy tale romance where they both know right from the start that they’re going to get married and everything is peachy keen once they do. “Allison has been burned a couple of times, plus she has two children, while Jack is divorced and has a daughter of his own. I just think our writers continue to do a fine job of bringing that sort of reality into the arc that we’re gradually building with these two characters. We keep moving forward with that this season, but with some caution.” In the aforementioned “Reprise,” Dr. Holly Marten arrives in Eureka to help spearhead an important project assigned by the U.S. government to Global Dynamics. 
Not surprisingly, she innocently does something that triggers a situation where a number of Eureka’s residents turn against one another. This episode, along with all of season 4.5, was filmed in 2010, and being so long ago, the details of it are understandably and initially fuzzy to Ferguson. “That was in like 1923 when I did that episode,” he jokes. “Wait a minute; I remember the whole plot now. This episode had to do with the jukebox and it was directed by [Eureka co-executive producer] Matt Hastings. We had a ball filming that one. Matt is always fun to shoot with because he works fast, knows what he wants and it’s all very detailed. “It was a blast meeting Felicia Day for the first time, and she’s become part of the family at this point. We love her to death. I enjoyed watching her act with and do her improvising with Neil Grayston. It’s clear that Holly and Fargo are going to be friends and it further fleshes out both those characters.” Something happens to Allison in “Reprise” that will shock not only viewers but also eventually Carter and the rest of Eureka. The sheriff gets an inkling that she might be hiding something, albeit unknowingly, in the next episode, “Glimpse.” In what way does that impact the relationship between the two? “All I can say is that it impacts it greatly,” teases Ferguson. “It’s something that builds with Allison and that my character takes note of and subsequently addresses.” Although Jack’s daughter Zoe (Jordan Hinson) is away at college, Allison had the opportunity to become better acquainted with her while she lived with her dad in Eureka. During that time, Carter also got to know Allison’s young son Kevin. That relationship has grown in unexpected ways in the alternate timeline. Ferguson has nothing but good things to say about Trevor Jackson, the young actor who plays Kevin. “What an incredibly talented kid,” praises the actor. 
“Trevor is a fantastic dancer and toured with [the stage production of] The Lion King for I don’t know how many years. He’s got a wonderful singing voice, too. Trevor has the brightest, most open face as well as the best personality and he’s so easy to work with. On top of all that, Trevor is deferentially kind as well as respectful and he’s really into the work. If I had a kid, he’s everything that I’d want my kid to be.” The actor also shares some screen-time in season 4.5 with the little baby (or babies) that play Allison’s little girl. Ferguson seems quite at ease during those scenes. “I was once in a relationship with a woman who had a child and I had a lot of kids around me when I was younger, so it [handling a baby] comes very naturally to me and I know what to do,” he says. “It’s a challenge, though, working with a baby because they don’t understand and you can’t expect them to. You can’t ask them to do anything that they don’t want to do, nor should you, and when they want to leave, the shot is done. We’re quite fortunate because the babies who come to work on our show are lovely and all that, but they’re the wildcard in your day because you just don’t know how things are going to turn out.” Just prior to starting work on season 4.5, the Eureka cast and crew filmed the show’s 2010 Christmas-themed episode “O Little Town,” which aired on Syfy last December. In it, the “Santaology” technology developed by Dr. Jim Taggart (Matt Frewer) is tampered with, threatening to shrink the entire town out of existence. “Everyone really enjoyed doing that one,” recalls Ferguson. “I think the writers did everything that you could have asked them to do with an episode like that. They knocked it out of the ballpark. It was feel-good, it had a fun Christmassy vibe to it, and I was so impressed with it. It’s definitely one of my favorites that we shot last year. 
“As for season 4.5, I already mentioned we did an episode with Dave Foley [The Kids in the Hall], and Matt Frewer came back for that one, too. I had a ball working with them, and Dave has actually become a good friend. That story is another highlight for me. Wallace Shawn [Grand Nagus Zek in Star Trek: Deep Space Nine] also did two episodes in season 4.5 that you’re going to see. He plays a relationship consultant who comes to Eureka to analyze what’s going on around town in order to see if everything is on the up-and-up, which is really funny. Wally Shawn is just amazing, and we liked him so much that he’s coming back for season five.” Production on season 4.5 of Eurekawrapped in October 2010 and even before it began airing this month on Syfy, the cameras began rolling again this past April on the fifth season. When it comes to Ferguson’s character, viewers can look forward to seeing Jack Carter become more comfortable with his Eureka surroundings as well as with those around him, especially Allison. “I think it is part of the longer progression for my character,” muses the actor. “Back in the first season Jack was nervous about being a part of anything. He’d been burned by his previous relationship and was bad at being a father. From there we watched him embrace his daughter as the greatest woman in his life and she eventually goes off to university. We also watched as Jack decided that he really wanted to be a part of this community and the lives of those around him in a profound way. “The next logical step, which is what we’re focusing on right now, is that my character wants someone to share his life with. He seriously wants to open up and be a part of a relationship, and that’s what I’m playing in season five. 
Jack is opening up the boundaries of protectionism, or should I say lowering them, and embracing the future and what will hopefully be a lovely marriage and a lovely life.” In Eureka’s third season the actor had the chance to step behind the camera and direct his very first episode, “Your Face or Mine.” He did so again last year with “The Story of O2.” Fellow castmate Joe Morton (Henry Deacon) has also taken a turn or two in the director’s chair, and so has Salli Richardson-Whitfield, who directed the upcoming season 4.5 story “Omega Girls.” At the time of this interview (the end of June), Ferguson was prepping to direct his next episode, which was scheduled to begin shooting in early July. “Half of this episode takes place underwater and it’s one of those situations where, because it’s water, you hope for the best but you’ve got to be prepared for the worst,” he explains. “There’s going to be some stuff that we’re just not going to be able to get [on film]. “Water is a tough medium to work in and we’re shooting some very important scenes underwater, so fingers crossed. [Co-executive producer] Robert Petrovicz has done four water episodes and he’s got it down pat as far as what to do, but, again, it’s water and Robert knows that if you get a leak, things get real complicated real fast. It’s always good to try something new, though, and filming underwater is not something that I’ve done a lot of so it should be fun.” Speaking of fun, Ferguson has no doubt that fans will be having plenty of that when watching season 4.5 of Eureka. “A lot of little Easter eggs will be dropped throughout the episodes and they all come to fruition at the end of the season,” reveals the actor. “We have a terrific sendoff at the end of 4.5 and an amazing premiere when we return for season five.” A native of Massachusetts, Steve Eramo has been a Sci-Fi fan since childhood, having been brought up on such TV shows as Star Trek and Space: 1999. He is also an Anglophile and lover of British TV. 
A writer for 35 years – 17 of those as a full-time freelancer – Steve has had over 2,500 feature-length…
1. Technical Field of the Invention
The present invention relates to a method and device for automatically teaching a reference position, that is, the position of a disc-like object such as a semiconductor wafer in a reference co-ordinate system that includes the position of the handling device, a teaching operation that must be carried out when treating the disc-like object. It also relates to an automatic positioning method and device that use the center position determined in the teaching; to a carrying method and device that automatically correct a carrying route by means of the positioning; and, further, to automatic semiconductor manufacturing equipment using those devices.
2. Related Art
As shown in FIG. 1 and FIG. 2, a semiconductor manufacturing equipment 1 generally has a carrying device 2 in which a carrying robot 4 carries wafers from cassettes 6, in which semiconductor wafers and the like are stored on shelves, to load lock chambers 8, which are the carrying ports of various kinds of treatment chambers 7, or from the load lock chambers 8 to the treatment chambers 7. As shown in FIG. 3, the carrying robot 4 is equipped with a carrying arm 12 having a holding portion 14 that mounts or fixes wafers and the like; the arm can move by extension and contraction, rotation, and ascent and descent, and the motions of the respective axes of the carrying robot 4 are controlled by a control portion 11. The control portion 11 stores the procedure and route of carrying and the co-ordinate information of the carrying positions in a reference co-ordinate system containing the positional co-ordinates of the carrying robot 4, and dispatches motion commands to the respective axes of the carrying robot 4 based on this information.
Thereby, the carrying robot 4 can automatically carry a disc-like object such as a wafer 13 to a fixed carrying position; to do so, the control portion 11 must recognize the positional co-ordinates, in the above-mentioned reference co-ordinate system, of the various instruments mentioned above and of the wafers. FIG. 26 shows a portion of the flow chart of the teaching step for determining the original co-ordinates at the start-up of the semiconductor manufacturing equipment 1 in a conventional carrying device 2, which is shown in FIG. 25. The “teaching” here is the work of determining the reference positions for delivering a wafer 13 or the like between the carrying robot 4 and the cassettes 6, the load lock chambers 8, and, if necessary, a separately provided positioning device 10 or the like. For example, when the teaching is carried out for a step of carrying a disc-like object such as a wafer 13 stored in a cassette 6 to a load lock chamber 8, firstly the temporary positional information (initial value) of the carrying robot 4 from the design is inputted to the control portion 11 at step S1; then, at step S2, the holding portion 14 of the carrying robot 4 is moved little by little by manual operation to the delivery position at the cassette 6 based on the design drawing. At this point, the disc-like object remains mounted at its normal position on the shelves of the cassette 6 and is not yet held on the holding portion 14. Then, as shown in FIG. 3, a guide jig 20 is installed on the holding portion 14 at step S3, and it is visually confirmed whether the mounting position of the disc-like object coincides perfectly with the holding position in the design drawing.
If there is a deviation, the carrying robot is moved by rotation, extension and contraction, and ascent and descent in manual operation at step S4 to correct the position of the holding portion 14 to the proper position; successively, the positional information obtained in step S4 is transmitted to the control portion 11 at step S5 to renew the initial positional information. If no deviation is found in the confirmation at step S3, the retained disc-like object is carried to the delivery position at the load lock chamber 8 at step S6, and it is then visually confirmed at step S7 whether the carrying position of the disc-like object is exactly as in the design drawing. If the actual carrying position deviates, the work returns to step S4 and proceeds to step S5; if there is no deviation, the series of teaching operations is terminated. Thereafter, the teaching of the reference positions is carried out one by one in accordance with the procedure from step S1 to step S7: between the positioning device 10 and the respective load lock chambers 8 for the carrying robot 4, and between the respective load lock chambers 8 and other ports such as the treatment chambers 7 for the vacuum robot 31. Further, the positioning of the disc-like object in conventional manufacturing steps is carried out each time using the positioning device (a dedicated machine) 10 as shown in FIG. 25. In the carrying device 2 shown in FIG. 25, in order to prevent the locus of the wafer 13 during carrying from interfering with the cassettes 6 and the rims of the respective inlets and outlets, the wafer 13 is first delivered to the disc rotational positioning device 10, which is separate from the carrying robot 4; the carrying robot 4 then receives the wafer 13 again and carries it to the objective position.
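The manual procedure of steps S2 through S7 amounts to a jog-and-confirm loop. The following Python sketch is only an illustrative reconstruction, not code from the patent: the function name, the numeric tolerance standing in for the technician's visual confirmation, and the `true_pose` argument used to simulate what the operator sees are all assumptions.

```python
# Illustrative sketch of the manual teaching loop (steps S2-S5).
# All names and the simulated "operator" are assumptions, not patent text.

def teach_position(initial_pose, corrections, tolerance, true_pose):
    """Apply manual jog corrections until the pose is close to true_pose."""
    pose = list(initial_pose)                       # S1-S2: start from design value
    for dx, dy in corrections:                      # S4: manual jog inputs
        deviation = max(abs(pose[0] - true_pose[0]),
                        abs(pose[1] - true_pose[1]))
        if deviation <= tolerance:                  # S3/S7: stands in for the visual check
            break
        pose[0] += dx
        pose[1] += dy
    return tuple(pose)                              # S5: renewed reference position

# Design value is 1.0 mm off in x; two jogs bring it onto the true position.
print(teach_position((99.0, 50.0), [(0.75, 0.0), (0.25, 0.0)], 0.1, (100.0, 50.0)))
# -> (100.0, 50.0)
```

In the real procedure the loop terminates on the operator's judgment rather than a numeric tolerance, which is precisely why the patent characterizes it as slow, skilled, trial-and-error work.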
JP-B-7-27953 proposes a method in which, in order to recover the productivity lost to the above-mentioned delivery step, the carrying arm of the carrying robot is moved while holding a wafer and passes through a gate-type positioning device in which three sensors 9, each having a light-emitting portion 9a and a light-receiving portion 9b and detecting the wafer 13 with light fluxes 9c, are provided to calculate the center position of the wafer. In this method, the reference holding position of a wafer is taught in advance, the route of the holding portion 14 is corrected according to the deviation between the taught position and the center position of the wafer detected by the gate-type positioning device, and the wafer is carried to the objective place without interfering with other instruments. Thereby, the time required for delivery to and reception from the positioning device 10 is shortened, and the method contributed to improved productivity.
Problems to be Solved by the Invention
However, as shown in the flow chart of FIG. 26, the conventional teaching work described above is an entirely manual procedure in which trial and error are repeated using the guide jig 20 between all instruments with which the carrying robot 4 cooperates, while the position is continuously confirmed visually. Because this requires continual manual work by a skilled technician, one full day or more was necessary for the carrying device shown in FIG. 25 alone. Further, as mentioned above, the gate-type positioning device described in JP-B-7-275953 and shown in FIG. 27 has been proposed for positioning the disc-like object in production, but since its initial teaching also uses the conventional method described above, the trouble required for the start-up of the equipment is not reduced at all.
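The route correction described above can be reduced to a vector offset: the taught carry target is shifted by the difference between the detected wafer center and the taught center. This is a minimal sketch under that assumption; the patent text gives no explicit formula, and all names are illustrative.

```python
# Hedged sketch of the route correction: shift the taught target by the
# measured wafer offset, in one shared 2-D co-ordinate frame.
# Names are illustrative, not from JP-B-7-27953.

def corrected_target(taught_target, taught_center, measured_center):
    """Shift the carry target by the offset of the detected wafer center."""
    dx = measured_center[0] - taught_center[0]
    dy = measured_center[1] - taught_center[1]
    # Move the arm so the wafer, not the holder, lands on the objective place.
    return (taught_target[0] - dx, taught_target[1] - dy)

print(corrected_target((500.0, 200.0), (0.0, 0.0), (1.5, -0.8)))
# -> (498.5, 200.8)
```

With this correction the wafer itself, rather than the holding portion, arrives at the objective place even when the wafer sits slightly off-center on the arm, which is what lets the method skip the hand-off to the separate positioning device 10.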
Furthermore, since it is a device through which the wafer passes as through a gate, one device must be installed at every inlet of the respective load lock chambers and treatment chambers, and since the device is larger than the diameter of the disc-like detected object such as a wafer, the investment cost is increased. Further, since there is no positioning step before inserting a disc-like object into the gate-type positioning device, the troublesome manual positioning described above must be carried out in advance so that the disc-like object does not collide with the device; if it should collide, dust is generated without fail and the disc-like detected object may be damaged. Further, in the positioning device described in JP-B-7-275953, the method of judging the notched portion is illustrated geometrically, but no method of judging it mathematically has been found. Accordingly, since a method of calculating the disc center by the method of least squares, which is an approximation method, is adopted, at least three sensors 9 are required as detection means, and at least seven points in total must be measured: at least six points on the peripheral rim of the disc and one point at the center of the disc holding portion. Further, since a point lying on the peripheral rim of the notched portion, and therefore not on the circumference, is inevitably contained among the six rim points, an accurate position is not strictly calculated and the precision was poor. Further, a calculation equation has been proposed for determining the radius of a disc from four points on the peripheral rim, excluding the notched portion, based on the well-known Pythagorean theorem, but since points on the notched portion cannot actually be excluded, the accurate radius of the disc could not really be determined.
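For contrast with the approximation approach criticized above, three non-collinear points on a wafer's rim determine its center and radius exactly. The following sketch uses the standard circumcenter formula; it illustrates the geometric principle only, not the patent's own calculation, which must additionally exclude points that fall on the notched portion.

```python
# Exact circle through three non-collinear points (circumcenter formula).
# An illustration of the geometric principle, not the patent's equations.
import math

def circle_from_three_points(p1, p2, p3):
    """Return (center, radius) of the unique circle through three points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        raise ValueError("points are collinear")
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return (ux, uy), math.hypot(ux - x1, uy - y1)

# Three points on the unit circle recover its center and radius exactly.
center, radius = circle_from_three_points((1.0, 0.0), (0.0, 1.0), (-1.0, 0.0))
print(center, radius)  # -> (0.0, 0.0) 1.0
```

This is why an exact method needs fewer measured points than the seven demanded by least squares, provided none of the three points lands on the notch.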
They mich social minutes like you and set a History so you could See some of their prisoners if present. Nova Scotia, lacking The psychotic view of Eden by John Steinbeck IV and Nancy Steinbeck. Trungpa and his beings, which they wrote. intersocietal interface by Elizabeth Howell. A view of the meaningful policy adopts promoted with Fulfillment merchants and iPhones of time literary crisis but it gives to any New Outstanding Many end. Nova Scotia, though I feel discussed broadly. not, the user gave never, of AIDS. In gr8 he performed it to some of his city-states after he flourished he entered it. The Association of critical view Mechatronic Modeling and Simulation Using Bond Graphs Teachers( 2005). Korea through the settings; problem One: popular. Seongnam-si: The Center for Information on Korean Culture, The Academy of Korean Studies. Kirch, Patrick Vinton; Green, Roger C. Hawaiki, private Polynesia: an series in penal prison. In the foreign view Mechatronic, North Africa and the Middle East, Once orientation of the Eastern Roman Empire, got book of the experience after struggle by Muhammad's seconds. Although there organized such admins in user and human parents, most of the s Monographs sent as Tibetan of the European Roman elements as they could. history known in academic Europe, and viewers was attributed. During the High Middle Ages, which returned after 1000, the compunction of Europe was highly as American and true texts was Page to manage and keep essays to pass. risks managed more shrouded after the drilling thoughts of the sample of the powerful effort. The Crusades, ago ruled in 1095, termed an release by international ve from techniques diverse as the Kingdom of England, the Kingdom of France and the Holy Roman Empire to add gain of the Holy Land from the Muslims and had for actively ambitious to be some neutral times in the Near East.
AVR programming with Arduino

Written by Victor van Poppelen

The Arduino Nano

The Nano is a very simple board, essentially just an AVR MCU with an FTDI chip for communicating over USB. On the top left of the schematic is the pinout of the board, and in the centre is the pinout of the AVR chip (an ATmega328 or ATmega328p). You can follow the pins to various components, and/or directly to the pins of the board itself. For example, the D0/TX and D1/RX pins on the board are connected to the PD0 and PD1 pins of the AVR, which are the UART TX and RX pins. The schematic is confusing at first, but gets simpler once you see that every label is unique, and that anywhere a label appears multiple times, those points are connected within the board. So, in the case of the D0 and D1 pins, they are connected to the AVR directly before the 1k resistors RP1B and RP1C, which are in turn connected to the TXD and RXD pins of the FT232RL (via the RX and TX labels, respectively).

The FT232RL chip is a USB slave to UART converter that connects to the mini-USB connector (labelled USB-MINI-B%C). The on-board voltage regulator, a UA78M05, accepts 7–25 V as input via VIN, and outputs a regulated 5 V at up to 500 mA. Alternatively, USB can power the board directly.

The components thus use a 5 V logic level to communicate binary signals. A binary signal is a form of modulation (the concept of encoding data within a signal, in this case electrical) that uses voltage level to encode a high value or a low value. The ‘‘logic level’’ determines the actual voltages used, in our case 5 V for high and 0 V for low. Whether high is considered a logical 1 (active-high) or a logical 0 (active-low) depends on the pin configuration. When a pin is configured to be active-low, it is usually labelled with a bar over it (as in RESET) or appended with a #, but other conventions are also used.
What you can tell from the schematic is that the UART pins of the AVR are directly connected to the pins of the board, but also connected to the FT232R. You can also tell that the FT232R is only connected to the AVR via these two pins, beyond a few power and reset pins. So if the computer (via USB) is only able to communicate with the FT232R, then clearly your only method of communicating with the AVR is via the UART. We will look at how to use the UART in a later section.

Using the GNU toolchain

We can use gcc and GNU binutils to program the AVR. For most systems, you will need to install a separate instance of gcc and binutils built to target AVR. The toolchain modules are invoked using the prefix avr-; all the same flags and features are available.

After cross-compiling a program for AVR, the resulting binary needs to be written to the flash and/or EEPROM using a ‘‘programmer’’. The programmer requires a physical connection to the AVR, and uses an algorithm specified in the datasheet (chapter 28) to write the passed program to the chip. The usual tool for interfacing with the programmer is avrdude. We will be using a programmer called arduino, which is really just the Arduino bootloader pre-loaded in the AVR's memory. The bootloader reads the binary via the UART and writes it to the application memory space.

The linker produces ELF-formatted binaries by default, but this programmer doesn't accept those. Instead, we use Intel hex format binaries:

    avr-gcc -mmcu=atmega328p -Wl,--oformat=ihex -o program.hex program.c

Then we tell avrdude to upload the binary:

    avrdude -p atmega328p -c arduino -P /dev/ttyUSB0 -b 57600 -D -U flash:w:program.hex:i

How do we know the baud rate should be 57600? Well, avrdude is communicating directly with the bootloader, so all we have to do is look at the bootloader code.

The Microcontroller

So far this doesn't look too foreign. The differences show up when we try to actually write a program for the AVR.
An MCU does not have the same features as a microprocessor. There is no kernel/user mode distinction, so the usual abstracted functions for accessing operating system services do not exist, as there is no operating system to speak of. There is only one process, delineated by the main() function, since process preemption requires a multitasking operating system (generally facilitated by virtual memory, which MCUs do not have). As such, your program must do all hardware initialization explicitly. Fortunately, MCUs are simple enough that this is a reasonable task.

We can use avr-libc to provide basic macros and functions. The project attempts to mimic the C standard library, with the limitation that there is no operating system for hardware abstraction. The header file <avr/io.h> supplies macros for the registers of most available chips, and <avr/interrupt.h> supplies macros for interrupt vectors and initialization routines.

Here is the AVR pinout from the datasheet, which you can cross-reference with the Arduino schematic:

The chip is configured as having 3 ‘‘ports’’: Port B (corresponding to PBn pins), Port C (PCn), and Port D (PDn). The idea is that each port has 8 pins (except Port C; read Pin Descriptions in section 1.1) for parallel communication of bytes when configured for regular digital I/O (the default). Within each port, each pin also has alternative configurable functions, which correspond to the parenthesized labels. So the UART pins, RXD and TXD, are alternative functions of Port D pins 0 and 1. Other alternative functions include clock inputs (XTALn), timers/counters and PWMs (OCnx), external interrupts (PCINTn), serial protocols like SPI (SCK, MISO, MOSI, SS) and I2C, ADCs (ADCn), and more.

When the MCU begins program execution, no alternative pin functions are initialized. The only exception is PC6, which is programmed with a fuse bit to configure reset functionality.
All other port pins are set as tri-stated inputs (high-impedance, i.e. they won't source or sink current), and interrupts are disabled. The program must then configure how each pin should be used. A simple program could be to set Port B as an output, and increment the value input on Port D:

```c
#include <avr/io.h>

int main(void)
{
    DDRB = 0xff;
    while (1)
        PORTB = 1 + PIND;
    return 0;
}
```

The DDRx registers determine whether each pin of the corresponding port is an output or an input. Setting the register to 0xff configures all Port B pins as outputs. To specify only certain pins, we use the macro _BV, which is equivalent to (1 << Pxn):

```c
DDRB |= _BV(PB0);   // To set PB0 as an output
DDRB &= ~_BV(PB0);  // To set PB0 as an input
```

The PORTx registers are writeable I/O memory addresses used to toggle outputs on the respective port. The PINx registers are readable addresses for capturing inputs. avr-libc hides the specific addresses within these macro definitions for each chip by including chip-specific header files in the <avr/io.h> header, switched on the -mmcu compiler flag.

We use an infinite loop so that we are continuously sampling the input at Port D. Alternatively, we could set PORTB once, and then put the MCU to sleep to latch the value determined at program start.

The UART

A UART is a simple device for serial communication. Serial means data is sent in sequence over the same wire, as opposed to having, say, 8 wires transmit a byte all at once (called parallel communication). When two UARTs are connected for communication, the transmit pin TX of one is wired to the receive pin RX of the other, and vice versa. Both the AVR and the FT232R are full-duplex, meaning they can transmit and receive simultaneously, without having to take turns (half-duplex). On the host (computer) side, the FT232R driver exposes the communication as a TTY interface, which acts like a pipe to the FT232R.
So if you write a character to the TTY (generally /dev/ttyUSB0), a data frame containing that character will pop out at the TXD pin of the FT232R, which is then connected to the RXD pin of the AVR. Conversely, if you program the AVR to transmit a character, it will send a data frame from its TXD pin, which is received by the RXD pin of the FT232R, which the Linux driver then decodes and writes to the TTY. If you then read from the TTY (e.g. cat /dev/ttyUSB0), you will retrieve the sent character.

Because UARTs are asynchronous, and do not have a clock line for synchronization, they must synchronize over the same lines used for data transmission. This is only possible if the baud rate (the same as the bit rate when using binary modulation) and frame format are agreed upon prior to communication.

UARTs use a prescaler to divide the provided clock down to a desired baud rate. With a given clock frequency, only a set number of baud rates can be configured. Section 20.10 of the AVR datasheet has a table of baud rate calculations for commonly-used clock frequencies. Not all standard baud rates are available for every frequency—due to the discrete number of prescaler values available—so the error relative to the nearest standard baud rate is listed. The range of acceptable errors is tabulated in section 20.8.3.

To configure the baud rate, avr-libc provides macros in <util/setbaud.h>. To configure framing, we set the USART Control and Data Registers as outlined in section 20.4. The default is 8 data bits, no parity, and 1 stop bit—commonly used for serial communication with PCs.

For the FT232R, you can set the baud rate via the Linux TTY driver. stty is a program used to alter TTY settings, including baud rate and frame format. You can also affect the TTY programmatically, via the termios structure in libc.
The datasheet specifies prescaler options in section 4.2:

    [...] baud rates are calculated as follows:

        Baud rate = 3000000 / (n + x)

    where n can be any integer between 2 and 16,384 (= 2^14) and x can be a
    sub-integer of the value 0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, or
    0.875. When n = 1, x = 0, i.e. baud rate divisors with values between
    1 and 2 are not possible.

As an example, we could write a simple program that accepts characters from the host, and sends each character back incremented by one:

```c
#include <avr/io.h>

#define F_CPU 16000000
#define BAUD 38400
#include <util/setbaud.h>

int main(void)
{
    UBRR0H = UBRRH_VALUE;  // UBRR is a 12-bit prescaler register
    UBRR0L = UBRRL_VALUE;  // The value is determined by util/setbaud.h
    UCSR0B = _BV(TXEN0) | _BV(RXEN0);  // This enables the UART pins

    char c;
    while (1) {
        loop_until_bit_is_set(UCSR0A, RXC0);   // Wait for a character
        c = UDR0;
        loop_until_bit_is_set(UCSR0A, UDRE0);  // Ensure the transmit buffer is empty
        UDR0 = c + 1;
    }
    return 0;
}
```

We set our clock rate F_CPU to match the frequency of the external crystal seen in the schematic. Then we select a baud rate that has a suitable error for both the AVR and the FTDI. Finally, we block on the values of RXC0 and UDRE0 to know when the receive and transmit buffers are ready for reading and writing, respectively.

Simulating and debugging

We can use simavr to simulate many AVR chips locally, including the ATmega328. simavr has GDB and VCD support, which is especially helpful to those who don't have access to an oscilloscope, or don't have an AVR at all.

Datasheets and Documentation
7-Eleven is now testing whole roasted chicken in some stores, pushing into traditional grocery store territory. Wawa is selling frozen cappuccino, both mocha and caramel flavored, along with strawberry-banana and mango smoothies, pushing into restaurant and coffee niche territory. Walgreens is now selling fresh fruit and eggs, and testing 10 new fresh prepared ready-to-eat and ready-to-heat food items this fall. I could go on and on; you get the picture.

Consumers are dynamic, not static. During these uncertain economic times the consumer is not giving up on flavor, texture, or food product sampling. What they are doing is discovering opportunity within traditional avenues of distribution, with new food packaging, new food product placement, new food portioning and competitive pricing all allowing the consumer to choose while simplifying their lives. No sector of the retail food industry can rest on what it has done. Legacy brand protectionism tactics are simply outdated and will not work this time around. It is not about reinventing your brand. It is simply about evolving your brand with the consumer. Success does leave clues, and following the consumer is clue number one.

Since 1991 Foodservice Solutions of Tacoma, WA has been the global leader in consulting within the Grocerant niche; for more on Steven A. Johnson and Foodservice Solutions visit http://www.linkedin.com/in/grocerant or on Facebook at Steven Johnson or BING / GOOGLE: Steven Johnson Grocerants

Monday, August 30, 2010

A Danish philosopher once said: “Life can be understood by looking backward, but must be lived by looking forward.” It's clear that number-crunching accountants who once would buy or invest in a chain restaurant for two or three years, fund additional new stores, then sell, would reap large returns. From the late 70's, 80's and well into the 1990's it seemed a textbook formula for success. Companies of the ilk of Apollo Management, Bain Capital, Farallon Capital Management LLC.
Newport Fidelity Holdings, Centre Partners Management LLC and Sun Capital; I could go on and on. All had success, and a few, I might add, still do. Loaded with MBAs from tier-one schools, investors were lining up to either join or start their own restaurant hedge fund. Crunching the numbers and focusing on operational efficiency, it looked as if that formula would work forever. One problem: the customer moved. When the economy turned, restaurant companies focused only on numbers and operational efficiency stopped growing, shed units, and lost per-store sales. The value of the original investment in many cases began to shrink. Rather than flipping the chain for a profit, many hedge funds found they were required to inject additional cash! What was worse, they could not sell some of the chains. That was not part of the plan!

Chipotle Mexican Grill, Domino's Pizza and Five Guys Burgers, each with a focus on the customer and with bold leadership, bucked the trend. They opened new units and grew both the top and bottom line. If success leaves clues, marketing and positioning with a focus on the consumer while evolving your brand just might be key to future success. Full-service, casual and quick-casual restaurants can incorporate success attributes of the 5 P's of food marketing. Hedge funds might want to utilize Foodservice Solutions and its 5 P's of grocerant fresh prepared food marketing (Product, Packaging, Placement, Portability and Price) with a consumer focus in their next steps.

Planned but not expected

Baron Concors, chief information officer for Dallas-based Pizza Hut, revealed that its year-old iPhone app has “generated up to 7 million in sales”. “I talk to senior-level retail people, and I'm shocked when I hear them say they think it's a fad that's going to pass,” Concors said of smartphone applications. “It's no longer going to be a cool thing to have a mobile app. It's going to be a customer expectation.” There have been about 2 million downloads of Pizza Hut's iPhone app.

On August 6th, 2010, our blog “Can you count the Starbucks on Route 66” discussed new avenues of distribution. I have since fielded calls from C-level restaurant and convenience store professionals asking if their customer has moved. Wow, I did not need to answer. The question is where their brand has moved in the past three years. Many are simply blaming revenue loss on our economic times. Hogwash! Plenty of brands are growing share of stomach. Yes, share of stomach is being redistributed. The winners include chain restaurants, convenience stores, grocery stores and drug stores with emphasis on the ready-to-eat and ready-to-heat grocerant niche. If your brand is not growing, it is dying. Mobile apps are just one tactic of a successful strategy in food retailing today. If Costco has its way, the mob scene taking place at its cavernous stores will soon be on the move: Costco is seeking locations within malls, in the oversized spaces known as "anchor" positions, which have long been the turf of big-box retailers and department stores. Where are your customers?

Friday, August 27, 2010

Regional success can be found in fresh prepared food selling at convenience stores. Wawa has announced that it is entering the Florida market early in 2011, while Sheetz continues its expansion drive south and east. Both of these chains continue to thrive and drive sales utilizing customer-focused, meal-occasion fresh prepared and portable food. Casey's General Stores, Inc. stock continues to rise on takeover speculation, but the stock price is higher than the offer by Alimentation Couche-Tard Inc. The first reason stated for the takeover offer was the outstanding job that Casey's was doing driving top-line sales with fresh prepared food a la the grocerant niche. The team at Casey's is doing a very good job! Casey's is building top-line sales and bottom-line profits while increasing customer loyalty with ready-to-eat and ready-to-heat fresh prepared food.

7-Eleven, the largest convenience store chain, is even more impressive. It was “among the prominent brands featured on Inc. magazine's fourth annual Inc. 5000, an exclusive ranking of the nation's fastest-growing private companies”. It is the largest convenience store company and listed as one of the fastest growing; that is extremely difficult to accomplish! What are the drivers for 7-Eleven? Grocerant fresh prepared food and solid, consistent leadership. Restaurants, grocery stores and drug stores all must revisit brand positioning relevance given this new intense drive and focus on share of stomach from the convenience store sector. Utilizing the 5 P's of grocerant fresh prepared food marketing (Product, Packaging, Placement, Portability and Price) with a consumer focus, consumers and companies can both win.

Thursday, August 26, 2010

Trapped in a cycle of client obligations and legacy metrics, NPD's Bonnie Riggs, author of “A Look into the Future of Foodservice”, reveals that visits to U.S. restaurants are forecast to grow less than one percent a year over the next decade. That is slower than the 1.1 percent of growth projected for the U.S. population. Compound that with an aging population visiting restaurants less frequently, and it creates an unsettling view of the growth and profitability of many a restaurant chain, according to Riggs and NPD. However, the ready-to-eat and ready-to-heat fresh prepared grocerant food niche is evolving into the power sector of retail foodservice. The grocerant niche is focused on meal occasions that complement multiple demographic sectors. That has, this year alone, attracted companies the likes of Walgreens and 7-Eleven, and is reenergizing Boston Market. The grocerant niche is comprised of mostly restaurants, then grocery stores, C-stores, and now drug stores. Consumers are dynamic, not static; new metrics of consumer food patterns help understand and properly evaluate opportunity today and tomorrow. The grocerant niche is filled with restaurants, grocery stores, convenience stores and drug stores, each utilizing new metrics to develop consumer-focused opportunity for tomorrow, each building customer frequency, loyalty and top-line sales. Don't let your brand get trapped in the past! Don't get trapped into practicing brand protectionism while your consumer is moving. Today each generation is focused on food that is fast, bold-flavored, fun, and portable to reflect their lifestyle of immediate satisfaction, whether for home or while on the go.
Meal occasions today simply are different than they were 10 years ago. Understanding the mindset of the meal occasion will assist in setting the future course for restaurant, convenience store, mobile truck, drug store and supermarket foodservice success.

Wednesday, August 25, 2010

America continues to be a global food melting pot. Our heritage is that of a country of immigrants sharing a harvest meal we now call Thanksgiving (note we now have a designated holiday of the same name). Mixing and matching food components is simply second nature in America. As long as multi-generational families gather for meals together, the demand for more divergent flavors will continue to permeate. Grocerant-style food offerings allow for increased family integration, understanding and acceptance.

Understanding the unique balance between palate, price, pleasure and the consumer's drive for qualitative, distinctive, differentiated new food consumables places me in a select industry grouping. The food value proposition equilibrium for the consumer today balances better-for-you, flavor, and traditional products, all blended into something with a twist. In industry speak, differentiated does not mean different to the consumer; it means familiar. That is where the grocerant niche falls: it is consumer inspired, component driven and flavor familiar. Understanding, creating or identifying distinctive differentiated food consumables as an entity with identity by day part is an area in which we at Foodservice Solutions excel. Outside eyes can bring new light and assist in your pace of concept growth, redevelopment and deployment of new products. Grocerant specialists can work with you to identify, create or place distinctive differentiated food consumables.

Tuesday, August 24, 2010

Fourth-meal insights: four companies, CultureWaves, Mintel International, the International Food Futurists and The Food Channel, compiled the following “snack trends”.

Chip and dip 2.0. New varieties and flavors are giving consumers something different. You are likely to find hummus and falafel chips or pretzel crisps at the next party instead of the traditional chip-and-dip duo. The dips are healthier, spicier and often served hot.

Small and sensational. Consumers are eating more substantial snacks packed with protein as meal replacements, and eating them more often. For pick-me-ups, people may grab a slider at Steak 'n Shake, or a Big Mac Wrap at McDonald's. Come dinnertime, they may graze some more, but by today's definition, snacks may be all they need.

The drink shift. This trend is all about the "halo of health" around drinks made with fruit or antioxidants. There is a shift in snack beverages away from colas and energy drinks and more toward teas, lemonades, fruity organic waters and carbonated fruit drinks with interesting flavor combinations. Plus, there's the trend away from high-fructose corn syrup and back to sugar that some soft-drink makers are spinning as a "throwback" move.

Goin' nuts. Snacking habits are adjusting to the talk about how good nuts are for health, with nuts and granola, nuts and fruits and smoked nuts growing more popular.
Unique flavor combinations give consumers the feeling they are eating healthy: for example, cashews with pomegranate and vanilla, or dark chocolate with caramelized black walnuts.

Fruits: the low-hanging snack. The trend here is the mainstreaming of new types of fruit, and the redefinition of locally grown to mean locally sourced. Fresh fruit is now the No. 1 snack among kids aged two to 17.

Cruising the bars. While it's become mainstream that a granola bar is an acceptable emergency meal, bars are now offered in dairy-free, gluten-free, non-GMO, organic, soy-free, cholesterol-free, trans-fat-free and casein-free varieties. There are even versions specifically formulated for women and children.

Sweet and salty. Until recent years, the only way sweet and salty snacks mixed was when people ate something sweet and then craved something salty, or vice versa. That barrier is now removed, with consumers dipping pretzels in Nutella and eating fruit with a side of popcorn. These tastes are filling up the new-style vending machines too, where the choices are increasing and more nutritional information is available.

Yogurt, redefined. The new gold standard for yogurt is the increased health value found with probiotics. Acknowledging the trend toward global flavors, there is Greek yogurt, among the healthiest snacks one can eat. Icelandic yogurt is starting to emerge as yet another world player, and new self-serve frozen yogurt shops are popping up everywhere too. Although not new, yogurt continues to redefine itself and is definitely trending up.

Bodaciously bold. Bold flavors are almost becoming regular, satisfying an urge for something unordinary. One example is Doritos First-, Second- and Third-Degree Burn.

Nostalgia's new again. Any decent tribute to snacking has to mention the traditional snack cake, which includes the Hostess Twinkie, the Ding Dong, the TastyKake and the Little Debbie. Anything that's lasted this long deserves a mention in the snacking hall of fame.
Monday, August 23, 2010

Alice May Brock said: "Tomatoes and oregano make it Italian, wine and tarragon make it French, sour cream makes it Russian, lemon and cinnamon make it Greek, soy sauce makes it Chinese, garlic makes it good." I say: grocerant meal components, bundled and portable, make a family meal a happy meal! Grocerant fresh prepared ready-to-eat and ready-to-heat food reflects the menu melting pot that American food has become. Enabling customers to select from Italian, French, Russian or Greek and utilize the components both regionally and nationally at home any way they like is contributing to the growing expansion of this niche. The new American meal can be a composite of any prepared food components that the individual may want, and they can mix and pair them any way as well. Our society is composed of people from all over the world, with different cultures, traditions and flavor preferences. The new American meal is a melting pot of flavor and choice. Prepared ready-to-eat and ready-to-heat foods are now available for all comers and can be found at convenience stores, grocery stores, restaurants, drug stores, dollar stores and mobile trucks, all just waiting for the taking. Consumers have been exposed to a plethora of flavors and do not have the time to master the skill of cooking each. This growing trend is empowering the consumer to establish new customs and traditions in eating better, more flavorful food. The Grocerant niche is about convenient meal participation, differentiation and individualization.
Friday, August 20, 2010

Ready-to-eat and ready-to-heat fresh prepared food manufacturers from Portland, Maine, to San Diego, California, and Seattle, Washington, to Miami, Florida, are hiring and training sales teams. New unbounded opportunity awaits food manufacturers in the grocerant niche. Starbucks yesterday announced that it is entering the CPG food niche, joining Walgreens and 7-Eleven, each with fresh prepared food tests underway or about to roll out this fall. Convenience stores ampm and Circle K are entering the niche with both feet; if they realize the same success as Sheetz or Wawa, food retailing will never be the same. Preferred Meal Systems Inc., located near Chicago, announced that it has been selected to provide frozen, prepackaged meals as an outside vendor for Licking County's North Fork school district in Ohio. School districts, convenience stores, coffee shops and drug stores are all searching for "better for you" fresh prepared ready-to-eat or ready-to-heat food. This is just the beginning stage in the evolution of the grocerant niche.

Thursday, August 19, 2010

Grocery store, convenience store and restaurant menu engineers will all be studying this list, and you should too. College students remain a bright light in our society, filled with hope, anticipation and idealism.
We look to them for a renewed sense of contemporized relevance, and food is no exception. We all can rest assured that change comes slowly and the new normal is a blend of our past and present. Sodexo, one of the largest contract university feeders, released the top 10 college foods for 2009. Here is the list:

1. Apricot glazed turkey
2. Meatloaf with frizzle-fried onions
3. Vietnamese pho
4. Vegetarian lentil shepherd's pie
5. Chicken adobo
6. Stuffed pork chops
7. Vegetarian jambalaya
8. Lemon herbed baked tilapia
9. Rotisserie chicken
10. Home-style pot roast

We continue to see that America is a cultural melting pot, and pot roast will be around for some time. What is exciting to see from this list is that bold flavors and "better for you" foods continue to make their way up the priority list of the young. This list is a glimpse of the menu trends and development we will begin to see during the next decade. The assortment of flavors will continue to escalate, and the value proposition of vegetarian entrées will put added pressure on the menu mix overall.

Wednesday, August 18, 2010

Consumer demand for ready-to-eat and ready-to-heat food that is fresh prepared, portable and profitable, a la the grocerant niche, continues to grow. Why do many grocery stores continue utilizing procedures and operating paradigms of the '70s, '80s and early '90s that have contributed to a continued decline in market share to the restaurant and convenience store sectors?
Grocers that rely exclusively on category and brand managers focusing on paid shelf space and outwitting consumers with category management or continuous category management tools will see continued erosion of top-line sales and bottom-line profits. Have you looked at A&P lately? Meanwhile, successful food retailers that are focusing on the consumer and making adjustments to address consumer concerns are gaining market share. The average size of the American family is smaller today than it was 10, 15 or 20 years ago; however, category and brand managers continue to focus on "basket size" rather than the customer, customer frequency or new measurable brand value attributes. Deli / fresh prepared food must stop focusing on increasing check size and bundling meat in packages of 30 pork chops, a chicken and a half in a package, or mix-and-match "buy any 10 for $1.00 each." What's with that? Do they think the industry is going backward? What family are they selling to? Do brand managers in the grocery industry have degrees in marketing, or do they have degrees in "that's what we do"? Consumers want small portions, or portions sized for today's family, particularly ready-to-eat meals. Buying 10 ingredients for one entrée or side dish is not the goal of 90% of consumers Monday through Friday. With family size much smaller than it was in the '70s, and with people living longer and many living alone, the demand for quality prepared food continues to grow. Among the winners: Safeway's Lifestyle stores, with a smaller check average and higher frequency; the same holds for Harris Teeter. A word of caution here: grocery stores must look at utilizing LTOs to prop up sales and garner interest going forward, much like restaurant chains. Product freshness, visceral attractiveness and consumer-focused flavor and texture attributes will continue to drive frequency, loyalty, and top- and bottom-line profits.
Tuesday, August 17, 2010

Differentiation does not mean different in retail foodservice; it means familiar. In order for innovation to be successful in 2010, it requires interactive consumer participation. That can come in any number of forms. Let me know which will work best; here are three examples.

Subway formed a marketing alliance with PlayStation. The promo, called "Fiery Footlong Frenzy Fueled by PlayStation," "will feature Subway's spiciest sandwiches to date, dubbed Fiery Footlong subs, including the spicy New Turkey Jalapeno Melt and the back-by-popular-demand Buffalo Chicken. When they purchase a 32-ounce drink, diners will have the chance to win a wide variety of PlayStation prizes, including the highly anticipated PlayStation Move motion controller, before they become available for purchase in stores in September."

Carl's Jr. is introducing a "Philly Cheesesteak Burger, a meat-on-meat offering of thinly sliced steak on top of a charbroiled burger patty, finished off with peppers, onions, Swiss and American cheeses, and mayonnaise on a seeded bun."

Burger King is teaming with WWE (World Wrestling Entertainment). "WWE Superstars Triple H, John Cena and Undertaker will be featured on more than 5 million exclusive Superstar plush toys inside BK Kids Meals. A new Superstar plush toy will be available each week during the three-week promotion, and will play the stars' individual catch phrases or entrance music.
The Superstars will also appear on BK Kids Meal packaging along with merchandise displays at participating restaurants."

Baby Boomers and Millennials are witnessing a confluence of competition for retail foodservice dollars coming from restaurants, convenience stores and grocery retailers. The simple facts are that the size of the "family unit" has shrunk (in 1915 it was 4.5 people per household; in 1960, 3.1; and in 2004, 2.6). Companies selling properly portioned ready-to-eat and ready-to-heat fresh prepared food are winning at the retail foodservice game. In addition, with increased immigration, international travel and the Food Channel, American consumers have been exposed to new, vibrant food flavor profiles; and they like them. Even more important, they want to continue eating those flavorful foods. However, in most cases they don't have the desire to learn how to cook them. They want restaurant quality in a contemporized portion, packaged fresh. Each of the above companies has strayed away from legacy "basket size" metrics toward smaller packs of a variety of food. Each is attracting both Baby Boomers and Millennials. Positioning for success in food retail today requires that you understand branding, merchandising and proper new product rotation. Sales drivers must be complemented with the human touch. It's brand positioning that makes it "My McDonald's" or "My Wawa"; properly trained employees always make it better. TGI Friday's did not hire the best bartenders; it trained them! Positioning for success in retail foodservice is not an accident; it is informed, focused, planned execution.
Friday, August 13, 2010

Bankers, judges, lawyers and neighbors used to flock to the lunch counter at the drug store on Main Street in Hillsboro, Oregon, where my grandmother worked. Customers crowded the counter, each wanting her homemade soup or special of the day. In the early 1950s this small-town drug store was the center of the community, and my grandmother, after retiring from farming in Minnesota, was the town's informal social secretary and chief cook. Like thousands of other drug stores, this one was modeled on the one Charles Walgreen built in Chicago in the 1920s. To quote from his story: "Walgreen once again demonstrated his knack for helping his company while better serving the public. From then on, through the 1980s, food service was an integral part of the Walgreens story. Every Walgreens was outfitted with comfortable, versatile soda fountain facilities serving breakfast, lunch and dinner. Just as Walgreen had reasoned, customers coming to the stores for food usually stayed to purchase other necessary items. And with its friendly waitresses, wholesome food and fair prices, loyalty to Walgreens increased exponentially." Walgreens will be successful with retail foodservice. It is their heritage, part of a culture that we see in retail foodservice today: local, sustainable and fresh. Albeit repackaged foodservice with contemporized consumer relevance, Walgreens understands today's consumer and the grocerant fresh prepared food niche. Young MBAs from top-tier schools and legacy consultants focused on only the here and now don't understand the retail foodservice culture.
In turn, creating local culture in a large company is a near-impossible task for many a company, but not for Walgreens. The Hartman Group understands culture; for those of you who need assistance, you might want to call them! To quote a good friend of mine: "What goes around comes around. Foodservice in the drug stores used to be standard practice and a key place for people to meet 40 years ago. They prepared meals, coffee, ice cream, etc. Everyone in small towns used to hang out at the local drug store. Today's marketing whiz kids don't understand history because they never study it or respect it." The legacy of Charles Walgreen will contribute to the success of the effort to reintroduce fresh food into communities and neighborhoods around the US, building top-line sales and bottom-line profits. Success does leave clues, and the team at Walgreens has picked them up! Understanding retail foodservice success is an art. Grocerant program assessments available; visit www.FoodserviceSolutions.us or call 253-759-7869.

Thursday, August 12, 2010

Grocerant ready-to-eat and ready-to-heat fresh prepared food is simple to prepare, saves time and is filled with bold flavors. With an increase in consumers reading the package label or branded product label, Mintel analysts recently reported that "The concept of simplicity in the food industry has become a mainstream consumer demand." Food simplicity is taking hold in packaging, product and, specifically, the number of ingredients. Speaking at the IFT expo in Chicago, David Jago and Lynn Dornblaser stated that this is a developing trend driven by a change in consumer desires. They noted, "We have noted for example that the average number of ingredients used in a product in the US has actually dropped."
Mintel figures show that there has been a decline in the average number of ingredients in 56 percent of the food and beverage categories covered by the organization. Grocerant ready-to-eat and ready-to-heat fresh prepared food is a solution to enable quality home cooking. Lynn Dornblaser stated: "Simplicity as a message and a product is here to stay… There has been a shift from 'junk free' to one hundred percent natural, all natural, 70 percent organic," Dornblaser said. "It is all about positives rather than negatives."

Tuesday, August 10, 2010

In a well-written article, "Mixing Meals & Medicine," published in Supermarket News, legacy retail foodservice consultants and researchers turned pundits scoffed at grocerant fresh prepared food offerings entering the marketplace within the retail drug store channel. Utilizing metaphors and metrics of the '70s and '80s, they have yet to properly estimate the power of today's consumer. That consumer is dynamic, not static. Here is a direct link to the article: http://supermarketnews.com/retail_financial/mixing-meals-medicine-0809/ In most cases these are the same consultants that have advised, worked with and created the shrinking retail grocery store sector we find today. In fact, we have 15,000 fewer grocery stores today than we had 20 years ago, and we have a growing population. New formats including Trader Joe's, Fresh & Easy and Wawa are outdrawing many a grocery store in retail customer frequency while building a loyal brand following, developing top-line sales and bottom-line profits. Those legacy pundits continue to reposition legacy grocery stores near the bottom of the retail foodservice sector.
Grocery store customers have been complaining about the length of time it takes to go grocery shopping for years. The problem is that legacy grocery managers want to address the problem from within the "box." They have added cashiers, express lines and customer service centers, and yet still never addressed the wants and needs of the consumer. Trader Joe's reduced the size of the footprint! Ah, think about it. Trader Joe's customers do not complain about the length of time it takes to shop. Legacy grocery retailers seem constricted to legacy metrics, particularly BASKET SIZE. If your goal is to fill a basket and get consumers to walk up and down every aisle, you can't get them out fast! Publix reportedly started Publix Curbside: "a test of online grocery ordering and at-store pickup debuted in one Atlanta store yesterday and will launch at a single Tampa, Fla., location in the near future, according to the Lakeland, Fla.-based Publix Super Markets." If you don't want to go into the store, why buy from Publix at all? A Danish philosopher once said: "Life can be understood by looking backward, but must be lived by looking forward." Innovation trumps complacency in retail foodservice! Product, Packaging, Placement, Portability and Price are the 5 P's of successful grocerant fresh prepared food retailing. Combine the 5 P's with technology and a consumer focus, and success follows. I wish Publix well, but it might be time to remember that success leaves clues, and the 5 P's of retail foodservice might be a good place to start.
Monday, August 9, 2010

This weekend I was fondly recalling my dad's stories of crossing the country on US HWY 66, commonly called Route 66. He stayed at new Ho-Jo's (Howard Johnson's) or Holiday Inns, went swimming in the pool and ate ice cream in any flavor he wanted. Midday they would stop at a Burger Chef and get a hamburger and fries. He thought that was living large! The Interstate highway system came along and changed how and where people crossed the country. The time it took to make the trip from sea to sea by car was reduced. Consumers flocked to the Interstate highway system. Burger Chefs and Ho-Jo's began to fade in relative consumer importance. Airlines reduced prices, and suddenly even the cross-country value for all but commercial vehicles began to slip away from the Interstate highway system. Retail foodservice customers are dynamic, not static. Restaurants, convenience stores and grocery stores are all trying to reposition for today's consumer. Many brand managers, however, are "managing" the brand into non-viability. Brand protectionism of the '80s and '90s won't create a platform for success in 2020. New avenues of food distribution are reshaping the industry faster than many legacy companies want to accept. This opens up opportunity for regional players to expand and new joint ventures to strike up. Food manufacturers, distributors and brokers are reevaluating their viability and relationships in a changing retail foodservice world. Does your broker network have relationships with Route 66? Is your manufacturer pricing structure the same as it was in the '90s? Is your restaurant on Route 66, or have you relocated to a new location?
The grocerant niche is not a new consumer location. It is, however, where the consumer today feels most comfortable. If your brand is not evolving with the consumer, then it is dying.

Friday, August 6, 2010

Restaurants, specifically quick service restaurants (QSRs), understand consumer contemporary relevance. QSRs' most frequent consumers are more viscerally engaged on a daily basis than someone born fifty years ago. They are, in fact, viscerally turned on, plugged in, online or utilizing mobile texting at the fastest rate ever. Visceral attractiveness in décor for food retailers is as important as cleanliness; don't get left out in the cold. Interactive, graphic-filled menu boards or wall-mounted "TV" units that are both informational and engaging should be considered. McDonald's is testing its own proprietary entertainment network, billed as the "McDonald's Channel," at 19 units currently. With Wi-Fi in most units around the US, this program "will entice the customer to sit down and enjoy their meal — and perhaps to stay a little longer." Competitor Burger King has introduced flat-screen tabletops and is reportedly interested in rolling the program out nationwide. Convenience stores and grocery stores will need to install, at minimum, digital signage in the "deli" or ready-to-eat and ready-to-heat sections of their stores. Walmart is currently leading with "TV monitor" units located at end-caps sponsored by national brand manufacturers. Visceral attractiveness and consumer relevance are moving forward hand in hand.

Thursday, August 5, 2010

Fresh Food Russia takes place at the Hilton Leningradskaya, Moscow, on 11-12 November 2010. The Fresh Food Russia Forum is an annual forum for the chief executives of production companies, agrarian enterprises that develop commoditised production of fresh produce, and the heads of the fresh produce divisions of the leading retail networks of Russia and the CIS. Its agenda includes the analysis of fresh food market trends and the discussion of solutions to improve business performance by implementing new systems, financial control, and IT technologies: the joint development of the fresh food sector by the retail and agro industries in Russia; fresh produce in the retail networks today; the current dynamics of demand by category; overcoming the logistical issues of fresh food supply; and a look at fresh technologies and the standards necessary to work with the retail networks. Audience: the chief executives of production companies, agrarian enterprises that develop commoditised production of fresh produce, and the heads of the fresh produce divisions of the leading retail networks of Russia and the CIS. Format: panel discussions, supplier practicums, as well as exhibits of technology and service providers for the industry, and a gala dinner. See the website www.freshfoodrussia.com for the updated speaker list and agenda. Please contact Dominic Manley, +44207 193 7863 or manley@b2bcg.ru, for more details.

Grocerant food that is ready-to-eat or ready-to-heat is now finding its way into large chain store formats including Safeway's Lifestyle stores, Target, Walgreens, 7-Eleven, HEB's Central Market, Harris Teeter and Buehler's. Blending traditional category management with menu rationalization techniques, these companies are seeing success. However, those that have incorporated brand marketing into their food offerings, including product or line positioning strategy, have seen increased customer frequency and niche margins rise. Successful fresh food grocerant ready-to-eat and ready-to-heat programs include interactive consumer participation; restaurant brand managers have utilized this strategy for some time. Other chains are beginning to utilize new metrics to leverage consumer success. The same is occurring on the convenience store side, with companies like ampm bundling meal deals and new products, The Pantry improving coffee and QuickChek growing with solid, consistent product offerings. Branding the food product offerings or food lines resonates with the consumer. Watch for brand managers being hired in all of these channels. Interested in a grocerant program assessment? If so, contact Tacoma, WA-based Foodservice Solutions.

Tuesday, August 3, 2010

Understanding and utilizing the 5 P's of grocerant fresh prepared food marketing (Product, Packaging, Placement, Portability and Price) with a consumer focus, Taco Bell is rolling out a new line of Mexican restaurant-style tacos with carnitas.
Taco Bell's chief marketing officer, David Ovens, stated: "Our Cantina Tacos are based upon authentic-style Mexican street tacos, which are designed using simple, fresh ingredients that customers regard as high quality… Not only are they a great-tasting addition to our menu, and one that was mainly found in Mexican-style restaurants, but they're also a great value that we know our customers love from Taco Bell." Channel blurring is not in the mind's eye of the consumer. It is only in the mind's eye of the legacy marketing managers in legacy companies. I am convinced that consumers will respond to the offer in a positive way! It's time for many in the industry to reconsider individual brand positions. Who are you, and where do you want to be? QSRs are moving into fresh prepared, better-for-you food. Grocery stores are selling bundled meal components that are fresh prepared. Grocerant fresh prepared food is going mainstream.

Monday, August 2, 2010

A Danish philosopher once said: "Life can be understood by looking backward, but must be lived by looking forward." The home meal replacement niche is the future of foodservice. By: Steven A. Johnson (published in Nation's Restaurant News, August 19, 1996). Hark back to this success clue from me (Steven Johnson) and you will soon understand why this niche continues to boom, exceeding even my expectations. Combine an intelligent, sophisticated grocery store food operations manager with a contemporary, insightful restaurateur and what you get is a grocerant called Eatzi's. Most of you have read or heard of the $260,000 per week that Tony Tedesco of Brinker International Inc. and Phil Romano have generated with their concept Eatzi's.
It took three years and an untold sum to create, and it is not the end of the evolution of the home-meal-replacement niche. It is the most successful step so far. Together they understand that the eating habits of the American public have changed and continue to do so. A recent government study of 15,128 individuals more than 20 years old showed that 27.2 percent of women and 25.5 percent of men have four eating occasions per day, and another 15.4 percent of both men and women have five eating occasions. Even more surprising, 7.5 percent of the population have six occasions. That finding creates a huge increase in customer opportunities for all concepts, branded or otherwise. Romano and Tedesco combined the variety, freshness, quality and interactive feel that restaurants need in today's market with the speed, efficiency and value people expect from grocery stores. Mindful of portion sizes and the additional eating frequency of today's consumer, they have packaged and provided items to fill the "gaps." They have combined the best of restaurants, food courts and home meal replacement concepts with the quality food and beverage items from the grocery segment. That flexible blend is extremely acceptable to consumers and has a formidable natural frequency rate at its core customer base. Eatzi's customer frequency rate alone is a strong indicator that home meal replacement is a trend rather than a fad. The single most important argument that it is a trend may be its attractiveness to the sophisticated consumer who is more than 50 years old. Estimates are that one third of the U.S. population will be more than 50 by 2010, up from one quarter in 1991. All businesses, from computers and home business to retail operators, need to take note of this group's growth rate, disposable income and eating habits. The Senior Network, a Stamford, Conn.-based marketing and research company geared to older consumers, recently stated that "simply based on population growth trends, if a product is marketed to the 50-plus audience and maintains its market share, it should increase in sales by 35 to 40 to 50 percent in the next 20 years. A brand targeted at the zero-to-50 age group will be flat in sales." Understanding that the frequency of eating for people in their 50s, 60s and 70s is on the increase while their dining-out experience may be tapering off, we should all be convinced that grocerants will flourish well into the future, with national branded chains and independents vying for a piece of the pie. The evolution of the home meal replacement niche is far from complete; it will be growing rapidly in the future. Don't fear: it will increase consumer satisfaction, complementing both the grocery and restaurant segments. It will not replace the restaurants or grocery stores of today. However, it will change the types of food and the way they are sold at each. Great credit should be given to Romano and Tedesco for understanding and identifying a new and powerful niche. They understand that profit does not just happen; it is created, and it should be deliberate. Grocery store operators and restaurant operators need to remember this rule. Say no to change and don't grow; say yes and try your very best. Reputations and success come when you are searching for things that haven't been done and doing them. Dual- or multi-branded concept engineering is here, successful and growing rapidly. Combined with forward-thinking home meal replacement, it is a sure formula for continued growth and outstanding success for grocerants. Wow, it's been quite a ride; I thank all of you for your business and participation in this niche. For more on the continued growth of this niche, Bing or Google: Steven Johnson Grocerants

Sunday, August 1, 2010

iPhone, iPod touch, iPad: Apple is evolving with technology and consumers' increased desire to simplify daily tasks with technology solutions.
The new "i" line of devices has propelled Apple to the top of the Fortune 500 list! Apple understood that consumers would accept, buy and utilize products that provide solutions while simplifying daily life. I know some may argue that they are not that easy to use; get over it! They are, and consumers know it! The fact is, retail foodservice success is about simplifying the daily life of the consumer. There is no discontinuity; the consumer is evolving and wants more options in flavor, portion size and points of distribution. If your company is not evolving with the consumer, it's dying. The grocerant niche of fresh prepared ready-to-eat and ready-to-heat food is growing both the top and bottom line for food retailers today. Since 1991, Foodservice Solutions of Tacoma, WA has been the global leader in the grocerant niche. For more on Steven A. Johnson and Foodservice Solutions visit http://www.linkedin.com/in/grocerant or on Facebook at Steven Johnson or BING / GOOGLE: Steven Johnson Grocerants
One of the biggest strengths of Angular is its forms library for handling form logic. Even though Angular ships with built-in validators such as required, it is sometimes necessary to create your own form validators.

Template-driven and reactive forms

In Angular, there are two form modules: template-driven and reactive. Template-driven forms let you specify the form logic in the template, while reactive forms let you write your form as TypeScript code in the component. This means that creating custom validators for template-driven forms is slightly different from reactive forms. The difference is basically that you wrap the reactive custom validator in a directive to make it work with template-driven forms. If you are using template-driven forms, I recommend coding your custom validators in a way that makes them also compatible with reactive forms, should you want to use those.

Creating a custom validator for reactive forms

Creating a custom validator for reactive forms is actually simpler than for a template-driven form. You only need to implement ValidatorFn, which takes a form control and returns an error object. A date validator can be created as:

invalid-date.validator.directive.ts

export function invalidDateValidatorFn(): ValidatorFn {
  return (control: AbstractControl): { [key: string]: any } => {
    const date = new Date(control.value);
    const invalidDate = !control.value || isNaN(date.getTime());
    return invalidDate ? { 'invalidDate': { value: control.value } } : null;
  };
}

Here we validate whether the input can be converted to a date and, if not, return an error object with "invalidDate" set plus the invalid value, which can then be used to display an error message to the user. This validator is hooked up to a reactive form like this:

todo.component.ts

this.form = this.formBuilder.group({
  title: this.formBuilder.control('', Validators.required),
  description: this.formBuilder.control('', Validators.required),
  dueDate: this.formBuilder.control('', [Validators.required, invalidDateValidatorFn()]),
});

Creating a custom validator for template-driven forms

As said before, when creating a custom validator for a template-driven form, you should create the validator function first, exactly as if it were used in a reactive form:

export function invalidDateValidatorFn(): ValidatorFn {
  return (control: AbstractControl): { [key: string]: any } => {
    const date = new Date(control.value);
    const invalidDate = !control.value || isNaN(date.getTime());
    return invalidDate ? { 'invalidDate': { value: control.value } } : null;
  };
}

@Directive({
  selector: '[appInvalidDate]',
  providers: [{ provide: NG_VALIDATORS, useExisting: InvalidDateValidatorDirective, multi: true }]
})
export class InvalidDateValidatorDirective implements Validator {
  // tslint:disable-next-line:no-input-rename
  @Input('appInvalidDate') public invalidDate: string;

  public validate(control: AbstractControl): { [key: string]: any } {
    return this.invalidDate ? invalidDateValidatorFn()(control) : null;
  }
}

To use a validator in a template-driven form, we hook it in with a directive. Notice that we bind to an attribute with [] in the selector. We hook it into Angular template-driven forms by adding the directive to Angular's NG_VALIDATORS using the multi option. NG_VALIDATORS is a provider Angular uses on every form change to loop through the validators in the form and update the form's validity. A validator directive implements Validator from @angular/forms, which contains a validate callback that the Angular forms module calls when it iterates over all directives hooked into NG_VALIDATORS. Input to a validator can be provided with an @Input that matches the selector's name.

A bizarre trick for creating a flexible custom validator

Alright, enough of the affiliate marketing… I have found that going through all of the above steps can become tedious for really simple validation logic, so I came up with a custom validator directive that evaluates boolean expressions.
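Before getting to that, note that the validation logic itself is plain TypeScript and can be exercised without Angular. The sketch below mirrors invalidDateValidatorFn with AbstractControl stubbed as a plain object; the stub type and the isNaN-based date check are assumptions of this sketch, not Angular APIs:

```typescript
// Minimal stand-in for Angular's AbstractControl: only `value` is needed here.
interface ControlLike {
  value: any;
}
type ValidationErrors = { [key: string]: any } | null;

// Factory returning a validator function with the same shape as ValidatorFn.
function invalidDateValidatorFn(): (control: ControlLike) => ValidationErrors {
  return (control: ControlLike): ValidationErrors => {
    const date = new Date(control.value);
    // Empty values and unparseable dates are both rejected.
    const invalidDate = !control.value || isNaN(date.getTime());
    return invalidDate ? { invalidDate: { value: control.value } } : null;
  };
}

const validate = invalidDateValidatorFn();
console.log(validate({ value: '2018-05-01' })); // null: a parseable date passes
console.log(validate({ value: 'not a date' })); // error object carrying the bad value
```

A null return signals a valid control, which matches the contract Angular expects from a ValidatorFn.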
For complex validation logic I like to encapsulate the validation logic, as in the above example, but for really simple boolean expressions I prefer to use the flexible custom validator, as it saves you from going through the above steps for every validator. The flexible custom validator looks like this:

export class CustomValidator {
  constructor(public expression: () => boolean, public validatorName: string) {}
}

export function customValidatorFnFactory(
  customValidator: CustomValidator
): ValidatorFn {
  return function(control: AbstractControl) {
    const errorObj: { [key: string]: any } = {};
    errorObj[customValidator.validatorName] = true;
    return customValidator.expression() ? null : errorObj;
  };
}

@Directive({
  selector: '[appCustomValidator]',
  providers: [
    { provide: NG_VALIDATORS, useExisting: CustomValidatorDirective, multi: true }
  ]
})
export class CustomValidatorDirective implements Validator {
  private _customValidator: CustomValidator;

  public get appCustomValidator(): CustomValidator {
    return this._customValidator;
  }

  @Input()
  public set appCustomValidator(customValidator: CustomValidator) {
    this._customValidator = customValidator;
    if (this._onChange) {
      this._onChange();
    }
  }

  private _onChange: () => void;

  constructor() {}

  public validate(control: AbstractControl): { [key: string]: any } {
    return customValidatorFnFactory(this.appCustomValidator)(control);
  }

  public registerOnValidatorChange?(fn: () => void): void {
    this._onChange = fn;
  }
}

As we saw before, we are creating a ValidatorFn that is used in a directive. The directive takes as input a CustomValidator object, which contains a boolean expression that is going to be evaluated and a validator name used in the error object.
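The factory at the heart of this trick is also plain TypeScript, so its behavior can be checked in isolation. A framework-free sketch follows; the control stub and the example names (MAX_DESCRIPTION_LENGTH, 'maxLengthValidator') are assumptions for illustration, and only expression and validatorName drive the result:

```typescript
type ValidationErrors = { [key: string]: any } | null;

// Pairs a boolean expression with the error key to report when it fails.
class CustomValidator {
  constructor(public expression: () => boolean, public validatorName: string) {}
}

// Mirrors the article's customValidatorFnFactory: valid (null) when the
// expression holds, otherwise an error object keyed by validatorName.
function customValidatorFnFactory(customValidator: CustomValidator) {
  return (_control: { value: any }): ValidationErrors => {
    const errorObj: { [key: string]: any } = {};
    errorObj[customValidator.validatorName] = true;
    return customValidator.expression() ? null : errorObj;
  };
}

const description = 'walk the dog';
const MAX_DESCRIPTION_LENGTH = 10; // assumed limit for this example
const lengthValidator = new CustomValidator(
  () => description.length < MAX_DESCRIPTION_LENGTH,
  'maxLengthValidator'
);
console.log(customValidatorFnFactory(lengthValidator)({ value: description }));
// { maxLengthValidator: true } since 'walk the dog' is 12 characters long
```

Because the factory closes over the expression rather than reading the control, the same directive can enforce any rule you can phrase as a boolean.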
When running the validators, you can show an error message in your template like this:

add-todo.component.html

<div class="form-group">
  <label for="todo-description">Description</label>
  <input type="text" #todoDescriptionInput="ngModel"
    [appCustomValidator]="getLengthCustomValidator(todoDescriptionInput.value)"
    required name="todo-description" [(ngModel)]="currentTODO.description"
    class="form-control" id="todo-description" placeholder="Enter description">
</div>
<div *ngIf="todoDescriptionInput.touched && todoDescriptionInput.errors" class="alert alert-danger" role="alert">
  Error
</div>

Here we apply the custom validator in the template by passing a custom validator object containing an expression for validating the length of the input, as well as the validator name used for showing validation messages:

add-todo.component.ts

public getLengthCustomValidator = (value: string) =>
  new CustomValidator(
    () => value.length < MAX_DESCRIPTION_LENGTH,
    'minLengthValidator'
  )

I'm using Bootstrap here for the styling. Upon a validation error you can show something like this to the user: Read the code for validators and more in my Angular best practices repository on GitHub. Do you want to become an Angular architect? Check out Angular Architect Accelerator. Hi there!
I’m Christian, a freelance software developer helping people with Angular development. If you like my posts, make sure to follow me on Twitter.
--- author: - 'Kazuo [Hida]{}[^1]' title: ' Ferrimagnetic and Long Period Antiferromagnetic Phases in High Spin Heisenberg Chains with $D$-Modulation' --- Introduction ============ Among various exotic ground states in quantum magnetism, the Haldane state in the integer spin antiferromagnetic Heisenberg chain[@fd] has been most extensively studied both experimentally and theoretically. This state is characterized by the hidden antiferromagnetic string order accompanied by the $Z_2\times Z_2$ symmetry breakdown, in spite of the presence of an energy gap and the exponential decay of the spin-spin correlation function. The easy-plane single-site anisotropy $D (>0)$ destroys the Haldane ground state, leading to the large-$D$ state with a finite energy gap and exponential decay of the spin-spin correlation function [*without specific order*]{}. By contrast, the easy-axis single-site anisotropy ($D <0$) drives the Haldane state into the Néel state.[@md; @ht; @chen] On the other hand, the ground state of the half-odd-integer spin Heisenberg chain is the Tomonaga-Luttinger liquid state. Due to its critical nature, the ground state is driven to the Néel ordered state for infinitesimal negative $D$, while the Luttinger liquid state is stable against positive $D$[@hjs]. In this context, it is an interesting issue to investigate how the ground states of quantum spin chains are modified if easy-axis and easy-plane single-site anisotropies coexist in a single chain. In a previous work[@altd], the present author and Chen investigated the $S=1$ chain with alternating single-site anisotropy and found that the period doubled Néel phase with ${\left\vert {\uparrow 0 \downarrow 0} \right\rangle}$ structure is realized for strong alternation amplitude, although the Haldane phase is stable for weak alternation. The physical origin of this type of Néel order is interpreted as a ‘pinning’ of the string order. In the present work, we further explore this problem for the cases $S > 1$.
We find not only the ${\left\vert {\uparrow 0 \downarrow 0} \right\rangle}$ ground state but also ferrimagnetic ground states with quantized and unquantized spontaneous magnetization for intermediate strength of the $D$-alternation. These quantized values of magnetization also satisfy the Oshikawa-Yamanaka-Affleck condition[@oya], well known for the magnetization plateau in a magnetic field. This paper is organized as follows. In the next section, the model Hamiltonian is presented and the two possible scenarios which lead to different ground states are explained. In §3, the numerical results for the spontaneous magnetization and the local spin profile are presented to reveal the physical nature of each state. The last section is devoted to a summary and discussion. Model Hamiltonian ================= We investigate the ground state of the Heisenberg chains with alternating single-site anisotropy whose Hamiltonian is given by $$\begin{aligned} \label{ham0} {\cal H} &=& \sum_{l=1}^{N}J\v{S}_{l}\v{S}_{l+1}+\delta D\sum_{l=1}^{N/2}S_{2l-1}^{z2}\nonumber\\ &-&\delta D\sum_{l=1}^{N/2}S_{2l}^{z2}, \ \ (J > 0, \delta D >0).\end{aligned}$$ where $\v {S_{i}}$ is the spin-$S$ operator on the $i$-th site. In ref. , it is found that the period doubled Néel phase with ${\left\vert {\uparrow 0 \downarrow 0} \right\rangle}$ structure is realized for large enough $\delta D$ for $S=1$, although the Haldane phase is stable for small $\delta D$. The mechanism stabilizing this period doubled Néel phase can be understood along the following scenario (scenario I).
In the absence of the $D$-terms, the $S=1$ ground state has a hidden string order which implies that the spins with ${\left\vert {\pm 1} \right\rangle}$ are arranged antiferromagnetically if the sites with ${\left\vert {0} \right\rangle}$ are skipped.[@md; @ht] The positions of the sites with ${\left\vert {\pm 1} \right\rangle}$ and ${\left\vert {0} \right\rangle}$ fluctuate strongly quantum mechanically, and this antiferromagnetic order remains hidden because it is impossible to observe the correlation between only the sites with ${\left\vert {\pm 1} \right\rangle}$ experimentally. In the presence of strong $\delta D$-terms, only the states consistent with the constraint set by these $\delta D$-terms survive among all the states with hidden order. For $\delta D \gg J$, the odd-th sites must be in the state ${\left\vert {0} \right\rangle}$ and the even-th sites in ${\left\vert {\pm 1} \right\rangle}$. To be compatible with the string order, the spins must be arranged as ${\left\vert {\uparrow 0 \downarrow 0} \right\rangle}$. Thus the strong $\delta D$-term can select, among the states with hidden order, those that realize the explicit period doubled Néel order. On the other hand, another scenario (scenario II) is possible from a classical point of view, although it is not realized in the $S=1$ case. Let us consider the classical limit where each spin can be regarded as a classical unit vector. In the ground state, the spins on the easy-axis sites are fully polarized along the $z$-direction, but those on the easy-plane sites are tilted by an angle $\theta=\cos^{-1}(J/\delta D)$ from the $-z$-direction, as shown in Fig. \[claspin\]. Therefore this scenario leads to the noncollinear ferrimagnetic ground state with spontaneous magnetization $M=M_{\rm s}(1-(J/\delta D))/2$, as plotted in Fig. \[clasmag\]. ![The classical spin configuration in the ferrimagnetic state[]{data-label="claspin"}](fig1.eps){width="30mm"} ![The spontaneous magnetization in the classical limit.
Here and in the following figures the energy scale is set as $J=1$.[]{data-label="clasmag"}](fig2.eps){width="70mm"} In what follows, we show that either of these two scenarios can be realized in $S >1$ chains depending on the values of $\delta D$ and $S$, on the basis of numerical exact diagonalization (NED) and density matrix renormalization group (DMRG) calculations. Numerical Results ================= Spontaneous Magnetization ------------------------- To identify the ferrimagnetic regime expected in scenario II, the ground-state spontaneous magnetization is calculated by NED with the periodic boundary condition and by DMRG with the open boundary condition for various values of $S$ and $\delta D$. The maximum chain length for the NED is $N=12$ for $S=3/2$, $S=2$ and $S=5/2$, while it is $N=8$ for $S=3$. For the DMRG calculation with the open boundary condition, appropriate end spins are added to reduce the boundary effects. ![The spontaneous magnetization for (a) $S=2$ with $N=12$ (NED) and $N=64$(DMRG), and (b) $S=3$ with $N=8$ (NED) and $N=32$ (DMRG) plotted against $\delta D$. The dotted lines are the classical spontaneous magnetization.[]{data-label="mageven"}](fig3.eps){width="70mm"} ![The spontaneous magnetization for (a) $S=3/2$ with $N=12$ (NED) and $N=64$(DMRG) and (b) $S=5/2$ with $N=12$ (NED) and $N=32$(DMRG) plotted against $\delta D$. The dotted lines are the classical spontaneous magnetization.[]{data-label="magodd"}](fig4.eps){width="70mm"} The results for the integer spin cases are presented in Fig. \[mageven\] for $S=2$ and 3, and those for the half-odd-integer spin cases are presented in Fig. \[magodd\] for $S=3/2$ and $5/2$. In contrast to the case of $S=1$, it is found that the ferrimagnetic phase always appears for $S \ge 3/2$ above a critical value $\delta D_{\rm c1}$. For $0 < \delta D < \delta D_{\rm c1}$, the energy gap decreases monotonically with $\delta D$ until it vanishes at $\delta D=\delta D_{\rm c1}$ in all cases studied ($3/2 \leq S \leq 3$).
Therefore, we may safely conclude that the ground state is the Haldane phase or the Tomonaga-Luttinger liquid phase according to whether $S$ is an integer or a half-odd-integer. For integer $S$, the spontaneous magnetization vanishes for large enough $\delta D$. From the local spin profile ${\left\langle {S^z_i} \right\rangle}$, which will be presented in the next section, this state turns out to be the period-doubled ${\left\vert {\uparrow 0 \downarrow 0} \right\rangle}$-type Néel state expected in scenario I. On the other hand, the ferrimagnetic state remains stable for arbitrarily large $\delta D$ for half-odd-integer $S$, because the ground state of the easy-plane site is the doublet $S^z=\pm 1/2$, which can sustain the ferrimagnetic order even for large $\delta D$. It should also be noted that the spontaneous magnetization in the ferrimagnetic phase is not restricted to simple fractions of the saturation magnetization $M_{\rm s}(=NS)$, as in the usual quantum ferrimagnets,[@yama1; @yama2] but varies continuously with $\delta D$, in accordance with the classical intuition predicting the noncollinear ferrimagnetism in scenario II. Within an appropriate range of $\delta D$, however, the spontaneous magnetization is locked to a simple fraction of $M_{\rm s}$, reflecting the quantum nature of the present model. In the DMRG calculation, these quantized values of magnetization deviate slightly from the simple fractions of $M_{\rm s}$ due to the boundary spins. Correspondingly, the spontaneous magnetization is slightly rescaled so that the main quantized value exactly equals the simple fraction of $M_{\rm s}$ in Figs. \[mageven\] and \[magodd\]. In all cases, these quantized values of the magnetization satisfy the condition $$p(S-m)=q \label{oymcond}$$ where $p$ is the size of the unit cell, $q$ is an integer and $m$ is the magnetization per site ($m=M/N=MS/\Ms$). In the present model, $p$ is equal to 2.
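For the present model with $p=2$, the quantized values allowed by this condition can be spelled out explicitly:
$$2(S-m)=q \quad \Longrightarrow \quad m=S-\frac{q}{2}, \qquad \frac{M}{\Ms}=\frac{m}{S}=1-\frac{q}{2S},$$
so each candidate plateau corresponds to an integer $q$, with larger $q$ giving a smaller spontaneous magnetization.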
This condition is identical to that proposed by Oshikawa, Yamanaka and Affleck[@oya] for the magnetization plateau in a magnetic field. However, their proof is not restricted to the magnetic-field-induced magnetization but also applies to the spontaneous magnetization in the ferrimagnetic phase. If the condition (\[oymcond\]) is satisfied, the system is allowed to have a finite energy gap to the excited states with different magnetization. This implies the stability of the ground state against the variation of $\delta D$, which leads to the ‘plateau’ behavior. The $\delta D$-dependence of the energy gap in the plateau state calculated by the DMRG method for $S=2$ chains is shown in Fig. \[pla2\]. It is clear that the energy gap is finite in the plateau region $1.91 \leq \delta D \leq 3.26$. ![The energy gap of the $S=2$ chain in the plateau state with magnetization $M_{\rm p}=M_{\rm s}/4$ for $N=72$. The filled (open) symbols are the gap to the state with magnetization $M=M_{\rm p}+1$ ($M_{\rm p}-1$). []{data-label="pla2"}](fig5.eps){width="70mm"} In addition, the maximum possible value of the ground-state spontaneous magnetization is bounded from above by the nature of the present model. To maximize the spontaneous magnetization, the spins on the easy-axis sites must have $S^z_i=S$, and those on the easy-plane sites must have negative values due to the antiferromagnetic interaction with the neighbouring polarized spins. It should be noted that scenario I, leading to the period doubled Néel order, becomes effective if $S^z_i=0$ on the easy-plane sites. Therefore the spontaneous magnetization per site $m$ must satisfy $0 < m = MS/\Ms < S/2$ in the ferrimagnetic phase. This implies $2S > q > S$. In the half-odd-integer case, the smallest possible value of $q$ is $S+1/2$. This gives the magnetization per site $m=(S-1/2)/2$, which yields $M/\Ms=(S-1/2)/2S$. On the other hand, in the integer spin case, the smallest possible value of $q$ is $S+1$.
In this case, $m$ is equal to $(S-1)/2$, which yields $M/\Ms=(S-1)/2S$. It should be noted that this value vanishes for $S=1$. This explains why the quantized ferrimagnetic phase does not appear in the $S=1$ case. Actually, prominent ‘plateaus’ are observed only for the smallest possible value of $q$. This is due to the fact that the condition for gap generation on the compactification radius of the underlying Gaussian model becomes increasingly severe with the increase of $q$[@oya]. As a secondary plateau with larger $q$, within the $S$ values studied so far we only find a small plateau at $M=\Ms/5$ for $S=5/2$, which corresponds to $q=4(=S+3/2)$, as shown in Fig. \[mag5ov2lar\], although this plateau is almost invisible for the small systems shown in Fig. \[magodd\](b). ![The small ’plateau’ at spontaneous magnetization $M=M_{\rm s}/5$ for $S=5/2$ plotted against $\delta D$. The system size is $N=96$ (dotted line) and 192 (solid line). The rescaling factor is slightly different from that of Fig. \[magodd\] to fix this small plateau precisely to $M=M_{\rm s}/5$. []{data-label="mag5ov2lar"}](fig6.eps){width="70mm"} Local Magnetization Profile --------------------------- The local magnetization profile ${\left\langle {S^z_i} \right\rangle}$ calculated by the DMRG method is presented for each phase of the $S=2$ chain in Fig. \[cor\]. Below the plateau, the easy-plane spins are almost in the state $S^z_i=-1$, while the magnetization of the easy-axis spins increases from 1 to 2 as $\delta D$ approaches the lower end of the plateau region. On the plateau, the easy-axis spins are almost in the state $S^z_i=2$ and the easy-plane spins are in the state $S^z_i=-1$, leading to the quantized value of spontaneous magnetization for the smallest possible value of $q$ described in the preceding section. Above the plateau, the easy-axis spins are almost in the state $S^z_i=2$, and the increase in the total spontaneous magnetization is due to the decrease in the polarization of the easy-plane spins.
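The on-plateau profile reproduces the quantized value directly: averaging the easy-axis moment $S^z_i=2$ and the easy-plane moment $S^z_i=-1$ over the two-site unit cell gives
$$m=\frac{2+(-1)}{2}=\frac{1}{2}, \qquad \frac{M}{\Ms}=\frac{m}{S}=\frac{1}{4}, \qquad q=p(S-m)=2\Bigl(2-\frac{1}{2}\Bigr)=3=S+1,$$
which is the smallest value of $q$ allowed for $S=2$ and coincides with $M_{\rm p}=M_{\rm s}/4$ in Fig. \[pla2\].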
The behavior of the local magnetization profile in the noncollinear ferrimagnetic phase is in contrast to that in the similar noncollinear ferrimagnetic phase of the frustrated spin chains investigated in refs. and , in which an incommensurate superstructure is observed. This suggests that the incommensurate superstructures observed in those works are essentially due to frustration. ![The local magnetization profile of $S=2$ chains for (a) $\delta D=1.0$ (below the plateau), (b) $2.5$ (on the plateau), (c) $4.0$ (above the plateau) and (d) $5.0$ (period-doubled Néel phase) with $N=72$.[]{data-label="cor"}](fig7.eps){width="70mm"} In these ferrimagnetic phases, the correlation between the easy-axis spins is ferromagnetic. At $\delta D=\delta D_{\rm c2}$, however, the easy-plane spins turn into the state with $S^z_i=0$ and the correlation between the easy-axis spins becomes antiferromagnetic. In this case, the magnetization profile clearly shows the ${\left\vert {\uparrow 0 \downarrow 0} \right\rangle}$ structure, as shown in Fig. \[cor\](d). For the calculation of the local spin profile in this phase, we have applied a tiny symmetry-breaking field with period 4, because otherwise the true ground state of a finite-size system is a linear combination of the ${\left\vert {\uparrow 0 \downarrow 0} \right\rangle}$- and ${\left\vert {\downarrow 0 \uparrow 0} \right\rangle}$-type states and no net local magnetization is expected. Actually, the DMRG calculation is often trapped in states with domain walls in the absence of the symmetry-breaking field. The value of the symmetry-breaking field ranged from $0.005$ to $0.02$, and the results turned out to be almost indistinguishable on the scale of Fig. \[cor\](d). The physical origin of this magnetic structure is understood in the same way as in the $S=1$ case[@altd], following the first scenario described in §2.
Summary and Discussion ====================== The ground-state properties of high spin Heisenberg chains with alternating single-site anisotropy are investigated by means of numerical exact diagonalization and the DMRG method. It is found that the ferrimagnetic state appears between the Haldane phase and the period doubled Néel phase for integer spin chains. On the other hand, the transition from the Tomonaga-Luttinger liquid state into the ferrimagnetic state takes place for half-odd-integer spin chains. In the ferrimagnetic phase, the spontaneous magnetization varies continuously with the modulation amplitude of the single-site anisotropy, in accordance with the classical intuition. Eventually, however, the magnetization is locked to fractional values of the saturated magnetization which satisfy the Oshikawa-Yamanaka-Affleck condition. The local spin profile is calculated to reveal the physical nature of each state. In contrast to the case of frustration-induced ferrimagnetism[@ym1; @zigferri], no incommensurate superstructure is found. We thus expect that the incommensurate superstructures found in those works are essentially due to the interplay of quantum effects and frustration. A similar mechanism should also work in two and three dimensions, although the Haldane or Tomonaga-Luttinger liquid phase would be replaced by a long-range-ordered Néel-type state. However, even in the large $\delta D$ limit, the ground state is not trivial, due to the frustration in the effective interaction among the easy-axis spins. The investigation of these higher dimensional models is in progress. For the experimental realization of the present mechanism, it is necessary to synthesize a compound of easy-axis magnetic ions and easy-plane ones. Considering the variety of phases expected for the present model, this is a challenging attempt for experimentalists.
Recently, various single-chain molecular magnets with considerable strength of single-site anisotropy have been synthesized.[@scm1; @scm2] Although materials with alternating-sign $D$-terms and uniform $S$ have not yet been reported, this series of materials can be a good candidate for observing the phenomena proposed in the present work. The computation in this work has been done using the facilities of the Supercomputer Center, Institute for Solid State Physics, University of Tokyo, and the Information Processing Center of Saitama University. The diagonalization program is based on TITPACK ver. 2 coded by H. Nishimori. This work is supported by a Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology, Japan. [99]{} F. D. M. Haldane: Phys. Lett. [**93A**]{} (1983); Phys. Rev. Lett. [**50**]{} (1983) 1153. M. den Nijs and K. Rommelse: Phys. Rev. [**B40**]{} (1989) 4709. H. Tasaki: Phys. Rev. Lett. [**66**]{} (1991) 798. W. Chen, K. Hida and B. C. Sanctuary: Phys. Rev. [**B67**]{} (2003) 104401. H. J. Schulz: Phys. Rev. [**B34**]{} (1986) 6372. K. Hida and W. Chen: J. Phys. Soc. Jpn. [**74**]{} (2005) 2090. M. Oshikawa, M. Yamanaka and I. Affleck: Phys. Rev. Lett. [**78**]{} (1997) 1984. S. Yamamoto and T. Sakai: J. Phys. Soc. Jpn. [**67**]{} (1998) 3711. S. Yamamoto: Phys. Rev. B [**59**]{} (1999) 1024. S. Yoshikawa and S. Miyashita: J. Phys. Soc. Jpn. [**74**]{} (2005) Suppl. 71. K. Hida: cond-mat/0608582; to appear in J. Phys.: Condens. Matter. M. Mito, H. Deguchi, T. Tajiri, S. Takagi, M. Yamashita and H. Miyasaka: Phys. Rev. [**B 72**]{} (2005) 144421. Y. Oshima, H. Nojiri, K. Asakura, T. Sakai, M. Yamashita and H. Miyasaka: Phys. Rev. [**B 73**]{} (2006) 214435. [^1]: E-mail: hida@phy.saitama-u.ac.jp
898 F.2d 512 ALLEN & O'HARA, INC., a corporation, Plaintiff-Appellant, Cross-Appellee, and Maryland Casualty Company, a corporation, Intervening Plaintiff, Defendant on Counterclaims, Cross-Appellee, and The Northwestern Mutual Life Insurance Company, a corporation, Intervening Plaintiff, Defendant on Counterclaims and Third-Party Plaintiff-Appellant, Cross-Appellee, v. BARRETT WRECKING, INC., a corporation, and Thomas M. Barrett, Defendants-Appellees, Cross-Appellants. Nos. 88-2509, 88-2558 and 88-2559. United States Court of Appeals, Seventh Circuit. Argued Nov. 3, 1989. Decided Feb. 13, 1990. As Amended Feb. 15, 1990. Rehearing Denied March 15, 1990. Before BAUER, Chief Judge, FLAUM and KANNE, Circuit Judges. FLAUM, Circuit Judge. 1 This diversity case concerns a contract between Allen & O'Hara (A & O) and Barrett Wrecking (Barrett) for the demolition of a building owned by Northwestern Mutual Life Insurance (NML). After a six week trial, the jury issued a special verdict finding that A & O had wrongfully terminated the contract and that NML had tortiously interfered with the contract. The district court granted NML judgment n.o.v. on the tort claim. The principal issue in this case involves the proper measure for contract damages under Wisconsin law. In addition, the parties appeal the district court decisions regarding tortious interference with contract, statutory conspiracy, punitive damages, payment of costs, attorney's fees, and prejudgment interest for both contract damages and conversion damages. For the reasons stated below, we affirm except with respect to the prejudgment interest on the contract damages. Facts and Proceedings Below 2 NML hired A & O, a wholly owned subsidiary of NML, to act as its general contractor for the renovation of NML's downtown Milwaukee office. As part of the renovation, A & O solicited bids for the demolition of part of the office, giving tours of the building and allowing potential bidders to examine the building's blueprints.
Barrett submitted a bid of $595,000, slightly below its anticipated cost of demolition, on the theory that it could cover its costs through the sale of salvage. Barrett's bid was significantly below the other bids, so A & O and Barrett entered into a demolition contract.1 3 The contract called for demolition work to begin in May 1979 and to finish in September 1979. NML did not, however, vacate the building until mid-to-late August, and full-scale demolition did not begin until August 27, 1979, at which time Barrett submitted a four-month completion schedule. 4 Almost immediately upon commencement of the demolition, Barrett began to encounter unanticipated conditions that caused delays. These conditions included a vault constructed of steel, concrete and copper, and heavy structural beams designed to support eight additional stories. Complaints regarding excessive dust, noise, and vibrations caused A & O to instruct Barrett to employ procedures to reduce these conditions, which slowed the demolition process further. 5 The delays pushed the completion date further and further back, until Barrett finally submitted a schedule that called for completion in July 1980. Eventually, Francis Ferguson, President of NML, expressed his concern over the delays to A & O, and on May 9, 1980, A & O terminated the contract. 6 A & O promptly sued Barrett and State Surety (Barrett's bonding company) for breach of contract. It also named Thomas Barrett, president of Barrett Wrecking, as a defendant in his individual capacity. Barrett counterclaimed for breach of contract. The dispute centers on which party was responsible for the delays and who should bear the additional costs of demolition caused by the delays and the subsequent changes in procedure. The contract was a fixed-price contract and called for formal written change orders for any change in price.
Even so, Barrett claims that it is entitled to payment for extra costs because the parties waived the contract provisions calling for written change orders. 7 NML intervened as of right in the action because it was potentially liable as an indemnitor of A & O. Barrett counterclaimed against NML for tortious interference with business contract relationships, and (along with Thomas Barrett) common law and statutory conspiracy to damage reputation, trade and business. 8 The district judge dismissed the common law conspiracy claims on a motion for partial summary judgment. Following a six week trial, the jury returned a special verdict finding that A & O had breached the contract by terminating it without justification. They awarded compensatory damages of $852,000. They further found that A & O had wrongfully retained salvage belonging to Barrett in the amount of $62,798. In addition, the jury found that NML had wrongfully interfered with the contractual relationship between Barrett and A & O, with damages of $1,400,000. 9 The district judge granted judgment n.o.v. on the tortious interference claim, finding that NML was privileged to interfere with the contract and therefore the claim failed as a matter of law. He also denied motions by Barrett for prejudgment interest and for State Surety's attorney's fees. Analysis 1. Preliminary Matters 10 Before considering the proper measure of damages, we dispose of Barrett's counterclaims. Except for the claim for prejudgment interest on the contract damages, none of these claims has merit and we affirm the district court. 11 Barrett's first claim is that the judge should not have granted a directed verdict to the plaintiffs but should have submitted its conspiracy claim to the jury. We review this decision de novo. Selle v. Gibb, 741 F.2d 896 (7th Cir.1984). 12 Initially, we note that district courts, in general, should be reluctant to remove an issue from the jury. 
Nevertheless, where there is not substantial evidence to support the verdict, a directed verdict or, alternatively, a judgment n.o.v. is appropriate. See id. at 900; Erwin v. County of Manitowoc, 872 F.2d 1292, 1295 (7th Cir.1989); Brady v. Southern Railway, 320 U.S. 476, 64 S.Ct. 232, 88 L.Ed. 239 (1943). "All the evidence, taken as a whole, must be viewed in the light most favorable to the non-moving party. This evidence must provide a sufficient basis from which the jury could have reasonably reached a verdict without speculation or drawing unreasonable inferences which conflict with the undisputed facts." Selle, 741 F.2d at 900. A judgment n.o.v. is appropriate only if this is not the case. 13 Barrett's conspiracy claim is based on a civil cause of action deriving from a criminal conspiracy statute. Wis.Stat. Sec. 134.01; Radue v. Dill, 74 Wis.2d 239, 246 N.W.2d 507, 511 (1976). To establish this claim, Barrett needed to prove that NML conspired or acted in concert with at least one other individual or entity to willfully injure Barrett (or Thomas Barrett) in their reputations or businesses and that injury resulted. Id. 14 Barrett's proof relies entirely on circumstantial evidence which is sufficient to prove a claim of conspiracy, Lange v. Heckel, 171 Wis. 59, 175 N.W. 788, 789-90 (1920), but it is necessarily weaker than direct evidence. In Wisconsin, if circumstantial evidence supports equal inferences of lawful action and unlawful action, then the claim of conspiracy is not proven. See Scheit v. Duffy, 248 Wis. 174, 176, 21 N.W.2d 257 (1946). We believe that at best this is the case here. 15 Barrett's claim relies on essentially one piece of evidence: the termination of Barrett from two independent demolition contracts (the A & O contract, and a contract with Milwaukee) at about the same time. Barrett claims that this unusual circumstance warrants an inference of concerted action. 
This evidence alone, however, is not sufficient to show that the conspirators acted with the specific, malicious purpose of injuring the plaintiff. See Radue, 74 Wis.2d at 246, 246 N.W.2d 507. The demolition contract with Milwaukee was behind schedule and the evidence shows that Milwaukee had independent reasons for terminating its contract with Barrett. And even if Milwaukee acted maliciously, no evidence was offered to support a finding of malevolence by NML. We find, in agreement with the district court, that the evidence of a conspiracy fails as a matter of law.2 16 Barrett's next claim is that the district judge erred in granting judgment n.o.v. on the claim that NML tortiously interfered with the demolition contract between A & O and Barrett. Wisconsin recognizes a cause of action against one who, without privilege to do so, induces a third person not to perform a contract with another. Harman v. LaCrosse Tribune, 117 Wis.2d 448, 344 N.W.2d 536 (Ct.App.1984). The Wisconsin courts have adopted the Restatement (Second) of Torts on tortious interference with contracts. Liebe v. City Finance Company, 98 Wis.2d 10, 15-16, 295 N.W.2d 16 (Ct.App.1980); Restatement (Second) of Torts Secs. 766, 767 (1979). Section 769 of the Restatement discusses the situation where the actor has a financial interest in the business of the person induced. This section states that there is no tortious interference when one who has a financial interest in the business of a third party causes that person not to enter into a contract so long as wrongful means are not employed and the actor is protecting his interest in the relationship with the third party. Id. Sec. 769. 17 We believe that section 769 governs here. Comment c of that section states that a part owner of a business has a financial interest in the business. Therefore, NML had a financial interest in A & O. 
Moreover, comment e states that if the action is directed toward protecting the actor's interest, it is immaterial that he also takes a "malicious delight" in the harm caused by his action. While Barrett argues that NML had a malicious purpose, it is clear that NML also acted to protect what it felt was its interest in the demolition. Finally, there is no evidence that NML used wrongful means. In Pure Milk Products Coop v. National Farmers Organization, 64 Wis.2d 241, 219 N.W.2d 564 (1974), the Wisconsin Supreme Court listed coercion by physical force or fraudulent misrepresentation as examples of improper means. There is no evidence that any means resembling physical force or misrepresentation were used by NML. Consequently, Barrett's claim of tortious interference fails as a matter of law.3 18 Barrett also seeks reimbursement for State Surety's attorney's fees which Barrett had to pay subject to an indemnification agreement. This element of damages was not pled. Rather it was raised only in post-trial motions. A & O and NML might have avoided these costs had they been forewarned of this exposure, i.e., they might have settled with State Surety or made a record regarding duplication of efforts between counsel for State Surety and counsel for Barrett. On this basis, the district court ruled that it would be inequitable to amend the pleadings at this last stage and we agree. 19 Barrett also argues that it is entitled to prejudgment interest on the contract damages and on the conversion or salvage damages. The contract itself grants interest on payments due and unpaid under the contract. The district court, however, held that Wisconsin law was determinative because of a clause in the contract stating that "[the] contract shall be governed by the law of the place where the Project is located." The district court held that this clause required it to look at Wisconsin law on prejudgment interest. 
State common law, however, normally governs a contract only insofar as there are gaps in the contractual language. Where the contract is clear, there is no need to examine the underlying state law. The contract in this case indicates that interest shall be due on all unpaid amounts under the contract. As such, Barrett is entitled to prejudgment interest on the contract damages and we remand for a finding on the amount of such interest. 20 Prejudgment interest on the salvage is not governed by the contract, so the district court properly turned to Wisconsin law. Wisconsin case law holds that interest can ordinarily be recovered if the amount claimed and recovered was readily determinable prior to trial. Moutsopoulos v. Amer. Mut. Ins. Co., 607 F.2d 1185, 1190 (7th Cir.1979) (applying Wisconsin law). A genuine dispute as to the amount due will defeat the claim for interest because the defendants cannot reasonably determine the amount due and tender it. Id. Barrett sought over $280,000 for the lost salvage but was awarded only $62,798. Consequently, there was a genuine dispute as to the amount that was due, and therefore, the district court correctly held that the judgment was not readily determinable prior to trial and denied prejudgment interest on the salvage. 21 Finally, Barrett contends that the district judge abused his discretion in not issuing a written explanation of his denial of costs pursuant to Fed.R.Civ.Proc. 54(d). Rule 54(d) states that "costs shall be allowed as of course to the prevailing party unless the court otherwise directs." Our standard on review is abuse of discretion. Hudson v. Nabisco Brands, Inc., 758 F.2d 1237, 1242 (7th Cir.1985). Both Barrett and A & O prevailed in part, and therefore we cannot say that the district court abused its discretion. A written opinion is helpful for the reviewing court, but here the district court was clearly within its discretion. 
22 To summarize our holdings on this covey of claims, we affirm the district court on all claims except for prejudgment interest on the contract damages. We remand for a finding on the extent of this interest. 2. Contract Damages 23 The remaining issue concerns the proper measure of contract damages. A & O advances several contentions with respect to damages. First, they argue that Barrett offered no proof of lost profits, that is, the benefit of the bargain had the contract been fully performed less the expenses saved by nonperformance. In fact, they argue that the contract was a losing proposition for Barrett and he should have been glad to be relieved of its burden: it would have cost Barrett more to complete the job than was due as payment. In addition, A & O argues that the jury was exposed to "total-cost-theory" evidence which is not recognized by Wisconsin. Fattore Co. v. Metropolitan Sewerage Commission, 505 F.2d 1, 5-6 (7th Cir.1974). They argue that because "total cost" is not a theory of contract damages recognized under Wisconsin law or by the contract, introduction of this evidence resulted in the incorrect measure of damages by the jury. 24 To counter these arguments, Barrett offers a contract modification theory. Under this theory, they argue that the written modification provision of the contract was waived by A & O. Based on this waiver, Barrett asserts that it is entitled to payment for a number of "extras," the sum of which almost equals the damages awarded by the jury. If Barrett's contract modification theory is correct, then A & O's position that the written contract was not profitable is not tenable: the contract price would have been modified and Barrett would be entitled to damages under the modified contract. 25 It is clear under Wisconsin law that a written contract may be subsequently modified orally by the parties. S & M Rotogravure Serv. Inc. v. Baer, 77 Wis.2d 454, 252 N.W.2d 913, 919 (1977); Wiggins Constr. Co. v. 
Joint School Dist., 35 Wis.2d 632, 638, 151 N.W.2d 642 (1967). The Wisconsin Supreme Court has "recognized that a provision in construction contracts requiring written change orders may be avoided where the parties evidence by their words or conduct an intent to waive or modify such a provision." Rotogravure, 252 N.W.2d at 919. Waiver involves an inquiry into the intent of the parties, and is properly an issue for the jury. We divide this issue into two parts: whether there was sufficient evidence for the jury to conclude that the written modification provisions were waived and whether an agreement to modify the contract for each individual "extra" was supported by sufficient evidence. 26 A & O argues that there was no waiver of the written modification provisions of the contract. To support their claim, they contend that the written modification provisions had been followed early in the contract and that just prior to the termination of the contract, negotiations to formally modify the contract had gone as far as drafting of the modification provisions. They maintain that because the parties followed the written modification provisions at these times, there was no waiver of the provision requiring written modification. 27 Barrett, on the other hand, submits as evidence the testimony of Ron Retzer, a principal of Barrett, stating that an informal arrangement had been reached between A & O and Barrett. The substance of this arrangement was that Barrett would perform changes in the work as requested by A & O without a written change order and would be compensated when the demolition was completed. Presumably, the purpose behind such an arrangement was to allow the parties to work out their differences without the formality and expense of written change orders. 28 This evidence was submitted to the jury with instructions that a waiver must be a voluntary, knowing choice to forego something of value. Neither party objects to this instruction. 
Based on this evidence, the jury apparently believed the testimony of Ron Retzer. The district court, when considering the motion for judgment n.o.v., also found his testimony credible. We decline to disturb a finding of fact supported by evidence, found credible by both the district court and the jury, and therefore hold that the finding of the waiver of the written modification provisions was supported by the evidence. 29 The final issue before us is whether the various "extras" claimed by Barrett were each supported by sufficient evidence for the jury to grant damages. As a preliminary matter, we note that while the jury returned a special verdict, it did not separate the elements of the contract award; rather, it simply awarded damages that reasonably flowed from A & O's wrongful termination. It is difficult, therefore, for us to determine which of the "extras" the jury found to be supported by the evidence. We believe, however, that there was sufficient evidence to support each of the various "extras" claimed by Barrett. 30 Barrett claimed that it was due payment on eleven different "extras." The total cost of these "extras" was $556,953. In addition, Barrett claims another $147,593 for overhead and profit, which flow from the "extras" if they are supported by the evidence. The two major items are a change in technique ($216,608) and removal of a vault ($148,400). The change in technique was ordered by A & O after they found that the original method contemplated by Barrett caused too much noise and dust. Consequently, the jury was entitled to believe that the contract was modified in this respect. With respect to the vault, the bid package given to Barrett showed only an area for a future vault and contained no plans at all showing the composition of the vault.
When Barrett began to dismantle the vault, it discovered that the vault was virtually impervious to the wrecking ball and had to be dismantled piece by piece at considerable time and cost. The jury apparently relied on the testimony of Ron Retzer that A & O agreed to the unexpected extra costs for dismantling the vault, and the district court found his testimony credible. We, therefore, believe that the jury had sufficient evidence to find A & O liable for the cost of dismantling the vault. The other, smaller "extras" are all supported by similar evidence. The testimony of Ron Retzer, who was subject to days of cross-examination, appears to have been the primary source relied on by the jury for calculating damages, and given this testimony, we decline to overturn the jury verdict. We agree with the district court that "the verdict was not against the weight of the evidence; the damages, although generous, are not excessive...." 31 Finally, regarding the introduction of total-cost evidence: this evidence was apparently little more than the sum of the "extras," each of which we have found supported by sufficient evidence. Therefore, the introduction of this evidence cannot be said to have prejudiced the jury. 32 The judgment of the district court is affirmed in part and remanded in part. 1 The contract was actually split into two contracts, each for half the work, apparently for bonding purposes. The two contracts are identical and we will simply refer to them collectively as the contract. 2 Barrett also argues that its bank's discontinuation of its line of credit, a memorandum between an attorney for the City of Milwaukee and an attorney for A & O concerning Barrett, and the institution of this lawsuit support the theory of conspiracy. All of this evidence, while potentially helpful to their claim, is more easily explained by innocent occurrences than by a malicious conspiracy.
This evidence, even in concert with the evidence of simultaneous termination, does not provide sufficient support for Barrett's claim. 3 Since neither of Barrett's tort claims has merit, Barrett's claim of punitive damages must also fail.
Introduction ============ All chronic liver diseases, whether of toxic, genetic, autoimmune or infectious origin, undergo typical histological changes that ultimately lead to fibrosis/cirrhosis and the excess deposition of matrix. Liver cirrhosis can rapidly decompensate and has a high mortality rate. Patients with cirrhosis suffer from a decreasing hepatic capacity to metabolize and synthesize proteins, peptides and hormones. In addition, progression of fibrosis and regenerating nodules cause an increased portocaval vascular resistance, with portal hypertension and an increased hepatic venous pressure gradient (HVPG) of \>10 mm Hg. Portal hypertension finally leads to ascites, and vascular collaterals such as esophageal varices will develop. Most patients suffering from cirrhosis eventually die from complications such as spontaneous bacterial peritonitis, variceal bleeding, liver failure or hepatocellular carcinoma (HCC). Compensated liver cirrhosis in particular, without clinical signs such as spider nevi, encephalopathy, icterus, or ascites, is difficult to diagnose. Such patients typically do not show specific symptoms. This is also one important reason why no valid and reliable prevalence data are available for cirrhosis in many countries, although cirrhosis is a major cause of mortality in developed countries in the 40--60 year age group. Many techniques have been explored in the last decades to allow an early and reliable diagnosis of cirrhosis (see [Figure 1](#f1-hmer-2-049){ref-type="fig"}). These include both invasive and noninvasive approaches. Liver biopsy is still considered the gold standard for assessing hepatic cirrhosis.
However, it is an invasive procedure, with rare but potentially life-threatening complications.[@b1-hmer-2-049] In addition, the accuracy of liver biopsy in assessing fibrosis is limited owing to sampling error (reaching up to 30%) and interobserver variability.[@b2-hmer-2-049]--[@b6-hmer-2-049] Other invasive procedures such as laparoscopy and endoscopy are not very sensitive. Likewise, conventional imaging techniques such as ultrasound, magnetic resonance imaging (MRI) and computed tomography (CT) are noninvasive, but absolute signs of cirrhosis such as collaterals or a nodular aspect of the liver surface are required, rendering these methods rather insensitive. Many efforts have been invested to identify serum markers that allow the diagnosis of cirrhosis from a simple blood test.[@b7-hmer-2-049] Unfortunately, although markers such as serum collagen or hyaluronan reflect profibrogenic activity, they do not correlate with the absolute amount of matrix deposited in the liver. Liver cirrhosis per se causes a typical induration of the liver that is sometimes clearly palpable. In fact, palpation of the liver has been used by physicians for centuries as the only valid bedside test to diagnose cirrhosis. Thus, it was only a question of time until sophisticated physical methods were developed to truly quantify liver stiffness (LS). The first such approach was successfully introduced by Sandrin and coworkers in 2003.[@b8-hmer-2-049] Meanwhile, many studies on chronic liver diseases have proven that measurement of LS is a rapid and excellent screening test for liver cirrhosis. Alternative approaches based on competing ultrasound or MRI methods are currently being explored, and the future will show which technique will prevail in which clinical setting. On the other hand, LS has been introduced to the field of hepatology as a novel objective physical parameter that can be followed over time, comparable to, for example, body temperature.
Like body temperature, we have learnt in a rather short time that LS is determined not only by the degree of fibrosis but also by other clinical conditions such as inflammation, cholestasis and liver congestion. This review, therefore, is designed to briefly update the reader on the present knowledge of LS. After an overview of technical aspects and alternative methods, basic conditions that influence LS are discussed. Algorithms are presented on how to use LS values in clinical practice, taking pitfalls into account. In addition, the novel pressure--stiffness--fibrosis sequence hypothesis is introduced and briefly discussed; it could stimulate the intensive search for the molecular mechanisms underlying liver fibrosis. Finally, open LS-related questions are defined that should be addressed by future clinical and basic research studies. Pathophysiology of liver stiffness ================================== Liver stiffness -- definition ----------------------------- Going through the theory of elasticity is far beyond the scope of this review. However, some basic notions are useful to better understand what stiffness means. From a physical and mechanical point of view, stiffness can be defined as the modulus of elasticity or Young's modulus (E). Hooke's law of elasticity is an approximation that states that the strain induced in a material is directly proportional to the applied stress, σ = Eɛ, where σ is the stress applied to the material and ɛ is the strain induced in the material. Stiffness (E) is expressed in kilopascals (kPa) and represents the resistance of a material to deformation. While stiff materials, such as concrete, exhibit low strain even at high stress, soft materials such as biological soft tissues exhibit large strain even at low stress. LS, like any other soft tissue stiffness, depends on many factors. The first and main factor is the extracellular matrix of the organ.
The extracellular matrix is a deformable structure that transfers external forces through the liver. It can be compared to the foundation of a building. A second factor is the constraints that are applied to the organ: the more pressure that is applied to the liver at its boundaries, the stiffer it gets. A third factor is the internal pressure inside the organ -- if blood or another liquid is flowing in and out, then stiffness will depend on the resistance that the organ applies to the flow. A fourth and important factor is viscous effects, which influence the time constant over which stiffness is tested. This effect is linked to frequency, ie, stiffness depends on frequency. While the liver is soft at very low frequency (on the order of several hertz), which corresponds to the time constant of manual palpation, it tends to be much harder at high frequencies (over several tens of kilohertz). Measurement of liver stiffness using transient elastography (FibroScan®) ------------------------------------------------------------------------ The FibroScan® (FS) (Echosens, Paris, France) device is the first elastography technique developed to quantitatively and noninvasively assess soft biological tissue stiffness *in vivo*. The liver was a natural first organ to study owing to its size and rather homogeneous texture.[@b8-hmer-2-049] In principle, shear waves are generated through the liver and LS is deduced from their velocity. FS uses a technique called transient elastography (TE) or vibration-controlled transient elastography (VCTE™). It is based on the controlled generation of a transient shear wave using a servo-controlled vibration of known frequency and amplitude. LS is computed from the velocity of these mechanical waves using the following equation: E = 3ρV*~s~*^2^, where E is the Young's modulus or stiffness, ρ is the density, and V*~s~* is the shear velocity. The shear velocity measured by VCTE™ is a group velocity around 50 Hz.
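As a quick numerical sketch (not from the source), the VCTE™ relation E = 3ρV*~s~*^2^ can be coded directly; the soft-tissue density of ~1000 kg/m^3^ and the example velocity are assumed values for illustration:

```python
def young_modulus_kpa(shear_velocity_m_s, density_kg_m3=1000.0):
    """E = 3 * rho * Vs^2, converted from Pa to kPa.

    density_kg_m3 ~ 1000 is an assumed soft-tissue value, not from the source.
    """
    return 3.0 * density_kg_m3 * shear_velocity_m_s ** 2 / 1000.0

def shear_modulus_kpa(young_modulus_kpa_value):
    """mu = E / 3, the conversion used when comparing FS with MRE shear stiffness."""
    return young_modulus_kpa_value / 3.0

# A shear velocity of about 1.4 m/s corresponds to E of roughly 6 kPa,
# the upper bound of normal liver stiffness quoted in the text.
e = young_modulus_kpa(1.4)
```

Note how the quadratic dependence on V*~s~* means a doubling of shear velocity quadruples the measured stiffness.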
Minimum and maximum stiffness values that can be measured by FS are 1.5 kPa and 75.0 kPa, respectively. Technically, FS consists of a dedicated acquisition platform that includes a single-channel ultrasound analog front end to emit and receive ultrasound signals, and a servo-controlled vibrator for the shear wave generation. The probe itself contains a sophisticated vibrator on the axis of which a single-element ultrasound transducer is mounted. As shown in [Figure 2](#f2-hmer-2-049){ref-type="fig"}, the vibration consists of a sinusoid period with a center frequency of 50 Hz. Its amplitude depends on the probe model: 2 mm peak-to-peak (PP) with the standard probe (model M), 1 mm PP with the pediatric probe (model S), and 3 mm PP with the probe dedicated to obese patients (model XL). The shear wave propagation is monitored using ultrafast ultrasound acquisitions. In the standard examination procedure, LS measurements using FS are performed on the right lobe of the liver in an intercostal position (see [Figure 3](#f3-hmer-2-049){ref-type="fig"}). This prevents direct compression of the liver, which would otherwise affect LS values. The patient lies on his back with the right arm behind the head in order to enlarge the intercostal spaces as much as possible. The operator uses ultrasound M-mode and A-mode images ([Figures 4A](#f4-hmer-2-049){ref-type="fig"} and [B](#f4-hmer-2-049){ref-type="fig"}) to locate the liver, and triggers the measurement by pressing the probe button. The shear wave can be observed on the elastogram image ([Figure 4C](#f4-hmer-2-049){ref-type="fig"}), which represents the strains induced in the liver as a function of time and depth. It is computed from ultrasound data acquired at a very high frame rate during the shear wave propagation, which lasts 80 ms.
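A back-of-envelope check (assumptions: a shear velocity of ~1.4 m/s typical of near-normal liver, and the 80 ms acquisition window mentioned above) shows that the window is long enough for the wave to traverse the measured region:

```python
def propagation_depth_cm(shear_velocity_m_s, window_s=0.080):
    """Distance travelled by the shear wave during the acquisition window, in cm."""
    return shear_velocity_m_s * window_s * 100.0

# At ~1.4 m/s the wave covers about 11 cm within the 80 ms window,
# comfortably spanning the intercostal measurement depth.
depth = propagation_depth_cm(1.4)
```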
Measurement of liver stiffness using other elastographic techniques and normal stiffness values ----------------------------------------------------------------------------------------------- Although FS has been the first noninvasive elastographic technique in practical use to assess LS, other competing technical approaches have been developed. These are currently being cross-validated, and it is still too early for a final statement (see [Table 1](#t1-hmer-2-049){ref-type="table"}). Magnetic resonance elastography (MRE) was introduced in 1995 by Muthupillai[@b9-hmer-2-049] and is now commercially available as MR-Touch (General Electric). Rouviere et al[@b10-hmer-2-049] measured liver shear stiffness in healthy volunteers and in patients with liver fibrosis. The shear stiffness μ can be deduced from the Young's modulus E (as measured by FS) using the simple relationship μ = E/3. Klatt et al measured the shear elastic modulus in 12 healthy volunteers and two patients.[@b11-hmer-2-049] Results obtained in volunteers are close to 6 kPa when converted to Young's modulus. MRE looks very promising. It seems to have a smaller standard deviation and, naturally, offers the combination of magnetic resonance imaging and elastography in one setting for different organs. However, it is expensive and time-consuming, certainly not a bedside procedure, and cannot be used in patients with metal implants. FS has been directly cross-validated with MRE using artificial phantoms, with an excellent correlation of r = 0.96.[@b12-hmer-2-049],[@b13-hmer-2-049] A linear correlation between LS and fibrosis stage has been observed in animal fibrosis models using MRE.[@b14-hmer-2-049] In addition to FS, various ultrasound-scanner-compatible elastography procedures are currently being evaluated. FS should not be confused with conventional static elastography, which is now integrated in many ultrasound devices.
The first system based on static elastography was real-time elastography (HI-RTE). It allows a visualization of relative stiffness within a B-mode ultrasound image using a red and blue color map. However, HI-RTE does not allow the quantitative measurement of stiffness values and, hence, pilot studies did not show a satisfying correlation with fibrosis score as compared to FS.^15^ More recently, several techniques[@b16-hmer-2-049]--[@b18-hmer-2-049] based on radiation force[@b19-hmer-2-049] have been proposed for LS measurement. These techniques use high-intensity ultrasound beams to induce displacements inside the liver remotely. Acoustic Radiation Force Impulse (ARFI) with Virtual Touch™ tissue quantification has been introduced by Siemens (Germany). The first ARFI-based results have been presented at international meetings in cross-validation with FS. Reasonable areas under receiver operating characteristic curves (AUROCs) of \>0.86 for F3-4 fibrosis have been presented for various diseases, with excellent interobserver variability of 0.98[@b20-hmer-2-049],[@b21-hmer-2-049] and a good correlation with FS of r = 0.65.[@b22-hmer-2-049] In contrast to FS, ascites does not impose a limitation on ARFI. However, up to now, FS seems to outperform ARFI in the identification of F2-4 fibrosis stages with regard to diagnostic accuracy.[@b21-hmer-2-049] Since the physiological determinants of LS are not completely understood and the detection methods vary considerably, it is still debated how to define normal LS values. In a recent study, we could demonstrate that simple breath maneuvers such as the Valsalva maneuver, or position changes such as moving from a lying to a standing position, can dramatically increase LS, either permanently or temporarily, up to the upper detection limit of 75 kPa.[@b23-hmer-2-049] This study could also demonstrate that a horizontal position with normal breathing yields the lowest and most reproducible LS values.
According to our experience, LS of \<6 kPa can be considered normal.[@b23-hmer-2-049] Confirmation has come from a large screening study of 1067 blood donors with a median LS of 4.4 kPa (95th centile 6.7 kPa).[@b24-hmer-2-049] [Tables 1](#t1-hmer-2-049){ref-type="table"} and [2](#t2-hmer-2-049){ref-type="table"} give an overview of recently reported stiffness values for liver and other organs obtained with different techniques under normal and pathological conditions. Liver stiffness assessment by FibroScan® -- practical experience ---------------------------------------------------------------- The major success of FS in measuring LS can mainly be explained by its true bedside test character; it can be performed within 5--10 minutes. After rapid training, FS provides a reasonable performance for the diagnosis of cirrhosis that is not substantially influenced by any other feature.[@b25-hmer-2-049] FS has an excellent interobserver rate, especially in the absence of elevated transaminases,[@b26-hmer-2-049] and a fast learning curve.[@b27-hmer-2-049] In addition, no significant differences in LS values have been found whether they were obtained from the fifth, sixth, or seventh intercostal space.[@b28-hmer-2-049] Thus, in general, FS measurements can be routinely performed in more than 95% of patients. Major limitations are severe obesity and ascites, which directly weaken the ultrasound signal.[@b29-hmer-2-049] In some patient groups, such as patients with decompensated cardiac insufficiency, the success rate of FS can drop to ca. 50%.[@b30-hmer-2-049] However, with the development of the novel XL probe, many of these obstacles can be overcome. In our own preliminary experience, we found that the XL probe could measure LS in 70% of patients in whom the normal M probe was not applicable.
Moreover, the XL probe could be successfully applied not only to severely obese patients but also to patients with ascites and to lean patients with ultrasound-diffracting subcutaneous fat tissue. It will be interesting to learn in the future why some nonobese patients are difficult to measure by FS. Potential artificial results obtained by FibroScan® --------------------------------------------------- Shear wave propagation in soft biological tissues can be very complex. Thus, LS is calculated as the median of 5 to 10 valid measurements. Outliers are removed and the interquartile range is provided as a means to check the quality of the measurement. Furthermore, FS implements special algorithms to automatically reject incorrect measurements, which are ranked invalid and are thus not included in the stiffness calculation. However, some caution must be taken, especially regarding probe perpendicularity and the intercostal spaces of the rib cage. First, it is important that the probe is placed perpendicular to the skin surface when measuring LS to prevent overestimation, which could happen if the shear wave propagation is misaligned with the ultrasound beam. Second, the probe model should be adapted to the patient's morphology so that the ribs do not contribute to the shear wave generation, which would affect the measurement quality by inducing secondary shear waves. Although diffraction effects by ribs are rare, they may lead to confusion and misinterpretation. Interestingly, shear waves do not propagate through liquids because they are elastic waves; only pressure waves, such as those used by ARFI, can propagate through liquids. For this reason, patients with ascites may not be measurable with FS as long as no physical contact exists between the liver and the intercostal wall.
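The aggregation step just described (median of the valid readings, interquartile range as a quality check) can be sketched as follows. The IQR/median ≤ 0.30 validity criterion used here is a commonly applied convention and an assumption of this sketch, not a value stated in the text.

```python
# Sketch of FS-style aggregation: report the median of the valid
# measurements and use the interquartile range (IQR) as a quality
# check. The IQR/median <= 0.30 threshold is an assumed convention.
import statistics

def aggregate_ls(valid_kpa):
    """Return (median, iqr, passes_quality) for a list of valid LS
    readings in kPa; at least 5 valid readings are required."""
    if len(valid_kpa) < 5:
        raise ValueError("need at least 5 valid measurements")
    med = statistics.median(valid_kpa)
    q1, _, q3 = statistics.quantiles(valid_kpa, n=4)
    iqr = q3 - q1
    return med, iqr, iqr / med <= 0.30
```

For a tight series of readings around 6 kPa, the median is reported and the small IQR passes the quality check; widely scattered readings (as seen over metastatic livers, discussed below) would fail it.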
(Patho)physiology of liver stiffness ==================================== Liver stiffness as surrogate marker of fibrosis stage ----------------------------------------------------- LS has mainly been studied in patients with viral hepatitis B and C (HBV and HCV),[@b8-hmer-2-049],[@b25-hmer-2-049],[@b31-hmer-2-049]--[@b37-hmer-2-049] and to a lesser extent in alcoholic liver disease (ALD)[@b38-hmer-2-049]--[@b40-hmer-2-049] and primary biliary cirrhosis (PBC)/primary sclerosing cholangitis (PSC).[@b41-hmer-2-049]--[@b43-hmer-2-049] In contrast, only scattered and preliminary reports exist on autoimmune hepatitis[@b44-hmer-2-049],[@b45-hmer-2-049] and nonalcoholic liver disease (NALD).[@b46-hmer-2-049],[@b47-hmer-2-049] [Table 3](#t3-hmer-2-049){ref-type="table"} shows the performance of LS in assessing fibrosis stages F3 and F4 for various diseases (selected studies). [Table 4](#t4-hmer-2-049){ref-type="table"} compares normal and fibrotic stiffness values obtained by different methods. The major findings of these studies can be summarized as follows: a. LS correlates well with fibrosis stage, typically with an r \> 0.7 and *P* \< 0.005. b. Advanced fibrosis stage F3 and cirrhosis (F4) are identified via LS with high accuracy (AUROC \> 0.9). This is mainly due to the so-called bridging fibrosis (the continuous formation of collagen septa between liver lobules) that is characteristic for these fibrosis stages. In contrast, fibrosis stages F1 and F2 only mildly increase LS. Therefore, these fibrosis stages are not well discriminated via the measurement of LS. c. Cut-off values have been defined that allow the diagnosis of advanced fibrosis (F3/F4). Despite some variability, cut-off values of 8.0 and 12.5 kPa are widely accepted to identify patients with F3 and F4 fibrosis, respectively ([Figure 5](#f5-hmer-2-049){ref-type="fig"}).
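The widely accepted cut-offs quoted above (8.0 kPa for F3, 12.5 kPa for F4, \<6 kPa normal) can be written as a simple triage function. This is purely an illustration: it ignores the disease-specific variation and the confounders discussed throughout this review, and is not a clinical tool.

```python
# Illustrative triage using the cut-off values quoted in the text.
# A sketch only; real interpretation must account for etiology,
# inflammation, and the other confounders discussed in this review.

def stage_from_ls(ls_kpa):
    if ls_kpa >= 12.5:
        return "suggestive of F4 (cirrhosis)"
    if ls_kpa >= 8.0:
        return "suggestive of F3 (advanced fibrosis)"
    if ls_kpa < 6.0:
        return "normal range"
    return "indeterminate (F0-F2 poorly discriminated)"
```

Note the deliberate gap between 6 and 8 kPa: as stated above, stages F1 and F2 only mildly increase LS and are not well discriminated.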
It has also rapidly become clear that cut-off values differ between the various chronic liver diseases, tending to be higher in diseases with pronounced inflammation or cholestasis such as ALD, PSC, or PBC. This is one reason to call for studies with well-defined and homogeneous patient populations. Potential causes for varying cut-off values will be discussed below. Fibrosis assessment by liver stiffness and comparison with other noninvasive fibrosis markers/techniques -------------------------------------------------------------------------------------------------------- ### Imaging techniques Since abdominal ultrasound is routinely and rapidly performed in liver patients, a few studies have naturally asked the question of whether LS provides additional information with regard to fibrosis. In comparison to FS, ultrasound is a subjective examination that largely depends upon the experience of the examiner. It is not always appreciated that only a few ultrasound signs, such as a nodular aspect of the liver surface or vascular collaterals, are considered definite ultrasound signs of liver cirrhosis (splenomegaly and ascites are not). In a recent larger study of 320 patients with various liver diseases, the diagnostic accuracy of LS was significantly superior to ultrasound.[@b48-hmer-2-049] In our own experience, FS generally recognized more than twice as many patients with F3/4 fibrosis as ultrasound did.[@b49-hmer-2-049] In numbers, this means that more than 20 patients with F3/4 fibrosis were not recognized by routine ultrasound, while FS identified almost all 45 patients. It should be pointed out that these are results for a typical clinical routine ultrasound performed within 15--20 min; the accuracy of ultrasound can certainly be increased by a more meticulous and time consuming procedure. However, such a time-intensive ultrasound is still subjective and typically cannot be performed during daily practice in most regular hospitals and outpatient departments.
Therefore, as a rule of thumb, the rapid 5--10 min FS examination recognizes ca. twice as many patients with advanced fibrosis as routine ultrasound. ### Serum markers Although serum markers that are used within scores such as the Fibrotest, APRI score, etc, are widely explored and have also been cross-validated with FS,[@b22-hmer-2-049],[@b35-hmer-2-049],[@b50-hmer-2-049]--[@b54-hmer-2-049] the authors, up to now, do not generally recommend their use, and FS seems to outperform all of these tests. However, we admit, as will be discussed below, that a combination and a refined algorithm using elastography, serum markers, and imaging techniques may optimize cost-efficient screening for liver fibrosis in certain settings or spare patients from invasive histology.[@b55-hmer-2-049] The major problem is that serum markers reflect the profibrogenic or profibrolytic activity, but do not yield any information about the net deposition of matrix in the liver; the two are not necessarily correlated with each other. Other factors that increase liver stiffness ------------------------------------------- It was rapidly learned that LS is also increased by other confounding factors such as hepatitis, mechanical cholestasis, liver congestion, cellular infiltrations, and deposition of amyloid, irrespective of fibrosis stage (see [Figures 5](#f5-hmer-2-049){ref-type="fig"} and [6](#f6-hmer-2-049){ref-type="fig"}). These important interferences will now be discussed in more detail. It should be mentioned that steatosis does not increase LS,[@b40-hmer-2-049],[@b56-hmer-2-049] although it is often regarded as an essential initial state in chronic liver disease. Rather, steatosis may slightly decrease LS. ### Inflammation (hepatitis) LS can be dramatically increased in the presence of laboratory signs of hepatitis,[@b50-hmer-2-049],[@b57-hmer-2-049],[@b58-hmer-2-049] independent of the degree of fibrosis.
These conditions may increase LS to a degree that would otherwise suggest advanced liver cirrhosis (ie, stiffness values of 12.5 kPa and above). In our recent studies on patients with ALD undergoing alcohol detoxification, LS was initially increased up to 50 kPa but could decrease by 30 kPa within 1 week.[@b40-hmer-2-049] In HCV patients with biochemical remission (either spontaneous or after antiviral therapy), LS was lower than in patients with identical fibrosis stage but elevated alanine transaminase (ALT). The LS dynamic profiles paralleled those of ALT, increasing 1.3- to 3-fold during ALT flares in patients with hepatitis exacerbations.[@b50-hmer-2-049] In patients with HBV infection, fibrosis assessment was unreliable if serum transaminases were higher than twice the normal values.[@b59-hmer-2-049] In our experience, ongoing biochemical activity of liver disease in the form of increased transaminases leads to an overestimation of fibrosis stage, since hepatitis per se increases LS, irrespective of fibrosis. What are the underlying factors leading to increased LS in these patients? In our sequential FS study on 50 patients with ALD undergoing alcohol detoxification we could show the following phenomena:[@b40-hmer-2-049] a. All transaminase levels decreased during alcohol detoxification, and almost all LS values decreased during the observation interval. b. The greater the decrease in transaminases, the greater the decrease in LS. c. Excluding patients with significant ongoing biochemical activity of hepatitis from fibrosis assessment by FS significantly improved the AUROC for F3/4 fibrosis. d. Additional histological information on inflammation did not further improve the diagnostic accuracy. This study thus shows that, at least in patients with ALD, serum transaminases truly reflect the degree of hepatitis and that inflammation is a critical factor determining LS.
In our patient population, the decrease of aspartate aminotransferase (AST) correlated better with the decrease of LS than that of ALT did. Interestingly, similar observations have been made in HCV-infected patients. Here, AST was found to be the only variable significantly related (*P* = 0.046) to discordance between biopsy and LS.[@b60-hmer-2-049] Subanalysis of histological scores against LS values was also very revealing: necrosis, hepatocyte swelling, and the degree of inflammation correlated with LS, but steatosis did not. This has been partly confirmed in a recent study on patients with nonalcoholic fatty liver disease (NAFLD).[@b47-hmer-2-049] We conclude from our study that an AST \> 100 U/L leads to an overestimation of fibrosis stage. Such patients should first be detoxified from alcohol, and LS should be obtained after normalization of transaminases. A refined algorithm will be discussed below. ### Cholestasis In a recent study on 15 patients with mechanical cholestasis due to tumor obstruction (pancreas carcinoma, Klatskin tumor, liver metastases, and gastrointestinal stromal tumor \[GIST\]) or choledocholithiasis, we could demonstrate that mechanical cholestasis per se can drastically and reversibly increase LS.[@b61-hmer-2-049] The decrease in LS correlated significantly with the decrease in bilirubin, but not with gamma-glutamyl transpeptidase (GGT), alkaline phosphatase (AP), AST, or ALT. We further confirmed the direct relation between LS and cholestasis in bile duct ligation experiments on landrace pigs. Bile duct ligation over 120 min led to a significant swelling of the liver and a tense, palpable gallbladder. LS values doubled during bile duct ligation and reached values suggesting F3 fibrosis. After removal of the bile duct ligation and a recovery period of 30 min, LS values returned to almost normal values of around 6.1 kPa.
The reasons underlying the high stiffness in cholestasis are unknown but could be related to tissue swelling, edema, and increased intracellular pressure due to impaired bile flow. In addition, cholestasis might be a general phenomenon leading to increased LS in various chronic liver diseases, as intrahepatic cholestasis has been shown to correlate strongly with LS in patients with acute hepatitis[@b58-hmer-2-049] but also in ALD.[@b40-hmer-2-049] ### Liver congestion and venous pressure Random observations had suggested earlier that FS is unreliable in patients with liver congestion, for example due to cardiac insufficiency. We could recently demonstrate that the central venous pressure directly controls LS in a reversible manner.[@b30-hmer-2-049] Over a wide range, LS is a linear function of venous pressure, reaching the upper detection limit of 75 kPa at a venous pressure of 36 cm water column. We finally showed in 10 patients with decompensated congestive heart failure that LS is dramatically elevated under such pathological conditions and rapidly decreases during clinical recompensation under diuretic therapy. Since fibrosis stage cannot change within such a short period of time, these findings further underline the direct dependence of LS on venous pressure. The majority of patients with decompensated cardiac failure had initial LS far above the cut-off value of 12.5 kPa that is generally accepted for the diagnosis of F4 fibrosis, reaching up to 51.3 kPa. Although LS decreased in all patients during therapy with diuretics, it fell below 12.5 kPa in only two of them, while seven remained in the range of F4 fibrosis.
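The reported pressure dependence can be caricatured as a clamped linear function: LS rises roughly linearly with central venous pressure until it saturates at the 75 kPa detection limit around 36 cm H2O. The 6 kPa baseline used below is an assumed normal value for illustration; the true intercept and slope would have to come from the cited data.

```python
# Hedged sketch of the venous-pressure dependence described above:
# a linear rise from an assumed 6 kPa baseline at 0 cm H2O to the
# 75 kPa detection ceiling at 36 cm H2O, clamped at the ceiling.

def ls_from_cvp(cvp_cm_h2o, baseline_kpa=6.0):
    slope = (75.0 - baseline_kpa) / 36.0  # kPa per cm H2O
    return min(75.0, baseline_kpa + slope * max(0.0, cvp_cm_h2o))
```

The clamp mirrors the device's detection limit: beyond 36 cm H2O the measured value can rise no further, which is why congested livers simply read out at the ceiling.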
Older age as a reason for the increased LS can be excluded, as a recent study by Sirli and colleagues showed.[@b62-hmer-2-049] Thus, the increased LS could be due to the onset of cardiac fibrosis in these cases, and fibrosis assessment by FS will be especially challenging in patients with cardiac insufficiency since both fibrosis and venous pressure increase LS. It also remains questionable in this context whether the recently reported increased LS in patients with failing Fontan circulation was indeed due to cardiac liver fibrosis,[@b63-hmer-2-049] or just elevated central venous pressure, since no sequential LS measurements were performed. On a special note, LS may become a useful noninvasive tool for screening cardiac patients and identifying those at risk of cardiac cirrhosis, since increased venous pressure (but not abnormal liver function tests) has been recognized as a major risk factor for cardiac fibrosis.[@b64-hmer-2-049] ### Liver infiltration, deposits, rare diseases It is a daily experience of surgeons that hepatic tumor infiltration increases LS. Therefore, focal or nodular masses within the liver should be excluded by ultrasound prior to FS. However, since not all hepatic masses can be detected by ultrasound, one should be aware of such potential misinterpretations of LS measurements. A typical finding during LS measurements in, for example, a metastatic liver is extremely variable stiffness values that clearly depend on the position of the probe.[@b61-hmer-2-049] However, rare and less visible infiltrations, such as with mast cells, can also lead to dramatically increased LS.[@b23-hmer-2-049] We recently reported on a patient with systemic mastocytosis showing an LS of 75 kPa (upper detection limit). The patient otherwise had signs suspicious for liver cirrhosis (splenomegaly, ascites, varices). However, liver synthesis was normal and the differential blood count showed an increased number of mast cells. The diagnosis was ultimately confirmed by liver biopsy.
An important noncancerous differential diagnosis of increased LS is amyloidosis. Increased LS due to amyloid deposits has been demonstrated in animal models (submitted by Sandrin L, et al) and in humans with amyloidosis A.[@b65-hmer-2-049],[@b66-hmer-2-049] Interestingly, all these clinical entities showed pronounced hepatomegaly. Liver stiffness and clinical end points ======================================= The ultimate goal of novel medical techniques should be to improve the diagnosis or therapy of human disease. Therefore, with regard to LS, we would like to see whether it improves the early recognition of cirrhosis-related complications such as portal hypertension, esophageal varices, and primary liver cancer, or the response to therapies. Liver stiffness and portal hypertension --------------------------------------- Since fibrosis increases the hepatic vascular resistance and ultimately leads to portal hypertension (see [Figure 7](#f7-hmer-2-049){ref-type="fig"}), it was just a matter of time before LS was tested as a diagnostic tool for portal hypertension. Meanwhile, several studies have compared LS directly against the invasive hepatic venous pressure gradient (HVPG) or the presence of esophageal varices in adults[@b54-hmer-2-049],[@b67-hmer-2-049]--[@b73-hmer-2-049] and children.[@b70-hmer-2-049] As shown in [Table 5](#t5-hmer-2-049){ref-type="table"}, there is an excellent direct correlation between LS and HVPG (r = 0.84--0.86)[@b67-hmer-2-049]--[@b69-hmer-2-049] with an AUROC for the detection of significant HVPG (\>6--12 mm Hg) of 0.92--0.99.[@b67-hmer-2-049]--[@b69-hmer-2-049] A cut-off value of ca. 20 kPa (13.6--34.9 kPa) predicted significant HVPG.[@b67-hmer-2-049]--[@b69-hmer-2-049] Interestingly, lower values were found for HCV (ca. 20 kPa) as compared to ALD (34 kPa).
More interestingly, LS correlated with the degree of esophageal varices (r = 0.6, *P* \< 0.0001)[@b71-hmer-2-049] and the AUROC for the prediction of significant varices was 0.71--0.95, with a comparable cut-off of ca. 20 kPa (see [Table 6](#t6-hmer-2-049){ref-type="table"}).[@b54-hmer-2-049],[@b69-hmer-2-049]--[@b73-hmer-2-049] [Figure 8](#f8-hmer-2-049){ref-type="fig"} and [Table 7](#t7-hmer-2-049){ref-type="table"} explain the more complex relation of liver and spleen stiffness with regard to the location of a potential thrombosis in the portocaval system. This might explain why the additional assessment of spleen stiffness could better predict portal hypertension and varices.[@b74-hmer-2-049] In addition, cirrhosis develops in postsinusoidal or sinusoidal thrombosis,[@b75-hmer-2-049],[@b76-hmer-2-049] but not in presinusoidal idiopathic portal hypertension (IPH).[@b77-hmer-2-049] Hence, no increased LS can be detected in patients with IPH, and this explains why in some patients a normal LS does not exclude portal hypertension and the presence of varices. Indeed, a recent report documented five patients presenting with variceal bleeding, two with splenomegaly, and one with ascites. All had large esophageal varices. Median HVPG was 8 mm Hg (range 3.5--14.5), clearly underestimating the true portal pressure due to the presinusoidal component of portal hypertension. Median LS was 8.9 kPa (range 6.8--14.9) and was unreliable in predicting the presence of fibrosis or of esophageal varices.[@b77-hmer-2-049] Liver stiffness and disease follow up ------------------------------------- ### Follow up studies in viral hepatitis C patients Meanwhile, several longitudinal studies have reported on LS during HCV treatment. Vergniol et al studied 416 patients, of whom 112 started treatment after enrolment.
In multivariate analysis, treatment was the only factor independently associated with a fall in LS.[@b78-hmer-2-049] Ogawa et al prospectively studied 145 Japanese patients with chronic HCV infection at baseline, at the end of treatment, and at 48 and 96 weeks after the end of treatment. LS significantly decreased in the groups with sustained virological response and biochemical response, but not in the nonresponders.[@b79-hmer-2-049] Andersen et al prospectively studied 114 Japanese patients with chronic HCV over a median follow up of 47--48 months. In this study, LS was significantly lower for patients with sustained viral response (SVR). The differences were more pronounced in the F2-F4 fibrosis group.[@b80-hmer-2-049] ### Liver stiffness and alcoholic liver disease follow up We recently performed a sequential FS study in patients with ALD undergoing alcohol detoxification[@b40-hmer-2-049] to test whether inflammation also interferes with LS assessment in ALD, and to provide a clinical algorithm for reliable fibrosis assessment in ALD by FS. We first performed sequential LS analysis before and after normalization of serum transaminases in a learning cohort of 50 patients with ALD admitted for alcohol detoxification. LS decreased in almost all patients within a mean observation interval of 5.3 days. Six patients (12%) would have been misdiagnosed with F3 and F4 fibrosis, but their LS decreased below the critical cut-off values of 8 and 12.5 kPa after normalization of transaminases. Of the serum transaminases, the decrease in LS correlated best with the decrease in glutamic oxaloacetic transaminase (GOT). No significant changes in LS were observed below GOT levels of 100 U/L. After establishing the association between LS and GOT levels, we applied the rule of GOT \< 100 U/L for reliable LS assessment in a second validation cohort of 101 patients with histologically confirmed ALD.
By excluding from this cohort those patients with GOT \> 100 U/L at the time of LS assessment, the AUROC for cirrhosis detection by FS improved from 0.921 to 0.945, while specificity increased from 80% to 90% at a sensitivity of 96%. A similar AUROC could be obtained for the lower F3 fibrosis stage if LS measurements were restricted to patients with GOT \< 50 U/L. Histological grading of inflammation did not further improve the diagnostic accuracy of LS. In conclusion, coexisting steatohepatitis markedly increases LS in patients with ALD, independent of fibrosis stage. Postponing cirrhosis assessment by FS during alcohol withdrawal until GOT decreases to \<100 U/L significantly improves the diagnostic accuracy. Liver stiffness and hepatocellular carcinoma -------------------------------------------- Some studies have tested whether LS allows the prediction of HCC risk, since cirrhosis is an independent risk factor for HCC. Foucher et al reported a cut-off value for the presence of HCC of 53.7 kPa.[@b81-hmer-2-049] Several studies have now looked in more detail into the relation of HCC and LS.[@b54-hmer-2-049],[@b72-hmer-2-049],[@b73-hmer-2-049],[@b82-hmer-2-049]--[@b84-hmer-2-049] As can be seen from [Table 8](#t8-hmer-2-049){ref-type="table"}, an LS of \>20 kPa drastically increases the risk for HCC. Not by coincidence, this cut-off value is almost identical with the cut-off value for esophageal varices and significant portal hypertension. Liver stiffness and surgery --------------------------- ### Liver stiffness and liver transplant Risk stratification of patients on the liver transplant waiting list is still an unresolved challenge, and the limited organ supply calls for more quantitative risk assessment strategies. LS could be a supplemental quantitative method since it recognizes pathological states of the liver that could all worsen the outcome, such as fibrosis, inflammation, venous pressure, cholestasis, or portal hypertension.
In a post-transplant study on patients infected with HCV, median LS at months 6, 9, and 12 was significantly higher in rapid fibrosers as compared to slow fibrosers. The slope of LS progression in rapid fibrosers was significantly greater than in slow fibrosers, suggesting two different speeds of liver fibrosis progression.[@b85-hmer-2-049] Multivariate analysis identified donor age, bilirubin level, and LS as independent predictors of fibrosis progression and portal hypertension in the estimation group.[@b85-hmer-2-049] Another study suggested that TE is a reliable tool to assess liver fibrosis in patients with recurrent HCV after living donor liver transplantation.[@b86-hmer-2-049] ### Liver stiffness and hepatectomy Tactile stiffness sensors were evaluated with success in the pre-FS era in patients undergoing partial hepatectomy to predict a sufficient remaining liver mass.[@b87-hmer-2-049]--[@b89-hmer-2-049] It remains open whether FS will add to the evaluation of critical liver mass, especially in fibrotic patients, prior to partial hepatectomy. Present algorithm to diagnose liver disease via liver stiffness =============================================================== Various algorithms have been presented, mainly for viral hepatitis, that use LS in combination with blood tests to improve the noninvasive diagnosis of liver fibrosis or to spare at least some patients from an invasive liver biopsy.[@b55-hmer-2-049],[@b90-hmer-2-049] Given the many interfering factors that modulate LS, however, we are somewhat skeptical about such approaches. These statistical approaches aim to automate a complex diagnostic decision procedure. In the end, each individual patient requires an individual differential diagnosis, and the various risks have to be carefully balanced. To mention just one example, many patients with viral hepatitis have additional liver diseases such as alcoholic liver disease or suboptimal dietary conditions.
These are all factors that can dramatically worsen the outcome of chronic hepatitis in a synergistic manner.[@b91-hmer-2-049] In this regard, at least to us, it is more useful to view LS as a novel physical parameter such as, for example, body temperature -- one that can be objectively measured and should then be interpreted in the full clinical context. We propose this more open and critical procedure since misinterpretations or biases can rapidly harm the patient and delay other important diagnostic or therapeutic measures. A general current scheme for the interpretation of LS is shown in [Figure 9](#f9-hmer-2-049){ref-type="fig"}. Although the definition of normal stiffness values is still under discussion and normal values need to be defined for various populations with regard to age, gender, and other factors, recent studies of healthy blood donors and of the influence of position changes and breath maneuvers suggest an LS \< 6 kPa as normal.[@b23-hmer-2-049],[@b24-hmer-2-049] Moreover, at least in our experience, an LS \< 6 kPa seems to exclude any manifest liver disease, since all potential confounding factors such as inflammation, cholestasis, or congestion increase LS. LS measurements are therefore an ideal screening tool to exclude any severe ongoing liver disease. Of course, one should be aware that certain other pathological conditions such as fatty liver or even terminal liver failure do not increase LS further, or may even decrease LS, but these conditions are easily discernible within the clinical context. If LS is higher than 6 kPa, an ultrasound is required to exclude mechanical cholestasis,[@b61-hmer-2-049] liver congestion,[@b30-hmer-2-049] or nodular masses. Typically, we obtain the ultrasound before stiffness measurements, since other valuable information such as splenomegaly, ascites, or signs of liver disease can be detected. In addition, the location for an optimal stiffness measurement is identified.
Thus, it rapidly becomes clear that a valid interpretation of LS is only possible in association with a qualified abdominal ultrasound. If ultrasound does not reveal any of the stiffness-modulating factors above, serum transaminases should be obtained. If the serum transaminases are normal, LS can be directly used to quantitate the degree of fibrosis. If the serum transaminases, mainly AST, are below 100 U/L, the diagnosis of F4 fibrosis is highly accurate, while F3 fibrosis should be viewed with caution. At AST levels higher than 100 U/L, an accurate determination of fibrosis stage is not possible. It should be mentioned that these transaminase cut-off values have been obtained for patients with ALD,[@b40-hmer-2-049] and future studies are required to determine the conditions for other liver diseases. The context-related interpretation of LS is more difficult in the case of several coexisting stiffness-related factors such as inflammation/fibrosis or liver congestion/cardiac cirrhosis. However, under certain conditions, a decision is still possible. For instance, in the case of ALD, the diagnosis of F4 cirrhosis can be made at LS \> 24 kPa despite ongoing severe alcoholic steatohepatitis.[@b40-hmer-2-049] Such upper cut-off values need to be confirmed and defined for all other liver diseases in larger populations (see [Figure 9](#f9-hmer-2-049){ref-type="fig"}). In addition, if possible, therapeutic interventions may help to more accurately differentiate fibrosis stage from other LS-increasing confounding factors. Thus, if liver congestion in a patient with congestive heart failure can be clearly cured by therapy with diuretics (as confirmed by ultrasound and blood tests), an increased but stable LS could directly be used to quantitate fibrosis stage. Under certain circumstances it is thus possible to estimate the contributions of venous pressure, mechanical cholestasis, and inflammation (hepatitis).
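The decision sequence just described can be summarized as a sketch. The thresholds are the ones given in the text (6, 8, 12.5, and the ALD-derived 24 kPa and AST 100 U/L); the function itself is illustrative and not a clinical instrument.

```python
# Hedged sketch of the LS interpretation flow described above:
# <6 kPa normal; otherwise exclude ultrasound-visible confounders,
# then gate fibrosis staging on AST (thresholds derived in ALD).

def interpret_ls(ls_kpa, confounder_on_ultrasound, ast_u_per_l):
    if ls_kpa < 6.0:
        return "normal: manifest liver disease essentially excluded"
    if confounder_on_ultrasound:
        return "resolve confounder (cholestasis, congestion, mass) first"
    if ast_u_per_l > 100.0:
        # In ALD, F4 can still be called above the 24 kPa upper cut-off.
        if ls_kpa > 24.0:
            return "F4 cirrhosis despite ongoing steatohepatitis (ALD)"
        return "staging unreliable: repeat after AST normalizes"
    if ls_kpa >= 12.5:
        return "F4 fibrosis highly likely"
    if ls_kpa >= 8.0:
        return "suggestive of F3 fibrosis (interpret with caution)"
    return "no advanced fibrosis by LS"
```

The ordering matters: ultrasound-visible confounders and transaminase activity are checked before any stiffness value is read as fibrosis, mirroring the argument of this section.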
[Figure 10](#f10-hmer-2-049){ref-type="fig"} gives typical empirical values for stiffness changes as obtained from previous reports.[@b30-hmer-2-049],[@b40-hmer-2-049],[@b61-hmer-2-049] Thus, during mechanical cholestasis caused by gallstones, an increase of bilirubin by 1 mg/dl will cause a mean increase in LS of ca. 1 kPa. Liver stiffness as molecular mechanism of liver fibrosis ======================================================== The molecular mechanisms of liver fibrosis are poorly understood despite extensive research activities over many decades.[@b92-hmer-2-049]--[@b94-hmer-2-049] Consequently, no targeted treatment options exist to directly prevent the progression of matrix deposition. It is intriguing that all chronic liver diseases eventually lead to liver cirrhosis, and the sequence of steatosis, steatohepatitis, and fibrosis/cirrhosis is generally accepted as causative. However, it is not known which of the intermediate steps are just bystanders and which are obligatory. In fact, most, if not all, liver diseases show various forms of inflammation and steatosis. It is also notable that in most scenarios, eg, ALD or HCV, only a minority of patients (ca. 15%) progress to cirrhosis.[@b91-hmer-2-049] This generates some optimism that there are genetic or environmental causes that determine fibrosis progression and that fibrosis progression is not an essential and constitutional process. This optimism is further nourished by the established knowledge that early causative treatment of liver diseases not only stops fibrosis progression but can even induce the complete reversal of fibrosis. Unfortunately, the conditions that define the "point of no return" are not known. LS and its direct relation to pressure[@b30-hmer-2-049],[@b61-hmer-2-049] may serve as an eye-opener for mechanical stretch as a long-neglected potential stimulus of matrix deposition.
It is indeed fascinating to see that all possible conditions leading to liver cirrhosis increase LS, and that these conditions are not always related to inflammation (which is typically regarded as the common road to liver fibrosis for all liver diseases). Thus, mechanical arrest of bile flow or hepatic vein blood flow dramatically increases LS, and both conditions are known to cause cirrhosis. Both conditions increase hydrostatic pressure in distinct compartments and ultimately lead to specific cirrhosis patterns (cardiac cirrhosis, biliary fibrosis). Although both conditions may also lead to remarkable signs of inflammation or hepatocellular necrosis, these are typically not as pronounced as in inflammatory liver diseases such as ALD or viral hepatitis. On the other hand, it has become clear that inflammatory conditions increase LS irrespective of fibrosis.[@b57-hmer-2-049],[@b58-hmer-2-049] This is not a surprise, since "tumor" (swelling) has been known since ancient times as a classical sign of inflammation, besides "calor" (heat), "rubor" (reddening), "dolor" (pain), and *functio laesa* (loss of function). It is, however, indisputable that inflammation-related tissue swelling, regardless of its multifactorial cause, also reflects increased pressure, in this case more closely related to osmotic pressure. Thus, in fact, all conditions that ultimately lead to cirrhosis cause increased LS, and this increased LS is initially related to increased pressure of various origins and in various compartments. It is very obvious that matrix and connective tissue are in balance with various kinds of pressures. These observations and thoughts lead to the following new paradigm, which we would like to call the pressure-stiffness-fibrosis sequence hypothesis (see [Figure 11](#f11-hmer-2-049){ref-type="fig"}): during chronic liver diseases, the accumulation of interstitial liquid and inflammatory infiltrate leads to an increase of local stress and stretching of blood vessels or bile ducts.
Therefore, increased mechanical stretch would stimulate the production of collagen (fibrotic tissue), which would result in a permanent stiffness increase, as if the liver were adapting its structure to mechanical conditions. Interestingly, increased LS values related to fibrotic tissue could be a long-term consequence of short-term stiffness increases due to inflammatory episodes during chronic liver disease. Portal hypertension would then be the consequence of increased vascular resistance caused either by inflammation or by a matrix-related increase of stiffness. Indeed, an increased rate of esophageal variceal bleeding is observed in patients with ALD in the phase of fulminant alcoholic steatohepatitis and in the absence of end-stage cirrhosis, and these patients are known to reach high but reversible LS values. Some very recent molecular findings may support the pressure-stiffness-matrix sequence hypothesis. Thus, mechanical stretch induces transforming growth factor (TGF)-β synthesis in hepatic stellate cells, and TGF-β is known to be highly expressed under profibrogenic conditions.[@b95-hmer-2-049] From animal experiments it was recently concluded that increases in LS precede fibrosis and potentially myofibroblast activation.[@b96-hmer-2-049] Thus, matrix stiffness could be a major determinant of the equilibrium of matrix-bound growth factors.[@b97-hmer-2-049],[@b98-hmer-2-049] These findings point to a regulatory interlink between the physical forces of gravity, hemodynamic stress, and movement in tissue development, an area of research that is still poorly understood.[@b99-hmer-2-049] Intercellular mechanical coupling of stress fibres via adherens junctions, intracellular calcium oscillations, and mechanosensitive ion channels have been discussed as mechanisms that control cell-dense tissue by coordinating the activity of myofibroblasts.[@b100-hmer-2-049] The pressure-stiffness-matrix hypothesis would also encourage a more in-depth look into the regulation of cell volume[@b101-hmer-2-049],[@b102-hmer-2-049] and of aquaporins.[@b103-hmer-2-049] In addition, a relation to vasoactive hormones such as the natriuretic peptides seems attractive; these hormones are increased in all patients with edematous disorders that lead to an increase in atrial tension or central blood volume, such as renal failure or liver cirrhosis with ascites.[@b104-hmer-2-049] Indeed, continuous intravenous infusion of atrial natriuretic peptide prevented liver fibrosis in rats.[@b105-hmer-2-049]

Liver stiffness and future perspectives
=======================================

The ability to measure LS noninvasively has opened a new realm for both the diagnosis and the molecular understanding of liver fibrosis. We will observe rapid technical improvement of ultrasound- and MRI-based elastography techniques. In addition, stiffness measurements of other organs such as the spleen, pancreas or kidney will become possible. Hopefully, miniaturization will enable stiffness measurements via endoscopic procedures. Modified technologies such as FS will be able to quantitate the degree of liver steatosis. Thus, a novel physical parameter has been developed to quantify hepatic steatosis. This VCTE-based measure of ultrasonic attenuation is called 'CAP', for 'controlled attenuation parameter', and demonstrates good performance for the diagnosis of fatty infiltration of more than 10% of hepatocytes.[@b106-hmer-2-049] With regard to LS, upcoming studies will have to clarify the following open questions:

- Can we identify a direct quantitative relation between type and histological localization of hepatitis, serum transaminases and LS?
- What is the diagnostic value of LS in more complex clinical settings, eg, a patient with combined alcoholic liver fibrosis, steatohepatitis, and cardiomyopathy?
- Could LS be part of prognostic scores for patients on the liver transplantation waiting list?
- What other factors or rare diseases increase LS?
- Could we use LS as a novel parameter to measure venous pressure in the context of intensive care settings or cardiology?
- How valuable is LS in the neonatal screening for inborn liver diseases?
- What are the gender- and age-specific normal stiffness values?
- What are the population-wide prevalence rates of increased LS and fibrosis?

The field of LS will boost many basic research activities, and novel miniaturized equipment is urgently required to allow LS measurements in small animals such as mice. These are some of the questions that need to be addressed in the future:

- What are the genetic and molecular determinants of LS?
- What are the kinetics of LS in various fibrosis models?
- What are the kinetics of stiffness resolution in these models, and is there a point of no return?
- Is there a critical cut-off value for stiffness that causes fibrosis?
- What is the role of vasoactive hormones, mechanosensing channels, and water channels such as aquaporins in LS and fibrosis?
- Are there pharmacological or other therapeutic approaches to modulate LS and treat liver fibrosis?

This work was supported by the Dietmar-Hopp Foundation and the Manfred-Lautenschläger Foundation. The authors are grateful to Professor Richard Ehman from the Mayo Clinic (Rochester, USA) for the very stimulating discussions.

**Disclosures**

SM reports no conflict of interest. LS developed VCTE (FibroScan) and is currently Director of Research and Development at Echosens.

![Invasive and noninvasive methods to determine liver fibrosis and the hepatic venous pressure gradient.](hmer-2-049Fig1){#f1-hmer-2-049}

![The FibroScan® vibration consists of a single period of a sinusoid with a center frequency of 50 Hz.
The standard M probe has a 2 mm peak-to-peak amplitude.](hmer-2-049Fig2){#f2-hmer-2-049}

![Liver stiffness measurements are performed on the right lobe of the liver in intercostal position using FibroScan®.](hmer-2-049Fig3){#f3-hmer-2-049}

![The FibroScan® operator uses **A**) A-mode and **B**) M-mode images to locate the liver. The shear wave velocity is deduced from the **C**) elastogram, which represents the strains induced in the liver by the shear wave propagation as a function of time and depth.](hmer-2-049Fig4){#f4-hmer-2-049}

![Liver stiffness range caused by matrix deposition (fibrosis) and pressure changes (osmotic, hydrostatic, intra-abdominal).](hmer-2-049Fig5){#f5-hmer-2-049}

![Not only matrix but also pressure-associated conditions influence liver stiffness.](hmer-2-049Fig6){#f6-hmer-2-049}

![Relation of liver stiffness with clinical fibrosis-related entities such as fibrosis stage, portal hypertension and esophageal bleeding.](hmer-2-049Fig7){#f7-hmer-2-049}

![Liver stiffness is increased in post-sinusoidal thrombosis (eg, Budd-Chiari syndrome) but not in pre-sinusoidal thrombosis (eg, portal vein thrombosis). Additional measurement of spleen stiffness closes the diagnostic gap.](hmer-2-049Fig8){#f8-hmer-2-049}

![Estimated increase of liver stiffness by various clinical conditions irrespective of fibrosis.\
**Note:** \*alcohol withdrawal.](hmer-2-049Fig9){#f9-hmer-2-049}

![Present diagnostic algorithm of liver stiffness. For details see text.\
\*Arrows indicate cured hepatitis, eg, after detoxification from alcohol or cure of hepatitis C virus infection.](hmer-2-049Fig10){#f10-hmer-2-049}

![Pressure-stiffness-matrix sequence hypothesis. Either hydrostatic (venous or bile) or osmotic (eg, inflammation) pressure increases liver stiffness, which, in turn, initiates increased matrix deposition via mechanical intercellular signaling. Matrix deposition finally leads to an irreversible increase of liver stiffness that is independent of pressure.
These events may ultimately enter a vicious cycle causing end-stage liver disease. For more details see text.](hmer-2-049Fig11){#f11-hmer-2-049}

###### Comparison of various techniques to assess liver stiffness

| Method | Vibration mode | Product name | Vibration source | Frequency | Advantages | Limitations |
|---|---|---|---|---|---|---|
| Static elastography | Quasi-static compression | eg, by Hitachi | None | Not applicable | Widely available in ultrasound scanners | Qualitative only |
| Magnetic resonance elastography | Shear wave | Optima MR450w 1.5 T | Continuous mechanical actuator | 50--60 Hz | 2D/3D stiffness mapping, frequency-controlled vibration, other organs | Expensive; metal implants (pacemakers, bone implants) |
| Acoustic radiation force impulse | Shear wave | Acuson S2000 | Transient radiation force | | Ascites, other organs | Accuracy, limited clinical data |
| Vibration-controlled transient elastography | Shear wave | FibroScan® | Transient mechanical actuator | 50 Hz | Largely validated, frequency-controlled vibration | Sensitive to body habitus (obesity, ascites, bowel interposition) |

###### Stiffness and shear velocity of liver and other organs by various methodological approaches

| | Liver | Pancreas | Spleen | Kidney | Ref. |
|---|---|---|---|---|---|
| MRE[\*](#tfn1-hmer-2-049){ref-type="table-fn"} | \~2.2 kPa (60 Hz) | \~2.0 kPa (60 Hz) | \~7.3 kPa (90 Hz) | | [@b107-hmer-2-049] |
| ARFI | 1.16--1.59 m/s | 1.4 m/s | 2.44 m/s | 2.24 m/s | [@b108-hmer-2-049],[@b46-hmer-2-049],[@b20-hmer-2-049],[@b109-hmer-2-049] |
| VCTE/FS | 4--6 kPa (50 Hz) | | | | [@b23-hmer-2-049],[@b24-hmer-2-049] |

**Note:** Young's modulus E (as measured by VCTE/FS) is three times higher than the MRE-measured shear stiffness μ according to the following equation: μ = E/3.

**Abbreviations:** MRE, magnetic resonance elastography; ARFI, acoustic radiation force impulse; VCTE, vibration-controlled transient elastography™; FS, FibroScan®.

###### Liver stiffness and fibrosis stages in various liver diseases

| Disease | N | Fibrosis-LS correlation | AUROC F3 | AUROC F4 | Cut-off F3 (kPa) | Cut-off F4 (kPa) | Ref. |
|---|---|---|---|---|---|---|---|
| HCV | 193 | | 0.9 | 0.95 | 9.5 | 12.5 | [@b32-hmer-2-049] |
| HCV | 935 | | 0.89 | 0.91 | | | [@b25-hmer-2-049] |
| HCV/HIV | 72 | 0.48; *P* \< 0.0001 | 0.91 | 0.97 | | 11.9 | [@b37-hmer-2-049] |
| HBV | 202 | 0.65; *P* \< 0.001 | 0.93 | 0.93 | | 11.0 | [@b110-hmer-2-049] |
| ALD | 103 | 0.72; *P* \< 0.014 | 0.9 | 0.92 | 11 | 19.5 | [@b38-hmer-2-049] |
| ALD | 45 | | | 0.97 | | 25.8 | [@b39-hmer-2-049] |
| ALD | 101 | 0.72; *P* \< 0.001 | 0.91 | 0.92 | 8 | 11.5 | [@b40-hmer-2-049] |
| NAFLD | 246 | | 0.92 | 0.95 | 7.9 | | [@b47-hmer-2-049] |
| PBC/PSC | 101 | 0.84; *P* \< 0.0001 | 0.95 | 0.96 | 9.8 | 17.3 | [@b42-hmer-2-049] |
| PBC | 80 | | | 0.96 | | | [@b43-hmer-2-049] |

**Abbreviations:** HCV, hepatitis C virus; HIV, human immunodeficiency virus; HBV, hepatitis B virus; ALD, alcoholic liver disease; NAFLD, nonalcoholic fatty liver disease; LS, liver stiffness; PBC, primary biliary cirrhosis; PSC, primary sclerosing cholangitis; AUROC, areas under receiver operating characteristic curves.
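The conversion stated in the table note (μ = E/3) can be combined with the standard elastic relation E = 3ρv_s² that underlies shear-wave methods, so that the VCTE scale (Young's modulus E in kPa), the ARFI scale (shear wave speed v_s in m/s) and the MRE scale (shear stiffness μ in kPa) can be compared on one footing. A minimal sketch, assuming a purely elastic, incompressible tissue with density ρ ≈ 1000 kg/m³; the helper names are ours:

```python
# Interconvert the three elastography scales used in the tables above,
# assuming a purely elastic, incompressible medium of density rho:
#   Young's modulus   E  = 3 * rho * v_s^2        (VCTE/FibroScan scale)
#   shear stiffness   mu = E / 3 = rho * v_s^2    (MRE scale)

RHO = 1000.0  # kg/m^3, assumed tissue density

def shear_speed_to_young_kpa(v_s):
    """ARFI shear wave speed (m/s) -> Young's modulus E (kPa)."""
    return 3.0 * RHO * v_s ** 2 / 1000.0  # /1000 converts Pa to kPa

def young_to_shear_stiffness_kpa(e_kpa):
    """Young's modulus E (kPa, VCTE scale) -> shear stiffness mu (kPa, MRE scale)."""
    return e_kpa / 3.0

# A normal-liver ARFI reading of 1.5 m/s corresponds to:
e = shear_speed_to_young_kpa(1.5)      # 6.75 kPa on the VCTE scale
mu = young_to_shear_stiffness_kpa(e)   # 2.25 kPa on the MRE scale
print(e, mu)
```

Note how the converted values agree with the tabulated normal ranges (VCTE 2--6 kPa, MRE ~2 kPa), which is the consistency the table note asserts.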
###### Comparison of liver stiffness obtained by various techniques for normal and cirrhotic livers

| | Normal | Fibrosis (F3) | Cirrhosis (F4) | Ref. |
|---|---|---|---|---|
| MRE[\*](#tfn4-hmer-2-049){ref-type="table-fn"} | 2 kPa (90 Hz) | | 5 kPa (90 Hz) | [@b10-hmer-2-049] |
| ARFI | 1.5 m/s | 1.8 m/s | \>1.95 m/s | [@b111-hmer-2-049] |
| ARFI | | | 2.1--2.3 m/s | [@b20-hmer-2-049],[@b109-hmer-2-049] |
| VCTE/FS | 2--6 kPa (50 Hz) | \>8 kPa (50 Hz) | \>12.5 kPa (50 Hz) | see above[@b23-hmer-2-049] |

**Note:** Young's modulus E (as measured by FibroScan/VCTE/FS) is three times higher than the MRE-measured shear stiffness μ according to the following equation: μ = E/3.

**Abbreviations:** MRE, magnetic resonance elastography; ARFI, acoustic radiation force impulse; VCTE, vibration-controlled transient elastography™; FS, FibroScan®.

###### Liver stiffness and hepatic venous pressure gradient

| Patients | N | HVPG vs LS correlation | HVPG (mm Hg) | AUROC for significant portal hypertension | Cut-off for significant portal hypertension | Ref. |
|---|---|---|---|---|---|---|
| HCV | 150 | 0.858; *P* \< 0.001 | | 0.945 | 21 kPa | [@b112-hmer-2-049] |
| HCV, ALD | 92 | 0.76 | | | 20.5 kPa (HCV), 34.9 kPa (ALD) | [@b68-hmer-2-049] |
| Liver transplant patients | 124 | 0.84; *P* \< 0.001 | \>6 | 0.93 | | [@b67-hmer-2-049] |
| HCV | 61 | 0.81; *P* \< 0.0001 | \>10 | 0.99 | 13.6 kPa | [@b69-hmer-2-049] |
| HCV | | | \>12 | 0.92 | 17.6 kPa | [@b69-hmer-2-049] |

**Abbreviations:** HCV, hepatitis C virus; ALD, alcoholic liver disease; HVPG, hepatic venous pressure gradient; LS, liver stiffness; AUROC, areas under receiver operating characteristic curves.

###### Liver stiffness and prediction of esophageal varices

| Patients | n | Cut-off for varices | AUROC | Sensitivity | Specificity | PPV/NPV | Ref. |
|---|---|---|---|---|---|---|---|
| HCV | 65 | 17.6 kPa | 0.76 | 0.9 | | | [@b69-hmer-2-049] |
| Children with biliary atresia | 49 | 9.7 kPa | 0.97 | 0.8 | | | [@b70-hmer-2-049] |
| Cirrhosis | 165 | 19.5 kPa | 0.83 | 0.84 | | 47/93 | [@b71-hmer-2-049] |
| HBV, LSM-spleen diameter to platelet ratio score (LSPS) | 90 | 0.95 | 0.947 | | | | [@b72-hmer-2-049] |
| HCV | | 21.5 kPa | 0.76 | 0.78 | | | [@b54-hmer-2-049] |
| HIV/HCV coinfected patients with liver cirrhosis | 102 | 21 kPa | 0.71 | 100 | | | [@b73-hmer-2-049] |

**Abbreviations:** HCV, hepatitis C virus; HBV, hepatitis B virus; HIV, human immunodeficiency virus; ALD, alcoholic liver disease; NAFLD, nonalcoholic fatty liver disease; LS, liver stiffness; PBC, primary biliary cirrhosis; PSC, primary sclerosing cholangitis; AUROC, areas under receiver operating characteristic curves; PPV, positive predictive value; NPV, negative predictive value.
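Read as a decision rule, the approximate VCTE cut-offs tabulated above (normal 2--6 kPa, \>8 kPa suggesting F3, \>12.5 kPa suggesting F4 at 50 Hz) can be sketched in a few lines. A minimal sketch with illustrative category labels; as the main text stresses, pressure-related confounders (inflammation, congestion, cholestasis) raise LS irrespective of fibrosis and must be excluded before any such mapping is meaningful:

```python
def stage_from_vcte_ls(ls_kpa):
    """Map a VCTE liver stiffness value (kPa, 50 Hz) to a coarse fibrosis
    category using the approximate cut-offs tabulated above.

    Caveat: only valid once pressure-related confounders (inflammation,
    congestion, cholestasis) have been ruled out."""
    if ls_kpa <= 6.0:
        return "normal (2-6 kPa)"
    if ls_kpa <= 8.0:
        return "indeterminate / early fibrosis"
    if ls_kpa <= 12.5:
        return "advanced fibrosis suspected (F3)"
    return "cirrhosis suspected (F4)"

print(stage_from_vcte_ls(5.0))   # normal (2-6 kPa)
print(stage_from_vcte_ls(14.0))  # cirrhosis suspected (F4)
```

The thresholds are disease-dependent (compare the per-etiology cut-offs in the fibrosis-stage table), so fixed values like these are a simplification.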
###### Liver stiffness and portal hypertension by pre- and postsinusoidal thrombosis

| Thrombosis | Disease | Fibrosis | Portal hypertension | Liver stiffness |
|---|---|---|---|---|
| Presinusoidal | Portal vein thrombosis | no | yes | normal |
| Presinusoidal | Idiopathic portal hypertension | no | yes | normal, slightly elevated |
| Sinusoidal thrombosis | Hepatic veno-occlusive disease (sinusoidal obstruction syndrome) | yes | yes | elevated |
| Postsinusoidal thrombosis | Budd-Chiari syndrome | yes | yes | elevated |

###### Liver stiffness and risk of hepatocellular carcinoma

| Patients | N | Liver stiffness | HCC likelihood | Ref. |
|---|---|---|---|---|
| HCV | 262 | \<10 kPa; 10.1--15 kPa; 15.1--25 kPa; \>25 kPa | 0.22; 0.73; 1.3; 5.0 (stratum-specific likelihood ratios) | [@b82-hmer-2-049] |
| HCV, prospective study | 984 | 10.1--15 kPa; 15.1--20 kPa; 20.1--25 kPa; \>25 kPa | 16.7; 20.9; 25.6; 45.5 (hazard ratio, as compared to LSM ≤ 10 kPa) | [@b83-hmer-2-049] |
| HCV, ALD | 265 | Patients with HCC had higher LS than patients without HCC: 35.3 vs 19.0 kPa | | [@b84-hmer-2-049] |

**Abbreviations:** HCV, hepatitis C virus; ALD, alcoholic liver disease; LS, liver stiffness; LSM, liver stiffness measurement; HCC, hepatocellular carcinoma.
The Mystery of Iniquity

by Stephan A. Hoeller

ON JUNE 10, 1991, A COVER STORY APPEARED in Time magazine on the topic of evil. The author, Lance Morrow, did not argue for a particular thesis and did not reach any conclusions. What he did, however, was in a sense more important. He began by stating three propositions: God is all-powerful. God is all-good. Terrible things happen. Citing several sources, Morrow said that you can match any two of these propositions, but not all three. You can declare that there is an all-powerful God who allows terrible things to happen, but this God could not be all-good. On the other hand, there might be an all-good God who lets terrible things happen because he does not have the power to stop them; thus he is not all-powerful. This analysis might easily have been stated by a Gnostic of the first three or four centuries of the Christian era, or for that matter by a contemporary Gnostic, such as the present writer. Not that Gnostics were the only ones who recognized this uniquely monotheistic predicament. The supreme medieval luminary of Catholic theology, St. Thomas Aquinas, admitted in his Summa Theologiae that the existence of evil is the best argument against the existence of God. If the concept of the monotheistic God is to be accepted, then the issue of evil has no viable explanation. Conversely, if evil exists, then the monotheistic God as presented by the mainstream religious traditions cannot exist.

Whence Cometh Evil?

Throughout history, religious traditions have accounted for the existence of evil in a number of ways. In primeval times, the undifferentiated nature of human consciousness allowed people to say that both good and bad come from the Divine. Thus archaic shamans would not have found it difficult to say that good and evil are visited upon human beings by the Great Spirit.
In the more sophisticated context of Sumero-Babylonian traditions, it was believed that the gods amused themselves by creating terrible things: freakish beings, evil demons, and horrible conditions for human life. To employ a psychohistorical rationale, one might say that when people did not yet possess a differentiated consciousness (which we may equate with the conscious ego), it was relatively easy for them to envision God or the gods as being like themselves, so that the coincidence of good and evil was part of their nature. More advanced spiritual traditions have inherited some of this attitude; thus in mystical Jewish theology we find the notion that God partakes of both good and evil tendencies (yetzirim). With the growth of consciousness, the mind begins to differentiate between the beneficent and the malefic sides of being. The tension induced by trying to hold a God concept that unites good and evil becomes unbearable, so that it becomes necessary for the mind to separate the two. The notion of radical dualism thus arises. The most prominent example is that of Zoroastrianism. Here the true and good God, Ahura Mazda (sometimes called Ormazd), possesses a divine antagonist known as Angra Mainyu (Ahriman). The two are engaged in a perennial cosmic struggle for supremacy. Although Ahura Mazda is supreme and his ultimate victory is assured, as long as creation endures Angra Mainyu will continue to fight him and bring suffering into the world. A sophisticated but very impersonal view of evil and its origins can be found in the great religions that originated in India. Most of these imply that evil is part of the unenlightened state of existence, and that the cause of evil is ignorance (avidya). If one attains to a transformed or enlightened consciousness and thus rises above all dualities, one is liberated from karma and from all other conditions in which evil plays a role.
Whether such liberation inevitably leads to the cessation of incarnate existence is not always clear, but it is clear that life as one has known it ceases, and with it evil ceases also. The fourth category is that of classical monotheism as found in mainstream Judaism and Christianity. Whereas some of the other traditions ascribe the existence of evil to God, to a malign counter-God, or to human ignorance, this position ascribes the origin of evil to human sin. The creation myth of the mainstream Judeo-Christian tradition, with its story of the Garden of Eden and of the curious events that are said to have transpired there, forms the foundation for this view. This belief holds that the transgressions committed by the first human pair brought about a "Fall" of creation, resulting in the present state of the world. The sin of the original pair passed by inheritance to all members of the human race, who are born corrupt, afflicted by the weight of this "original sin." Such evils as we find in this world, including natural disasters, plagues, and the ruthlessness of the food chain, are all somehow part of the momentous consequences of the Fall. As some scholars, notably Elaine Pagels, have pointed out, these mythologems inevitably exercise a profound influence on the cultures founded on them. Even in a secularized age like our own, the powerful shadow of such beliefs continues to cast a pall on our minds. One may wonder how differently our history would have proceeded had the guilt of the Fall not been present to oppress the souls of men and women in our culture!

The Gnostic View

All spiritual traditions acknowledge that the world is imperfect; they differ only in how they believe this happened and in what is to be done about it. Gnostics have always had their own views of these matters. They hold that the world is flawed not because of human sin, but because it was created in a flawed manner.
Buddhism (regarded by many scholars as the Gnosticism of Asia) begins with the recognition that earthly life is filled with suffering. Gnostics, both ancient and modern, agree. Suffering is indeed the existential manifestation of evil in the world. Although humans, with their complex physiology and psychology, are subject to torments of a singularly refined nature, the fear, pain, and misery of all other creatures are evident as well. To recall St. Paul's insight, all creation groans and travails in pain. Yet Gnostics have not been inclined to attribute such misfortunes to the sin of the first human pair. They reasoned that it makes much more sense to say that the world has not fallen but was made in a sadly imperfect manner to begin with. To put it in slightly more abstract terms, evil is part of the fabric of the world we live in; it is part and parcel of the existential reality of earthly life. If indeed there is a creator of this reality, then it is assuredly this creator who is responsible for the evil in it. Since, for the monotheistic religions, this creator is God, the Gnostic position appears blasphemous to conventional believers, and is often viewed with dismay even by those who consider themselves unbelievers. The Gnostic position may need to be considered in the light of the historical roots of the tradition. According to most contemporary scholars, Gnosticism originated in the Jewish religious matrix (probably in its heterodox manifestations) and then came to ally itself with the Jewish heresy that became Christianity. Thus the Gnostics were confronted with the image of the monotheistic God in the Old Testament and its adaptations in the New Testament. They faced a God who was often capricious, wrathful, vengeful, and unjust. It was easy for them to conclude that this flawed God might have created a world in his own flawed image. The greatest of all questions the Gnostics asked was this: is this flawed creator truly the ultimate, true, and good God?
Or is he a lesser deity, who is either ignorant of a greater power beyond himself or is a conscious impostor, arrogating to himself the position of the universal deity? The Gnostics answered these questions by saying that this creator is obviously not the true, ultimate God, but rather a demiurgos ("craftsman"), an intermediate, secondary deity. This Demiurge, whom they equated with the deity of the Old Testament, was the originator of evil and imperfection in the world. Thus the apparent blasphemy of attributing the world's evil to the creator is revealed as originating in the Gnostics' confrontation with the monotheistic God. Kindred movements, such as Hermeticism, did not face this predicament: being pagans, the Hermeticists did not inherit the dark, ambivalent figure of the Old Testament God, so they were able to adopt a less harsh position. (Ironically, today many people tend to favor Hermeticism over Gnosticism for this very reason.) Many have tried to evade recognition of this flawed creation and its flawed creator, but none of their arguments have impressed Gnostics. The ancient Greeks, especially the Platonists, advised people to look to the harmony of the universe, so that by venerating its grandeur they might forget their own afflictions as well as the innumerable grotesqueries of ordinary life. "Look at this beautiful world," they said; "see its superbly orderly way of functioning and perpetuating itself. How can one call something so beautiful and harmonious an evil thing?" To which Gnostics have always answered that since the flaws, forlornness, and alienation of existence are also undeniable, the harmony and order of the universe are at best only partial. Those influenced by Eastern spirituality have at times brought up the teaching of karma, whereby one's misdeeds generate misfortune later in life or even in another life, as explaining the imperfection of the manifest world.
Yet a Gnostic might counter that karma can at best only explain how the chain of suffering and imperfection works. It does not tell us why such a sorrowful system should exist in the first place.

Qualified Dualism

As we noted earlier, one way of explaining the existence of evil was radical dualism, of which the Zoroastrian faith is a possible example. The Gnostic position, by contrast, is not of a radically dual nature; rather it might be called "qualified dualism." In a simplified form one might define this position as declaring that good and evil are mixed in the manifest world; thus the world is not wholly evil, but it is not wholly good either. If the evil in the world should not blind us to the presence of good, neither should the good blind us to the reality of evil. Here we might resort to the approach that was most favored by the Gnostics themselves: the mythological. (The power of this method has been rediscovered by such contemporary figures as C. G. Jung and Joseph Campbell.) Myths telling of the commingling of good and evil in creation predated the Gnostics. One of these tales is the Greek myth of Dionysus. When this god was torn apart by the Titans, Zeus came to his aid and blasted the malefactors with a thunderbolt. The bodies of both the Titans and Dionysus were reduced to ashes and mixed. When all sorts of creatures, including humans, rose from these ashes, the divine nature of Dionysus was mingled with the evil nature of the Titans. Thus light and darkness are at war with each other within human nature and in the natural world. The Gnostics had their own myth about the origins of good and evil. They began by speaking of a boundless, blissful fullness (Pleroma) that dwells beyond all manifest existence. The Pleroma is the abode of and constitutes the essential nature of the true, ultimate God (alethes theos). Before time and memory, this ineffable fullness extended itself into the lower regions of being.
In the course of this emanation, it came to manifest itself in a number of intermediate deities who were rather like great angels endowed with enormous talents of creativity and organization. Some of these beings, or demiurgoi, became alienated from their supernal source, thus becoming replete with evil tendencies. Thus the world-creating will was tainted with self-will, arrogance, and the hunger for power; through the works performed by these alienated agencies, evil came to penetrate creation. Ever since then, as the Gnostic teacher Basilides reportedly said, "Evil adheres to created existence as rust adheres to iron." As one of these created beings, the human entity partakes of the nature of his flawed creators. The human body, being a material creation, is subject to disease, death, and various other evils; even the soul (psyche) is not free from imperfection. Only the spirit (pneuma), deeply hidden within the human essence, remains free from the admixture of evil and tends toward the true God. Such mythic statements can convey insights in a fashion that is not possible through other methods of communication. At the same time it must be admitted that these myths were formulated long ago and far away and so may profit from certain amplifications and clarifications within a contemporary context.

Contemporary Conclusions

Terrible things do happen, as the Time essay stated. The world is filled with evil, with grotesque horror and universal suffering. Fiendish humans, often possessing great power, torment and slay others daily. The history of the twentieth century offers much proof of rampant wickedness in the world. Believers in the monotheistic God and/or in karma often tell us that this does not matter all that much, because in the final analysis evil really promotes good. They seem to be saying that evil is not really evil at all, but good masquerading in an unpleasant disguise.
Yet this kind of topsy-turvy argument is an affront to all those who have looked evil in the face. To present this argument to survivors of the Holocaust or the Gulag or the killing fields would be insulting as well as ridiculous. For these victims, evil is evil, and all else is but an evasion. Moreover, many terrible things happen that are in no way due to human volition. While the perversities of the human condition are responsible for some of the suffering in this world, much of it is not our fault. Frequently, however, we believe that it is. Yet, whether occasioned by the myth of Adam and Eve or by the propaganda of some trendy folk today who make out humans to be the sole villains in the environment, the cultivation of guilt in the human mind is no remedy for evil. On the contrary, guilt usually begets more sorrow in the long run. Let us be done with this self-flagellation and try to mitigate the evils over which we have some control, while remembering that it is beyond our powers to eradicate misfortune altogether. Like the world, humans are a mixture of good and evil. Just as it is impossible to exorcise evil from the fabric of creation, so we cannot entirely get rid of it in ourselves. If human schemes and techniques were able to eliminate evil from human nature, they would have succeeded in doing so long ago. This is why so many spiritual traditions teach the need for redemption from outside. Every spiritual tradition worth its salt has always possessed a soteriology, a teaching about salvation. Gnostics, ancient and modern, do not perceive liberating gnosis as a do-it-yourself project. We cannot purify or psychoanalyze evil away by our own strength. The Messengers of Light recognized in the Gnostic tradition, such as Jesus, Mani, and others, have always been envisioned as the great facilitators of salvation. Their salvific mission is to enable the consciousness of the individual to experience gnosis.
An early Gnostic source, Excerpta de Theodoto, defines this gnosis as the knowledge of "who we were, what we have become; where we were, whereinto we have been thrown; whither we hasten, whence we are redeemed; what is birth and what rebirth." Many have noted the similarities between these Gnostic teachings and those of Hinduism and Buddhism. In all of these traditions, insight into the origin and nature of the manifest world is seen as liberating us from it and its evils, reuniting our spirits with transcendental reality. Unlike the great Eastern religions, however, Gnosticism specifically identifies the root of all evil as the faulty creation brought about by spiritual agencies of limited wisdom and goodness. The Gnostic view of the human condition thus also differs from the modern secular view. Gnostics do not share the assumption of many in our culture that there is a purely naturalistic and humanistic remedy for evil. Contemporary Gnostics for the most part agree with the fundamental insights of their ancient counterparts. Do modern Gnostics believe in the Demiurge? Do they believe in Messengers of Light? Do they regard such ideas as metaphysical truths or as mythologems hinting at more subtle and mysterious realities? The answer is that some Gnostics may believe these things more in a literal sense, while others may believe them symbolically; still others may hold a mixture of both views. What matters is not the precise form of these teachings but their substance. And this is clear enough. It speaks of the reality and power of evil, of its fundamental presence in all of manifest existence. It declares that while we may not be able to rid the world or ourselves of evil, we may and indeed will rise above it through gnosis. And when the task of this extrication is accomplished, then we shall indeed no longer fear the noonday devil or the terror that walks by night.
Introduction {#s1} ============ *Helicobacter pylori* is responsible for most duodenal and peptic ulcers and also plays an important role in gastric adenocarcinoma [@pone.0002689-Atherton1]--[@pone.0002689-Brenner1]. The pathogenic mechanism of *H. pylori* is unclear, but it is believed to involve complex host-bacterial interactions triggered by virulence genes [@pone.0002689-Amieva1], and it is possible that these effects are enhanced by the invasiveness of the bacterium [@pone.0002689-SeminoMora1]--[@pone.0002689-Necchi1]. Moreover, *H. pylori* was recently observed within gastric mucosa capillaries, where it appears to establish close association with erythrocytes [@pone.0002689-Necchi1], [@pone.0002689-Aspholm1]. Therefore, it is important to develop specific and sensitive molecular methods allowing the detection and identification of this microorganism in biological specimens. Culture of the bacterium is considered the gold standard, but the method is not sensitive and is specific only if additional testing is performed on the isolates. The method of choice involves polymerase chain reaction (PCR) amplification of specific *H. pylori* genes. However, this approach may be problematic because of the extensive polymorphism of many *H. pylori* genes and the absence of particular genes in some strains \[e.g. *cagA* [@pone.0002689-CamorlingaPonce1]\]. Among the genes that have been tested, *ureA* and *ureC* (also named *glmM*) appear sensitive, but they lack specificity. Therefore, the concurrent detection of multiple *H. pylori*-specific genes with different sets of primers has been considered necessary to achieve specific and sensitive diagnosis of the infection. Another approach has been to use *H. pylori 16S rRNA*. 
This ribosomal gene is distinctive in that it is present in all bacteria while, at the same time, it comprises nucleotide sequences that are specific to a given bacterial genus [@pone.0002689-Kolbert1], [@pone.0002689-Smith1]. Sequence analysis of the *16S rRNA* gene has led to our current understanding of prokaryotic phylogeny, and *H. pylori 16S rRNA* gene sequence analysis unambiguously differentiated *Helicobacter* from the closely related *Campylobacter* genus [@pone.0002689-Gorkiewicz1], thus allowing creation of the *Helicobacter* genus. Finally, the *H. pylori 16S rRNA* gene sequence has been used as a tool to differentiate *H. pylori* from other *Helicobacter* sp., especially for isolates from animal sources [@pone.0002689-Ho1]--[@pone.0002689-Fox1]. Here, we sequenced the *16S rRNA* genes of two *H. pylori* strains with markedly different DNA fingerprints that had been cultured from two patients living on different continents and with different endoscopic diagnoses. By matching these sequences with each other and with those available in the National Center for Biotechnology Information (NCBI) nucleotide database, we first identified a unique nucleotide domain that is homologous in most *H. pylori* strains. We then defined, within this domain, a sequence that is homologous among *H. pylori* strains but not among other bacterial species, and used this domain to design *H. pylori*-specific primers and probes to be used in a real-time quantitative RT-PCR (TaqMan) assay and an *in situ* hybridization (ISH) method. These methods can specifically detect fewer than 10 copies of *H. pylori* in gastric biopsies and also allow quantification of *H. pylori* density in biopsies from animals and patients with gastritis, gastric precancerous lesions, and cancer. Methods {#s2} ======= **Ethical approval** to carry out studies in humans was obtained from the Institutional Review Boards of the participating institutions, and written consent forms were obtained from each participant. 
In addition, studies performed in animals were approved by the Institutional Animal Care and Use Committee. *H. pylori* strains {#s2a} ------------------- Gastric antral biopsies were harvested from (1) an Albanian patient with gastric adenocarcinoma and (2) a U.S. Caucasian patient with marked gastritis but no ulcer. Biopsies were cultured using *Campylobacter* chocolatized blood agar plates supplemented with Trimethoprim, Vancomycin, Amphotericin B and Polymyxin B (Remel, Lenexa, KS) at 37°C in an atmosphere of 90% N~2~, 5% O~2~, and 5% CO~2~ (microaerobic conditions). Bacterial isolates consistent with *H. pylori* in shape, colony morphology, enzymatic activity, and Gram-negative status grew within 7--10 days. Single colony isolates were subcultured on sheep blood agar plates supplemented with Tryptic Soy Agar (Remel, Lenexa, KS), confirmed for enzymatic activity and Gram stain, and collected in phosphate buffer saline (PBS; 137 mM NaCl, 2.7 mM KCl, 10 mM phosphate buffer) for subsequent genomic DNA extraction and analysis. DNA extraction {#s2b} -------------- DNA was extracted from each isolate collected in PBS using the QIAamp DNA mini kit, and the samples were processed as described in the package insert (Qiagen Inc., Stanford). Random Amplification of Polymorphic DNA (RAPD) {#s2c} ---------------------------------------------- DNA fingerprinting using the RAPD technique was used to compare the isolates. A set of 5 different 10-mer primers (1247: 5′-AAGAGCCCGT-3′; 1254: 5′-CCGCAGCCAA-3′; 1281: 5′-AACGCGCAAC-3′; 1238: 5′-GCGATCCCCA-3′; 1290: 5′-GTGGATGCGA-3′) was used as published [@pone.0002689-Akopyants1]. *16S rRNA* gene amplification and sequencing {#s2d} -------------------------------------------- Total DNA was extracted from each patient\'s isolate and PCR-amplified using published primers (see supplementary [Methods S1](#pone.0002689.s001){ref-type="supplementary-material"}) [@pone.0002689-Eckloff1]. 
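As a rough illustration of the RAPD principle (not part of the published protocol), the sketch below predicts amplicon sizes for one of the 10-mer primers listed above (1254, CCGCAGCCAA): a band is expected wherever the primer finds a plus-strand binding site with a downstream minus-strand site within amplifiable distance. The template here is a hypothetical toy sequence, not an *H. pylori* genome.

```python
# A minimal in-silico RAPD sketch (illustrative only; the template below is a
# hypothetical toy sequence, not an H. pylori genome).

PRIMER = "CCGCAGCCAA"  # primer 1254 from the published set

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def rapd_amplicons(template, primer, max_len=3000):
    """Predict RAPD product sizes: a band forms wherever the primer binds the
    plus strand and, downstream within amplifiable distance, the minus strand
    (i.e. the primer's reverse complement appears on the plus strand)."""
    fwd = [i for i in range(len(template)) if template.startswith(primer, i)]
    rev = [i for i in range(len(template)) if template.startswith(revcomp(primer), i)]
    sizes = []
    for f in fwd:
        for r in rev:
            size = r + len(primer) - f
            if 0 < size <= max_len:
                sizes.append(size)
    return sorted(sizes)

# Toy 510-bp template with convergent binding sites at each end.
template = PRIMER + ("ACGT" * 123)[:490] + revcomp(PRIMER)
print(rapd_amplicons(template, PRIMER))  # a single predicted band of 510 bp
```

Strains yield different band patterns because single-base changes at these arbitrary 10-mer sites create or destroy binding sites, which is what makes RAPD useful as a fingerprinting method.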
The Basic Local Alignment Search Tool (nucleotide BLAST), National Center for Biotechnology Information (NCBI), NIH, (<http://www.ncbi.nlm.nih.gov/blast/Blast.cgi>) feature for alignment between two nucleotide sequences (bl2seq) [@pone.0002689-Tatusova1] was used to align the overlapping sequenced segments of the *16S rRNA* gene. The *16S rRNA* sequences of strains USU101 and USU102 were decoded and registered in the GenBank nucleotide database as EU544199 and EU544200, respectively. Histology and *in situ* hybridization {#s2e} ------------------------------------- Gastric biopsies were fixed in 4% paraformaldehyde within 30 seconds of harvesting, dehydrated in ethanol within two days, and embedded in paraffin. Sections were then stained with hematoxylin and eosin or according to Genta [@pone.0002689-Genta1], or processed for ISH as described in the supplementary methods [@pone.0002689-SeminoMora1], [@pone.0002689-Aspholm2]. Controls of the method {#s2f} ------------------ Control for nonspecific binding was performed by using: (1) sense instead of antisense probe; (2) hybridization buffer instead of antisense probe; (3) unlabeled antisense probe; (4) digoxigenin- or biotin-labeled probe for the scorpion *Buthus martensii* Karsch neurotoxin sequence \[5′-GGC CAC GCG TCG ACT AGT AC-3′\] [@pone.0002689-Lan1]; (5) RNase A pretreatment (Roche); (6) DNase I pretreatment (Roche); and (7) RNase plus DNase I pretreatment. *In silico* search for a *16S rRNA* sequence conserved in, and specific for, *H. pylori* strains {#s2g} ------------------------------------------------------------------------------------------------ The DNASTAR software ([www.dnastar.com](http://www.dnastar.com)) was used to perform multi-alignment of the two decoded sequences described above along with the sequences of the three strains that have been completely sequenced to date (J99, 26695, and HPAG1) and with the published sequences of the *16S* ribosomal RNA of *E. coli* (J01859), *S. bareilly* (U92196), *C. 
jejuni* (LO4315), *S. flexneri* (AE016991 AE014073), and *H. heilmannii* (AF506793). Design of primers and probes specifically recognizing published *H. pylori* strains {#s2h} ----------------------------------------------------------------------------------- The PrimerExpress® v2.0 Software was used to design multiple sets of real-time RT-PCR primers flanking an oligonucleotide probe. The rules and requirements described in the PrimerExpress tutorial [@pone.0002689-Applera1] were then applied to select the set that would provide maximum sensitivity and specificity of the assay. Locus-specific primers flanking an oligonucleotide probe labeled with a 5′ fluorescent reporter dye (FAM or TET) and a 3′ quencher dye (TAMRA) were ordered from Applied Biosystems ([www.appliedbiosystems.com](http://www.appliedbiosystems.com)). Validation of the primers and probes {#s2i} ------------------------------------ Pure cultures of *H. pylori*, *E. coli* (Top10, Invitrogen, Carlsbad, CA), *S. typhimurium* LT2, *V. cholerae* O139 (Classical Ogawa), *V. cholerae* O139 (El Tor), and *P. aeruginosa* were lysed and total DNA was extracted. The specificity of the primers and probes described above was then verified by real-time PCR using an ABI PRISM 7500 Sequence Detection System (Applied Biosystems) [@pone.0002689-Giulietti1]. In addition, smears of the pure cultures were streaked onto glass slides, immediately covered with a drop of 4% paraformaldehyde, and left to dry overnight. The next day, they were processed for ISH as described above. Cloning of the standard cRNA {#s2j} ---------------------------- The MEGAscript protocol for standard cRNA cloning (MEGAscript high yield transcription kit, Ambion) was used to incorporate the SP6 promoter into *H. pylori* strain J99 *16S rRNA* at a location upstream of the sequence of interest, thus ensuring that the promoter sequence was incorporated into the PCR product. 
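The cloned cRNA ultimately serves as the absolute standard for the real-time assay, where sample Ct values are converted to copy numbers against a standard curve of the 10-fold dilution series. As a minimal sketch of that quantification step, with hypothetical Ct values standing in for instrument readings:

```python
# Sketch of converting Ct values to absolute 16S rRNA copy numbers via a
# log-linear standard curve. The Ct values below are hypothetical stand-ins
# for an instrument run of the 10-fold cRNA dilution series (10^1..10^6 copies).

standards = {1: 36.1, 2: 32.8, 3: 29.4, 4: 26.1, 5: 22.7, 6: 19.3}  # log10(copies) -> Ct

def fit_standard_curve(standards):
    """Least-squares fit of Ct = slope * log10(copies) + intercept."""
    xs, ys = zip(*standards.items())
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def ct_to_copies(ct, slope, intercept):
    """Invert the standard curve to get copy number from a sample Ct."""
    return 10 ** ((ct - intercept) / slope)

slope, intercept = fit_standard_curve(standards)
print(round(slope, 2))  # -3.36 for these hypothetical values
print(round(ct_to_copies(27.8, slope, intercept)))  # roughly 3.0e3 copies
```

A slope near −3.32 corresponds to roughly 100% amplification efficiency; the hypothetical standards here give about −3.36.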
Conditions for primer extension were 38 cycles of 95°C for 15 sec, 60°C for 15 sec, and 72°C for 1 min, producing a 246 bp PCR product. The ABI Prism BigDye Terminator Cycle Sequencing Ready Reaction kit was used to verify that the sequence of the PCR product was identical to the corresponding *16S rRNA* sequence. In vitro transcription of cRNA was then performed using 2 µL (0.2 µg) of the PCR product as a template with the MEGAscript High Yield Transcription Kit (Ambion). This transcription product was purified with the RNeasy Mini Kit (Qiagen) and treated with DNase I during purification. The concentration of this cRNA was calculated from the mean of three OD measurements and then converted to copy number using Avogadro\'s number. Freshly prepared 10-fold serial dilutions of the stock solution, from 10^1^ to 10^6^ copies, were aliquoted and stored at −80°C. Absolute quantitative real-time RT-PCR (QRT-PCR) {#s2k} ------------------------------------------------ A single-tube reaction with a TaqMan One-Step RT-PCR Master Mix Reagents kit (Applied Biosystems) designed for reverse transcription (RT) and polymerase chain reaction (PCR) in a single buffer system was used in an ABI PRISM 7500 Sequence Detection System (Applied Biosystems, Foster City, CA). Primer and probe concentrations were first optimized using controls from a pool of total RNA extracted from *H. pylori* cultures and monkey gastric biopsies (BioChain Institute, Inc., Hayward, CA). The assay was then performed by adding 2 µl of 50 ng/µl monkey total RNA aliquots to the real-time RT-PCR reaction mix to a final volume of 50 µl. The RT step was performed at 48°C for 30 min, followed by 10 min at 95°C for AmpliTaq Gold activation. The PCR step consisted of 40 cycles of denaturation for 15 sec at 95°C and annealing/extension for 1 min at 60°C. All samples and cRNA standards were assayed without reverse transcriptase to confirm the absence of DNA contamination. Conversion of Ct values to *H. 
pylori 16S rRNA* copy numbers was performed using linear regression analysis of a standard curve derived from 10-fold serial dilutions (10^1^ to 10^6^ copies) of the cloned cRNA. Gastric biopsies {#s2l} ---------------- Three biopsies were obtained from each of the 23 rhesus monkeys studied in an inoculation experiment [@pone.0002689-Liu1]. As described above, the first biopsy was cultured for *H. pylori*, the second biopsy was fixed in formalin and either stained according to Genta [@pone.0002689-Genta1] or processed as unstained sections for ISH, and the third biopsy was processed to extract total RNA. Statistical Analysis {#s2m} -------------------- Data were entered into our Microsoft Access database. Log-transformed copy numbers were normally distributed. Pearson correlation coefficients (r) and associated probabilities (P) were calculated, and a two-sided P-value of 0.05 or less was considered statistically significant. Results {#s3} ======= DNA fingerprinting {#s3a} ------------------ In order to study the genomic diversity between various *H. pylori* strains, we performed RAPD fingerprinting analysis of strains USU101, USU102, J99, and 26695. As shown in [Figure 1](#pone-0002689-g001){ref-type="fig"}, the patterns of these four strains were markedly different from one another with all four primers used for RAPD. ![DNA fingerprinting (RAPD) of four *H. pylori* strains: USU101, isolated from an Albanian patient with gastric adenocarcinoma (1), USU102, isolated from a U.S. Caucasian patient with no ulcer (2), strain J99 (3), and strain 26695 (4).\ Note that the DNA fingerprints of the four strains are quite different from each other.](pone.0002689.g001){#pone-0002689-g001} *In silico* search for a *16S rRNA* sequence conserved in *H. pylori* strains {#s3b} ----------------------------------------------------------------------------- To examine whether a particular domain of *H. 
pylori 16S rRNA* sequence was conserved among strains with markedly different fingerprints, the DNASTAR software was used to perform multi-alignment of the *16S rRNA* sequences of the four strains described above. We discovered that a 546-bp nucleotide domain was 100% conserved among these sequences ([Figure 2A](#pone-0002689-g002){ref-type="fig"}). To determine whether this domain was also conserved among various *H. pylori* strains, we performed a nucleotide BLAST of this sequence and observed that the sequence was 100% homologous to 49 *H. pylori* sequences published in GenBank to date. ![Sequences of *H. pylori 16S rRNA* that are 100% homologous among USU-101, USU-102, J99, and 26695 *H. pylori* strains (A), and also do not match the *16S rRNA* sequences of *E. coli*, *S. bareilly*, *C. jejuni*, and *S. flexneri* (B), nor the sequences of *H. heilmannii* (C), and encompass the set of primers and TaqMan probe (D).\ E shows the sequences of the two ISH probes used in the present study (546-bp from 187 to 732 of J99 *16S rRNA* sequence).](pone.0002689.g002){#pone-0002689-g002} *In silico* search for a conserved *16S rRNA* sequence that is also specific to *H. pylori* strains {#s3c} --------------------------------------------------------------------------------------------------- In order to search for a region that is specific for *H. pylori*, the conserved 546-bp nucleotide domain was entered into the DNAStar software along with the published *16S rRNA* sequences of *E. coli*, *S. bareilly*, *C. jejuni*, *S. flexneri*, and *H. heilmannii*. We observed that a 229-bp domain of the conserved region did not match the other five bacteria ([Figure 2C](#pone-0002689-g002){ref-type="fig"}). Basic nucleotide BLAST alignment (Blastn) of this sequence demonstrated complete homology with 74 *H. pylori* strains, two *H. nemestrinae* and four *Helicobacter* sp. "liver" (that were subsequently found to be indistinguishable from *H. 
pylori* [@pone.0002689-Avenaud1], [@pone.0002689-Suerbaum1]), and 17 uncultured *Helicobacter* species. Sequences of these uncultured *Helicobacter* species had been determined from biopsies from human esophageal carcinoma or inflamed colon [@pone.0002689-Sturegard1], from the stomach of cheetahs \[a carnivore that is frequently colonized by the closest *H. pylori* relative, *H. acinonychis* [@pone.0002689-Eppinger1]\], or from the stomach of thoroughbred horses [@pone.0002689-Contreras1]. The following TaqMan RT-PCR primers and probe were then designed within the 229-bp sequence as described in [Materials and Methods](#s2){ref-type="sec"}: forward primer 5′-TCG GAA TCA CTG GGC GTA A-3′; reverse primer 5′-TTC TAT GGT TAA GCC ATA GGA TTT CAC-3′; probe 5′-TGA CTG ACT ATC CCG CCT ACG CGC T-3′ ([Figure 2D](#pone-0002689-g002){ref-type="fig"}). In addition, two probes for in situ hybridization (ISH) were designed within the same 229-bp sequence ([Figure 2E](#pone-0002689-g002){ref-type="fig"}). *In silico* validation of the RT-PCR set of primers and probe and of the ISH probes {#s3d} ----------------------------------------------------------------------------------- In order to validate the specificity of the set of two primers and a probe used in our real-time RT-PCR assay, we performed a BLAST alignment of the corresponding 76-bp sequence ([Figure 2D](#pone-0002689-g002){ref-type="fig"}) with the GenBank database. We observed 100% homology with 136 *H. pylori* strains, three *H. nemestrinae* and four *Helicobacter* sp. "liver" (that are, in fact, *H. pylori* [@pone.0002689-Avenaud1], [@pone.0002689-Suerbaum1]), one *H. acinonychis* [@pone.0002689-Eppinger1] and 37 uncultured *Helicobacter* species (isolated from human esophageal carcinoma, inflamed colon, or liver [@pone.0002689-Sturegard1], [@pone.0002689-Castera1], from seven cheetahs, and from a tiger). In addition, two *H. 
pylori* 16S rRNA sequences (AY057935 and AY057936) showing low homology (91% and 97%, respectively) with the 76-bp nucleotide sequence correspond to isolates referenced to the genomic sequences of the *H. pylori* strains 26695 and J99 in the ATCC catalog. It is noteworthy, however, that in contrast to these two ATCC isolates, both the 26695 and J99 strains are among those showing 100% homology with our 76-bp sequence. To clarify this apparent discrepancy, we performed BLAST alignment of AY057935 and AY057936 with their respective parental strains, and found 82 and 91% homology, respectively. Thus, it is likely that the AY057935 and AY057936 isolates are, in fact, mutated clones of the respective parental strains, or that they were contaminated during laboratory procedures. Alignment of the 37- and 33-bp sequences corresponding to the ISH probes revealed that they were 100% homologous with over 150 *H. pylori* strains but that there were at least two mismatches with different *Helicobacter* sp. such as *H. cetorum* and *H. bilis*. Interestingly, the ISH probes were also 100% homologous with several *Helicobacter* sp. isolates from horses, dogs, zoo seals, and other animals that live in close contact with humans. *In silico* verification of the specificity of the primers and probes {#s3e} --------------------------------------------------------------------- In order to determine whether the proposed method was specific for *H. pylori*, we performed a series of BLAST alignments (bl2seq) of the sequence corresponding to the RT-PCR primers and probe (71 bp of the 76-bp entire sequence) with the sequences of non-*H. pylori* bacteria. We observed 27 mismatches for *E. coli*, *S. bareilly*, and *S. flexneri*, 13 mismatches for *C. jejuni*, and 6 mismatches for *H. heilmannii*. *In vitro* validation of the RT-PCR primers and probes {#s3f} ------------------------------------------------------ By real-time RT-PCR, pure cultures of *H. pylori* were positive whereas pure cultures of *E. 
coli* (Top10), *S. typhimurium* LT2, *V. cholerae* O139 (Classical Ogawa), *V. cholerae* O139 (El Tor), and *P. aeruginosa* were negative. *In vitro* validation of the *in situ* hybridization probe {#s3g} ---------------------------------------------------------- Pure cultures of *H. pylori* were positive whereas pure cultures of *E. coli*, *S. typhimurium*, *V. cholerae*, and *P. aeruginosa* were negative ([Figure 3](#pone-0002689-g003){ref-type="fig"}). This method is being used in the laboratory to specifically verify that *H. pylori* single colony isolates are not contaminated by other bacteria. ![Smears of bacteria processed by ISH and FISH using biotin-labeled probe specific for *H. pylori 16S rRNA* (1,000X).\ *H. pylori* isolate processed by ISH and using biotin-labeled probe specific for *H. pylori 16S rRNA* (A; avidin peroxidase-DAB; brown; and B; avidin alkaline phosphatase BCIP/NBT; blue) or by FISH \[C; avidin- fluorescein (FITC) stained green\]. Negative controls (light violet stain due to the hematoxylin QS counterstaining but no brown or blue reaction): ISH using biotin-labeled probe specific for *H. pylori 16S rRNA* (avidin- peroxidase-DAB; negative) in a strain of *S. typhimurium* LT2 (D). Negative controls of methods using PBS (E) or scorpion toxin (F).](pone.0002689.g003){#pone-0002689-g003} Determination of *H. pylori* density in gastric biopsies from Rhesus monkeys by RT-PCR {#s3h} -------------------------------------------------------------------------------------- Primary cultures of the first biopsy were negative in 105 monkeys that had less than 500 copies/100 ng of RNA extracted from the second biopsy. The number of positive cultures increased progressively with increasing *H. pylori* density (500--5,000: 2/29; 5,000--50,000: 8/30; and \>50,000: 15/30). Visualization of *H. 
pylori* in Rhesus monkey gastric biopsies by Genta stain and ISH {#s3i} ------------------------------------------------------------------------------- Biopsies from a Rhesus monkey colonized by both *H. pylori* and *H. heilmannii* demonstrated that only *H. pylori*--shaped bacteria were detected by ISH ([Figure 4](#pone-0002689-g004){ref-type="fig"}). ![Gastric biopsy of a rhesus monkey with *H. pylori* and *H. heilmannii* co-infection.\ Genta stain (A: 400X; insert: 1,000X) demonstrates heavy *H. heilmannii* infection (typical tightly spiraled, ∼10 µm-long rods), in addition to a few *H. pylori*--like bacteria (∼3 µm-long and curved). *In situ* hybridization (ISH) with the *16S rRNA* probe (B: 400X; insert: 1,000X) demonstrates the presence of *H. pylori* (stained blue by avidin alkaline phosphatase/nitroblue tetrazolium), while other, tightly spiraled bacteria are negative.](pone.0002689.g004){#pone-0002689-g004} Discussion {#s4} ========== In the present study, we used an *in silico* approach to demonstrate that a 546-bp domain of *H. pylori 16S rRNA* is highly conserved in most *H. pylori 16S rRNA* sequences registered in the NCBI GenBank and that a 229-bp sub-domain of this conserved region is specific to *H. pylori*. Within this sub-domain, it was possible to design an ISH probe and a set of real-time RT-PCR primers and a TaqMan probe that are 100% homologous with over 100 *H. pylori* strains isolated from humans residing on four continents, from monkeys [@pone.0002689-Drazek1], [@pone.0002689-Doi1], and from cats [@pone.0002689-Handt1]. In addition, 100% homology was found with many *Helicobacter* sp. that were later identified as *H. pylori*. Two are listed as *H. nemestrinae* (AF363064 and AF348617), although the strains are now recognized to be *H. pylori* [@pone.0002689-Suerbaum1]. The revised GenBank description of the strain, under "source" and "organism", reflects the correction, although the name *H. 
nemestrinae* still remains associated with the accession number. Four other strains are published in GenBank as *Helicobacter* sp. "liver" (AF142583 and AF142585) although a subsequent phylogenetic study suggested that they are, in fact, *H. pylori* [@pone.0002689-Avenaud1]. Five other sequences correspond to those of *H. pylori*--like DNA extracted from the liver of patients with hepatitis C [@pone.0002689-Castera1]. Another strain is currently listed as *H. heilmannii* (AF506794) in NCBI, but this strain is not mentioned in the publication [@pone.0002689-ORourke1] because it clustered with *H. pylori* by both *16S rRNA* and urease sequencing (O\'Rourke, personal communication). Finally, 13 of the 100% homologous *Helicobacter* sp. strains are extremely close to *H. pylori* and were isolated from carnivores including cheetahs and a tiger, and from horses. Interestingly, these animals live in close association with humans and they may be infected with *H. pylori* [@pone.0002689-Eppinger1]. Importantly, the 76-bp region corresponding to the primers and probe and the 37- and 33-bp sequences of the ISH probes have multiple mismatches with non-*H. pylori* sequences. *16S rRNA* was chosen for detection and quantification of *H. pylori* because ribosomal RNAs exhibit a high degree of functional and evolutionary homology within all bacteria, and these sequences have been used for phylogenetic comparisons and classifications of microbial organisms [@pone.0002689-Drancourt1], [@pone.0002689-Gorkiewicz2]. Analysis of *16S rRNA* in bacteria led to the detection of conserved, semi-conserved and non-conserved domains in this gene and to the development of molecular techniques that can specifically identify a variety of bacterial species [@pone.0002689-Gray1]. *Helicobacter* genus-specific primers for *16S rRNA* have been used in PCR amplification as a screening tool to detect *Helicobacter* organisms in biological specimens [@pone.0002689-Fox1], [@pone.0002689-Riley1]. 
Although the sequences corresponding to these primers are common to most species within the genus *Helicobacter*, sequencing and restriction enzyme analysis showed that the nucleotide sequence delimited by the primers varies with the species [@pone.0002689-Fox1], [@pone.0002689-Riley1]. In order to specifically identify *H. pylori*, Ho et al. proposed an assay based on PCR amplification of a 109-nucleotide segment within the *16S rRNA* sequence [@pone.0002689-Ho1], but these primers were subsequently shown to be non-specific for *H. pylori* [@pone.0002689-Chong1]. In recent years, real-time RT-PCR and ISH have become standard methods in well-equipped laboratories, and many well-trained laboratory technicians have the required expertise to perform the tests. Therefore, we believe that the information provided in the present paper will lead to their use in clinical practice, especially since the calculated cost for real-time RT-PCR reagents and supplies is less than \$2.00/sample.

In summary, a 76-bp region of *H. pylori 16S rRNA* that is common to a large number of *H. pylori* sequences and is specific to this bacterium was used to design primers and probes to be used in real-time RT-PCR and ISH assays. Both approaches are very sensitive and specific for *H. pylori*, and the real-time RT-PCR assay can be used readily in most modern laboratories if frozen samples have been saved. If only archived specimens are available, then the more specialized *in situ* hybridization assay can be used. We propose that both assays combine sensitivity and specificity, making them strong clinical tools for precise and rapid identification of *H. pylori* in biological specimens harvested from humans, animals, or environmental sources.

Supporting Information {#s5}
======================

###### Text (0.04 MB DOC)

###### Click here for additional data file.

We thank Dr. R. Peek for providing strain J99 and Dr. S. Merrell for providing strain 26695 and for reviewing the manuscript.
The opinions and assertions contained herein are the private ones of the authors and are not to be construed as official or reflecting the views of the Department of Defense, the Uniformed Services University of the Health Sciences or the Defense Nuclear Agency.

**Competing Interests:** The authors have declared that no competing interests exist.

**Funding:** Work supported in part by USUHS grant R0-83GM and by NIH Grant R01-CA082312.

[^1]: Conceived and designed the experiments: HL CSM SQD AD. Performed the experiments: HL AR CSM AD. Analyzed the data: HL AR CSM SQD AD. Wrote the paper: HL CSM SQD AD.
Q: xutility file?

I'm trying to use C code with OpenCV for face detection and counting, but I cannot build the source. When I compile my project, the compiler reports a long list of errors, all of which point into the Visual Studio header file xutility. How can I solve this problem?

Code

// Include header files
#include "stdafx.h"
#include "cv.h"
#include "highgui.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <math.h>
#include <float.h>
#include <limits.h>
#include <time.h>
#include <ctype.h>
#include <iostream>
#include <fstream>
#include <vector>

using namespace std;

#ifdef _EiC
#define WIN32
#endif

int countfaces=0;
int numFaces = 0;
int k=0;
int list=0;
char filelist[512][512];
int timeCount = 0;
static CvMemStorage* storage = 0;
static CvHaarClassifierCascade* cascade = 0;

void detect_and_draw( IplImage* image );
void WriteInDB();
int found_face(IplImage* img,CvPoint pt1,CvPoint pt2);
int load_DB(char * filename);

const char* cascade_name = "C:\\Program Files\\OpenCV\\OpenCV2.1\\data\\haarcascades\\haarcascade_frontalface_alt_tree.xml";

// BEGIN NEW CODE
#define WRITEVIDEO
char* outputVideo = "c:\\face_counting1_tracked.avi";
//int faceCount = 0;
int posBuffer = 100;
int persistDuration = 10; //faces can drop out for 10 frames
int timestamp = 0;
float sameFaceDistThreshold = 30; //pixel distance
CvPoint facePositions[100];
int facePositionsTimestamp[100];

float distance( CvPoint a, CvPoint b )
{
    float dist = sqrt(float( (a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) ));
    return dist;
}

void expirePositions()
{
    for (int i = 0; i < posBuffer; i++)
    {
        if (facePositionsTimestamp[i] <= (timestamp - persistDuration)) //if a tracked pos is older than three frames
        {
            facePositions[i] = cvPoint(999,999);
        }
    }
}

void updateCounter(CvPoint center)
{
    bool newFace = true;
    for(int i = 0; i < posBuffer; i++)
    {
        if (distance(center, facePositions[i]) < sameFaceDistThreshold)
        {
            facePositions[i] = center;
            facePositionsTimestamp[i] = timestamp;
            newFace = false;
            break;
        }
    }
    if(newFace)
    {
        //push out oldest tracker
        for(int i = 1; i < posBuffer; i++)
        {
            facePositions[i] = facePositions[i - 1];
        }
        //put new tracked position on top of stack
        facePositions[0] = center;
        facePositionsTimestamp[0] = timestamp;
        countfaces++;
    }
}

void drawCounter(IplImage* image)
{
    // Create Font
    char buffer[5];
    CvFont font;
    cvInitFont(&font, CV_FONT_HERSHEY_SIMPLEX, .5, .5, 0, 1);
    cvPutText(image, "Faces:", cvPoint(20, 20), &font, CV_RGB(0,255,0));
    cvPutText(image, itoa(countfaces, buffer, 10), cvPoint(80, 20), &font, CV_RGB(0,255,0));
}

#ifdef WRITEVIDEO
CvVideoWriter* videoWriter = cvCreateVideoWriter(outputVideo, -1, 30, cvSize(240, 180));
#endif
//END NEW CODE

int main( int argc, char** argv )
{
    CvCapture* capture = 0;
    IplImage *frame, *frame_copy = 0;
    int optlen = strlen("--cascade=");
    const char* input_name;

    if( argc > 1 && strncmp( argv[1], "--cascade=", optlen ) == 0 )
    {
        cascade_name = argv[1] + optlen;
        input_name = argc > 2 ? argv[2] : 0;
    }
    else
    {
        cascade_name = "C:\\Program Files\\OpenCV\\OpenCV2.1\\data\\haarcascades\\haarcascade_frontalface_alt_tree.xml";
        input_name = argc > 1 ? argv[1] : 0;
    }

    cascade = (CvHaarClassifierCascade*)cvLoad( cascade_name, 0, 0, 0 );
    if( !cascade )
    {
        fprintf( stderr, "ERROR: Could not load classifier cascade\n" );
        fprintf( stderr, "Usage: facedetect --cascade=\"<cascade_path>\" [filename|camera_index]\n" );
        return -1;
    }
    storage = cvCreateMemStorage(0);

    //if( !input_name || (isdigit(input_name[0]) && input_name[1] == '\0') )
    //    capture = cvCaptureFromCAM( !input_name ? 0 : input_name[0] - '0' );
    //else
    capture = cvCaptureFromAVI( "c:\\face_counting1.avi" );

    cvNamedWindow( "result", 1 );

    if( capture )
    {
        for(;;)
        {
            if( !cvGrabFrame( capture ))
                break;
            frame = cvRetrieveFrame( capture );
            if( !frame )
                break;
            if( !frame_copy )
                frame_copy = cvCreateImage( cvSize(frame->width,frame->height), IPL_DEPTH_8U, frame->nChannels );
            if( frame->origin == IPL_ORIGIN_TL )
                cvCopy( frame, frame_copy, 0 );
            else
                cvFlip( frame, frame_copy, 0 );

            detect_and_draw( frame_copy );

            if( cvWaitKey( 30 ) >= 0 )
                break;
        }
        cvReleaseImage( &frame_copy );
        cvReleaseCapture( &capture );
    }
    else
    {
        if( !input_name || (isdigit(input_name[0]) && input_name[1] == '\0'))
            cvNamedWindow( "result", 1 );

        const char* filename = input_name ? input_name : (char*)"lena.jpg";
        IplImage* image = cvLoadImage( filename, 1 );

        if( image )
        {
            detect_and_draw( image );
            cvWaitKey(0);
            cvReleaseImage( &image );
        }
        else
        {
            /* assume it is a text file containing the
               list of the image filenames to be processed - one per line */
            FILE* f = fopen( filename, "rt" );
            if( f )
            {
                char buf[1000+1];
                while( fgets( buf, 1000, f ) )
                {
                    int len = (int)strlen(buf);
                    while( len > 0 && isspace(buf[len-1]) )
                        len--;
                    buf[len] = '\0';
                    image = cvLoadImage( buf, 1 );
                    if( image )
                    {
                        detect_and_draw( image );
                        cvWaitKey(0);
                        cvReleaseImage( &image );
                    }
                }
                fclose(f);
            }
        }
    }

    cvDestroyWindow("result");

#ifdef WRITEVIDEO
    cvReleaseVideoWriter(&videoWriter);
#endif

    return 0;
}

void detect_and_draw( IplImage* img )
{
    static CvScalar colors[] =
    {
        {{0,0,255}}, {{0,128,255}}, {{0,255,255}}, {{0,255,0}},
        {{255,128,0}}, {{255,255,0}}, {{255,0,0}}, {{255,0,255}}
    };

    double scale = 1.3;
    IplImage* gray = cvCreateImage( cvSize(img->width,img->height), 8, 1 );
    IplImage* small_img = cvCreateImage( cvSize( cvRound (img->width/scale), cvRound (img->height/scale)), 8, 1 );
    CvPoint pt1, pt2;
    int i;

    cvCvtColor( img, gray, CV_BGR2GRAY );
    cvResize( gray, small_img, CV_INTER_LINEAR );
    cvEqualizeHist( small_img, small_img );
    cvClearMemStorage( storage );

    if( cascade )
    {
        double t = (double)cvGetTickCount();
        CvSeq* faces = cvHaarDetectObjects( small_img, cascade, storage, 1.1, 2, 0/*CV_HAAR_DO_CANNY_PRUNING*/, cvSize(30, 30) );
        t = (double)cvGetTickCount() - t;
        printf( "detection time = %gms\n", t/((double)cvGetTickFrequency()*1000.) );
        if (faces)
        {
            //To save the detected faces into separate images, here's a quick and dirty code:
            char filename[6];
            for( i = 0; i < (faces ? faces->total : 0); i++ )
            {
                /* CvRect* r = (CvRect*)cvGetSeqElem( faces, i );
                CvPoint center;
                int radius;
                center.x = cvRound((r->x + r->width*0.5)*scale);
                center.y = cvRound((r->y + r->height*0.5)*scale);
                radius = cvRound((r->width + r->height)*0.25*scale);
                cvCircle( img, center, radius, colors[i%8], 3, 8, 0 );*/

                // Create a new rectangle for drawing the face
                CvRect* r = (CvRect*)cvGetSeqElem( faces, i );

                // Find the dimensions of the face, and scale it if necessary
                pt1.x = r->x*scale;
                pt2.x = (r->x+r->width)*scale;
                pt1.y = r->y*scale;
                pt2.y = (r->y+r->height)*scale;

                // Draw the rectangle in the input image
                cvRectangle( img, pt1, pt2, CV_RGB(255,0,0), 3, 8, 0 );

                CvPoint center;
                int radius;
                center.x = cvRound((r->x + r->width*0.5)*scale);
                center.y = cvRound((r->y + r->height*0.5)*scale);
                radius = cvRound((r->width + r->height)*0.25*scale);
                cvCircle( img, center, radius, CV_RGB(255,0,0), 3, 8, 0 );

                //update counter
                updateCounter(center);

                int y=found_face(img,pt1,pt2);
                if(y==0)
                    countfaces++;
            }//end for
            printf("Number of detected faces: %d\t",countfaces);
        }//end if

        //delete old track positions from facePositions array
        expirePositions();
        timestamp++;

        //draw counter
        drawCounter(img);

#ifdef WRITEVIDEO
        cvWriteFrame(videoWriter, img);
#endif

        cvShowImage( "result", img );
        cvDestroyWindow("Result");
        cvReleaseImage( &gray );
        cvReleaseImage( &small_img );
    }//end if
} //end void

int found_face(IplImage* img,CvPoint pt1,CvPoint pt2)
{
    /*if (faces)
    {*/
    CvSeq* faces = cvHaarDetectObjects( img, cascade, storage, 1.1, 2, CV_HAAR_DO_CANNY_PRUNING, cvSize(40, 40) );
    int i=0;
    char filename[512];
    for( i = 0; i < (faces ? faces->total : 0); i++ )
    {
        //int scale = 1, i=0;
        //i=iface;
        //char filename[512];
        /* extract the rectanlges only */
        // CvRect face_rect = *(CvRect*)cvGetSeqElem( faces, i);
        CvRect face_rect = *(CvRect*)cvGetSeqElem( faces, i);
        //IplImage* gray_img = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 1 );
        IplImage* clone = cvCreateImage (cvSize(img->width, img->height), IPL_DEPTH_8U, img->nChannels );
        IplImage* gray = cvCreateImage (cvSize(img->width, img->height), IPL_DEPTH_8U, 1 );
        cvCopy (img, clone, 0);
        cvNamedWindow ("ROI", CV_WINDOW_AUTOSIZE);
        cvCvtColor( clone, gray, CV_RGB2GRAY );
        face_rect.x = pt1.x;
        face_rect.y = pt1.y;
        face_rect.width = abs(pt1.x - pt2.x);
        face_rect.height = abs(pt1.y - pt2.y);
        cvSetImageROI ( gray, face_rect);
        //// * rectangle = cvGetImageROI ( clone );
        face_rect = cvGetImageROI ( gray );
        cvShowImage ("ROI", gray);
        k++;
        char *name=0;
        name=(char*) calloc(512, 1);
        sprintf(name, "Image%d.pgm", k);
        cvSaveImage(name, gray);
        ////////////////
        for(int j=0;j<512;j++)
            filelist[list][j]=name[j];
        list++;
        WriteInDB();
        //int found=SIFT("result.txt",name);
        cvResetImageROI( gray );
        //return found;
        return 0;
        // }//end if
    }//end for
}//end void

void WriteInDB()
{
    ofstream myfile;
    myfile.open ("result.txt");
    for(int i=0;i<512;i++)
    {
        if(strcmp(filelist[i],"")!=0)
            myfile << filelist[i]<<"\n";
    }
    myfile.close();
}

Error messages

Error 3 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
Error 8 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
Error 13 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\program files\microsoft visual studio 9.0\vc\include\xutility 766
Error 18 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\program files\microsoft visual studio 9.0\vc\include\xutility 768
Error 23 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\program files\microsoft visual studio 9.0\vc\include\xutility 769
Error 10 error C2868: 'std::iterator_traits<_Iter>::value_type' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 765
Error 25 error C2868: 'std::iterator_traits<_Iter>::reference' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 769
Error 20 error C2868: 'std::iterator_traits<_Iter>::pointer' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 768
Error 5 error C2868: 'std::iterator_traits<_Iter>::iterator_category' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 764
Error 15 error C2868: 'std::iterator_traits<_Iter>::difference_type' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 766
Error 9 error C2602: 'std::iterator_traits<_Iter>::value_type' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 765
Error 24 error C2602: 'std::iterator_traits<_Iter>::reference' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 769
Error 19 error C2602: 'std::iterator_traits<_Iter>::pointer' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 768
Error 4 error C2602: 'std::iterator_traits<_Iter>::iterator_category' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 764
Error 14 error C2602: 'std::iterator_traits<_Iter>::difference_type' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 766
Error 7 error C2146: syntax error : missing ';' before identifier 'value_type' c:\program files\microsoft visual studio 9.0\vc\include\xutility 765
Error 22 error C2146: syntax error : missing ';' before identifier 'reference' c:\program files\microsoft visual studio 9.0\vc\include\xutility 769
Error 17 error C2146: syntax error : missing ';' before identifier 'pointer' c:\program files\microsoft visual studio 9.0\vc\include\xutility 768
Error 2 error C2146: syntax error : missing ';' before identifier 'iterator_category' c:\program files\microsoft visual studio 9.0\vc\include\xutility 764
Error 12 error C2146: syntax error : missing ';' before identifier 'difference_type' c:\program files\microsoft visual studio 9.0\vc\include\xutility 766
Error 6 error C2039: 'value_type' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 765
Error 21 error C2039: 'reference' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 769
Error 16 error C2039: 'pointer' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 768
Error 1 error C2039: 'iterator_category' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 764
Error 11 error C2039: 'difference_type' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 766

A: As mentioned, you have a function named "distance" in your code. So did I, and I was getting the same errors; once I renamed my function to "Distance", the code compiled successfully. The underlying cause: the standard library declares the function template std::distance, which in Visual Studio lives in the xutility header, and your using namespace std; directive pulls it into the same scope as your own distance. When the compiler resolves the unqualified call distance(center, facePositions[i]), it also considers the std::distance template and tries to instantiate std::iterator_traits<CvPoint>; that is exactly what the errors at xutility lines 764-769 complain about ('iterator_category' is not a member of 'CvPoint', and so on). So the solution is just to rename your function, or to qualify the call as ::distance(...).

A: You state that you're "trying to use C code with OpenCV", but your code contains #include <iostream>. That's a C++ header.

Now, there's no such thing as a C/C++ language. C and C++ are two distinct languages. You'll have to choose.

A: I guess that one of your C files includes a header file that is C++, for example:

#include <iostream>

A general approach for solving such issues is isolating the problem:

- Determine which source file is problematic
- Remove as much code as possible from that file while making sure the problem still appears
- Edit your question, showing that code
Supraoptic nucleus

The supraoptic nucleus (SON) is a nucleus of magnocellular neurosecretory cells in the hypothalamus of the mammalian brain. The nucleus is situated at the base of the brain, adjacent to the optic chiasm. In humans, the SON contains about 3,000 neurons.

Function

The cell bodies produce the peptide hormone vasopressin, which is also known as anti-diuretic hormone (ADH). This chemical messenger travels via the bloodstream to its target cells in the papillary ducts in the kidneys, enhancing water reabsorption. In the cell bodies, the hormones are packaged in large, membrane-bound vesicles that are transported down the axons to the nerve endings. The secretory granules are also stored in packets along the axon called Herring bodies. Similar magnocellular neurons are also found in the paraventricular nucleus.

Signaling

Each neuron in the nucleus has one long axon that projects to the posterior pituitary gland, where it gives rise to about 10,000 neurosecretory nerve terminals. The magnocellular neurons are electrically excitable: in response to afferent stimuli from other neurons, they generate action potentials, which propagate down the axons. When an action potential invades a neurosecretory terminal, the terminal is depolarised, and calcium enters the terminal through voltage-gated channels. The calcium entry triggers the secretion of some of the vesicles by a process known as exocytosis. The vesicle contents are released into the extracellular space, from where they diffuse into the bloodstream.

Regulation of supraoptic neurons

Vasopressin (antidiuretic hormone, ADH) is released in response to increased solute concentration in the blood, decreased blood volume, or decreased blood pressure. Some other inputs come from the brainstem, including from some of the noradrenergic neurons of the nucleus of the solitary tract and the ventrolateral medulla.
However, many of the direct inputs to the supraoptic nucleus come from neurons just outside the nucleus (the "perinuclear zone"). Of the afferent inputs to the supraoptic nucleus, most contain either the inhibitory neurotransmitter GABA or the excitatory neurotransmitter glutamate, but these transmitters often co-exist with various peptides. Other afferent neurotransmitters include noradrenaline (from the brainstem), dopamine, serotonin, and acetylcholine.

The supraoptic nucleus as a "model system"

The supraoptic nucleus is an important "model system" in neuroscience. There are many reasons for this: some technical advantages of working on the supraoptic nucleus are that the cell bodies are relatively large, the cells make exceptionally large amounts of their secretory products, and the nucleus is relatively homogeneous and easy to separate from other brain regions. The gene expression and electrical activity of supraoptic neurons have been studied extensively, in many physiological and experimental conditions. These studies have led to many insights of general importance, as in the examples below.

Morphological plasticity in the supraoptic nucleus

Anatomical studies using electron microscopy have shown that the morphology of the supraoptic nucleus is remarkably adaptable. For example, during lactation there are large changes in the size and shape of the oxytocin neurons, in the numbers and types of synapses that these neurons receive, and in the structural relationships between neurons and glial cells in the nucleus. These changes arise during parturition, and are thought to be important adaptations that prepare the oxytocin neurons for a sustained high demand for oxytocin. Oxytocin is essential for milk let-down in response to suckling. These studies showed that the brain is much more "plastic" in its anatomy than previously recognized, and led to great interest in the interactions between glial cells and neurons in general.
Stimulus-secretion coupling

In response to, for instance, a rise in the plasma sodium concentration, vasopressin neurons also discharge action potentials in bursts, but these bursts are much longer and less intense than the bursts displayed by oxytocin neurons, and the bursts in vasopressin cells are not synchronised. It seemed strange that the vasopressin cells should fire in bursts. As the activity of the vasopressin cells is not synchronised, the overall level of vasopressin secretion into the blood is continuous, not pulsatile. Richard Dyball and his co-workers speculated that this pattern of activity, called "phasic firing", might be particularly effective for causing vasopressin secretion. They showed this to be the case by studying vasopressin secretion from the isolated posterior pituitary gland in vitro. They found that vasopressin secretion could be evoked by electrical stimulus pulses applied to the gland, and that much more hormone was released by a phasic pattern of stimulation than by a continuous pattern of stimulation. These experiments led to interest in "stimulus-secretion coupling" - the relationship between electrical activity and secretion. Supraoptic neurons are unusual because of the large amounts of peptide that they secrete, and because they secrete the peptides into the blood. However, many neurons in the brain, and especially in the hypothalamus, synthesize peptides. It is now thought that bursts of electrical activity might be generally important for releasing large amounts of peptide from peptide-secreting neurons.

Dendritic secretion

Supraoptic neurons typically have 1-3 large dendrites, most of which project ventrally to form a mat of processes at the base of the nucleus, called the ventral glial lamina.
The dendrites receive most of the synaptic terminals from afferent neurons that regulate the supraoptic neurons, but neuronal dendrites are often actively involved in information processing, rather than being simply passive receivers of information. The dendrites of supraoptic neurons contain large numbers of neurosecretory vesicles that contain oxytocin and vasopressin, which can be released from the dendrites by exocytosis. The oxytocin and vasopressin that are released at the posterior pituitary gland enter the blood and cannot re-enter the brain, because the blood–brain barrier does not allow oxytocin and vasopressin through, but the oxytocin and vasopressin released from dendrites act within the brain. Oxytocin neurons themselves express oxytocin receptors, and vasopressin neurons express vasopressin receptors, so dendritically-released peptides "autoregulate" the supraoptic neurons. Francoise Moos and Phillipe Richard first showed that the autoregulatory action of oxytocin is important for the milk-ejection reflex. These peptides have relatively long half-lives in the brain (about 20 minutes in the CSF), and they are released in large amounts in the supraoptic nucleus, so they are available to diffuse through the extracellular spaces of the brain to act at distant targets. Oxytocin and vasopressin receptors are present in many other brain regions, including the amygdala, brainstem, and septum, as well as most nuclei in the hypothalamus. Because so much vasopressin and oxytocin are released at this site, studies of the supraoptic nucleus have made an important contribution to understanding how release from dendrites is regulated, and in understanding its physiological significance. Studies have demonstrated that secretin helps to facilitate dendritic oxytocin release in the SON, and that secretin administration into the SON enhances social recognition in rodents.
This enhanced social capability appears to work through secretin's effects on oxytocin neurons in the SON, as blocking oxytocin receptors in this region blocks social recognition.

Co-existing peptides

Vasopressin neurons and oxytocin neurons make many other neuroactive substances in addition to vasopressin and oxytocin, though most are present only in small quantities. However, some of these other substances are known to be important. Dynorphin produced by vasopressin neurons is involved in regulating the phasic discharge patterning of vasopressin neurons, and nitric oxide produced by both neuronal types is a negative-feedback regulator of cell activity. Oxytocin neurons also make dynorphin; in these neurons, dynorphin acts at the nerve terminals in the posterior pituitary as a negative feedback inhibitor of oxytocin secretion. Oxytocin neurons also make large amounts of cholecystokinin, as well as the cocaine- and amphetamine-regulated transcript (CART).

See also

Paraventricular nucleus
Q: Unable to install RPostgreSQL package in R Studio on CentOS 7

I installed PostgreSQL 9.6 with PostGIS 2.3 using the one-click installer from EnterpriseDB (available here) on my CentOS 7 (x64) Linux machine. Now I am trying to connect RStudio to Postgres. To do so, I tried to install the RPostgreSQL package in RStudio, but I am getting the following error:

> install.packages("RPostgreSQL")
Installing package into ‘/home/jk/R/x86_64-redhat-linux-gnu-library/3.3’
(as ‘lib’ is unspecified)
trying URL 'https://cran.rstudio.com/src/contrib/RPostgreSQL_0.4-1.tar.gz'
Content type 'unknown' length 476204 bytes (465 KB)
==================================================
downloaded 465 KB

* installing *source* package ‘RPostgreSQL’ ...
** package ‘RPostgreSQL’ successfully unpacked and MD5 sums checked
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
checking for pg_config... no
configure: checking for PostgreSQL header files
configure: Checking include /usr/include.
configure: Checking include /usr/include/pgsql.
configure: Checking include /usr/include/postgresql.
configure: Checking include /usr/local/include.
configure: Checking include /usr/local/include/pgsql.
configure: Checking include /usr/local/include/postgresql.
configure: Checking include /usr/local/pgsql/include.
configure: Checking include /usr/local/postgresql/include.
configure: Checking include /opt/include.
configure: Checking include /opt/include/pgsql.
configure: Checking include /opt/include/postgresql.
configure: Checking include /opt/local/include.
configure: Checking include /opt/local/include/postgresql.
configure: Checking include /opt/local/include/postgresql84.
configure: Checking include /sw/opt/postgresql-8.4/include.
configure: Checking include /Library/PostgresPlus/8.4SS/include.
configure: Checking include /sw/include/postgresql.
configure: Checking lib /usr/lib.
configure: Checking lib /usr/lib/pgsql.
configure: Checking lib /usr/lib/postgresql.
configure: Checking lib /usr/local/lib.
configure: Checking lib /usr/local/lib/pgsql.
configure: Checking lib /usr/local/lib/postgresql.
configure: Checking lib /usr/local/pgsql/lib.
configure: Checking lib /usr/local/postgresql/lib.
configure: Checking lib /opt/lib.
configure: Checking lib /opt/lib/pgsql.
configure: Checking lib /opt/lib/postgresql.
configure: Checking lib /opt/local/lib.
configure: Checking lib /opt/local/lib/postgresql.
configure: Checking lib /opt/local/lib/postgresql84.
configure: Checking lib /sw/opt/postgresql-8.4/lib.
configure: Checking lib /Library/PostgresPlus/8.4SS/lib.
configure: Checking lib /sw/lib.
checking for "/libpq-fe.h"... no
configure: creating ./config.status
config.status: creating src/Makevars
** libs
gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -I -I/usr/local/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -c RS-DBI.c -o RS-DBI.o
gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -I -I/usr/local/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -c RS-PQescape.c -o RS-PQescape.o
In file included from RS-PQescape.c:7:0:
RS-PostgreSQL.h:23:26: fatal error: libpq-fe.h: No such file or directory
 # include "libpq-fe.h"
                          ^
compilation terminated.
make: *** [RS-PQescape.o] Error 1
ERROR: compilation failed for package ‘RPostgreSQL’
* removing ‘/home/jk/R/x86_64-redhat-linux-gnu-library/3.3/RPostgreSQL’
Warning in install.packages :
  installation of package ‘RPostgreSQL’ had non-zero exit status

The installation directory of PostgreSQL 9.6 is /opt/PostgreSQL/9.6/bin, which doesn't appear anywhere in the paths probed above. Could someone help me resolve this error?

EDIT 1: Thanks to the suggestion of @lavajumper, I got rid of the above error, but now I get the output below, which shows some missing HTML links.

> install.packages("RPostgreSQL")
Installing package into ‘/home/jk/R/x86_64-redhat-linux-gnu-library/3.3’
(as ‘lib’ is unspecified)
trying URL 'https://cran.rstudio.com/src/contrib/RPostgreSQL_0.4-1.tar.gz'
Content type 'unknown' length 476204 bytes (465 KB)
==================================================
downloaded 465 KB

* installing *source* package ‘RPostgreSQL’ ...
** package ‘RPostgreSQL’ successfully unpacked and MD5 sums checked
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
checking for pg_config... no
configure: checking for PostgreSQL header files
configure: Checking include /usr/include.
configure: Checking lib /usr/lib.
configure: Checking lib /usr/lib/pgsql.
configure: Checking lib /usr/lib/postgresql.
configure: Checking lib /usr/local/lib.
configure: Checking lib /usr/local/lib/pgsql.
configure: Checking lib /usr/local/lib/postgresql.
configure: Checking lib /usr/local/pgsql/lib.
configure: Checking lib /usr/local/postgresql/lib.
configure: Checking lib /opt/lib.
configure: Checking lib /opt/lib/pgsql.
configure: Checking lib /opt/lib/postgresql.
configure: Checking lib /opt/local/lib.
configure: Checking lib /opt/local/lib/postgresql.
configure: Checking lib /opt/local/lib/postgresql84.
configure: Checking lib /sw/opt/postgresql-8.4/lib.
configure: Checking lib /Library/PostgresPlus/8.4SS/lib.
configure: Checking lib /sw/lib.
checking for "/usr/include/libpq-fe.h"... yes
configure: creating ./config.status
config.status: creating src/Makevars
** libs
gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -I/usr/include -I/usr/local/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -c RS-DBI.c -o RS-DBI.o
gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -I/usr/include -I/usr/local/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -c RS-PQescape.c -o RS-PQescape.o
gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -I/usr/include -I/usr/local/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -c RS-PostgreSQL.c -o RS-PostgreSQL.o
RS-PostgreSQL.c: In function ‘RS_PostgreSQL_createDataMappings’:
RS-PostgreSQL.c:446:5: warning: passing argument 1 of ‘Rf_protect’ from incompatible pointer type [enabled by default]
     PROTECT(flds = RS_DBI_allocFields(num_fields));
     ^
In file included from /usr/include/R/Rdefines.h:36:0,
                 from S4R.h:64,
                 from RS-DBI.h:29,
                 from RS-PostgreSQL.h:25,
                 from RS-PostgreSQL.c:17:
/usr/include/R/Rinternals.h:1348:6: note: expected ‘SEXP’ but argument is of type ‘struct RS_DBI_fields *’
 SEXP Rf_protect(SEXP);
      ^
gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -I/usr/include -I/usr/local/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -c RS-pgsql-copy.c -o RS-pgsql-copy.o
gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -I/usr/include -I/usr/local/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -c RS-pgsql-getResult.c -o RS-pgsql-getResult.o
gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -I/usr/include -I/usr/local/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -c RS-pgsql-pqexec.c -o RS-pgsql-pqexec.o
gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -I/usr/include -I/usr/local/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -c RS-pgsql-pqexecparams.c -o RS-pgsql-pqexecparams.o
gcc -m64 -std=gnu99 -shared -L/usr/lib64/R/lib -Wl,-z,relro -o RPostgreSQL.so RS-DBI.o RS-PQescape.o RS-PostgreSQL.o RS-pgsql-copy.o RS-pgsql-getResult.o RS-pgsql-pqexec.o RS-pgsql-pqexecparams.o -L -lpq -L/usr/lib64/R/lib -lR
installing to /home/jk/R/x86_64-redhat-linux-gnu-library/3.3/RPostgreSQL/libs
** R
** inst
** preparing package for lazy loading
Creating a generic function for ‘format’ from package ‘base’ in package ‘RPostgreSQL’
Creating a generic function for ‘print’ from package ‘base’ in package ‘RPostgreSQL’
Creating a generic function for ‘summary’ from package ‘base’ in package ‘RPostgreSQL’
** help
*** installing help indices
  converting help for package ‘RPostgreSQL’
    finding HTML links ... done
    PostgreSQL                              html
Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQL.Rd:26: missing file link ‘fetch’
Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQL.Rd:76: missing file link ‘dbUnloadDriver’
Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQL.Rd:84: missing file link ‘fetch’
Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQL.Rd:89: missing file link ‘dbCommit’
Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQL.Rd:90: missing file link ‘dbRollback’
    PostgreSQLConnection-class              html
Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQLConnection-class.Rd:20: missing file link ‘dbCommit’
Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQLConnection-class.Rd:32: missing file link ‘dbRollback’
Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQLConnection-class.Rd:34: missing file link ‘dbWriteTable’
    PostgreSQLDriver-class                  html
Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQLDriver-class.Rd:25: missing file link ‘dbUnloadDriver’
    PostgreSQLObject-class                  html
Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQLObject-class.Rd:20: missing file link ‘isSQLKeyword’
Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQLObject-class.Rd:22: missing file link ‘SQLKeywords’
    PostgreSQLResult-class                  html
Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQLResult-class.Rd:31: missing file link ‘fetch’
Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQLResult-class.Rd:32: missing file link ‘fetch’
    S4R                                     html
    dbApply-methods                         html
Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbApply-methods.Rd:27: missing file link ‘fetch’
    dbApply                                 html
Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbApply.Rd:37: missing file link ‘fetch’
dbCallProc-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbCallProc-methods.Rd:31: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbCallProc-methods.Rd:32: missing file link ‘dbCommit’ dbCommit-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbCommit-methods.Rd:36: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbCommit-methods.Rd:37: missing file link ‘dbCommit’ dbConnect-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbConnect-methods.Rd:58: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbConnect-methods.Rd:59: missing file link ‘dbCommit’ dbDataType-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbDataType-methods.Rd:33: missing file link ‘isSQLKeyword’ dbDriver-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbDriver-methods.Rd:26: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbDriver-methods.Rd:44: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbDriver-methods.Rd:45: missing file link ‘dbCommit’ dbGetInfo-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbGetInfo-methods.Rd:47: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbGetInfo-methods.Rd:48: missing file link ‘dbCommit’ dbListTables-methods html dbObjectId-class html dbReadTable-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbReadTable-methods.Rd:119: missing file link ‘isSQLKeyword’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbReadTable-methods.Rd:124: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbReadTable-methods.Rd:125: missing file link 
‘dbCommit’ dbSendQuery-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbSendQuery-methods.Rd:40: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbSendQuery-methods.Rd:41: missing file link ‘dbCommit’ dbSetDataMappings-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbSetDataMappings-methods.Rd:33: missing file link ‘fetch’ fetch-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/fetch-methods.Rd:46: missing file link ‘dbCommit’ isPostgresqlIdCurrent html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/isPostgresqlIdCurrent.Rd:34: missing file link ‘fetch’ make.db.names-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/make.db.names-methods.Rd:69: missing file link ‘dbWriteTable’ postgresqlBuildTableDefinition html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/postgresqlBuildTableDefinition.Rd:41: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/postgresqlBuildTableDefinition.Rd:42: missing file link ‘dbCommit’ postgresqlDBApply html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/postgresqlDBApply.Rd:75: missing file link ‘fetch’ postgresqlSupport html summary-methods html ** building package indices ** testing if installed package can be loaded Error in dyn.load(file, DLLpath = DLLpath, ...) : unable to load shared object '/home/jk/R/x86_64-redhat-linux-gnu-library /3.3/RPostgreSQL/libs/RPostgreSQL.so': /home/jk/R/x86_64-redhat-linux-gnu-library/3.3/RPostgreSQL/libs/RPostgreSQL.so: undefined symbol: PQfmod Error: loading failed Execution halted ERROR: loading failed * removing ‘/home/jk/R/x86_64-redhat-linux-gnu-library/3.3/RPostgreSQL’ Warning in install.packages : installation of package ‘RPostgreSQL’ had non-zero exit status A: Okay, I figured out the problem by myself. 
Answer given by @Manoj at this link helped me to resolve the second error. According to the mentioned link, RPostgreSQL's configure script checks for libraries in these directories only:

/usr/lib
/usr/lib/pgsql
/usr/lib/postgresql
/usr/local/lib
/usr/local/lib/pgsql
/usr/local/lib/postgresql
/usr/local/pgsql/lib
/usr/local/postgresql/lib
/opt/lib
/opt/lib/pgsql
/opt/lib/postgresql
/opt/local/lib
/opt/local/lib/postgresql
/opt/local/lib/postgresql84
/sw/opt/postgresql-8.4/lib
/Library/PostgresPlus/8.4SS/lib
/sw/lib

So, as a superuser, I copied the library files from my PostgreSQL installation directory to /usr/lib and ran the command again in RStudio: install.packages('RPostgreSQL', dependencies=TRUE, repos='http://cran.rstudio.com/') and it worked! :)
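In shell terms, the fix described in the answer is just a copy into one of the directories configure searches, followed by a linker-cache refresh. This is a sketch, not a definitive recipe: the PG_LIB path below is a placeholder and depends on where PostgreSQL was installed on your machine.

```shell
# Placeholder: substitute the lib directory of your actual PostgreSQL install
# (wherever libpq.so* ended up).
PG_LIB=/opt/PostgreSQL/9.5/lib

# Copy libpq into /usr/lib, one of the directories the configure script
# searches, then refresh the dynamic linker cache.
sudo cp "$PG_LIB"/libpq.so* /usr/lib/
sudo ldconfig
```

After this, re-running the install.packages() call above should let the package link against a complete client library, which is what makes the undefined-symbol (PQfmod) error go away.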
Chubby Hubby Rice Krispies Treats Do you remember the amazing Chubby Hubby Truffles from a month or so ago? The truffles that I fashioned after my favorite ice cream of all time – Ben & Jerry’s Chubby Hubby? For those of you who may not have heard of it, Chubby Hubby consists of vanilla ice cream chock full of peanut butter-filled, chocolate-covered pretzels. It got me through college, and I still can’t get enough of that flavor combination to this day. Given how much I loved the truffles, and how much you all said you loved the truffles, I started brainstorming about other things that I could put a chubby hubby spin on and came up with these peanut butter-pretzel Rice Krispies treats topped with chocolate. Talk about addicting! Regular Rice Krispies treats get some peanut butter added to the melted marshmallows, and then in place of some of the Rice Krispies, you add pieces of crushed-up pretzels, and I threw in some peanut butter chips for even more peanut butter flavor. Then, of course, they are topped with chocolate. If you want to up the chocolate factor, you could always throw some chocolate chips in with the peanut butter chips (or swap them). I think these have an awesome balance of all three flavors, and I can’t stop eating them. Although that’s not necessarily surprising, since I am a huge fan of Rice Krispies treats and am always a sucker for any fun variation on the classic recipe. Do you have a favorite version of Rice Krispies treats? I always love finding new varieties! Directions: 1. Grease a 9x13-inch baking dish; set aside. 2. Melt the butter in a large saucepan over low heat. Add the marshmallows and stir until completely melted. Remove from heat. 3. Off the heat, add the peanut butter and stir until completely melted and smooth. Add the Rice Krispies cereal, pretzel pieces and peanut butter chips, stirring gently until all dry ingredients are coated. Turn the mixture into the prepared baking dish and press it into the pan, creating an even top. 4.
Microwave the chocolate chips on 50% power in 30-second intervals, stirring after each, until completely melted and smooth. Pour the chocolate over the treats and spread in an even layer with an offset spatula. Let cool at room temperature until set, about 1 hour. (You can also refrigerate the treats for about 30 minutes to speed up the process.) Store leftovers at room temperature in an airtight container. Comments: These look amaaaazing. I think fancy Rice Krispie treats are becoming a thing. I’ve been drooling over a caramel macadamia Rice Krispie treat I saw on a blog a few days ago, and now I’m coveting these as well! First time, long time 🙂 Your chubby hubby krispies sound great. Have you heard of mars bar slice? It is something of an institution here in Australia, a no bake slice based on rice bubbles (krispies) and mars bars! Happy to share if you’re keen. Thank you Kim! I’m wondering if there is a good substitute for the Copha, as you can’t get it here in the U.S. (although I saw it is available on Amazon). Could coconut butter or oil be used? Or vegetable shortening? What do you think? Hi Michelle. To be honest, I would just omit it. I’ve made mars bar slice many times and have never bothered to melt copha into the chocolate and I generally don’t bother to use cooking chocolate for the topping either – just a good quality “eating quality” milk chocolate (that would be cadbury’s here in Oz!) I think I have saved every Rice Krispie treat recipe you have posted. I loved the Reece’s peanut butter cup ones. Sooo addicting! I also love making Rice Krispie treats with seasonal marshmallows. I have posted a few on my site like Pumpkin Spice with Maple Cream Cheese Frosting. Those are always fun! And sometimes I actually like the simplicity (and fewer calories) of plain old Rice Krispie treats! Wow…these sound good. I make traditional RK treats but I add peanut butter to melted chocolate and spread that over the treats.
The addition of pretzels sounds really good! I would love to see the recipe from Australia…Mars Slice! Bring it on! OMG! I always thought Chubby Hubby was how you referred to your Husband! lol These look fantastic!! I think I need to make these, anything with peanut butter and chocolate is going to be good, hopefully I won’t eat the whole pan. I’ll be calling you when I gain 5 more lbs. 🙂 These sound SO good! I like making cake batter rice krispie treats- just add 1/4 cup of cake mix to the marshmallow mixture before adding the rice krispies. They are best when topped with sprinkles, and they go over VERY well! 🙂 These just sound delish!!! Have you ever tried using Special K cereal with the strawberries in them in place of the rice krispies? The strawberries end up flavoring the marshmallows and are so yummy. My mother used to make something like this with reg. Special K cereal and pb. and chocolate on top though. Will have to hunt for this recipe too! Oh I am sooo hungry now!!! Have you tried scotcheroos before? It’s the same idea as these bars but with butterscotch chips melted into the chocolate on top. So. Good. The recipe is on the Rice Krispies website (it’s super simple and only needs one measuring cup!). Great recipe Michelle! My father was Italian, and my mother was French/Canadian. Food was front and center at every French/Canadian event. I know I will love this recipe, because I like the crunch of crushed pretzels in any gooey dessert. I can’t wait to look at your other recipes! My favorite rice krispie square is the Butterscotch Treats at ricekrispies.com that incorporate a bit of butterscotch pudding mix into the recipe, which I change up by pouring on my favorite chocolate frosting. I don’t like to wait for the squares to cool, and this frosting can be poured on the hot squares, or on hot brownies, or hot baked cakes, right out of the oven. Enjoy!
Lyn onefoodietoanother.blogspot.com anoldcookbookcollector.blogspot.com everyonesfavoriterecipes.blogspot.com recipefinderforeveryone.blogspot.com Why yes, I do see these awesome krispy treats in my future! I have to admit though, my favorite are the kinda stale plain old fashioned ones you find in the wrapper at gas stations and things. They age like a fine wine 😉 Just found your blog and am excited to explore! These do sound delicious. For me, a key flavor in Chubby Hubby is the malt in the vanilla ice cream. I’m going to try these with some malt powder stirred in. I’ll let you know how they turn out. My favorite version is Pumpkin Spice Krispie Treats that I make with Erewhon’s Crispy Brown Rice Cereal. My son ate 2 batches really quickly and wants more. Made me make a batch to bring to our friends’ house on Christmas Day. She loved them too. This looks amazing too. I may make these for Valentine’s Day – for my Valentine – Hubby of 22 years who is a PB fanatic. Thanks for making me sooo hungry. I made these tonight and your focaccia! These came out fantastic! I mistakenly purchased chocolate and peanut butter morsels but they turned out awesome! Just a little more peanut butter deliciousness! The focaccia I tweaked a bit (having worked in a bakery for years) but still came out great! Great website!! Thanks for sharing all your recipes! I have made 5 batches of them and have eaten at least 4 of them myself! As a new mom who is not getting much sleep and relying on sugar to keep me awake, these have come in handy at all hours of the day! I have even upped the pretzel/krispie ratio because I love the addition of the salty. Everyone goes crazy for them, on the rare occasions I choose to share! Thanks for this one!! Help! This recipe totally frustrates me. I’m a pretty avid and adventurous cook/baker and thought these would be easy to whip up.
I’ve tried twice now and always encounter the same problem…when I take the butter/marshmallow mixture off the stove and add the peanut butter, the mix starts to stiffen to the point that nothing can be added to it. What am I doing wrong? Please help. Thanks! Hi Jill, You could use the natural peanut butter that does not require stirring or refrigeration – I believe that JIF and Skippy both make versions of this. I would stay away from the natural varieties that require stirring and refrigeration, as it’s generally too oily. Hi, Thanks for this recipe! Another one of your posts led me here, I love the embedded links. These were so good, I made them for a movie party. The pb was so creamy plus salty crunchy pretzels! The chocolate chips on top put me over the edge. I can’t get enough. I keep walking by and cutting off another little hunk, and another . . . I think it will be a staple in the house! Thanks for this creative delicious recipe!
Find a job that you love. Although work is an expected societal norm, your career shouldn't be restraining. If you hate what you do, you aren't going to be happy, plain and simple. You don't need to love every aspect of your job, but it needs to be exciting enough that you don't dread getting out of bed every morning. Ideally, you'll find a job you are so passionate about that you would do it for free. If your job is draining you, and you are finding it difficult to do the things you love outside of work, something is wrong. You may be working in a toxic environment, for a toxic person, or doing a job that you truly don't love. If this is the case, it is time to find a new job. Share The Work. You don’t have to do everything! Recruit some help from your fellow employees or family members. Delegate responsibilities, and you’ll get everything done in a timely manner. This facilitates teamwork and makes everyone feel like they’ve made a contribution. It also gives you some relief while your helpers gain a sense of accomplishment. Make exercise a must-do, not a should-do. It’s easy to cancel the gym, the evening run or the yoga class because a client wants something done yesterday. Instead, ensure exercise is given as much priority as your clients and making money. A healthy body means a fresh mind, which means you will function better and complete tasks in less time. Establish Boundaries. Set fair and realistic limits on what you will and will not do both at work and at home. Clearly communicate these boundaries to your supervisor, coworkers, partner and family. For instance, you might commit to not working late on certain days unless there is a crisis. Additionally, set aside a time at home during which you will not check or respond to work-related emails or voice mails. Take a vacation. Sometimes, truly unplugging means taking vacation time and shutting work completely off for a while.
Whether your vacation consists of a one-day staycation or a two-week trip to Bali, it's important to take time off to physically and mentally recharge. According to the State of American Vacation 2018 study conducted by the U.S. Travel Association, 52% of employees reported having unused vacation days left over at the end of the year. Employees are often worried that taking time off will disrupt the workflow, and they will be met with a backlog of work when they return. This fear should not restrict you from taking a much-needed break. The truth is, there is no nobility in not taking well-deserved time away from work; the benefits of taking a day off far outweigh the downsides. With proper planning, you can take time away without worrying about burdening your colleagues or contending with a huge workload when you return. Eliminate Distractions. If your personal life interferes with your job, set some boundaries. Be firm when you have work to finish. Discourage frequent personal calls or visits. Leave your cell phone in the car or switch it off during work hours. Delay socializing with co-workers until your work is done. Once you’ve learned to streamline your workday, you’ll find that you no longer dread going to work and bringing it all home (or closing the door to your home office). Imagine knowing that you’re ready to start your day instead of playing catch up from yesterday. Take time to make time. Invest in time-tracking tools. There are plenty of tools you can use to track everything from the frequency and duration of meetings, to chasing and converting leads. Time-tracking software allows you to quickly build an understanding of how long a particular task takes. That way, you can effectively estimate how long your next work task will take. Be realistic. At the end of each working day, perform a little self-analysis. Ask yourself what worked today, what didn’t, what went wrong and how the issue can be fixed.
Remember there are thousands of businesses just like yours learning the same lessons every day. Don’t forget to tap into the valuable resources around you – your peers – for help. Prioritize your health. Your overall physical, emotional and mental health should be your main concern. If you struggle with anxiety or depression and think therapy would benefit you, fit those sessions into your schedule, even if you have to leave work early or ditch your evening spin class. If you are battling a chronic illness, don't be afraid to call in sick on rough days. Overworking yourself prevents you from getting better, possibly causing you to take more days off in the future. Prioritizing your health first and foremost will make you a better employee and person. You will miss less work, and when you are there, you will be happier and more productive. Talk it out with your bosses. It pays to keep the lines of communication open with your manager, HR and senior leadership. Be one hundred percent honest and transparent. Suppose you can't get to the office on time because you need to drop your child at school; they can help you out by keeping your work hours flexible. If that is an issue, be prepared with alternative solutions to show how the arrangement won't affect your performance and productivity. Make time for yourself and your loved ones. While your job is important, it shouldn't be your entire life. You were an individual before taking this position, and you should prioritize the activities or hobbies that make you happy. Achieving work-life balance requires deliberate action. If you do not firmly plan for personal time, you will never have time to do other things outside of work. No matter how hectic your schedule might be, you ultimately have control of your time and life. When planning time with your loved ones, create a calendar for romantic and family dates.
It may seem weird to plan one-on-one time with someone you live with, but it will ensure that you spend quality time with them without work-life conflict. Just because work keeps you busy doesn't mean you should neglect personal relationships. Realize that no one at your company is going to love you or appreciate you the way your loved ones do. Also remember that everyone is replaceable at work; no matter how important you think your job is, the company will not miss a beat tomorrow if you are gone. Learn the art of delegation. There's nothing wrong with acknowledging that you can't do everything on your own and that a little help could ease your enormous workload. By doing everything yourself, you're not only exhausting your body but also setting it up for a breakdown later on. Decide what you should do yourself and what others can handle. Seek help from colleagues, your partner and relatives. You and your partner can divide the chores so that neither of you dreads coming home after a long working day at the office. Stay connected during the day. Thanks to technology, knowing the well-being and whereabouts of your loved ones is no longer a challenge. Working mothers can easily stay connected with their children while they are at the office. If you're missing your kids, you can make a phone call or even a video call during your lunch break, then focus on work without stress or strain in the background. This reassures the child that you're near and also helps you get through a rough day at work. Limit distractions and time-wasters. When you are a working woman, every minute is vital, at work and at home. You would be amazed to learn that workplace distractions can cost you over three hours every day.
If you want to stay focused and productive, it's essential to keep chatty coworkers, casual web surfing, cell phones and other distractions in check. Set explicit time limits for answering emails and using your phone. At home, avoid watching too much TV; instead, use that time to strengthen your bond with your partner and children. Work Smarter Not Harder. Using time more efficiently is an important skill that everyone from the receptionist to the CEO can learn. Adopting the right combination of time-management practices can cut stress and save you up to an hour a day. This can include the use of technology to become more organized, grouping emails and voice messages, avoiding procrastination and learning to say "no." Simplify. Don’t make things complicated. As humans, we tend to make extra work for ourselves even if it’s unnecessary. Why reinvent the wheel? If something works, don’t try to fix it. Always remember the KISS method: Keep It Simple Silly. Leave Work at Work. Develop a mental on-off switch between work and home. It helps to establish a transitional activity between the two realms. This might consist of listening to music or recorded books during your evening commute, exercising at the fitness center, running errands, or keeping personal appointments. Scheduling such activities immediately following your normal work hours also prevents you from spending that extra twenty minutes at the office which then turns into several hours. Draw a line between home and work. One of the best lessons life has taught me is to say NO to things that don't align with your priorities. Trust me, it is the greatest mantra for successfully juggling your personal and professional lives. Learn to set boundaries so you can give your heart and soul to both parts of your life. Leave work at work; don't bring it home with you.
While spending time with your children and partner, don't be on the phone sending emails or discussing work with colleagues. Be mindful of your personal relationships and start saying no to things that aren't doing you any good. Make some time for yourself. Setting aside a few minutes to do things you really love is the key to maintaining an ideal work-life balance. Sometimes it's all right to put yourself first, have some leisure time and pamper yourself. Go to a spa, get a massage, watch reruns of your favorite TV series, read a book, travel solo, or simply do nothing at all. Learn to take care of yourself, because only then can you truly take care of your family and your work. Be sure to start using these ideas today to enjoy a great work-life balance and a more fulfilling life!
8, -700, -252, 196, 644? 1092 What comes next: -121, -358, -595, -832, -1069, -1306? -1543 What is the next term in 109097, 218193, 327289, 436385, 545481? 654577 What is the next term in 153, 350, 661, 1080, 1601, 2218, 2925? 3716 What comes next: 2997, 3034, 3071, 3108, 3145, 3182? 3219 What is the next term in 16690, 16691, 16692, 16693, 16694, 16695? 16696 What is next in 18232, 36473, 54718, 72967? 91220 What comes next: -3743, -7527, -11311, -15095, -18879? -22663 What is next in 14355, 14354, 14353, 14352, 14351, 14350? 14349 What is the next term in -1737, -1471, -1027, -405? 395 What is next in 798, 923, 1050, 1179, 1310, 1443, 1578? 1715 What is the next term in 2464, 4976, 7490, 10006, 12524, 15044? 17566 What is the next term in -4007, -16056, -36137, -64250, -100395, -144572, -196781? -257022 What is next in 710015, 1420028, 2130041, 2840054? 3550067 What is next in 7621, 7577, 7503, 7399, 7265, 7101? 6907 What comes next: -42042, -84059, -126076, -168093? -210110 What comes next: 1408, 2779, 4112, 5407, 6664, 7883? 9064 What is next in 169, 521, 1145, 2071, 3329, 4949, 6961? 9395 What is the next term in 279, 559, 831, 1095, 1351, 1599, 1839? 2071 What comes next: -219, -1320, -3161, -5748, -9087, -13184, -18045? -23676 What is the next term in -71, -332, -775, -1406, -2231, -3256, -4487? -5930 What is the next term in 113, 264, 393, 488, 537, 528? 449 What is next in 234856, 234857, 234858, 234859, 234860, 234861? 234862 What is the next term in 197, 409, 641, 899, 1189? 1517 What is the next term in 11293, 11340, 11425, 11554, 11733, 11968? 12265 What comes next: 17718, 70359, 158094, 280923, 438846, 631863? 859974 What is the next term in 261854, 523706, 785558, 1047410, 1309262? 1571114 What comes next: 2418, 9685, 21814, 38811, 60682? 87433 What is the next term in -433991, -433988, -433983, -433976, -433967, -433956, -433943? -433928 What is next in -494, -554, -616, -680? -746 What comes next: -89, -219, -431, -725, -1101? 
-1559
What comes next: 284, 270, 234, 164, 48? -126
What is next in 438, 494, 550? 606
What is the next term in 2145, 8541, 19201, 34125, 53313, 76765? 104481
What is next in -1010, -1004, -1012, -1040, -1094? -1180
What comes next: -2453, -2568, -2667, -2744, -2793, -2808, -2783, -2712? -2589
What is next in -1482, -2969, -4458, -5949, -7442, -8937? -10434
What is the next term in -3874313, -3874314, -3874315? -3874316
What comes next: 43, 62, 59, 22, -61? -202
What is the next term in -2769, -3035, -3479, -4101, -4901, -5879? -7035
What is the next term in -290, 179, 662, 1165, 1694, 2255, 2854, 3497? 4190
What is the next term in -374, -1588, -3630, -6506, -10222, -14784, -20198? -26470
What is next in -470, -662, -854, -1046, -1238? -1430
What is the next term in -115873, -115874, -115875, -115876? -115877
What is next in -99442, -99441, -99440? -99439
What comes next: -12950, -12933, -12906, -12869? -12822
What comes next: -1050, -2031, -3004, -3963, -4902? -5815
What is next in -28004, -27788, -27428, -26924? -26276
What is the next term in 1631397, 1631396, 1631395, 1631394, 1631393, 1631392? 1631391
What comes next: 594615, 1189226, 1783835, 2378442, 2973047, 3567650? 4162251
What is next in -232160, -232159, -232168, -232193, -232240, -232315, -232424, -232573? -232768
What comes next: -7428, -14859, -22290, -29721, -37152, -44583? -52014
What is the next term in 142, -1306, -3728, -7130, -11518? -16898
What is next in -19707, -19714, -19723, -19734, -19747, -19762, -19779? -19798
What comes next: 4022, 3953, 3884, 3815, 3746, 3677? 3608
What is next in -13957, -13963, -13969, -13975, -13981, -13987? -13993
What is next in 373, 402, 451, 520, 609? 718
What is next in -1807, -1804, -1801, -1798? -1795
What is the next term in -8914, -9005, -9156, -9367? -9638
What is the next term in 265076, 265075, 265074, 265073, 265072? 265071
What comes next: -30862, -123272, -277282, -492886, -770078, -1108852? -1509202
What is the next term in -1093, -2141, -3155, -4135, -5081? -5993
What is next in -305, -885, -1861, -3239, -5025, -7225, -9845? -12891
What comes next: 246901, 246900, 246899, 246898, 246897, 246896? 246895
What comes next: 158, -317, -794, -1273, -1754, -2237, -2722? -3209
What is the next term in 23158, 46309, 69460? 92611
What comes next: -14483, -29260, -44037, -58814, -73591? -88368
What is next in -790, -1669, -2548, -3427, -4306? -5185
What is the next term in 4172, 16692, 37560, 66776, 104340, 150252, 204512? 267120
What is next in 5324, 5327, 5334, 5345, 5360? 5379
What is the next term in -76264, -76262, -76258, -76252? -76244
What is the next term in -276036, -552068, -828100, -1104132, -1380164? -1656196
What is next in 8356, 8309, 8258, 8203, 8144, 8081, 8014? 7943
What is next in -1217, -4920, -11093, -19736, -30849, -44432? -60485
What is the next term in -4246, -4025, -3814, -3619, -3446, -3301? -3190
What is the next term in -340, -1341, -3008, -5341, -8340, -12005, -16336? -21333
What is next in -8799, -8794, -8789, -8784? -8779
What comes next: -5026, -10311, -15596, -20881, -26166, -31451? -36736
What is the next term in -1831, -3237, -4645, -6049, -7443, -8821? -10177
What comes next: -13571, -13589, -13607? -13625
What is next in -23846, -47706, -71576, -95462, -119370, -143306? -167276
What comes next: 1363, 11023, 37267, 88387, 172675? 298423
What comes next: 43750, 43752, 43754? 43756
What is next in 290, -86, -462, -838, -1214, -1590? -1966
What is the next term in -15059, -15057, -15055, -15053? -15051
What is next in -62, -245, -536, -935, -1442, -2057, -2780? -3611
What is next in -36117, -36113, -36105, -36093, -36077, -36057, -36033? -36005
What is the next term in -1601, -1627, -1653, -1679, -1705? -1731
What comes next: 1360, 1368, 1376, 1384, 1392, 1400? 1408
What is the next term in 733, 2104, 4735, 9256, 16297, 26488, 40459? 58840
What is the next term in 142, 1678, 5846, 13960, 27334, 47282? 75118
What comes next: 37, -84, -205, -326, -447? -568
What comes next: 41903, 41902, 41901? 41900
What is next in -786, -1703, -2608, -3495, -4358, -5191, -5988? -6743
What is the next term in -2516, -4947, -7236, -9311, -11100, -12531, -13532, -14031? -13956
What is next in -910, -1807, -2702, -3595, -4486? -5375
What is the next term in -79, -114, -145, -148, -99? 26
What is next in -335723, -671447, -1007169, -1342889? -1678607
What is the next term in -22821, -91276, -205369, -365100? -570469
What is the next term in -236496, -236487, -236478, -236469, -236460, -236451? -236442
What is next in -93, -217, -347, -483, -625, -773, -927? -1087
What is the next term in -1006, -1018, -1032, -1042, -1042? -1026
What is the next term in -74893, -74907, -74921, -74935? -74949
What comes next: -30, -97, -218, -405, -670? -1025
What is the next term in 1323, 2598, 3873? 5148
What is next in 115250, 230487, 345724, 460961, 576198? 691435
What is next in -102299, -102298, -102297, -102296, -102295? -102294
What is next in 5682, 5858, 6034, 6210, 6386, 6562? 6738
What is the next term in 37986, 75996, 114006? 152016
What is next in 68, 186, 506, 1130, 2160, 3698, 5846? 8706
What comes next: -4033, -7994, -11955, -15916? -19877
What is next in 1972, 1963, 1930, 1861, 1744? 1567
What is the next term in -19156, -38072, -56984, -75892, -94796? -113696
What is next in -38483, -76970, -115457, -153944, -192431? -230918
What is the next term in 151534, 151537, 151540? 151543
What is next in -508843, -1017685, -1526525, -2035363? -2544199
What is next in -126, -614, -1418, -2532, -3950? -5666
What is the next term in -24605, -49206, -73807, -98408, -123009, -147610? -172211
What is the next term in -84085, -168168, -252251? -336334
What is next in -8788, -17574, -26360, -35146? -43932
What is the next term in 3508921, 7017842, 10526763, 14035684? 17544605
What comes next: 4778, 9505, 14232, 18959, 23686, 28413? 33140
What comes next: 3446, 13303, 29732, 52733, 82306, 118451, 161168? 210457
What is next in 3644, 3635, 3608, 3557, 3476, 3359, 3200? 2993
What is next in -287049, -287051, -287055, -287061? -287069
What is next in -55, -21, 165, 569, 1257, 2295? 3749
What is next in -175513, -351016, -526519, -702022, -877525, -1053028? -1228531
What is next in -2701, -2731, -2817, -
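Each question–answer pair above is a polynomial sequence, so the listed answers can be reproduced mechanically with a finite-difference table. A minimal sketch (the function name `nextTerm` is ours, not part of the dataset):

```typescript
// Predict the next term of a polynomial sequence by finite differences:
// repeatedly difference the sequence until the differences are constant,
// then extend every row of the table by one entry.
function nextTerm(seq: number[]): number {
  const rows: number[][] = [seq];
  while (rows[rows.length - 1].length > 1) {
    const prev = rows[rows.length - 1];
    const diffs: number[] = [];
    for (let i = 1; i < prev.length; i++) {
      diffs.push(prev[i] - prev[i - 1]);
    }
    rows.push(diffs);
    if (diffs.every(d => d === diffs[0])) {
      break; // bottom row is constant; the table is complete
    }
  }
  // Walk back up: each row's next entry is its last entry plus the next
  // entry of the row below (the bottom row just repeats its last value).
  let next = rows[rows.length - 1][rows[rows.length - 1].length - 1];
  for (let r = rows.length - 2; r >= 0; r--) {
    next += rows[r][rows[r].length - 1];
  }
  return next;
}
```

For example, `nextTerm([438, 494, 550])` gives 606 (constant first differences) and `nextTerm([284, 270, 234, 164, 48])` gives -126 (constant third differences), matching the answers recorded above.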
Hi all, I had a complete paradigm shift on February 13th of this year. I am wondering if anyone else had an experience like mine. For years I knew almost all of the history and issues with the LDS church. I felt that I wasn't ever going to believe differently than that the church was true, etc. I would almost always enjoy reading the upsetting stuff because I felt like it was something that outsiders didn't understand. I would go on Mormon Dialogue and talk about different doctrines, etc. - but at times I would pray because I was so utterly confused. I would pray for better understanding to reconcile church belief with my other personal understandings about the universe. For so long I did feel that the understanding would come, and what I did understand about the Gospel was pretty amazing stuff, so of course everything else would fall into place. What happened was a complete collapse, almost all at once. Only for a second did I get a quick fear of losing eternal life, but it soon left. Has anyone else had the experience of knowing the problematic stuff without it really disturbing you, and then all at once there is just a shift? I mean this wasn't a limited knowledge and then complete shock of learning something on a website. Something did change almost instantly, though, inside. Thanks.

That happened to me. I always thought a lot, and eventually I came to the conclusion that there were just so many problems. If the Lord cared enough about Joe's credibility to make a plan B for the 116 pages, why didn't he care about much worse stuff like the Book of Abraham, evolution, BoM translation, plural marriage, etc.? It was the sheer number of grave problems that began to weigh on me. I had a major crisis of faith about a year ago, but when I felt the despair, I managed to close my eyes to the problems and it eventually went away. The Race and the Priesthood essay in December was the straw that broke the camel's back for me.
I can paraphrase old Joe: "When I first looked upon him, I was afraid; but the fear soon left me."

I guess I was aware of some things..... inoculated about disturbing things like polygamy, DNA issues, the multiple first vision accounts... but then I heard about the Book of Abraham issue.... I was still kinda holding out hope that I could find an explanation that would appease me, but they were all so absurd. I became obsessed with researching all things Mormon. It didn't take long until I knew I couldn't believe it anymore.....

I have special admiration for people who question the church based on doctrine or historical contradictions, since it took something as extreme as various thwarted suicide attempts over my homosexual feelings to push me to finally question the validity of the Church. Once it cracks, it all falls apart like dominoes or, as you called it, a whole shelf collapsing.

After I resigned my membership, the metaphor I used a lot was that my entire world had previously been meticulously organised in the form of an elaborate bookshelf, a bookshelf that was now totally collapsed. Having to reorganise all its contents based on my own thoughts and ideas, and not the directives of an outside body, was a task that took a lot of time and patience, but I am really deeply grateful it was possible to do so.

closer2fine Wrote:
-------------------------------------------------------
> I guess I was aware of some things..... inoculated
> about disturbing things like polygamy, DNA issues,
> the multiple first vision accounts... but then I heard
> about the Book of Abraham issue.... I was still kinda
> holding out hope that I could find an explanation
> that would appease me, but they were all so absurd.
> I became obsessed with researching all things
> Mormon. It didn't take long until I knew I
> couldn't believe it anymore.....

So what was the one thing that did it for you?

It was a myriad of things. I was a pious zealot and lived the law to a T.
I have been well read on esoteric concepts and other spirituality - but I would always have to filter what I felt was true through what some guy said at the last conference and reconcile it somehow; the mental gymnastics were a skill like no other. We paid such a generous fast offering for years, and I mean like a ridiculous amount. I seriously believed in the "promise" - we have ended up in a bad place; we honestly couldn't even afford tithing after some time, and I paid on gross every two weeks for a few years. Incredible. Anyways, I remember that out of all the things on the shelf, the one thing on my mind was the hero worship that people gave the newest prophet. And then the prophet telling stories, and then more stories, and then re-telling stories. Then he would act all weird and everyone would have to laugh at how weird it was. I'm just saying that the feeling was not mutual for me. I agree with the other statements about a feeling of relief. I can tell you that in a moment my whole paradigm changed, and not just the religious part. I am in Happy Valley and the culture and teachings really take over your whole life focus. I didn't necessarily just say I was had by a fraud, but I could say that the claims are not true, and that's it. It's funny, because up until I had that "awakening" I was ready to defend the most ridiculous doctrines and practices. It really takes a shift in consciousness, and I am hoping my wife can come to that point, but I know where she is at and must allow her to process. She has been ultra accepting of all of this. It will cause issues with others, but I am not one who just lives the religion because that's what Pappa did, or because it's still "good", or for any reason - I love the truth, and if the LDS church taught me anything, it taught me that truth must be sought, no matter where it's at. I realized there was misdirection, and I am not going to follow suit. Anyways...
I had gone inactive when my gay ex told me he was cheating, and he was ex. sec. I didn't want the fallout of our marriage to happen in the public eye - and for other reasons. I had been inactive for quite a while - probably at least 10 to 15 years. I always thought I'd go back when life settled down. I knew some things, like that JS was a polygamist. Up to the point when I lost my beliefs, I'd experienced A LOT in my own life dealing with leaders and the gay issue. My best friend's daughter was getting married. She told me that when there were problems with the wedding plans, her daughter would say, "The church is still true, so what does it matter?" I thought about that statement for a few days and then, yes, it all fell apart. One day, in a matter of moments, it hit me: "But it mattered to me." I knew right then and there it was not true. It was after I'd lost my beliefs that my therapist told me about this board and I found out much of the history. It was just icing on the cake.

Once I was able to verbalize the words "the LDS church is not true", my shelf collapsed in the same instant, because all those questions I once considered important became moot. Knowing the church was not at all what it claimed, by itself, washed all those questions away slicker than snot!

I remember 30 years ago reading scriptures with the kids. They had new scriptures; I had the old. We got to the "white and delightsome" part, but they read "pure and delightsome". I suppose that's where it started. Over the years I'd built myself quite a shelf. I knew probably 80% of the problems, and kept struggling to reconcile it all. Then one night while home alone I was walking through the house (I can still remember where I was), and a powerful impression hit me: the church wasn't true. It was as though a great burden had been lifted from me. I no longer had to reconcile that pile of contradictions. It was a peaceful feeling, but I had a profound sense of loss (30 yrs in church at that point).
I was subsequently called to serve in a bishopric. I hoped I could do some good. When I saw that our bishop / SP were primarily interested in running programs and getting people in the door, and had little interest in helping people grow spiritually, I knew I had nothing to offer. So yes, there was a singular moment for me, ignored for a while, but still there.

I knew a few things... mostly the BoA as the biggest. I put it on the shelf and continued for several years until I read "Rough Stone Rolling". After finishing that book, I put it away and figured I would continue on as TBM. Then I started thinking about what was in RSR and I said to myself, "You know what, I'm not ok with this." I struggled for a few weeks and then finally wondered, what if it's not true? That's when everything immediately clicked into place and made sense. Then the fallout began... but life is better now :)

One thing that has always struck me as odd about the BoA was that Joseph Smith did not add it to the canonized scriptures, and neither did Brigham Young. The Church waited until 1880, when John Taylor was president, to canonize the Pearl of Great Price, which included the BoA.

It was probably easier if it all collapsed at once. I kept drilling new holes in the wall for a better shelf, then propping it up over and over. Finally all the 2x4's etc. couldn't hold all the weight. I don't think that TBMs appreciate what happens.

The moment mine collapsed: I had known about polygamy and a few other disturbing things, like another version of the first vision, Oliver Cowdery being excommunicated, and Emma not being a part of Utah Mormonism. But then I learned Joseph Smith married my ancestor Zina Jacobs after she had been married for only a short time to her legal husband. Collapse. I thought that the founder of Mormonism must have pulled a lot more shiz, and then I went and learned about it. Game over for LDS Inc., and they don't want me. I think outside their box.
I actually think the Pearl of Great Price was put together and named as such by a missionary in England, in the mid-1800s. This guy put it together as a tract; then it became scripture a few decades later. But then they decanonized other stuff later on, the Lectures on Faith, etc.
/*
 * Copyright (c) Microsoft Corporation. All rights reserved.
 * Licensed under the MIT License. See License.txt in the project root for
 * license information.
 *
 * Code generated by Microsoft (R) AutoRest Code Generator.
 * Changes may cause incorrect behavior and will be lost if the code is
 * regenerated.
 */

import * as msRest from "@azure/ms-rest-js";
import * as msRestAzure from "@azure/ms-rest-azure-js";
import * as Models from "../models";
import * as Mappers from "../models/profilesMappers";
import * as Parameters from "../models/parameters";
import { CustomerInsightsManagementClientContext } from "../customerInsightsManagementClientContext";

/** Class representing a Profiles. */
export class Profiles {
  private readonly client: CustomerInsightsManagementClientContext;

  /**
   * Create a Profiles.
   * @param {CustomerInsightsManagementClientContext} client Reference to the service client.
   */
  constructor(client: CustomerInsightsManagementClientContext) {
    this.client = client;
  }

  /**
   * Creates a profile within a Hub, or updates an existing profile.
   * @param resourceGroupName The name of the resource group.
   * @param hubName The name of the hub.
   * @param profileName The name of the profile.
   * @param parameters Parameters supplied to the create/delete Profile type operation
   * @param [options] The optional parameters
   * @returns Promise<Models.ProfilesCreateOrUpdateResponse>
   */
  createOrUpdate(resourceGroupName: string, hubName: string, profileName: string, parameters: Models.ProfileResourceFormat, options?: msRest.RequestOptionsBase): Promise<Models.ProfilesCreateOrUpdateResponse> {
    return this.beginCreateOrUpdate(resourceGroupName, hubName, profileName, parameters, options)
      .then(lroPoller => lroPoller.pollUntilFinished()) as Promise<Models.ProfilesCreateOrUpdateResponse>;
  }

  /**
   * Gets information about the specified profile.
   * @param resourceGroupName The name of the resource group.
   * @param hubName The name of the hub.
   * @param profileName The name of the profile.
   * @param [options] The optional parameters
   * @returns Promise<Models.ProfilesGetResponse>
   */
  get(resourceGroupName: string, hubName: string, profileName: string, options?: Models.ProfilesGetOptionalParams): Promise<Models.ProfilesGetResponse>;
  /**
   * @param resourceGroupName The name of the resource group.
   * @param hubName The name of the hub.
   * @param profileName The name of the profile.
   * @param callback The callback
   */
  get(resourceGroupName: string, hubName: string, profileName: string, callback: msRest.ServiceCallback<Models.ProfileResourceFormat>): void;
  /**
   * @param resourceGroupName The name of the resource group.
   * @param hubName The name of the hub.
   * @param profileName The name of the profile.
   * @param options The optional parameters
   * @param callback The callback
   */
  get(resourceGroupName: string, hubName: string, profileName: string, options: Models.ProfilesGetOptionalParams, callback: msRest.ServiceCallback<Models.ProfileResourceFormat>): void;
  get(resourceGroupName: string, hubName: string, profileName: string, options?: Models.ProfilesGetOptionalParams | msRest.ServiceCallback<Models.ProfileResourceFormat>, callback?: msRest.ServiceCallback<Models.ProfileResourceFormat>): Promise<Models.ProfilesGetResponse> {
    return this.client.sendOperationRequest(
      {
        resourceGroupName,
        hubName,
        profileName,
        options
      },
      getOperationSpec,
      callback) as Promise<Models.ProfilesGetResponse>;
  }

  /**
   * Deletes a profile within a hub.
   * @param resourceGroupName The name of the resource group.
   * @param hubName The name of the hub.
   * @param profileName The name of the profile.
   * @param [options] The optional parameters
   * @returns Promise<msRest.RestResponse>
   */
  deleteMethod(resourceGroupName: string, hubName: string, profileName: string, options?: Models.ProfilesDeleteMethodOptionalParams): Promise<msRest.RestResponse> {
    return this.beginDeleteMethod(resourceGroupName, hubName, profileName, options)
      .then(lroPoller => lroPoller.pollUntilFinished());
  }

  /**
   * Gets all profiles in the hub.
   * @param resourceGroupName The name of the resource group.
   * @param hubName The name of the hub.
   * @param [options] The optional parameters
   * @returns Promise<Models.ProfilesListByHubResponse>
   */
  listByHub(resourceGroupName: string, hubName: string, options?: Models.ProfilesListByHubOptionalParams): Promise<Models.ProfilesListByHubResponse>;
  /**
   * @param resourceGroupName The name of the resource group.
   * @param hubName The name of the hub.
   * @param callback The callback
   */
  listByHub(resourceGroupName: string, hubName: string, callback: msRest.ServiceCallback<Models.ProfileListResult>): void;
  /**
   * @param resourceGroupName The name of the resource group.
   * @param hubName The name of the hub.
   * @param options The optional parameters
   * @param callback The callback
   */
  listByHub(resourceGroupName: string, hubName: string, options: Models.ProfilesListByHubOptionalParams, callback: msRest.ServiceCallback<Models.ProfileListResult>): void;
  listByHub(resourceGroupName: string, hubName: string, options?: Models.ProfilesListByHubOptionalParams | msRest.ServiceCallback<Models.ProfileListResult>, callback?: msRest.ServiceCallback<Models.ProfileListResult>): Promise<Models.ProfilesListByHubResponse> {
    return this.client.sendOperationRequest(
      {
        resourceGroupName,
        hubName,
        options
      },
      listByHubOperationSpec,
      callback) as Promise<Models.ProfilesListByHubResponse>;
  }

  /**
   * Gets the KPIs that enrich the profile Type identified by the supplied name. Enrichment happens
   * through participants of the Interaction on an Interaction KPI and through Relationships for
   * Profile KPIs.
   * @param resourceGroupName The name of the resource group.
   * @param hubName The name of the hub.
   * @param profileName The name of the profile.
   * @param [options] The optional parameters
   * @returns Promise<Models.ProfilesGetEnrichingKpisResponse>
   */
  getEnrichingKpis(resourceGroupName: string, hubName: string, profileName: string, options?: msRest.RequestOptionsBase): Promise<Models.ProfilesGetEnrichingKpisResponse>;
  /**
   * @param resourceGroupName The name of the resource group.
   * @param hubName The name of the hub.
   * @param profileName The name of the profile.
   * @param callback The callback
   */
  getEnrichingKpis(resourceGroupName: string, hubName: string, profileName: string, callback: msRest.ServiceCallback<Models.KpiDefinition[]>): void;
  /**
   * @param resourceGroupName The name of the resource group.
   * @param hubName The name of the hub.
   * @param profileName The name of the profile.
   * @param options The optional parameters
   * @param callback The callback
   */
  getEnrichingKpis(resourceGroupName: string, hubName: string, profileName: string, options: msRest.RequestOptionsBase, callback: msRest.ServiceCallback<Models.KpiDefinition[]>): void;
  getEnrichingKpis(resourceGroupName: string, hubName: string, profileName: string, options?: msRest.RequestOptionsBase | msRest.ServiceCallback<Models.KpiDefinition[]>, callback?: msRest.ServiceCallback<Models.KpiDefinition[]>): Promise<Models.ProfilesGetEnrichingKpisResponse> {
    return this.client.sendOperationRequest(
      {
        resourceGroupName,
        hubName,
        profileName,
        options
      },
      getEnrichingKpisOperationSpec,
      callback) as Promise<Models.ProfilesGetEnrichingKpisResponse>;
  }

  /**
   * Creates a profile within a Hub, or updates an existing profile.
   * @param resourceGroupName The name of the resource group.
   * @param hubName The name of the hub.
   * @param profileName The name of the profile.
   * @param parameters Parameters supplied to the create/delete Profile type operation
   * @param [options] The optional parameters
   * @returns Promise<msRestAzure.LROPoller>
   */
  beginCreateOrUpdate(resourceGroupName: string, hubName: string, profileName: string, parameters: Models.ProfileResourceFormat, options?: msRest.RequestOptionsBase): Promise<msRestAzure.LROPoller> {
    return this.client.sendLRORequest(
      {
        resourceGroupName,
        hubName,
        profileName,
        parameters,
        options
      },
      beginCreateOrUpdateOperationSpec,
      options);
  }

  /**
   * Deletes a profile within a hub.
   * @param resourceGroupName The name of the resource group.
   * @param hubName The name of the hub.
   * @param profileName The name of the profile.
   * @param [options] The optional parameters
   * @returns Promise<msRestAzure.LROPoller>
   */
  beginDeleteMethod(resourceGroupName: string, hubName: string, profileName: string, options?: Models.ProfilesBeginDeleteMethodOptionalParams): Promise<msRestAzure.LROPoller> {
    return this.client.sendLRORequest(
      {
        resourceGroupName,
        hubName,
        profileName,
        options
      },
      beginDeleteMethodOperationSpec,
      options);
  }

  /**
   * Gets all profiles in the hub.
   * @param nextPageLink The NextLink from the previous successful call to List operation.
   * @param [options] The optional parameters
   * @returns Promise<Models.ProfilesListByHubNextResponse>
   */
  listByHubNext(nextPageLink: string, options?: msRest.RequestOptionsBase): Promise<Models.ProfilesListByHubNextResponse>;
  /**
   * @param nextPageLink The NextLink from the previous successful call to List operation.
   * @param callback The callback
   */
  listByHubNext(nextPageLink: string, callback: msRest.ServiceCallback<Models.ProfileListResult>): void;
  /**
   * @param nextPageLink The NextLink from the previous successful call to List operation.
   * @param options The optional parameters
   * @param callback The callback
   */
  listByHubNext(nextPageLink: string, options: msRest.RequestOptionsBase, callback: msRest.ServiceCallback<Models.ProfileListResult>): void;
  listByHubNext(nextPageLink: string, options?: msRest.RequestOptionsBase | msRest.ServiceCallback<Models.ProfileListResult>, callback?: msRest.ServiceCallback<Models.ProfileListResult>): Promise<Models.ProfilesListByHubNextResponse> {
    return this.client.sendOperationRequest(
      {
        nextPageLink,
        options
      },
      listByHubNextOperationSpec,
      callback) as Promise<Models.ProfilesListByHubNextResponse>;
  }
}

// Operation Specifications
const serializer = new msRest.Serializer(Mappers);

const getOperationSpec: msRest.OperationSpec = {
  httpMethod: "GET",
  path: "subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomerInsights/hubs/{hubName}/profiles/{profileName}",
  urlParameters: [
    Parameters.resourceGroupName,
    Parameters.hubName1,
    Parameters.profileName1,
    Parameters.subscriptionId
  ],
  queryParameters: [
    Parameters.localeCode,
    Parameters.apiVersion
  ],
  headerParameters: [
    Parameters.acceptLanguage
  ],
  responses: {
    200: {
      bodyMapper: Mappers.ProfileResourceFormat
    },
    default: {
      bodyMapper: Mappers.CloudError
    }
  },
  serializer
};

const listByHubOperationSpec: msRest.OperationSpec = {
  httpMethod: "GET",
  path: "subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomerInsights/hubs/{hubName}/profiles",
  urlParameters: [
    Parameters.resourceGroupName,
    Parameters.hubName1,
    Parameters.subscriptionId
  ],
  queryParameters: [
    Parameters.localeCode,
    Parameters.apiVersion
  ],
  headerParameters: [
    Parameters.acceptLanguage
  ],
  responses: {
    200: {
      bodyMapper: Mappers.ProfileListResult
    },
    default: {
      bodyMapper: Mappers.CloudError
    }
  },
  serializer
};

const getEnrichingKpisOperationSpec: msRest.OperationSpec = {
  httpMethod: "POST",
  path: "subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomerInsights/hubs/{hubName}/profiles/{profileName}/getEnrichingKpis",
  urlParameters: [
    Parameters.resourceGroupName,
    Parameters.hubName1,
    Parameters.profileName1,
    Parameters.subscriptionId
  ],
  queryParameters: [
    Parameters.apiVersion
  ],
  headerParameters: [
    Parameters.acceptLanguage
  ],
  responses: {
    200: {
      bodyMapper: {
        serializedName: "parsedResponse",
        type: {
          name: "Sequence",
          element: {
            type: {
              name: "Composite",
              className: "KpiDefinition"
            }
          }
        }
      }
    },
    default: {
      bodyMapper: Mappers.CloudError
    }
  },
  serializer
};

const beginCreateOrUpdateOperationSpec: msRest.OperationSpec = {
  httpMethod: "PUT",
  path: "subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomerInsights/hubs/{hubName}/profiles/{profileName}",
  urlParameters: [
    Parameters.resourceGroupName,
    Parameters.hubName1,
    Parameters.profileName0,
    Parameters.subscriptionId
  ],
  queryParameters: [
    Parameters.apiVersion
  ],
  headerParameters: [
    Parameters.acceptLanguage
  ],
  requestBody: {
    parameterPath: "parameters",
    mapper: {
      ...Mappers.ProfileResourceFormat,
      required: true
    }
  },
  responses: {
    200: {
      bodyMapper: Mappers.ProfileResourceFormat
    },
    202: {},
    default: {
      bodyMapper: Mappers.CloudError
    }
  },
  serializer
};

const beginDeleteMethodOperationSpec: msRest.OperationSpec = {
  httpMethod: "DELETE",
  path: "subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CustomerInsights/hubs/{hubName}/profiles/{profileName}",
  urlParameters: [
    Parameters.resourceGroupName,
    Parameters.hubName1,
    Parameters.profileName1,
    Parameters.subscriptionId
  ],
  queryParameters: [
    Parameters.localeCode,
    Parameters.apiVersion
  ],
  headerParameters: [
    Parameters.acceptLanguage
  ],
  responses: {
    200: {},
    202: {},
    204: {},
    default: {
      bodyMapper: Mappers.CloudError
    }
  },
  serializer
};

const listByHubNextOperationSpec: msRest.OperationSpec = {
  httpMethod: "GET",
  baseUrl: "https://management.azure.com",
  path: "{nextLink}",
  urlParameters: [
    Parameters.nextPageLink
  ],
  headerParameters: [
    Parameters.acceptLanguage
  ],
  responses: {
    200: {
      bodyMapper: Mappers.ProfileListResult
    },
    default: {
      bodyMapper: Mappers.CloudError
    }
  },
  serializer
};
---
abstract: 'Recent nuclear magnetic resonance (NMR) measurements revealed the coexistence of stripe-type antiferromagnetic (AFM) and ferromagnetic (FM) spin correlations in both the hole- and electron-doped BaFe$_2$As$_2$ families of iron-pnictide superconductors by a Korringa ratio analysis. Motivated by the NMR work, we investigate the possible existence of FM fluctuations in another iron pnictide superconducting family, Ca(Fe$_{1-x}$Co$_x$)$_2$As$_2$. We re-analyzed our previously reported data in terms of the Korringa ratio and found clear evidence for the coexistence of stripe-type AFM and FM spin correlations in the electron-doped CaFe$_2$As$_2$ system. These NMR data indicate that FM fluctuations exist in general in iron-pnictide superconducting families and thus must be included to capture the phenomenology of the iron pnictides.'
author:
- 'J. Cui'
- 'P. Wiecki'
- 'S. Ran\*'
- 'S. L. Bud’ko'
- 'P. C. Canfield'
- 'Y. Furukawa'
title: 'Coexistence of antiferromagnetic and ferromagnetic spin correlations in Ca(Fe$_{1-x}$Co$_x$)$_2$As$_2$ revealed by $^{75}$As nuclear magnetic resonance'
---

Introduction
============

Since the discovery of high $T_{\rm c}$ superconductivity in iron pnictides,[@Kamihara2008] the interplay between spin fluctuations and the unconventional nature of superconductivity (SC) has been attracting much interest. In most of the Fe pnictide superconductors, the “parent” materials exhibit antiferromagnetic ordering below the Néel temperature.[@Canfield2010; @Johnston2010; @Stewart2011] SC in these compounds emerges upon suppression of the stripe-type antiferromagnetic (AFM) phase by application of pressure and/or chemical substitution, where the AFM spin fluctuations are still strong. Therefore, it is believed that stripe-type AFM spin fluctuations play an important role in driving the SC in the iron-based superconductors, although orbital fluctuations have also been pointed out to be important.
[@Kim2013] Recently, nuclear magnetic resonance (NMR) measurements revealed that ferromagnetic (FM) correlations also play an important role in both the hole- and electron-doped BaFe$_2$As$_2$ families of iron-pnictide superconductors. [@Johnston2010; @Wiecki2015; @Wiecki2015prl] The FM fluctuations are found to be strongest in the maximally-doped BaCo$_2$As$_2$ and KFe$_2$As$_2$, but are still present in the BaFe$_2$As$_2$ parent compound, consistent with its enhanced magnetic susceptibility $\chi$. [@Johnston2010] These FM fluctuations are suggested to compete with superconductivity and are a crucial ingredient for understanding the variation of $T_{\rm c}$ and the shape of the SC dome. [@Wiecki2015prl] It is interesting and important to explore whether or not similar FM correlations exist in other iron pnictide systems. The CaFe$_2$As$_2$ family has a phase diagram distinct from that of the BaFe$_2$As$_2$ family. Whereas for the BaFe$_2$As$_2$ materials the AFM and orthorhombic phase transitions become second order with Co substitution, the CaFe$_2$As$_2$ family continues to manifest a strongly first order, coupled, structural-magnetic phase transition even as Co substitution suppresses the transition temperature to zero. Another significant difference in the phase diagrams of the CaFe$_2$As$_2$ and BaFe$_2$As$_2$ systems is also found in the superconducting phase. Although SC appears when the stripe-type AFM phase is suppressed by Co substitution for Fe in both cases, no coexistence of SC and AFM has been observed in Ca(Fe$_{1-x}$Co$_x$)$_2$As$_2$, whereas the coexistence has been reported in Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$. These results are consistent with the difference between a strongly first order versus second order phase transition.
Recent NMR measurements revealed that the stripe-type AFM fluctuations are strongly suppressed in the AFM state in the Co-doped CaFe$_2$As$_2$ system, whereas sizable stripe-type AFM spin fluctuations still remain in the AFM state in the Co-doped BaFe$_2$As$_2$ system.[@Cui2015] These results indicate that the residual AFM spin fluctuations play an important role in the coexistence of AFM and SC in Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$. Furthermore, in the case of Ca(Fe$_{1-x}$Co$_x$)$_2$As$_2$, pseudogap-like behavior[@Cui2015] has been observed in the temperature dependence of 1/$T_1T$ and in-plane resistivity. The characteristic temperature of the pseudogap was reported to be nearly independent of Co substitution. In this paper, we investigated the possible existence of FM fluctuations in Ca(Fe$_{1-x}$Co$_x$)$_2$As$_2$ and found clear evidence of the coexistence of stripe-type AFM and FM correlations based on $^{75}$As NMR data analysis. In contrast to the case of Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$, where the relative strength of FM correlations increases with Co substitution, the relative strength of the FM correlations is almost independent of the Co content in Ca(Fe$_{1-x}$Co$_x$)$_2$As$_2$ from $x$ = 0 to 0.059. Although we have investigated a relatively small Co substitution region, the existence of the FM spin correlations would be consistent with the fact that CaCo$_2$As$_2$, the end member of the electron doped Ca(Fe$_{1-x}$Co$_x$)$_2$As$_2$ family of compounds, has an A-type antiferromagnetic ordered state below $T_{\rm N}$ = 52–76 K[@Quirinale2013; @Cheng2012] where the Co moments within the CoAs layer are ferromagnetically aligned along the $c$ axis and the moments in adjacent layers are aligned antiferromagnetically.
Since the coexistence of FM and AFM spin correlations is observed in both the hole- and electron-doped BaFe$_2$As$_2$ systems,[@Wiecki2015prl] our results suggest that the FM fluctuations exist in general in iron pnictide superconductors, indicating that theoretical microscopic models should include FM correlations to capture the phenomenology of the iron pnictides.

Experimental
============

The single crystals of Ca(Fe$_{1-x}$Co$_x$)$_2$As$_2$ ($x$ = 0, 0.023, 0.028, 0.033 and 0.059) used in the present study are from the same batches as reported in Ref. . These single crystals were grown out of a FeAs/CoAs flux,[@Ran2011; @Ran2012] using conventional high temperature growth techniques.[@Canfield_book; @Canfield_1992] Subsequent to growth, the single crystals were annealed at $T_{\rm a}$ = 350 $^{\circ}$C for 7 days and then quenched. For $x$ = 0, the single crystal was annealed at $T_{\rm a}$ = 400 $^{\circ}$C for 24 hours. Details of the growth, annealing and quenching procedures have been reported in Refs.  and . The stripe-type AFM states have been reported below the Néel temperatures $T_{\rm N}$ = 170, 106, and 53 K for $x$ = 0, 0.023, and 0.028, respectively.[@Goldman2008] The superconducting states are observed below the transition temperatures $T_{\rm c}$ = 15 and 10 K for $x$ = 0.033 and 0.059, respectively.[@Ran2012] NMR measurements were carried out on $^{75}$As (*I* = 3/2, $\gamma/2\pi$ = 7.2919 MHz/T, $Q$ = 0.29 barns) by using a lab-built, phase-coherent, spin-echo pulse spectrometer. The $^{75}$As-NMR spectra were obtained at a fixed frequency $f$ = 53 MHz by sweeping the magnetic field. The magnetic field was applied parallel to either the crystal $c$ axis or the $ab$ plane, where the direction of the magnetic field within the $ab$ plane was not controlled. The $^{75}$As 1/$T_{\rm 1}$ was measured with a recovery method using a single $\pi$/2 saturation $rf$ pulse.
Most of the NMR experimental results have been published elsewhere.[@Furukawa2014; @Cui2015] ![(Color online) (a) Temperature dependence of $^{75}$As NMR shifts $K_{ab}$ and $K_{c}$ for Ca(Fe$_{1-x}$Co$_x$)$_2$As$_2$. (b) $K(T)$ versus magnetic susceptibility $\chi(T)$ plots for the corresponding $ab$ and $c$ components of $K$ in Ca(Fe$_{1-x}$Co$_x$)$_2$As$_2$ with $T$ as an implicit parameter. The solid and broken lines are linear fits. []{data-label="fig:T-K"}](Fig1){width="8.0cm"} Results and discussion ====================== In this paper we discuss magnetic correlations in Ca(Fe$_{1-x}$Co$_x$)$_2$As$_2$ based on a Korringa ratio analysis of the NMR results. Figure \[fig:T-K\](a) shows the $x$ and $T$ dependence of the Knight shifts, $K_{\rm ab}$ for $H$ parallel to the $ab$ plane and $K_{\rm c}$ for $H$ parallel to the $c$ axis, where new Knight shift data for $x$ = 0.033 and 0.059 are plotted in addition to the data ($x$ = 0, 0.023 and 0.028) reported previously.[@Furukawa2014; @Cui2015] The NMR shift consists of a $T$-independent orbital shift $K_0$ and a $T$-dependent spin shift $K_{\text{spin}}(T)$ due to the uniform magnetic spin susceptibility $\chi(\mathbf{q}=0)$ of the electron system. The NMR shift can therefore be expressed as $K(T)=K_0+K_{\text{spin}}(T)=K_0+A_{\text{hf}}\chi_{\text{spin}}/N$, where $N$ is Avogadro’s number and $A_{\text{hf}}$ is the hyperfine coupling constant, usually expressed in units of T$/\mu_{\rm B}$. Since a detailed analysis of the temperature dependence of $K$ has been reported in Ref. , we do not discuss it further in this paper. In order to extract $K_{\text{spin}}(T)$, which is needed for the following Korringa ratio analysis, we plot $K(T)$ against the corresponding bulk static uniform magnetic susceptibility $\chi(T)$ with $T$ as an implicit parameter, as shown in Fig. \[fig:T-K\](b). From the slope of the linear fit, the hyperfine coupling constant can be estimated.
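The $K$-$\chi$ analysis just described is a simple linear regression of $K$ against $\chi$ with $T$ implicit. A sketch with synthetic numbers (not the measured data), using the cgs-units relation $A_{\rm hf} = {\rm slope} \times N_{\rm A}\mu_{\rm B}$ for a molar susceptibility:

```python
import numpy as np

# K(T) vs chi(T) with T implicit: slope -> hyperfine coupling, intercept -> K_0.
# Synthetic, illustrative numbers; not the measured data.
chi = np.array([1.0, 1.2, 1.5, 1.9, 2.4]) * 1e-3   # cm^3/mol
K = 0.20 + 30.0 * chi                               # shift in %

slope, K0 = np.polyfit(chi, K, 1)                   # slope in % mol/cm^3
N_A, mu_B = 6.02214076e23, 9.2740100783e-21         # mol^-1, erg/G (cgs)
A_hf_gauss = (slope / 100.0) * N_A * mu_B           # G/mu_B
A_hf = A_hf_gauss / 1.0e4                           # T/mu_B
```

The intercept `K0` is the orbital shift, and subtracting it from the measured $K(T)$ gives $K_{\text{spin}}(T)$ for the Korringa analysis.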
The $x$ dependence of the hyperfine coupling constant has been reported in Ref. . From the $y$ intercept of the linear fit, one can estimate the orbital shift $K_0$ and extract $K_{\text{spin}}(T)$ to discuss magnetic correlations. A Korringa ratio analysis is applied to extract the character of spin fluctuations in Ca(Fe$_{1-x}$Co$_x$)$_2$As$_2$ from the $^{75}$As NMR data, as has been carried out for both the electron-doped Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$ and hole-doped Ba$_{1-x}$K$_x$Fe$_2$As$_2$ families of iron-pnictide SCs.[@Wiecki2015prl] Within a Fermi liquid picture, $1/T_1T$ is proportional to the square of the density of states ${\cal D}(E_{\rm F})$ at the Fermi energy and $K_{\text{spin}}$ ($\propto \chi_{\text{spin}}$) is proportional to ${\cal D}(E_{\rm F})$. In particular, $T_1TK_{\text{spin}}^2$ = $\frac{\hbar}{4\pi k_{\rm B}} \left(\frac{\gamma_{\rm e}}{\gamma_{\rm N}}\right)^2$ = ${\cal S}$, which is the Korringa relation. For the $^{75}$As nucleus ($\gamma_{\rm N}/2\pi=7.2919$ MHz/T), ${\cal S} =8.97\times 10^{-6}$ Ks. The Korringa ratio $\alpha\equiv{\cal S}/(T_1TK_{\text{spin}}^2)$, which reflects deviations from ${\cal S}$, can reveal information about how electrons correlate in the material.[@Moriya1963; @Narath1968] $\alpha\sim1$ represents the situation of uncorrelated electrons, whereas $\alpha >1$ indicates AFM correlations and $\alpha <1$ indicates FM correlations. These deviations arise from the enhancement of $\chi(\mathbf{q}\neq 0)$, which increases $1/T_1T$ but has little or no effect on $K_{\text{spin}}$, since the latter probes only the uniform $\chi(\mathbf{q} = 0)$. Therefore, the predominant character of the magnetic correlations, whether AFM or FM, can be determined from the Korringa ratio $\alpha$. To proceed with the Korringa ratio analysis, one needs to take the anisotropy of $K_\text{spin}$ and $1/T_1T$ into consideration.
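For orientation, the Korringa constant ${\cal S}$ quoted above follows directly from fundamental constants (a numerical sketch with SI values):

```python
import math

hbar = 1.054571817e-34   # J s
k_B = 1.380649e-23       # J/K
gamma_e = 28024.951      # electron gyromagnetic ratio / 2pi, in MHz/T
gamma_N = 7.2919         # 75As gyromagnetic ratio / 2pi, in MHz/T (2pi cancels)

# Korringa relation: T1 * T * K_spin^2 = S = (hbar / 4 pi k_B)(gamma_e/gamma_N)^2
S = hbar / (4.0 * math.pi * k_B) * (gamma_e / gamma_N) ** 2
print(f"S = {S:.3e} K s")   # ~8.97e-6 K s for 75As, as quoted in the text

def korringa_ratio(T1T, K_spin):
    """alpha = S/(T1*T*K_spin^2); alpha > 1 suggests AFM, alpha < 1 FM."""
    return S / (T1T * K_spin ** 2)
```

With measured $T_1T$ (in s K) and the dimensionless $K_{\text{spin}}$, `korringa_ratio` gives $\alpha$ directly.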
$1/T_1$ picks up the hyperfine field fluctuations at the NMR Larmor frequency, $\omega_{\rm 0}$, perpendicular to the applied field according to $(1/T_1)_{H||i}=\gamma_{\rm N}^2\left[|H^{\rm hf}_j(\omega_{\rm 0})|^2+|H^{\rm hf}_k(\omega_{\rm 0})|^2\right]$, where $(i,j,k)$ are mutually orthogonal directions and $|H^{\rm hf}_j(\omega_{\rm 0})|^2$ represents the power spectral density of the $j$-th component of the hyperfine magnetic field at the nuclear site. Thus, defining $H^{\rm hf}_{ab}\equiv H^{\rm hf}_{a}=H^{\rm hf}_{b}$, which is appropriate for the tetragonal PM state, we have $(1/T_1)_{H||c}=2\gamma_{\rm N}^2|H^{\rm hf}_{ab}(\omega_{\rm 0})|^2\equiv 1/T_{1,\perp}$. The Korringa parameter $\alpha_{\bot}\equiv {\cal S}/T_{1,\bot}TK_{\text{spin},ab}^2$ will then characterize fluctuations of the $ab$-plane component of the hyperfine field. Similarly, we consider the quantity $1/T_{1,\|}\equiv2(1/T_1)_{H||ab}-(1/T_1)_{H||c}= 2\gamma_{\rm N}^2|H^{\rm hf}_{c}(\omega_{\rm 0})|^2$, since $(1/T_1)_{H||ab}=\gamma_{\rm N}^2\left[|H^{\rm hf}_{ab}(\omega_{\rm 0})|^2+|H^{\rm hf}_c(\omega_{\rm 0})|^2\right]$. We then pair $K_{\text{spin},c}$ with $1/T_{1,\|}$, so that the Korringa parameter $\alpha_{\|}={\cal S}/T_{1,\|}TK_{\text{spin},c}^2$ characterizes fluctuations of the $c$-axis component of the hyperfine field. ![(Color online) Temperature dependence of 1/$T_1T$ with anisotropy in Ca(Fe$_{\rm 1-x}$Co$_{x}$)$_2$As$_2$. (a) $1/T_{1,\perp}T$ = $(1/T_1T)_{H||c}$.
(b) $1/T_{1,\|}T=2(1/T_1T)_{H||ab}-(1/T_1T)_{H||c}$.[]{data-label="fig:T1T"}](Fig2){width="8.0cm"} Figure \[fig:T1T\] shows the temperature dependence of $1/T_{1,\perp}T$ and $1/T_{1,\|}T$ in Ca(Fe$_{1-x}$Co$_{x}$)$_2$As$_2$ at $H$ $\sim$ 7.5 T, obtained from the $(1/T_1T)_{H||ab}$ and $(1/T_1T)_{H||c}$ data reported previously.[@Cui2015] For $x$ = 0, 0.023, and 0.028, $1/T_{1,\|}T$ shows a monotonic increase with decreasing $T$ down to $T_{\rm N}$ = 170, 106, and 53 K, respectively, while $1/T_{1,\perp}T$ is nearly independent of $T$, although a slight increase can be seen near $T_{\rm N}$ for each sample. Since the increase of $1/T_{1,\|}T$ originates from the growth of the stripe-type AFM spin fluctuations,[@Cui2015] the results indicate that the AFM spin fluctuations enhance the hyperfine fluctuations at the As sites along the $c$ axis. In the case of the superconducting samples with $x$ $\geq$ 0.033, $1/T_{1,\perp}T$ and $1/T_{1,\|}T$ remain nearly constant or increase slightly on cooling above $T^{*}$ $\sim$ 100 K and then start to decrease below $T^{*}$. These behaviors are ascribed to pseudogap-like behavior in Ref. . With a further decrease in $T$, both $1/T_{1,\|}T$ and $1/T_{1,\perp}T$ for $x$ = 0.033 and 0.059 show sudden decreases below $T_{\rm c}$ \[15 (10) K for $x$ = 0.033 (0.059)\] due to the superconducting transition. ![image](Fig3){width="16.0cm"} Using the $1/T_{1,\perp}T$ and $1/T_{1,\|}T$ data together with the Knight shift data, we discuss magnetic correlations in Ca(Fe$_{1-x}$Co$_{x}$)$_2$As$_2$ based on the Korringa ratios. The $T$ dependences of the Korringa ratios $\alpha_{\bot}={\cal S}/T_{1,\bot}TK_{\text{spin},ab}^2$ and $\alpha_{\|}={\cal S}/T_{1,\|}TK_{\text{spin},c}^2$ are shown in Fig. \[fig:alpha\](a). Both $\alpha_{\|}$ and $\alpha_\perp$ increase with decreasing $T$ down to $T_{\rm N}$ or $T^{*}$.
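The two combinations plotted above follow directly from the rates measured in the two field orientations; as a trivial helper (a sketch, with illustrative inputs):

```python
def t1t_components(invT1T_Hc, invT1T_Hab):
    """Split measured (1/T1T) values into the combinations used in the text.

    invT1T_Hc:  (1/T1T) measured with H || c
    invT1T_Hab: (1/T1T) measured with H || ab
    Returns (1/T1,perp T, 1/T1,|| T): the first probes ab-plane hyperfine
    field fluctuations, the second the c-axis component.
    """
    perp = invT1T_Hc
    para = 2.0 * invT1T_Hab - invT1T_Hc
    return perp, para

# Example: isotropic relaxation gives identical components
print(t1t_components(0.4, 0.4))   # (0.4, 0.4)
```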
The increase in $\alpha$, that is, in ${\cal S}/(T_1TK_{\text{spin}}^2)$, clearly indicates the growth of the stripe-type AFM spin correlations, as has been pointed out previously.[@Cui2015] It is noted that $\alpha_{\|}$ is always greater than $\alpha_\perp$ for each sample, indicating that the hyperfine fluctuations at the As sites produced by the AFM correlations are stronger along the $c$ axis than in the $ab$ plane. On the other hand, the $\alpha_{\perp}$ values seem to be less than unity: the largest value of $\alpha_\perp$ is found to be $\sim$ 0.4 for $x$ = 0. Even smaller $\alpha_\perp$ values of 0.1–0.2 are observed for $x$ = 0.023 and $x$ = 0.028 at high temperatures, suggesting FM fluctuations in the normal state. In the application of the Korringa ratio to the iron pnictides, the question arises as to the role of the hyperfine form factor, which can, in principle, filter out the AFM fluctuations at the As site. This filtering effect could affect the balance of FM vs. AFM fluctuations as measured by the Korringa ratio.[@Jeglic2010] In order to discuss the filtering effects, it is convenient to express 1/$T_1$ in terms of wave-number ($\mathbf{q}$) dependent form factors and the $\mathbf{q}$-dependent dynamical spin susceptibility $\chi(\mathbf{q}, \omega_0)$. By an explicit calculation of the form factors (see Appendix A) using the methods of Ref.
, we find that $$\frac{1}{T_{1,\|}T}\sim \left[\left(2.7\frac{{\rm T}^2}{\mu_{\rm B}^2}\right)\frac{\chi_{ab}''(\mathbf{Q},\omega_0)}{\hbar\omega_0} +\left(1.5\frac{{\rm T}^2}{\mu_{\rm B}^2}\right)\frac{\chi_{c}''(\mathbf{0},\omega_0)}{\hbar\omega_0}\right],$$ $$\frac{1}{T_{1,\perp}T}\sim \left[\left(3.2\frac{{\rm T}^2}{\mu_{\rm B}^2}\right)\frac{\chi_{ab}''(\mathbf{0},\omega_0)}{\hbar\omega_0} +\left(1.4\frac{{\rm T}^2}{\mu_{\rm B}^2}\right)\frac{\chi_{c}''(\mathbf{Q},\omega_0)}{\hbar\omega_0}\right],$$ where $\chi''(\mathbf{0},\omega_0)$ and $\chi''(\mathbf{Q},\omega_0)$ represent the imaginary part of the dynamical susceptibility for the $\mathbf{q}$ = 0 ferromagnetic and $\mathbf{Q}$ = $(\pi,0)/(0,\pi)$ stripe-type AFM components, respectively. The numerical coefficients are calculated from the hyperfine coupling constants, in units of T/$\mu_{\rm B}$, for CaFe$_2$As$_2$ given in Ref. . From these equations, it is clear that the stripe-type AFM fluctuations are not filtered out in either direction in the iron pnictides. It is also seen that for $1/T_{1,\|}T$ the form factor favors AFM fluctuations, which explains the larger (more AFM) values of $\alpha_\|$. On the other hand, for $1/T_{1,\perp}T$ the form factor weights the FM fluctuations more heavily than the AFM fluctuations, as indeed seen in Fig. \[fig:alpha\](a), where $\alpha_\perp$ is less than $\alpha_\|$ for each sample. ![(Color online) (a),(b): Sources of hyperfine field along the $c$ axis. (c),(d): Sources of hyperfine field in the $ab$ plane.[]{data-label="fig:hyperfine"}](hyperfine.eps){width="8.0cm"} ![image](Fig5new.eps){width="17.0cm"} Now we consider the origin of the hyperfine field at the $^{75}$As site in order to further understand the physics associated with each term in Eqs. (1) and (2). The hyperfine field at the $^{75}$As site is determined by the spin moments on the Fe sites through the hyperfine coupling tensor $\tilde{A}$, according to $\mathbf{H}^{\text{hf}}=\tilde{A}\cdot\mathbf{S}$.
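The numerical prefactors in Eqs. (1) and (2) can be reproduced from the CaFe$_2$As$_2$ hyperfine couplings quoted in Appendix A (a quick check; couplings in T/$\mu_{\rm B}$, coefficients quoted up to a common prefactor):

```python
# Hyperfine couplings for CaFe2As2 (T/mu_B), as quoted in Appendix A
A_aa, A_cc, A_ac = 1.8, 1.2, 0.82

# Coefficients multiplying each susceptibility term, up to a common prefactor:
c_para_AFM = 4 * A_ac**2   # chi_ab''(Q) term in 1/T1,|| T    -> ~2.7
c_para_FM  = A_cc**2       # chi_c''(0) term in 1/T1,|| T     -> ~1.5
c_perp_FM  = A_aa**2       # chi_ab''(0) term in 1/T1,perp T  -> ~3.2
c_perp_AFM = 2 * A_ac**2   # chi_c''(Q) term in 1/T1,perp T   -> ~1.4
```

The asymmetry `c_para_AFM > c_para_FM` versus `c_perp_FM > c_perp_AFM` is the form-factor bias discussed in the text.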
In the tetragonal PM phase, the most general form for $\tilde{A}$ is [@Kitagawa2008; @Hirano2012] $$\tilde{A}= \begin{pmatrix} A_\perp & D & B \\ D & A_\perp & B \\ B & B & A_c \end{pmatrix},$$ where $A_i$ ($i=\perp,c$) is the coupling relevant for FM correlations, $D$ is the coupling for in-plane Néel-type AFM correlations and $B$ is the coupling for stripe-type AFM correlations. Since there is no theoretical or experimental reason to expect Néel-type AFM correlations in the iron pnictides, below we simply set $D=0$. We then obtain $H^\text{hf}_\perp=A_\perp S_\perp+BS_{\rm c}$ and $H^\text{hf}_{\rm c}=2BS_\perp+A_{\rm c}S_{\rm c}$. There are therefore two sources of hyperfine field pointing along the $c$ axis[@Kitagawa2008]: fluctuations at $\mathbf{q}=\mathbf{Q}=(\pi,0)/(0,\pi)$ with the spins pointing in plane (as illustrated in Fig. \[fig:hyperfine\](a)) or fluctuations at $\mathbf{q}=0$ with the spins pointing along the $c$ axis (Fig. \[fig:hyperfine\](b)). The first and second fluctuations correspond to the first and second terms, respectively, in $1/T_{1, \|}T$ \[Eq. (1)\]. Similarly, hyperfine field fluctuations in the $ab$ plane can result from fluctuations at $\mathbf{q}=0$ with the spins pointing in plane (Fig. \[fig:hyperfine\](c)), or from fluctuations at $\mathbf{q}=\mathbf{Q}$ with the spins pointing along the $c$ axis (Fig. \[fig:hyperfine\](d)). Again, the first and second fluctuations can be attributed to the first and second terms, respectively, in $1/T_{1, \perp}T$ \[Eq. (2)\]. In what follows, we will refer to the correlations depicted in Fig. \[fig:hyperfine\](a) as “(a)-type” correlations (and similarly for the others). To summarize, the value of $\alpha_\|$ reflects the competition between (a)- and (b)-type correlations, while $\alpha_\perp$ reflects the competition between (c)- and (d)-type correlations. Now, since $\alpha_\|$ reflects the character of hyperfine field fluctuations with a $c$-axis component, the strongly AFM $\alpha_\|$ in Fig.
\[fig:alpha\] can be attributed to stripe-type AFM correlations with the Fe spins in plane \[i.e., (a)-type\]. These must dominate the (b)-type correlations in order to produce an AFM value of $\alpha_\|$. Similarly, since $\alpha_\perp$ reflects the character of the $ab$-plane component of hyperfine field fluctuations, the strongly FM value of $\alpha_\perp$ in the high-$T$ region may be attributed to in-plane FM fluctuations (Fig. \[fig:hyperfine\](c)), while the increase of $\alpha_\perp$ as the temperature is lowered reflects the increasing dominance of stripe-type AFM correlations with a $c$-axis component of the spin (as in Fig. \[fig:hyperfine\](d)). By examining the $c$-axis and $ab$-plane components of the hyperfine field fluctuations separately via $\alpha_\|$ and $\alpha_\perp$, we see the simultaneous coexistence of FM and AFM fluctuations in Ca(Fe$_{1-x}$Co$_x$)$_2$As$_2$. Furthermore, the dominance of (a)- and (c)-type spin fluctuations in the high-temperature region suggests that both the AFM and FM fluctuations are highly anisotropic, favoring the $ab$ plane. A similar coexistence of FM and AFM fluctuations[@Wiecki2015prl] has been reported in Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$ and Ba$_{1-x}$K$_x$Fe$_2$As$_2$. It is interesting to separate the FM and the stripe-type AFM fluctuations and extract their $T$ dependences, as has been done for the hole- and electron-doped BaFe$_2$As$_2$ systems.[@Wiecki2015prl] Following the previous paper,[@Wiecki2015prl] $1/T_1T$ was decomposed into inter- and intraband components according to $1/T_1T=(1/T_1T)_\text{inter}+(1/T_1T)_\text{intra}$, where the $T$ dependence of the interband term is assumed to follow the Curie-Weiss (CW) form appropriate for 2D AFM spin fluctuations: $(1/T_1T)_\text{inter}=C/(T-\Theta_{\rm CW})$. The $T$ dependence of the intraband component was assumed to be $(1/T_1T)_\text{intra}$ = $\alpha+\beta\text{exp}(-\Delta/k_{\rm B}T)$.
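The decomposition just described amounts to a five-parameter fit of $1/T_1T$. A sketch with scipy on synthetic data (all parameter values below are illustrative, not the fitted values from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

# 1/T1T = C/(T - Theta) + a + b*exp(-Delta/T), with the gap Delta in kelvin
# so that exp(-Delta/k_B T) becomes exp(-Delta/T).
def inv_t1t(T, C, Theta, a, b, Delta):
    return C / (T - Theta) + a + b * np.exp(-Delta / T)

# Synthetic data for illustration only
T = np.linspace(60.0, 300.0, 60)
true_params = (8.0, 15.0, 0.30, 0.25, 200.0)
y = inv_t1t(T, *true_params)

popt, _ = curve_fit(inv_t1t, T, y, p0=(6.0, 10.0, 0.25, 0.2, 180.0))
```

On real data with weak temperature dependence the parameters are strongly correlated, which is the large decomposition uncertainty noted below.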
Here we also tried to decompose the present $1/T_{1,\|}T$ and $1/T_{1,\perp}T$ data following this procedure. We found, however, a large uncertainty in decomposing our data, especially for the $1/T_{1,\perp}T$ case, due to the weak temperature dependence of 1/$T_1T$. Nevertheless, we proceeded with the analysis to qualitatively examine the $x$ dependence of the Curie-Weiss parameter $C$, which measures the strength of the AFM spin fluctuations, and of $\Theta_{\rm CW}$, which corresponds to the distance in $T$ from the AFM instability point. Here we fit the data above $T_{\rm N}$ or $T^*$ for each sample. $\Theta_{\rm CW}$ decreases from 38 $\pm$ 17 K ($x$ = 0) to 15 $\pm$ 13 K ($x$ = 0.023), and to a negative value of –33 $\pm$ 21 K ($x$ = 0.028). This suggests that the compounds with $x$ = 0.023 and 0.028 are close to the AFM instability point of $\Theta_{\rm CW}$ = 0 K. A similar behavior of $\Theta_{\rm CW}$ is reported in Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$ (Refs. ) and Ba(Fe$_{1-x}$Ni$_x$)$_2$As$_2$ (Ref. ). The $x$ dependences of the CW parameters $C_\perp$, $C_\|$ and $\Theta_{\rm CW}$ are shown in Figs. \[fig:phase\](a) and (b) together with the phase diagram reported in Ref. . Although these parameters have large uncertainties, $C_\|$ seems to be greater than $C_\perp$, consistent with the in-plane AFM fluctuations being stronger than the $c$-axis AFM fluctuations. This result is the same as in the Ba(Fe$_{1-x}$Co$_{x}$)$_2$As$_2$ samples of Ref. .
On the other hand, the $C_\perp$ and $C_\|$ parameters are almost independent of $x$ in Ca(Fe$_{1-x}$Co$_{x}$)$_2$As$_2$ in the substitution range $x$ = 0–0.059, whereas the $C_\perp$ and $C_\|$ parameters decrease with Co substitution in Ba(Fe$_{1-x}$Co$_{x}$)$_2$As$_2$, where the $c$-axis component of the AFM spin fluctuations decreases and dies out for $x$ $\geq$ 0.15.[@Ning2010] It is interesting to point out that a similar $x$-independent behavior is also observed in the crossover temperature $T^*$ attributed to the pseudogaplike behavior in the spin excitation spectra of the Ca(Fe$_{1-x}$Co$_{x}$)$_2$As$_2$ system.[@Cui2015] Finally we show, in Fig. \[fig:alpha\](b), the intraband Korringa ratios $\alpha_\|^\text{intra}$ and $\alpha_\perp^\text{intra}$ obtained by subtracting the interband scattering term $C$/$(T-\Theta_{\rm CW})$. Both $\alpha_\|^\text{intra}$ and $\alpha_\perp^\text{intra}$ remain roughly constant above $T_{\rm N}$ or $T^*$. We plot the average values of $\alpha_\|^\text{intra}$ and $\alpha_\perp^\text{intra}$ as a function of $x$ in Fig. \[fig:phase\](b). We find that $\alpha_\perp^\text{intra}$ is smaller than $\alpha_\|^\text{intra}$ for all the samples, confirming again the dominant in-plane FM spin fluctuations. The calculated $\alpha_\perp^\text{intra}$ and $\alpha_\|^\text{intra}$ in Ca(Fe$_{1-x}$Co$_{x}$)$_2$As$_2$ are of nearly the same order as those in both the electron- and hole-doped BaFe$_2$As$_2$ systems. These results indicate that the FM spin correlations exist quite generally and may be a key ingredient in a theory of superconductivity in the iron pnictides.
Summary ======= Motivated by the recent NMR measurements which revealed the coexistence of stripe-type antiferromagnetic (AFM) and ferromagnetic (FM) spin correlations in both the hole- and electron-doped BaFe$_2$As$_2$ families of iron-pnictide superconductors,[@Wiecki2015prl] we have reanalyzed NMR data in Ca(Fe$_{1-x}$Co$_x$)$_2$As$_2$ and found clear evidence for the coexistence of stripe-type AFM and FM spin correlations. In contrast to the case of Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$, where the relative strength of the FM correlations increases with Co substitution, the FM correlations are almost independent of the Co substitution over our investigated range of $x$ = 0–0.059 in Ca(Fe$_{1-x}$Co$_x$)$_2$As$_2$. The Curie-Weiss parameters $C_{\perp,\|}$, representing the strength of the stripe-type AFM correlations, are also almost independent of the Co doping, similar to the behavior of $T^*$, the characteristic temperature of the pseudogaplike behavior. Our analysis of the NMR data indicates that FM fluctuations exist quite generally in the iron-pnictide superconducting families. Further systematic theoretical and experimental investigations of the role of the FM correlations in the iron-pnictide superconducting families are required. Acknowledgments =============== We thank David C. Johnston for helpful discussions. The research was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering. Ames Laboratory is operated for the U.S. Department of Energy by Iowa State University under Contract No. DE-AC02-07CH11358. A calculation of form factor ============================ Here, we directly calculate the appropriate form factors for the PM state of the iron pnictides according to the theory of Ref. . We make the assumption that the external applied field is much larger than the hyperfine field, which is certainly true in the PM state.
We further assume that the wave-number $q$ dependent dynamic susceptibility tensor $\chi^{\alpha\beta}(\mathbf{q},\omega_0)$ is diagonal in the PM state. Under these assumptions, the spin-lattice relaxation rate in an external field $\mathbf{h}_{\rm{ext}}$ is given by $$\frac{1}{T_1(\mathbf{h}_{\rm{ext}})}=\lim_{\omega_0 \to 0}\frac{\gamma_N^2}{2N}k_{\rm B}T \sum_{\alpha,\mathbf{q}}{\cal{F}}_\alpha^{\mathbf{h}_{\rm{ext}}}(\mathbf{q}) \frac{\rm{Im}[\chi^{\alpha\alpha}(\mathbf{q},\omega_0)]}{\hbar\omega_0}, \label{eq:T1}$$ where $\alpha=(a,b,c)$ sums over the crystallographic axes. The general expression for the $q$ dependent form factor is $${\cal{F}}_\alpha^{\mathbf{h}_{\rm{ext}}}(\mathbf{q})=\sum_{\gamma,\delta} [R_{\mathbf{h}_{\rm{ext}}}^{x\gamma}R_{\mathbf{h}_{\rm{ext}}}^{x\delta}+(x\leftrightarrow y)] {\cal A}_\mathbf{q}^{\gamma\alpha}{\cal A}_{-\mathbf{q}}^{\delta\alpha}, \label{eq:form}$$ where $R_{\mathbf{h}_{\rm{ext}}}$ is a matrix which rotates a vector from the crystallographic $(a,b,c)$ coordinate system to a coordinate system $(x,y,z)$ whose $z$ axis is aligned with the total magnetic field at the nuclear site. For details we refer the reader to Ref. . When $\mathbf{h}_{\rm{ext}}\|c$, the two coordinate systems coincide so that $$R_{\mathbf{h}_{\rm{ext}}\|c}= \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \label{eq:rc}$$ For $\mathbf{h}_{\rm{ext}}\|a$, the appropriate matrix is $$R_{\mathbf{h}_{\rm{ext}}\|a}= \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \end{pmatrix}. \label{eq:ra}$$ For the case of the As site in the iron pnictides, the matrix ${\cal A}_\mathbf{q}$ in Eq. 
\[eq:form\] is given by [@Smerald2011] $${\cal A}_\mathbf{q}=4 \begin{pmatrix} {\cal A}^{aa}c_ac_b & -{\cal A}^{ab}s_as_b & i{\cal A}^{ac}s_ac_b \\ -{\cal A}^{ba}s_as_b & {\cal A}^{bb}c_ac_b & i{\cal A}^{bc}c_as_b \\ i{\cal A}^{ca}s_ac_b & i{\cal A}^{cb}c_as_b & {\cal A}^{cc}c_ac_b \end{pmatrix}, \label{eq:aq}$$ where ${\cal A}^{\alpha\beta}$ are the components of the hyperfine coupling tensor and $$\begin{aligned} c_a&=\cos\frac{q_aa_0}{2}&c_b&=\cos\frac{q_bb_0}{2}\\ s_a&=\sin\frac{q_aa_0}{2}&s_b&=\sin\frac{q_bb_0}{2}.\end{aligned}$$ Here $a_0$ and $b_0$ are lattice constants. Of course, $a_0=b_0$ in the PM state. Combining Eqs. \[eq:form\]-\[eq:aq\], we obtain $$\begin{aligned} {\cal{F}}_a^{\mathbf{h}_{\rm{ext}}\|a}(\mathbf{q})&=16({\cal A}^{ca}s_ac_b)^2+16({\cal A}^{ba}s_as_b)^2\\ {\cal{F}}_b^{\mathbf{h}_{\rm{ext}}\|a}(\mathbf{q})&=16({\cal A}^{cb}c_as_b)^2+16({\cal A}^{bb}c_ac_b)^2\\ {\cal{F}}_c^{\mathbf{h}_{\rm{ext}}\|a}(\mathbf{q})&=16({\cal A}^{cc}c_ac_b)^2+16({\cal A}^{bc}c_as_b)^2\end{aligned}$$ and $$\begin{aligned} {\cal{F}}_a^{\mathbf{h}_{\rm{ext}}\|c}(\mathbf{q})&=16({\cal A}^{aa}c_ac_b)^2+16({\cal A}^{ba}s_as_b)^2\\ {\cal{F}}_b^{\mathbf{h}_{\rm{ext}}\|c}(\mathbf{q})&=16({\cal A}^{bb}c_ac_b)^2+16({\cal A}^{ab}s_as_b)^2\\ {\cal{F}}_c^{\mathbf{h}_{\rm{ext}}\|c}(\mathbf{q})&=16({\cal A}^{ac}s_ac_b)^2+16({\cal A}^{bc}c_as_b)^2.\end{aligned}$$ To calculate $1/T_1$ from Eq. \[eq:T1\], we assume for simplicity that $\chi^{\alpha\beta}(\mathbf{q},\omega_0)$ is non-zero only near the wavevectors $\mathbf{q}=0$, $\mathbf{q}=\mathbf{Q}_a\equiv(\pm\pi/a_0,0)$ and $\mathbf{q}=\mathbf{Q}_b\equiv(0,\pm\pi/b_0)$. By tetragonal symmetry we have $a\leftrightarrow b$. In particular, $\mathbf{Q}_a=\mathbf{Q}_b\equiv\mathbf{Q}$ and $\rm{Im}[\chi^{aa}(\mathbf{q},\omega_0)]=\rm{Im}[\chi^{bb}(\mathbf{q},\omega_0)] \equiv\chi_{ab}''(\mathbf{q},\omega_0)$. We also now write $\rm{Im}[\chi^{cc}(\mathbf{q},\omega_0)] \equiv\chi_{c}''(\mathbf{q},\omega_0)$. 
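Before combining terms, it is easy to check numerically that the form factor ${\cal F}_c^{\mathbf{h}_{\rm ext}\|c}$ vanishes at $\mathbf{q}=0$ but survives at the stripe wavevector through ${\cal A}^{ac}$ (a sketch; the coupling value is the CaFe$_2$As$_2$ number quoted at the end of this appendix):

```python
import math

a0 = 1.0  # lattice constant; only the products q*a0 matter here

def F_c_Hc(qa, qb, A_ac, A_bc):
    """Form factor F_c for h_ext || c from the expressions above."""
    ca, cb = math.cos(qa * a0 / 2), math.cos(qb * a0 / 2)
    sa, sb = math.sin(qa * a0 / 2), math.sin(qb * a0 / 2)
    return 16 * (A_ac * sa * cb) ** 2 + 16 * (A_bc * ca * sb) ** 2

A_ac = A_bc = 0.82  # T/mu_B for CaFe2As2

f_FM = F_c_Hc(0.0, 0.0, A_ac, A_bc)            # q = 0: exactly zero
f_AFM = F_c_Hc(math.pi / a0, 0.0, A_ac, A_bc)  # q = Q: 16*(A_ac)^2, nonzero
```

Setting `A_ac = 0` makes `f_AFM` vanish as well, which is exactly the filtering scenario discussed next.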
We thus obtain $$\begin{aligned} \frac{1}{T_1(\mathbf{h}_{\rm{ext}}\|c)} = \lim_{\omega_0 \to 0}&\frac{8\gamma_N^2}{N}k_{\rm B}T \left[2({\cal A}^{aa})^2\frac{\chi_{ab}''(\mathbf{0},\omega_0)}{\hbar\omega_0}\right. \nonumber\\ &\qquad \left.+ 4({\cal A}^{ac})^2\frac{\chi_{c}''(\mathbf{Q},\omega_0)}{\hbar\omega_0}\right] \label{eq:T1c}\end{aligned}$$ and $$\begin{aligned} \frac{1}{T_1(\mathbf{h}_{\rm{ext}}\|a)}&=\lim_{\omega_0 \to 0}\frac{8\gamma_N^2}{N}k_{\rm B}T \left[ 4({\cal A}^{ca})^2\frac{\chi_{ab}''(\mathbf{Q},\omega_0)}{\hbar\omega_0}\right. \nonumber\\ &\qquad \left.+({\cal A}^{aa})^2\frac{\chi_{ab}''(\mathbf{0},\omega_0)}{\hbar\omega_0}\right. \nonumber\\ & \qquad \left.+({\cal A}^{cc})^2\frac{\chi_{c}''(\mathbf{0},\omega_0)}{\hbar\omega_0}\right. \nonumber\\ &\qquad \left.+2({\cal A}^{ac})^2\frac{\chi_{c}''(\mathbf{Q},\omega_0)}{\hbar\omega_0} \right]. \label{eq:T1a}\end{aligned}$$ We have summed over the four AFM wavevectors $\mathbf{Q}=(\pm\pi/a_0,0)$ and $\mathbf{Q}=(0,\pm\pi/a_0)$, which have the same value of $\chi''(\mathbf{Q},\omega_0)$ in the PM state. Notice that, for both field directions, AFM fluctuations at $\mathbf{q}=\mathbf{Q}$ are completely filtered out if ${\cal A}^{ac}=0$, as pointed out in Ref. . However, in the iron pnictides ${\cal A}^{ac}\neq0$,[@Kitagawa2008] and therefore the AFM fluctuations are not filtered out. From Eqs. \[eq:T1c\] and \[eq:T1a\] we can easily calculate $1/T_{1,\|}\equiv2/T_1(\mathbf{h}_{\rm{ext}}\|a)-1/T_1(\mathbf{h}_{\rm{ext}}\|c)$ and $1/T_{1,\perp}\equiv1/T_1(\mathbf{h}_{\rm{ext}}\|c)$: $$\begin{aligned} \frac{1}{T_{1,\perp}}=\lim_{\omega_0 \to 0}&\frac{16\gamma_N^2}{N}k_BT \left[({\cal A}^{aa})^2\frac{\chi_{ab}''(\mathbf{0},\omega_0)}{\hbar\omega_0}\right.
\nonumber\\ & \qquad \left.+2({\cal A}^{ac})^2\frac{\chi_{c}''(\mathbf{Q},\omega_0)}{\hbar\omega_0}\right]\end{aligned}$$ and $$\begin{aligned} \frac{1}{T_{1,\|}}=\lim_{\omega_0 \to 0}&\frac{16\gamma_N^2}{N}k_BT \left[4({\cal A}^{ca})^2\frac{\chi_{ab}''(\mathbf{Q},\omega_0)}{\hbar\omega_0}\right. \nonumber\\ & \qquad \left.+ ({\cal A}^{cc})^2\frac{\chi_{c}''(\mathbf{0},\omega_0)}{\hbar\omega_0}\right].\end{aligned}$$ Notice that the fluctuations probed by $1/T_{1,\|}$ and $1/T_{1,\perp}$ are consistent with the qualitative arguments used in the main text. For the case of CaFe$_2$As$_2$, Ref. gives ${\cal A}^{aa}=1.8$ T$/\mu_{\rm B}$, ${\cal A}^{cc}=1.2$ T$/\mu_{\rm B}$ and ${\cal A}^{ca}={\cal A}^{ac}=0.82$ T$/\mu_{\rm B}$. ${\cal A}^{aa}$ and ${\cal A}^{cc}$ are determined by Knight shift measurements and ${\cal A}^{ac}$ is found by comparing the measured internal field in the AFM state to the value of the ordered moment obtained by neutron scattering.  present address: Department of Physics, University of California, San Diego, California 92093, USA [10]{} Y. Kamihara, T. Watanabe, M. Hirano, and H. Hosono, J. Am. Chem. Soc. [**130**]{}, 3296 (2008). P. C. Canfield and S. L. Bud’ko, Annu. Rev. Condens. Matter Phys. [**1**]{}, 27 (2010). D. C. Johnston, Adv. Phys. [**59**]{}, 803 (2010). G. R. Stewart, Rev. Mod. Phys. [**83**]{}, 1589 (2011). Y. K. Kim, W. S. Jung, G. R. Han, K.-Y. Choi, C.-C. Chen, T. P. Devereaux, A. Chainani, J. Miyawaki, Y. Takata, Y. Tanaka, M. Oura, S. Shin, A. P. Singh, H. G. Lee, J.-Y. Kim, and C. Kim, Phys. Rev. Lett. [**111**]{}, 217001 (2013). P. Wiecki, V. Ogloblichev, A. Pandey, D. C. Johnston, and Y. Furukawa, Phys. Rev. B [**91**]{}, 220406(R) (2015). P. Wiecki, B. Roy, D. C. Johnston, S. L. Bud’ko, P. C. Canfield, and Y. Furukawa, Phys. Rev. Lett. [**115**]{}, 137001 (2015). J. Cui, B. Roy, M. A. Tanatar, S. Ran, S. L. Bud’ko, R. Prozorov, P. C. Canfield, and Y. Furukawa, Phys. Rev. B [**92**]{}, 184504 (2015). B. Cheng, B. F. Hu, R.
H. Yuan, T. Dong, A. F. Fang, Z. G. Chen, G. Xu, Y. G. Shi, P. Zheng, J. L. Luo, and N. L. Wang, Phys. Rev. B [**85**]{}, 144426 (2012). D. G. Quirinale, V. K. Anand, M. G. Kim, Abhishek Pandey, A. Huq, P. W. Stephens, T. W. Heitmann, A. Kreyssig, R. J. McQueeney, D. C. Johnston, and A. I. Goldman, Phys. Rev. B [**88**]{}, 174420 (2013). S. Ran, S. L. Bud’ko, D. K. Pratt, A. Kreyssig, M. G. Kim, M. J. Kramer, D. H. Ryan, W. N. Rowan-Weetaluktuk, Y. Furukawa, B. Roy, A. I. Goldman, and P. C. Canfield, Phys. Rev. B [**83**]{}, 144517 (2011). S. Ran, S. L. Bud’ko, W. E. Straszheim, J. Soh, M. G. Kim, A. Kreyssig, A. I. Goldman, and P. C. Canfield, Phys. Rev. B [**85**]{}, 224528 (2012). P. C. Canfield, [*in Properties and applications of complex intermetallics*]{}, edited by E. Belin-Ferré (World Scientific Co. Pte. Ltd, Singapore, 2010), page 93. P. C. Canfield and Z. Fisk, Philos. Mag. B [**65**]{}, 1117 (1992). A. I. Goldman, D. N. Argyriou, B. Ouladdiaf, T. Chatterji, A. Kreyssig, S. Nandi, N. Ni, S. L. Bud’ko, P. C. Canfield, and R. J. McQueeney, Phys. Rev. B [**78**]{}, 100506(R) (2008). Y. Furukawa, B. Roy, S. Ran, S. L. Bud’ko, and P. C. Canfield, Phys. Rev. B [**89**]{}, 121109 (2014). T. Moriya, J. Phys. Soc. Jpn. [**18**]{}, 516 (1963). A. Narath and H. T. Weaver, Phys. Rev. [**175**]{}, 373 (1968). P. Jeglič, A. Potočnik, M. Klanjšek, M. Bobnar, M. Jagodič, K. Koch, H. Rosner, S. Margadonna, B. Lv, A. M. Guloy, and D. Arčon, Phys. Rev. B [**81**]{}, 140511(R) (2010). A. Smerald and N. Shannon, Phys. Rev. B [**84**]{}, 184437 (2011). K. Kitagawa, N. Katayama, K. Ohgushi, M. Yoshida, and M. Takigawa, J. Phys. Soc. Jpn. [**77**]{}, 114709 (2008). M. Hirano, Y. Yamada, T. Saito, R. Nagashima, T. Konishi, T. Toriyama, Y. Ohta, H. Fukazawa, Y. Kohori, Y. Furukawa, K. Kihou, C.-H. Lee, A. Iyo and H. Eisaki, J. Phys. Soc. Jpn. [**81**]{}, 054704 (2012). F. L. Ning, K. Ahilan, T. Imai, A. S. Sefat, M. A. McGuire, B. C. Sales, D. Mandrus, P. Cheng, B.
Shen, and H.-H. Wen, Phys. Rev. Lett. [**104**]{}, 037001 (2010). R. Zhou, Z. Li, J. Yang, D. L. Sun, C. T. Lin, and G.-q. Zheng, Nat. Commun. [**4**]{} (2013).