USAFRICOM and the Militarization of the African Continent: Combating China's Economic Encroachment

As the Obama administration claims to welcome the peaceful rise of China on the world stage, recent policy shifts toward an increased US military presence in Central Africa threaten deepening Chinese commercial activity in the Democratic Republic of the Congo, widely considered the world's most resource-rich nation.

Since the time of the British Empire and the manifesto of Cecil Rhodes, the pursuit of treasures on the "hopeless continent" has demonstrated the expendability of human life. Despite decades of apathy among the primary resource consumers, the growing reach of social media propaganda has ignited public interest in Africa's long-overlooked social issues. In the wake of celebrity-endorsed pro-intervention publicity stunts, public opinion in the United States is now being mobilized in favor of a greater military presence on the African continent. Following the deployment of one hundred US military personnel to Uganda in 2011, a new bill has been introduced in Congress calling for the further expansion of regional military forces in pursuit of the Lord's Resistance Army (LRA), an ailing rebel group allegedly responsible for recruiting child soldiers and committing crimes against humanity.

Even as the administration professes goodwill toward Beijing, recent policy shifts toward an American Pacific Century indicate a desire to maintain the capacity to project military force toward the emerging superpower. In addition to establishing a permanent military presence in northern Australia, the construction of an expansive military base on South Korea's Jeju Island signals growing antagonism toward Beijing. Once completed in 2014, the base will have the capacity to host up to twenty American and South Korean warships, including submarines, aircraft carriers and destroyers, in addition to Aegis ballistic-missile defense systems. In response, Chinese leadership has referred to the increasing militarization of the region as an open provocation.

On the economic front, China has been excluded from the proposed Trans-Pacific Partnership Agreement (TPPA), a trade agreement intended to administer US-designed international trading regulations throughout Asia, to the benefit of American corporations. As further fundamental policy divisions emerge following China and Russia's veto of a UN Security Council resolution mandating intervention in Syria, the Obama administration has begun using alternative measures to exert new economic pressure on Beijing. The United States, along with the EU and Japan, has called on the World Trade Organization to block Chinese-funded mining projects in the US, in addition to seeking a freeze on World Bank financing for China's extensive mining projects. In a move to counteract Chinese economic ascendancy, Washington is crusading against China's export restrictions on minerals that are crucial components in the production of consumer electronics such as flat-screen televisions, smartphones, laptop batteries and a host of other products. In a 2010 white paper entitled "Critical Raw Materials for the EU," the European Commission cites the immediate need for reserve supplies of tantalum, cobalt, niobium and tungsten, among others; the US Department of Energy's 2010 white paper "Critical Materials Strategy" likewise acknowledged the strategic importance of these key components.
Coincidentally, the US military is now attempting to increase its presence in what is widely considered the world's most resource-rich nation, the Democratic Republic of the Congo. China's unprecedented economic transformation has relied not only on consumer markets in the United States, Australia and the EU, but also on Africa as a source of a vast array of raw materials. As Chinese economic and cultural influence in Africa expands exponentially, symbolized by the new $200 million African Union headquarters funded solely by Beijing, the ailing United States and its leadership have expressed dissatisfaction with their diminishing role in the region. During a diplomatic tour of Africa in 2011, US Secretary of State Hillary Clinton irresponsibly insinuated that China is perpetuating a creeping "new colonialism." At a time when China holds an estimated $1.5 trillion in American government debt, Clinton's comments remain dangerously provocative. As China, backed by the world's largest foreign currency reserves, begins to offer loans to its BRICS counterparts in RMB, the prospect of emerging nations resisting the New American Century appears increasingly assured.

While the success of Anglo-American imperialism relies on its capacity to militarily drive target nations into submission, today's African leaders are not obliged to do business with China, although doing so may be to their benefit. China annually invests an estimated $5.5 billion in Africa; in 2009 only 29 percent of its direct investment went to the mining sector, while more than half was directed toward domestic manufacturing, finance and construction industries, which largely benefit Africans themselves, despite reports of worker mistreatment. China's deepening economic engagement in Africa and its crucial role in developing the mineral sector, the telecommunications industry and much-needed infrastructure projects are creating "deep nervousness" in the West, according to David Shinn, the former US ambassador to Burkina Faso and Ethiopia.

In a 2011 Department of Defense white paper entitled "Military and Security Developments Involving the People's Republic of China," the US acknowledges the maturity of China's modern hardware and military technology, and the likelihood of Beijing reacting with hostility to further military cooperation between the United States and Taiwan. The document further indicates that "China's rise as a major international actor is likely to stand out as a defining feature of the strategic landscape of the early 21st century." The Department of Defense also concedes uncertainty as to how China's growing capabilities will be wielded on the world stage. Although a US military presence in Africa (under the guise of fighting terrorism and protecting human rights) aimed specifically at countering Chinese regional economic authority may not incite tension in the same way a US presence in North Korea or Taiwan would, the potential for brinkmanship exists and will persist. China maintains the largest standing army in the world, with 2,285,000 personnel, and is working to challenge the regional military hegemony of America's Pacific Century with its expanding naval and conventional capabilities, including an effort to develop the world's first anti-ship ballistic missile. Furthermore, China has begun testing advanced anti-satellite (ASAT) and anti-ballistic-missile (ABM) weapons systems, extending the US-China rivalry into space warfare.
The concept of US intervention in the Democratic Republic of the Congo, South Sudan, the Central African Republic and Uganda under the pretext of disarming the Lord's Resistance Army serves an ultimately fraudulent purpose. The LRA has been in operation for over two decades and presently remains in an extremely weakened state, with approximately 400 soldiers. According to the LRA Crisis Tracker, a digital crisis-mapping platform launched by the Invisible Children group, not a single case of LRA activity has been reported in Uganda since 2006. The vast majority of reported attacks are presently taking place in the northeastern Bangadi region of the Democratic Republic of the Congo, at the foot of a tri-border expanse between the Central African Republic and South Sudan. The existence of the Lord's Resistance Army should rightfully be disputed, as the cases of LRA activity reported by the US State Department-supported Invisible Children rely on unconfirmed reports, cases where LRA activity is merely presumed or suspected. Given the extreme instability in the northern DRC after decades of foreign invasion and countless rebel insurgencies, the investigative infrastructure needed to sufficiently examine and confirm the LRA's presence is simply not in place. The villainous branding of Joseph Kony may well be deserved; however, it cannot be overstated that the LRA threat is wholly misrepresented in recent pro-intervention US legislation. An increasing US presence in the region exists only to curtail the increasing economic presence of China in one of the world's most resource- and mineral-rich regions.

The Lord's Resistance Army was originally formed in 1987 in northern Uganda by members of the Acholi ethnic group, who were historically exploited for forced labor by British colonialists and later marginalized by the nation's dominant Bantu ethnic groups following independence. The LRA originally aimed to overthrow the government of current Ugandan President Yoweri Museveni in response to a campaign of genocide waged against the Acholi people. The northern Ugandan Acholi and Langi ethnic groups have been historically targeted and ostracized by successive Anglo-American-backed administrations. In 1971, Israeli and British intelligence agencies engineered a coup against Uganda's socialist President Milton Obote, giving rise to the disastrous regime of Idi Amin. In a detailed report on Museveni's atrocities, Ugandan writer Edward Mulindwa offers: "During the 22-year war, Museveni's army killed, maimed and mutilated thousands of civilians, while blaming it on rebels. In northern Uganda, instead of defending and protecting civilians against rebel attacks, Museveni's army would masquerade as rebels and commit gross atrocities, including maiming and mutilation, only to return and pretend to be saviors of the affected people." Despite such compelling evidence of brutality, Museveni has been a staunch US ally since the Reagan administration and received $45 million in military aid from the Obama administration for Ugandan participation in the fight against Somalia's al-Shabaab militia. Since the abhorrent failure of the 1993 US intervention in Somalia, the US has relied on the militaries of Rwanda, Uganda and Ethiopia to carry out US interests by proxy. Since colonial times, the West has exploited ethnic differences in Africa for political gain.
In Rwanda, the Belgian colonial administration exacerbated tension between the Hutu, who were subjugated as a workforce, and the Tutsi, who were seen as extenders of Belgian rule. From the start of the Rwandan civil war in 1990, the US sought to overthrow the 20-year reign of Hutu President Juvénal Habyarimana by installing a Tutsi proxy government in Rwanda, a region historically under the influence of France and Belgium. Prior to the outbreak of the civil war, the Tutsi Rwandan Patriotic Army (RPA), led by current Rwandan President Paul Kagame, was part of Museveni's Uganda People's Defence Forces (UPDF). Ugandan forces invaded Rwanda in 1990 under the pretext of Tutsi liberation, despite the fact that Museveni had refused to grant citizenship to Tutsi-Rwandan refugees living in Uganda at the time, a move that helped set the stage for the 1994 Rwandan genocide. Kagame himself was trained at the U.S. Army Command and General Staff College (CGSC) in Leavenworth, Kansas before returning to the region to oversee the 1990 invasion of Rwanda as commander of the RPA, which received supplies from US-funded UPDF military bases inside Uganda. The invasion of Rwanda had the full support of the US and Britain, with training provided by US Special Forces in collaboration with the US mercenary outfit Military Professional Resources Incorporated (MPRI).

A report issued in 2000 by Canadian professor Michel Chossudovsky and Belgian economist and senator Pierre Galand concluded that western financial institutions such as the International Monetary Fund and the World Bank financed both sides of the Rwandan civil war, by funding military expenditure out of the external debt of both the Habyarimana and Museveni regimes. In Uganda, the World Bank imposed austerity measures solely on civilian expenditures while overseeing the diversion of state revenue toward funding the UPDF, on behalf of Washington. In Rwanda, the influx of development loans from World Bank affiliates such as the International Development Association (IDA), along with the African Development Fund (ADF) and the European Development Fund (EDF), was diverted into funding the Hutu extremist Interahamwe militia, the main perpetrators of the Rwandan genocide. Perhaps most disturbingly, the World Bank oversaw huge arms purchases that were recorded as bona fide government expenditures, a stark violation of agreements signed between the Rwandan government and donor institutions. Under the watch of the World Bank, the Habyarimana regime imported approximately one million machetes through various Interahamwe-linked organizations, under the pretext of importing civilian commodities. To ensure the donors' reimbursement, a multilateral trust fund of $55.2 million was designated for postwar reconstruction efforts, although the money was not allocated to Rwanda but to the World Bank, to service the debts used to finance the massacres. Furthermore, upon coming to power Paul Kagame was pressured by Washington to recognize the legitimacy of the debt incurred by the previous, genocidal Habyarimana regime. The swap of old loans for new debts (under the banner of post-war reconstruction) was conditional on the acceptance of a new wave of IMF-World Bank reforms, which similarly diverted outside funds into military expenditure prior to the Kagame-led invasion of the Congo, then known as Zaire.
As present-day Washington legislators attempt to increase the US military presence in the DRC under the pretext of humanitarian concern, the well-documented conduct of lawless western intelligence agencies and defense contractors in the Congo since its independence sheds further light on the exploitative nature of western intervention. In 1961, the Congo's first legally elected prime minister, Patrice Lumumba, was assassinated with support from Belgian intelligence and the CIA, paving the way for the thirty-two-year reign of Mobutu Sese Seko. As part of an attempt to purge the Congo of all colonial cultural influence, Mobutu renamed the country Zaire and led an authoritarian regime closely allied to France, Belgium and the US. Mobutu was regarded as a staunch US ally during the Cold War due to his strong stance against communism; the regime received billions in international aid, most of it from the United States. His administration allowed national infrastructure to deteriorate while the Zairian kleptocracy embezzled international aid and loans; Mobutu himself reportedly held $4 billion in a personal Swiss bank account. Relations between the US and Zaire cooled at the end of the Cold War, when Mobutu was no longer needed as an ally; Washington would later use Rwandan and Ugandan troops to invade the Congo, topple Mobutu and install a new proxy regime.

Following the conflict in Rwanda, 1.2 million Hutu civilians (many of whom took part in the genocide) crossed into the Kivu region of eastern Zaire fearing persecution by Paul Kagame's Tutsi RPA. US Special Forces trained Rwandan and Ugandan troops at Fort Bragg in the United States and supported Congolese rebels under the future president, Laurent Kabila. Under the pretext of safeguarding Rwandan national security against the threat of displaced Hutu militias, troops from Rwanda, Uganda and Burundi invaded the Congo and ripped through Hutu refugee camps, slaughtering thousands of Rwandan and Congolese Hutu civilians, many of whom were women and children. Reports of brutality and mass killing in the Congo were rarely addressed in the West, as the international community was sympathetic to Kagame and the Rwandan Tutsi victims of genocide. Both Halliburton and Bechtel (military contractors that profited immensely from the Iraq war) were involved in military training and reconnaissance operations in the effort to overthrow Mobutu and bring Kabila to power.

After deposing Mobutu and seizing control in Kinshasa, Laurent Kabila quickly came to be regarded as an equally despotic leader, eradicating all opposition to his rule; he turned away from his Rwandan backers and called on Congolese civilians to violently purge the nation of Rwandans, prompting Rwandan forces to regroup in Goma in an attempt to capture resource-rich territory in eastern Congo. Prior to becoming president in 1997, Kabila sent representatives to Toronto to discuss mining opportunities with American Mineral Fields (AMF) and Canada's Barrick Gold Corporation; AMF had direct ties to US President Bill Clinton and was given exclusive exploration rights to zinc, copper and cobalt mines in the area. The Congolese wars perpetrated by Rwanda and Uganda killed at least six million people, making it the largest case of genocide since the Holocaust.
The perpetuation of the conflict relied on western military and financial support, and it was fought primarily to usurp the extensive mining resources of eastern and southern Congo; the US defense industry relies on high-quality metallic alloys indigenous to the region, used primarily in the construction of high-performance jet engines. In 1980, Pentagon documents acknowledged shortages of cobalt, titanium, chromium, tantalum, beryllium and nickel; US participation in the Congolese conflict was largely an effort to secure these needed resources. The sole piece of legislation authored by President Obama during his time as a senator was S. 2125, the Democratic Republic of the Congo Relief, Security, and Democracy Promotion Act of 2006. In the legislation, Obama acknowledges the Congo as a long-term interest of the United States and alludes to the threat of Hutu militias as an apparent pretext for continued interference in the region; Section 201(6) of the bill specifically calls for the protection of natural resources in the eastern DRC. The Congressional Budget Office's 1982 report "Cobalt: Policy Options for a Strategic Mineral" notes that cobalt alloys are critical to the aerospace and weapons industries and that 64% of the world's cobalt reserves lie in the Katanga Copper Belt, running from southeastern Congo into northern Zambia. For this reason, the future perpetuation of the military-industrial complex largely depends on control of strategic resources in the eastern DRC.

In 2001, Laurent Kabila was assassinated by a member of his security staff, paving the way for his son Joseph Kabila to assume the presidency dynastically. The younger Kabila derives his legitimacy solely from the support of foreign heads of state and the international business community, owing to his willingness to accommodate foreign plunder. In 2008, Kinshasa signed a minerals-for-infrastructure agreement with Beijing under which China pledged billions in infrastructure spending; the framework of the deal allocated an additional $3 billion to develop cobalt and copper mining operations in Katanga. In 2009, the International Monetary Fund (IMF) demanded renegotiation of the deal, arguing that the agreement between China and the DRC violated the foreign debt relief program for so-called HIPC (Heavily Indebted Poor Countries) nations. The vast majority of the DRC's $11 billion foreign debt owed to the Paris Club was embezzled by the previous regime of Mobutu Sese Seko. The IMF successfully blocked the deal in May 2009, calling for a further feasibility study of the DRC's mineral concessions.

The United States is currently mobilizing public opinion in favor of a greater US presence in Africa under the pretext of capturing Joseph Kony, quelling Islamist terrorism and putting an end to long-standing humanitarian crises. As well-meaning Americans are swayed by highly emotional social media campaigns promoting an American response to atrocities, few realize the role of the United States and western financial institutions in fomenting the very tragedies they are now poised to resolve. While many genuinely concerned individuals naively support forms of pro-war brand activism, the mobilization of ground forces in Central Africa will likely be accompanied by Predator drones and targeted missile strikes of the kind that have been notoriously responsible for civilian casualties en masse. The further consolidation of the US presence in the region is part of a larger program to expand AFRICOM, the United States Africa Command, through a proposed archipelago of military bases in the region.
In 2007, US State Department advisor Dr. J. Peter Pham offered the following on AFRICOM and its strategic objectives: "protecting access to hydrocarbons and other strategic resources which Africa has in abundance, a task which includes ensuring against the vulnerability of those natural riches and ensuring that no other interested third parties, such as China, India, Japan, or Russia, obtain monopolies or preferential treatment." Additionally, during an AFRICOM conference held at Fort McNair on February 18, 2008, Vice Admiral Robert T. Moeller openly declared AFRICOM's guiding principle of protecting "the free flow of natural resources from Africa to the global market," before citing the increasing presence of China as a major challenge to US interests in the region.

The increased US presence in Central Africa is not simply a measure to secure monopolies on Uganda's recently discovered oil reserves; Museveni's legitimacy depends solely on foreign backers and their extensive military aid contributions, and US ground forces are not required to obtain valuable oil contracts from Kampala. The push into Africa has more to do with destabilizing the deeply troubled Democratic Republic of the Congo and capturing its strategic reserves of cobalt, tantalum, gold and diamonds. More accurately, the US is poised to employ a scorched-earth policy by creating dangerous war-like conditions in the Congo, prompting a mass exodus of Chinese investors. In Libya, similarly, the Chinese returned after the fall of Gaddafi to find a proxy government willing to do business only with the western nations that had helped it into power. The ostensible role of the first African-American US president is to export the theatrical War on Terror directly to the African continent, in a campaign to exploit established tensions along tribal, ethnic and religious lines. As US policy theoreticians such as Dr. Henry Kissinger willingly proclaim, "Depopulation should be the highest priority of US foreign policy towards the Third World," the vast expanse of desert and jungle in northern and central Africa will undoubtedly serve as the venue for the next decade of resource wars.
"Previously on The Returned..." "He got out!" "Do you know how stupid you are?" "Do you?" "Maybe he was just hunting." "Hunting what?" "I know who you are." "You killed my mom." "You killed me." "Hello, Henry." "Tell them who you are." "She's Camille." "She's not my cousin." "She's my sister." "Lena, you're not making any sense." "You shouldn't be out of the hospital." "Why don't you tell them why I was in the hospital?" "She wants me dead like her." "Lena?" "Lena?" "Are you okay?" "Camille." "What?" "She says it happened fast." "She didn't even know it was coming." "She wasn't afraid." "What are you talking about?" "She's gone now." "I'm not understanding what's going on here." "Sometimes... sometimes during sex, I see things." "Or hear things." "It just happens." "Come on, get off." "Get off." "I'm..." "I'm gonna go." "Okay." "_" "Clear." "Bag her." "I'm resuming compressions." "She's flatlined." "No pulse." "Stay with us, Lucy." "Come on." "She's catatonic." "She's in defib." "Body temp 92." "Blood pressure dropping." "Time of death: 2:26." "Pupillary reflexes are good." "Brain function's normal." "What's her BP?" "Liz?" "110 over 60." "Is that good?" "For someone we thought was clinically dead?" "Yeah, that's pretty good." "What happened?" "Why am I here?" "Someone attacked you." "You were stabbed." "When they brought you in, you'd lost so much blood that you slipped into a coma." "An hour ago, you went into sudden cardiac arrest." "Your heart stopped." "I was stabbed?" "Where was I stabbed?" "May I?" "If I knew where she was going," "I wouldn't be calling the police." "Yes, I understand she's an adult." "Let me say it one more time." "My daughter is hurt, and she left the hospital." "Thank you." "All right." "Hi, it's Lena." "Leave me a message, and I'll call you back probably." "Honey, it's me again." "I know you're upset, but you have to call me back." "We're all very worried about you, okay?" "I love you." "It's not like she's dead or anything." "Lena's fine." "I can feel her." "Like we used to." "Remember that winter that you took us all sledding in Silver Springs?" "Lena broke her ankle, and I felt it." "Mom and I were still on the top of the mountain, but I knew she was hurt." "Fine, don't believe me." "No one's saying we didn't believe you, honey." "It happened the day of the accident too." "I felt her right before." "What do you mean, you felt her?" "She wasn't sick that day." "She was with Ben having sex." "You guys are so clueless." "Camille?" "Why was Lena running away from you last night?" "Did..." "Did you do something?" "Claire." "You think I hurt her?" "That she's gone because of me?" "Camille." "What if Lena's right?" "What if Camille isn't who she used to be?" "Camille is our daughter." "So is Lena." "I'm sorry, I didn't mean to..." " Who are you?" " I'm Adam." "It's okay." "You're safe." " You're safe." " Where am I?" "My cabin." "Look, I found you passed out by the river." "Just take it easy." "Look, I was gonna take you to the hospital, but... it looks like you've already been, and I figured..." "I figured you left for a reason." "Look, I couldn't just leave you out in the woods, so I brought you here." "What happened to your back?" "It's a long story." "Do you have any aspirin or... or whiskey?" "It hurts like hell." "No." "I have something better though." "Just wait here." "I remember leaving the Dog Star." "I remember walking." "Then what?" "Well, the doctors said I was found in the tunnel under the highway, but I don't remember being there." 
"I don't remember anything else." "What about him?" "Does he look familiar?" "Yeah." "I met him at the Dog Star." "He was looking for a girl who used to work there." "Is he the person who attacked you?" "I don't..." "I don't remember." "But it could be him." "I'm sorry." "It's okay." "We'll talk more later." "You just get some rest now." "Doc?" "Stay here." "Did you hear that?" "Hear what?" "Nothing." "Never mind." "What are the chances of her getting her memories back after something like this?" "Something like this?" "I've never seen something like this." "No one has." "There's no medical precedent." "Three hours ago, this girl was dead." "And now?" "She's fine." "All right." "Thanks." "Yeah." "♪ Dug myself out of a grave, no shovel ♪" "♪ Double flip step and dancing with the devil ♪" "♪ Echoes inside of my brain that say meant away ♪" "♪ Quarter never change, I'm still myself, klepto ♪" "Can I help you?" "I'm Claire Winship." "Lena's mom." "Oh, Mrs. Winship." "Sorry, I'm, like, half asleep." "May I come in?" "Uh, yeah." "Yeah, sure." "So how's Lena?" "I mean, we were really worried about her last night." "That's why I'm here." "Lena never came home." "She's not answering her phone." "Oh." "Yeah, she was saying some pretty crazy shit." "Guess she was kind of out of it, painkillers and all." "What happened to her back?" "The doctors aren't sure." "She should be in the hospital." "Did you call Ben?" "Ben hasn't heard from her." "Is there someone new maybe?" "Somebody I'm not thinking of?" "A boyfriend?" "Well, there isn't, like, one." "What do you mean?" "Well, Lena... sh... she knows a lot of different guys." "But last night, I didn't see her with anyone." "Sorry." "What is it?" "Nettles mostly, some ground-up whitebark pine and a bunch of other stuff." "It's my ma's recipe." "Okay, this is gonna hurt." "But it'll make it better." "Mm." "Trust me." "Okay." "Do you live here by yourself?" "Um, I used to live with my ma and my brother, but my ma died." "And your brother?" "He did something I can't forgive." "I had a sister." "I wasn't good to her either." "What happened?" "I did something unforgivable too." "I think that's why this is happening to me." "I should be dead." "Me too." "But we're not." "We're here." "Maybe it's not too late to be different." "Hey." "I got your message." "Did you find Victor?" "No." "But we picked up a homeless woman we think may've been with him." "Witnesses saw them together at the diner." "She skipped out on the check, so we picked her up this morning." "W... what about Victor?" "She says he walked off while she was in the restroom." "You don't think she did something to him?" "She seems off but not violent." "Look, she says she's George Goddard's wife." "George Goddard's wife died 29 years ago." "She's probably off her meds, but she does seem to know a lot about him." "George was a patient of yours, right?" "I was thinking maybe you could talk to her a little, get her to open up." "If she does know where Victor went, maybe she'll tell you." "Who are you?" "I'm Julie." "I'm a doctor." "A shrink?" "I'm not that kind of doctor." "May I sit down?" "Do you have any food?" "I am starving." "I was George's doctor." "I saw pictures of you in his house." "So then you believe me?" "That I am who I say I am?" "Hmm." "George said you died in the flood." "Do you remember that?" "Mm-hmm." "I remember the sound mostly." "It was like a strong wind, but there was no wind." "And then there was the water." "Oh, it was black as tar." "A stew of sludge." "Broken trees." "Glass." 
"Ugh, floating cars." "Yep." "It felt like the whole world was being swallowed up." "And the little boy, why were you with him yesterday?" "He's like me." "Victor died in the flood too?" "Hmm, you mean Henry?" "No, he died before that." "Do you know how?" "Some men broke into his house looking to rob the place." "That was terrible." "Killed his mother, and then they killed him." "How is this possible?" "The more pertinent question is "why."" "There has to be a reason for us being here, right?" "How do you know if you're dead?" "There's only one way to find out." "You don't seriously believe any of that, do you?" "Look at me." "That woman met Victor at the Triple C two days ago." "She doesn't know anything about him." "She's crazy." "What if she's not?" "What are you saying?" "That she's dead?" "Maybe she and Victor aren't the only ones." "What?" "You mean you?" "Hey, I was there, okay?" "You fought like hell, and you survived." "That man didn't kill you." "You don't get it." "It would be a relief to find out that I'm like her." "It would explain why I haven't been able... to feel my life." "Can you feel that?" "Come with me." "I want to show you something." "Richard and I grew up together." "We dropped out of school and went on the road together to see the country, to... to have adventures." "We made it as far as Caldwell before we ran out of money." "Richard said it would be easy." "Nobody was supposed to be home." "Your mother was reading a book in the living room when we came through the back door." "I had no idea he was gonna do... what he did." "And he told me to go search the house, and that's... that's when I found you." "I'm sorry." "I brought you out here to show you that the man who killed your mother, who killed you, he's dead now." "And he died a slow and horrible death from cancer." "It took... it took five years to eat him up." "Henry, I've spent the last 29 years of my life trying to live... trying to make up for that night." "I've devoted myself to trying to... help others." "But I see now that maybe that's not enough." "Maybe that's why... you've come back." "I am asking for your forgiveness, Henry." "But I understand if you can't give it." "Lena?" "Hey." "I'm fine." "You can stop calling." "Tell me where you are." "I'll come get you." "No." "Lena, you should be in a hospital." "That wound is serious." "It's getting better, really." "You don't have to worry." "You ran away from a hospital." "You won't tell us where you are or who you're with." "I know I haven't been there for you." "I'm sorry." "It's a little late for that, don't you think?" "Lena, don't..." "Hey, it's Ben." "Leave a message." "Ben, it's Alice." "Sorry things got so crazy last night." "Lena was pretty messed up, huh?" "Anyway, call me back when you get this." "I want to see you." "She's with someone." "Did she say that?" "Do you know about out daughter's reputation?" "So when you were hanging out there at the Dog Star with your friend... what did you do?" "You just sit and watch her get drunk and go home with strangers?" "You know, legally she is an adult." "She can do whatever she wants to." "You may not like the way she's living her life, but at least she's living it." "You've been at the bottom of a bottle for four years." "And you've been locked in this house lighting candles to your dead daughter's shrine." "I think..." "I think you're pissed off you got no one left to mourn." "Camille's back." "Lena doesn't need you." "And you don't know what the heck to do." 
"Can you imagine being anything other than the perfect grieving martyr?" "Get out of here." "Get out." "What are you doing?" "Sorry, I was just looking for something to wear." "The hospital gown's kind of gross." "Is this, uh, is this your mother's?" "Mm-hmm." "Are you sure it's okay?" "Turn around." "Oh." "Okay, you can turn around." "It's my brother." "Just, uh, wait here, okay?" "What are you doing here?" "I just want to talk." "Well, I don't want to talk." "You need to go." "Is someone here?" "What?" "No." "Do you have someone here?" "It's Ma." "She's back." "Like me." " Ma's back?" " Yes." "But she doesn't want to see you." "You killed her son." "She doesn't want to see you ever again." "What's wrong?" "Tony's your brother?" "Tony's brother is dead." "How long have you been back?" "A few days." "Tony told everyone you died because you were sick." "He lied." "Then how did it happen?" "I was a bad person." "You don't seem like a bad person." "Well, you don't know me." "You took care of me." "Oh." "I have been in the dark." "Give yourself a break." "Jack's right." "Now that Camille's back, I..." "I've lived in the shadow of her death." "Looking back is the only thing I know how to do." "Maybe it's time I started seeing what's right in front of me." "Bit of a role reversal, huh, Tony?" "Everything okay?" "It's fam... family stuff." " You want your usual?" " No, no, no, no." "Actually, not tonight." "I'm looking for Lena." "I think she might have gone home with someone last night." "Did she meet anybody here?" "Was she hanging out with anybody?" "No, um..." "No." "Sorry, I didn't." "Thanks." "I didn't think you were coming." "Is anybody else home?" "Whoa." "This is weird." "It's, um... it's exactly how it was when..." "Sit down." "Who are you?" "You know who I am." "Camille?" "Ben." "Ben." "Victor." "Victor, why did you come to me?" "Is it because I'm like you?" "You're the fairy." "My mom said you would protect me." "I'm not a fairy." "I know you are." "Okay." "I'll be the fairy." "Whatever you want." "You didn't have to come." "I'm looking for Lena." "She took off from here yesterday." "Oh." "Sorry." "I hope she's okay." "I came to see you yesterday, but you were unconscious." "You know, when I was standing there beside your bed, I was..." "I wasn't thinking about the money you took or the lies that you told." "I was thinking how I wanted you to wake up." "Hmm." "Here you are." "You should hate me." "Maybe I should." "But I don't." "Everything I said about Camille," "I made it all up." "I know." "It doesn't matter." "Sometimes I even convinced myself that it was true... that I really had a gift." "I'm sorry." "I believed you 'cause I wanted to." "But that was before." "Since I woke up..." "It's really happening now." "What are you talking about?" "Voices in my head, whispering things, scary things, things I don't want to hear." "I don't expect you to believe me." "He was supposed to be at your baseball game." "He told your mom he was tied up at work." "The truth was he was at the bar." "He said there was ice on the road... but really he was drunk." "He's sorry you had to grow up without a father." "And he wants me to tell you..." "What?" "He's scared for you."
hich is smaller: 0 or o? 0 Let j = -10301/34 - -303. Are j and 1 equal? False Let l(c) = 3*c + 9. Let o be l(-6). Which is greater: -10 or o? o Let u(t) = -2*t + 1. Let s be u(-3). Suppose h = -3*a + s + 2, 0 = -a - h + 5. Suppose 3*m + 18 = a*y, -m - 5*y + 6 = -5. Is -5 greater than or equal to m? False Let v = -6 - -6.3. Let m = 0.1 + v. Let j = -0.1 + m. Is 2/7 <= j? True Let n be -3*3/(-9) + (5 - 1). Which is smaller: n or 7/2? 7/2 Let g be -1 + (-29)/(-18)*-44. Let k = g + 72. Is k <= -1? False Let j be (-24)/(-9) - (-4 + 7). Is -26 < j? True Let k(j) = -j**2 - 6*j - 3. Let v be k(-5). Let i be (v/(-6))/((-3)/9). Let p be (0/(-1))/(-2 - -1). Are i and p unequal? True Let w = 0 + 2. Let h = w - 2. Let q = -0.1 + h. Is q equal to 1/2? False Let a = -16 + 26. Let z = a + -1. Is -1 <= z? True Let i(s) = 5*s + 0*s + 7 - 5*s + 2*s. Let p be i(-3). Do p and -1/5 have the same value? False Let x be (-36)/12*2/(-9). Let w = -47/75 + 2/75. Which is greater: w or x? x Suppose 6*r = 4*r - 20. Which is smaller: r or -44/5? r Let s = -6.4 + 7.42. Let f = -0.02 + s. Which is greater: -1 or f? f Let u be (85/(-50) + 2)*-5. Is 5 bigger than u? True Let b(h) = 5*h + 5. Let o be b(-5). Let d be 8/o - 16/10. Which is smaller: 0 or d? d Let x(i) = 4 + i**2 + i - 3*i + 4*i. Let k be x(-3). Let f be 3/(1 - k/4). Which is smaller: f or -5? -5 Let l = -1/111 - -344/1221. Are l and -1 non-equal? True Let b be (-3)/(-12) + (-5)/4. Let o be ((-6)/99)/(b/(-3)). Suppose -2*c - 4 = 2*c. Which is greater: o or c? o Suppose z + 0*g = -g + 3, 5*z = 3*g + 23. Suppose -z*x - 1 - 3 = 0. Which is smaller: 2/21 or x? x Let r(w) = -w**2 + 9*w - 8. Let t be r(7). Suppose -5*i + t*h = 3*h - 26, 30 = 5*i - 5*h. Suppose -4*l - 5*a + 16 = 0, 5*l = -2*a + 7 + 13. Is l smaller than i? False Let p(a) = -2*a - 5. Let j be p(-5). Suppose h = -3*s + s + 17, s = j*h - 41. Let t be 5 - (2 - h/3). Is 6 at least t? True Suppose 8 + 20 = 2*s + 4*h, -3*h = -2*s. Suppose s*b - 5 = b. Which is greater: b or 0? b Let n = 16.1 - 1.1. Let q = n - 13. Which is smaller: q or 1? 1 Let s = -48.8 + 55. Let q = s - 6. Let l = -0.7 - 0.3. Is l less than or equal to q? True Let v be 1/((-1)/((-1)/(-3))). Suppose 9*j + 0*j - 9 = 0. Is v at most as big as j? True Let b = -34 - -36. Let n be 4/(-14) + (-32)/(-14). Is b < n? False Suppose -5*n - 10 = 4*p, -2*n - 5 = -4*p - 1. Suppose p = g + 14 + 22. Let v be ((-2)/(-6))/(16/g). Do v and -2 have the same value? False Let t(y) = -2*y**2 + 3*y. Let n be t(3). Let a = -10 - n. Let g be (-1)/2*(5 - -1). Which is smaller: a or g? g Let d = 22 - 18. Is 2 smaller than d? True Suppose 0 = -4*w - x - 15, -5 = w + 3*w + 3*x. Let l = w - -5. Which is bigger: 2/5 or l? 2/5 Let d be (-1)/(-2)*(3 - -3). Suppose m = 4*r, 4*m + 20 = d*r - 7*r. Is m at most -4? True Let u(m) = 0*m + 4 + 3*m + m**2 - 7*m. Let w be u(5). Let k be (-1)/(-6*w/(-12)). Which is smaller: 0 or k? k Let s(h) = -2*h - 1. Let b(t) = 3*t + 3. Let y(g) = -3*b(g) - 5*s(g). Let k be y(4). Let w = 1/4 - 3/8. Does k = w? False Let f = 29 - 20. Suppose -x = -2*a - 3, f = -4*a + 2*x - x. Let u be a/12 + 14/(-72). Which is greater: -1 or u? u Let h be 24/11 + (-4)/22. Let z = h - -3. Suppose -3*s = -z*f - 2 - 4, -5*s = 4*f - 10. Which is smaller: s or 3? s Let t = 284 + -282. Let p = 2 + 0. Do p and t have different values? False Let y = 31 + -30.93. Which is smaller: 1/5 or y? y Let h = 6 + -12. Let d be -1 + (-2)/((-4)/h). Let p be (1 - 4) + (0 + 2 - 2). Are d and p unequal? True Let r(u) = -u**2 - 7*u - 7. Let w be r(-6). Which is greater: w or 4/17? 
4/17 Let j = -99 - -62. Is j smaller than -38? False Let g be 1/((3/1)/33). Let v = g - 11. Which is smaller: v or -1? -1 Suppose -3*w + 20 = w. Suppose -3*l = -4*h - 2*l - 10, h = w*l - 12. Is h > -5? True Let y = 52.9 + -50. Let p = 2.5 + 0.5. Let j = y - p. Which is smaller: j or 1? j Let x = 13 - 13. Is -5 greater than x? False Suppose -l - 13 = 12*l. Which is smaller: l or -1/124? l Suppose -3*b + 5*b = 4. Let x be (-1)/((6/b)/3). Suppose -5*v = 4 + 1. Is v smaller than x? False Let m = -3.07 + 3. Let t = 10346 + -341389/33. Let r = 6/11 - t. Which is greater: r or m? m Suppose 5*r = -l + 26, -2*l + r + 25 = 2*r. Let v = l - 8. Suppose 3*z + 6 = -v. Is z greater than or equal to -3? True Let l = -12467/19 + 656. Is l < 0? True Let x = 119 - 593/5. Let d be (-2 + 2)/((-2)/2). Is x smaller than d? False Let d = -14.2 - -21. Let i = d + -7. Is i greater than -1/5? False Suppose -28*x = -31*x + 39. Suppose -16 = -y - 3. Is y greater than or equal to x? True Let q be (-109)/56 + 2/1. Let n = 224628/13 + -188685479/10920. Let a = n - q. Is a bigger than -2/3? True Let t be (0/2)/((-4)/(-2)). Suppose -3*v = j - 44, -4*v - 8*j + 3*j + 77 = 0. Let n = -40/3 + v. Is t greater than n? True Let j = 0 + -2. Let u be (-20)/12 + (-2)/6. Do j and u have different values? False Let z = 11.03 + -0.03. Let o = z + -9.8. Let d = o + 0.8. Which is greater: d or -1? d Suppose 2*g = -2*g. Let t be -2 + (g - (-1 - 0)). Let x be (t + -2 + 1)/(-2). Which is smaller: x or 2? x Let a = -32 + 31. Which is smaller: a or -1/11? a Let g = 83 + -88. Is g at least -7? True Let n be 1*2*(-75)/(-30). Let x = 2/69 + -77/276. Which is smaller: n or x? x Let u be 136/153*3/2. Which is greater: u or 0? u Suppose -4*n + 3*n - 44 = 0. Let a be (n/33)/(2*1). Is -1 bigger than a? False Suppose -5*w + 45 = 15. Which is smaller: w or -6? -6 Let w be 90/(-50) - (-2 + 0). Is 0.1 not equal to w? True Let v(x) = x**3 + 7*x**2 + 2*x + 15. Let z be v(-7). Which is smaller: z or 2/9? 2/9 Let h be ((-7)/(-21))/((-2)/6). Let j(w) = -w**2 + 5*w - 1. Let z be j(5). Is z smaller than h? False Let i be (-18)/(-8) - (-2)/(-8). Suppose -i*w - w + 6 = 0. Suppose 2*o + 4*h - 7 = 7, 2*h = -3*o + 13. Which is smaller: o or w? w Let k be (3/5)/((-12)/8). Is k greater than 9/4? False Let m = 551/36 + 7/36. Is 15 at most m? True Let x be (-7)/((-140)/8) - 8/10. Which is bigger: 1/2 or x? 1/2 Let o(y) = 8*y**3 + 3*y**3 - 14 - 11*y**2 - 8*y - 3*y - 12*y**3. Let n be o(-10). Is n greater than or equal to -3? False Let l = -861/5 - -179. Is l equal to 8? False Let x = 1/2 - 3/10. Let j(p) = -p**3 - 7*p**2 + 1. Let l(g) = 2*g**3 + 13*g**2 - 2. Let b(f) = -11*j(f) - 6*l(f). Let r be b(1). Is x bigger than r? True Let l = 0.042 - 0.042. Let m = -0.1 + 0.1. Is m at most l? True Let g be 106 + (-5)/(10/6). Let n = 1131/11 - g. Let r be 0*2/(4/(-2)). Which is smaller: n or r? n Let u = 175/828 - -1/92. Which is greater: 7 or u? 7 Let q = 3.8 + -0.8. Let h = -1 + q. Which is greater: h or -2/5? h Let l = -534.94 - -550. Let o = l - 15. Is o at least as big as 1? False Let y(b) = b**2 + 4*b + 1. Let d(j) = j**2 + j. Let c(g) = 4*d(g) - y(g). Let l be c(1). Is l smaller than 2? False Let b(a) = 4*a**2 - a + 5. Let x be b(-3). Is x at least as big as 44? True Let c(s) = -s. Let o be c(-1). Which is bigger: o or -1/5? o Let a be 48/(-18) + (-2)/(-3). Let s be 1 - 7 - (a - 1). Which is smaller: -3/2 or s? s Let p = -11 + 25. Which is bigger: -1 or p? p Let a(q) = -q**2 + 15*q + 17. Let n be a(16). Are n and 1 unequal? False Let u = 334/7 - 48. 
Which is smaller: u or -3/4? -3/4 Let o = 2088/6251 - 6/329. Let n = 55 + -55. Is n < o? True Let w be (-13 - -9) + 5*(2 + -1). Which is greater: -2/5 or w? w Let b be 34/(-8) + 1/4. Let q be -2 - b - (-249)/(-114). Let g = -153/418 - q. Is 0 > g? True Suppose -s = -2 - 2. Suppose -z = s*z. Let o(f) = -f**2 + 5*f + 5. Let g be o(6). Which is smaller: g or z? g Suppose 6 = 3*v + 21. Let m = 15 - 22. Is m > v? False Let i = -33 + -59. Which is greater: i or -91? -91 Let u(d) = -d - 1. Let z be u(-5). Suppose -3*v = v - z. Which is greater: -0.1 or v? v Let j = -2 + 1. Let a be (-16)/9 - (1 - (0 + 3)). Which is smaller: a or j? j Let o = 1.96 - -0.04. Let g = -2.1 + o. Let p(x) = -x**2 - 11*x. Let a be p(-11). Which is bigger: a or g? a Let h = 0.152 + -1.152. Is 14/29 greater than h? True Let o be -1 + (-2*1)/2. Let d be (-2)/o - 2/3. Let i = -0.39 + 0.79. Do i and d have different values? True Let a = -27 + -13. Let q = 0 - -1. Let o be q/5 - 112/a. Is o not equal to 3? False Let w be (-2)/(6/(-2) + 2). Suppose 2*x + 0 - w = 0. Suppose 2 = z + 6, -2*z = -3*a + 14. Which is bigger: a or x? a Let a =
(n) = -n**2 + 109*n - 762. List the prime factors of q(79). 2, 3, 67 Let r(i) = -16*i + 98. Let x be r(6). Suppose 5*c + x - 1337 = 0. What are the prime factors of c? 3, 89 What are the prime factors of (1112/20)/((-13)/((-93925)/10))? 17, 139 Let l(m) = 3*m**2 + 48*m + 33. Let c be (-18)/27*(-15)/(-2). Let k(o) = 5*o**2 + 72*o + 50. Let r(y) = c*k(y) + 8*l(y). List the prime factors of r(13). 157 Suppose -2628*y = -2597*y - 1086612. What are the prime factors of y? 2, 3, 23, 127 Let d(w) = 20*w**2 + 12*w**2 + 4*w**2 + 11*w**2 - w. Let u be d(1). Suppose -12 = z - u. List the prime factors of z. 2, 17 Suppose g = -6*v + 4601, -367*g + 371*g = -4*v + 3084. Let d(f) = 18*f**3 - 2*f**2 - f - 2. Let p be d(-3). Let r = v + p. What are the prime factors of r? 263 Let v(x) = 90*x**2 + 10*x + 12. Let o(a) = -89*a**2 - 8*a - 11. Let g(d) = -7*o(d) - 6*v(d). List the prime factors of g(1). 2, 3, 7 Suppose 7*a = 2*a - 2*x + 42305, -a + 8469 = 2*x. List the prime factors of a. 11, 769 What are the prime factors of 2/16*-1 - (6 + 26121795/(-280))? 2, 46643 Let g(x) be the second derivative of x**4/12 - x**3/2 - 4*x**2 - x - 5. Suppose 0 = -a - 6. What are the prime factors of g(a)? 2, 23 Let o(t) = 65*t**2 + 7*t + 19. Let h(q) = -21*q**2 - 2*q - 6. Let z(f) = -7*h(f) - 2*o(f). What are the prime factors of z(-4)? 2, 3, 23 Let y(l) = -14*l**2 + 3 - 2*l**2 - 62*l + 20*l**2. What are the prime factors of y(23)? 3, 7, 11 Let y = 28 + -27. Suppose 4*b = 5*t + 1, -4*t - y - 6 = 3*b. List the prime factors of ((-350)/8 + t)/(1/(-4)). 179 Suppose -5*n - z + 16320 = -75251, -4*z = 5*n - 91559. What are the prime factors of n? 3, 5, 11, 37 Let b(g) = -11*g**3 - 13*g**2 - 19*g + 76. Let c(z) = 9*z**3 + 12*z**2 + 19*z - 77. Let m(n) = 5*b(n) + 6*c(n). List the prime factors of m(6). 2, 17 Let w(h) = -795*h + 2. Let t be w(-2). Suppose -5*d + 4*b = -t, -d + 5*b - 617 = -3*d. Suppose -5*g + d = -g. List the prime factors of g. 79 Let y = 89 - 48. Suppose -y + 211 = -5*j. Let h = j - -81. What are the prime factors of h? 47 Let y(h) = -3*h - 9. Let r be y(-5). Suppose d - r = -6. Suppose -2*q - p = -d*q - 154, 3*q - 4*p - 231 = 0. List the prime factors of q. 7, 11 Let n(o) = -2*o**2 + 21*o - 36. Let u be n(8). Suppose -8*q + 7*q + 1012 = u*t, 5*q + 232 = t. What are the prime factors of t? 2, 3, 7 Let d(g) = g**3 + g**2 - 2*g - 2. Let x be d(-1). Suppose x = 2*u - 51 - 11. What are the prime factors of u? 31 Suppose 2189*n + 794280 = 2204*n. List the prime factors of n. 2, 6619 Let v(m) = -m**3 - 21*m**2 - 111*m - 7. Let n be v(-11). Let p = 5 - 3. Suppose -p*s + 3*r + 221 = n*r, -3*s - r + 333 = 0. List the prime factors of s. 2, 7 Let a(f) = 11829*f - 8103. What are the prime factors of a(5)? 2, 3, 47, 181 Let a(g) = -24 - 16 + 211*g - 23 - 18 + 75. List the prime factors of a(1). 5, 41 Let l = -29 + 17. Let m be (l/9)/((-2)/18). List the prime factors of ((-2)/6)/(10/m - 1). 2 Suppose -37*n + 62770 = -27*n. List the prime factors of n. 6277 List the prime factors of (88232/410 + 88)/(0 + (-2)/(-245)). 2, 7, 379 Let y(s) = -s. Let w(r) = 7*r + 2. Let q(o) = w(o) - 4*y(o). Let a be q(-14). List the prime factors of a/(-6) - (-12)/18. 2, 13 Suppose -5*d + 3*k + 2550 = 0, -6*d + 2*d + 5*k + 2040 = 0. Suppose -x - 5*n + 122 = -0*x, -d = -5*x - 5*n. List the prime factors of x. 97 Suppose 0 = 5*o, j = -117*o + 113*o + 3338. What are the prime factors of j? 2, 1669 Let y(d) = -28953*d**3 + 6*d**2 + 9*d + 3. What are the prime factors of y(-1)? 3, 3217 Let g be (-12)/(-30) + 46/10. 
Suppose w - 269 = 5*t, -2*w - 4*t = -g*t - 583. Suppose -w - 36 = -5*z. List the prime factors of z. 2, 3, 11 Suppose 2*c - i - 126 = 0, -2*i + 4*i = c - 60. Suppose 4*o - 5*z = -z + 32, 5*z + 25 = 0. Suppose c = u + o*u. What are the prime factors of u? 2 Suppose -3*g - 834 = -0*b - 3*b, 0 = -5*g + 4*b - 1389. Let f = 382 - g. Suppose 5*a - 838 = -2*c + c, 4*a - 3*c = f. What are the prime factors of a? 167 Let h(n) = 6*n**3 - 84*n**2 - 55*n - 10. List the prime factors of h(20). 2, 3, 5, 443 Suppose -5*n = -a - 3, 3*n - 7*a = -11*a + 11. List the prime factors of 474 + (0 - n)*-3. 3, 53 Let q(u) be the second derivative of u**5/5 - 2*u**4/3 - 31*u**2/2 + 37*u. List the prime factors of q(7). 13, 73 Let c(n) = n**2 - n - 5. Let t be c(-2). Let d(j) = 4*j**2 - 109*j**3 - 5*j**2 + 6*j + t - 5*j - 20*j**3. List the prime factors of d(-1). 2 Let g(b) = -39*b**2 + 5*b - 7. Let m be g(5). Let s = m - -1852. List the prime factors of s. 5, 179 List the prime factors of (-1395)/1116 + 104161/4. 13, 2003 Let p be (-1 - 44)*(-64)/(-160). List the prime factors of (-4528)/(-12) + 7 + 114/p. 2, 3, 7 Let y = -275 - -47. Let j = -183 - y. What are the prime factors of j? 3, 5 Let y(f) = -f**3 + 56*f**2 + 117*f - 51. List the prime factors of y(56). 3, 11, 197 Let o(t) = t**2 + 9*t - 47. Let i be o(4). Suppose k + 3*l - 1490 = 0, -13*k + 16*k - i*l - 4442 = 0. What are the prime factors of k? 2, 7, 53 Let j = -139 - -142. Suppose 5*u - 2*u = j*f + 2145, 2146 = 3*u - 4*f. What are the prime factors of u? 2, 3, 7, 17 Suppose 4*q + 5*t = 9*t + 87556, 0 = -2*q - t + 43769. List the prime factors of q. 2, 31, 353 Suppose 12*v - 145687 - 109978 = -81881. List the prime factors of v. 2, 13, 557 Let p be 416 + 1*(-5 - -10). Suppose -x = 2*z - 10, 6*z + 4*x = z + 25. Suppose z*i + 3*g - p = 0, i = -2*i + 4*g + 270. What are the prime factors of i? 2, 43 Suppose 0 = -4*t - 6*t + 70122 + 102648. List the prime factors of t. 3, 13, 443 List the prime factors of (42/35)/((-30)/(-72200)). 2, 19 Let u be (-29)/(174/(-32760)) - 6. Suppose 15*m = 6*m + u. List the prime factors of m. 2, 3, 101 Let m(k) = 107*k - 167. Let r be m(2). Suppose -r = 2*c - 39, 5*a = 4*c + 3146. List the prime factors of a. 2, 313 Let x be 347/(-2) - (0 - 6/12). Let z = 664 + x. List the prime factors of z. 491 Let k = -4347 - -50709. List the prime factors of k. 2, 3, 7727 Let v be (1128/(-9))/((-22)/99*3). Let j(x) = -4*x**3 + 1. Let m be j(-1). Suppose -m*r + v - 28 = 0. What are the prime factors of r? 2 Suppose -10*k = -12 - 18. Suppose 4*x = -x - k*x. Suppose x = 6*s - 4*s - 8. What are the prime factors of s? 2 Suppose -3277*z + 3268*z + 38754 = 0. List the prime factors of z. 2, 2153 List the prime factors of (19/(-152)*6)/(2/(-112024)). 3, 11, 19, 67 Let n(u) = -u**3 + 4*u**2 - 3*u + 5414. List the prime factors of n(0). 2, 2707 Let m = 22341 - 14721. List the prime factors of m. 2, 3, 5, 127 Suppose 0 = d + 5*d - 18. Suppose l - 20 = -d*l. Suppose 2*z = 4*b - 22, l*z + 31 - 3 = b. List the prime factors of b. 3 Let y(j) = -134*j**2 + 14*j + 7*j + 40 + 135*j**2. List the prime factors of y(-21). 2, 5 Suppose -4*x + v + 176 = -347, 0 = -x - 4*v + 118. What are the prime factors of 3/9 - 32/(-12)*x? 347 Suppose 52295 = z + 4*v, -20*z = -14*z - 3*v - 313608. What are the prime factors of z? 167, 313 Let g(c) = c**2 - 5*c + 10. Let t be g(0). Suppose t*p - 427 - 183 = 0. List the prime factors of p. 61 Suppose -3*q + 137346 = 63*q. What are the prime factors of q? 2081 List the prime factors of 1*-1109*-1*(0 - -1). 
1109 What are the prime factors of (32539 + 6)*4/20? 23, 283 Let b = 369 + -258. Suppose -w + b = -137. Suppose -6*o - w = -10*o. What are the prime factors of o? 2, 31 Suppose 0 = -h + 24 + 50. Suppose 3*o - 202 = -u, o + u = 4*u + h. List the prime factors of (o/6)/(6 - 112/21). 17 Let y = -95 + 84. Let k = y - -15. Suppose f - c - 221 = -k*c, 3*f - 723 = 3*c. What are the prime factors of f? 2, 59 Suppose -9*z = -2*z + 1218. Let c = z - -360. What are the prime factors of c? 2, 3, 31 List the prime factors of (-8)/10 - ((-5802003)/115)/9 - 2. 13, 431 Let r = 5197 + -3637. Let n = -1065 + r. List the prime factors of n. 3, 5, 11 Let x = 43391 - 24891. What are the prime factors of x? 2, 5, 37 Suppose 2*u + 19 = -33. Let i = -22 - u. Suppose -i - 15 = -w. List the prime factors of w. 19 Suppose 0*i + i = -4*r + 3, 23 = 4*r - 3*i. Suppose 0 = r*d + 3*g - 309, -151 = 6*d - 7*d - 5*g. List the prime factors of d. 2, 3, 13 Let i = -426 - -427. What are the prime factors of 4 + -11 + i + 411? 3, 5 List the prime factors of (-93)/434 + 4/(112/1865226). 3, 5, 4441 Suppose j + 4*m - 1260 = 4*j, -3*j - 1236 = 4*m. Let n = -136 - j. List
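The question-answer pairs above can be machine-checked with exact rational arithmetic. The sketch below is a minimal illustration of that idea, reworking two items from the text: the comparison "Let j = -10301/34 - -303. Are j and 1 equal?" and the prime-factor question "Suppose -2628*y = -2597*y - 1086612. What are the prime factors of y?". The trial-division helper is our own illustrative code, not part of the source material.

```python
from fractions import Fraction

# Check "Let j = -10301/34 - -303. Are j and 1 equal?"  (expected answer: False)
j = Fraction(-10301, 34) + 303
print(j == 1, j)            # -> False 1/34

# Check "Suppose -2628*y = -2597*y - 1086612. What are the prime factors of y?"
# Rearranging: -31*y = -1086612, so y = 1086612 / 31 = 35052.
y = 1086612 // 31

def prime_factors(n):
    """Distinct prime factors by trial division (adequate for these sizes)."""
    out, d = [], 2
    while d * d <= n:
        if n % d == 0:
            out.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        out.append(n)
    return out

print(prime_factors(y))     # -> [2, 3, 23, 127], matching the stated answer
```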
Introduction {#s1}
============

Glycosylation is the covalent attachment of an oligosaccharide chain to a protein backbone and is one of the most common protein modifications. The structure and size of the carbohydrate chain can be very diverse and can alter the physico-chemical characteristics of a protein. Two major types of glycosylation, referred to as *N*- and *O*-linked glycosylation, can be distinguished: *N*-glycans are attached to Asn residues of the peptide backbone, while *O*-glycans are connected to Ser or Thr residues. Only in recent years has it been acknowledged that glycosylation of proteins modulates various processes such as subcellular localization, protein quality control, cell-cell recognition and cell-matrix binding events. In turn, these important functions control developmental processes such as embryogenesis or organogenesis [@pone.0016682-Brower1]--[@pone.0016682-Yano1]. Although the overall importance of glycosylation is recognized nowadays, the different types of glycosylated proteins in an organism are mostly unknown, indicating that the full range of their biological and cellular functions is still not fully understood. Deciphering the complexities in the biosynthesis and function of glycoproteins in multicellular organisms is a major challenge for the coming decade.

Insects are without any doubt the largest animal taxon found on Earth, accounting for more than half of all known living species [@pone.0016682-Sanderson1]. Their unprecedented evolutionary success is the result of an enormous genetic and phenotypic diversification allowing insect species to adapt to a wide variety of ecological niches and environmental challenges. For example, the genetic diversity within one insect order (*e.g.* Diptera) is already much wider than between distant vertebrates such as human and zebrafish, spanning a whole phylum [@pone.0016682-Barbazuk1], [@pone.0016682-Severson1]. Because insects are the most diverse organisms in the history of life, they should provide profound insights into the diversification of glycobiology in general and differences in glycosylation in particular.

To date, almost all information concerning glycobiology in insects was obtained from studies with the fruit fly, *Drosophila melanogaster* (Diptera), the best studied insect laboratory model organism. For *D. melanogaster*, different glycosyltransferases and glycosylhydrolases responsible for the synthesis and trimming of *N*-glycans have been reported, suggesting the presence of multiple glycan structures on glycoproteins [@pone.0016682-Correia1]--[@pone.0016682-Zhang2]. Moreover, at least 42 discrete *N*-glycans have been identified recently in *D. melanogaster*, mostly oligomannose and core-fucosylated paucimannosidic *N*-glycans [@pone.0016682-Gutternigg1]--[@pone.0016682-Schachter1]. Considering the broad diversity among insect species, it can be expected that the diversification in glycan patterns will be even more extensive when glycosylation patterns are analyzed across different insect species.

In this study, the functional diversity of glycoproteins was studied for insect species belonging to five important insect orders. We selected four insects with a complete metamorphosis, the flour beetle *Tribolium castaneum* (Coleoptera), the silkworm *Bombyx mori* (Lepidoptera), the honeybee *Apis mellifera* (Hymenoptera) and the fruit fly *D. melanogaster* (Diptera), as well as one insect species with an incomplete metamorphosis, the pea aphid *Acyrthosiphon pisum* (Hemiptera).
In addition to covering a wide range of insect diversity, several of these species are good representatives of economically important pest insects such as caterpillars, beetles or aphids, while the honeybee belongs to the group of beneficial insects that are essential for pollination. Flies and mosquitoes, on the other hand, are important transmitters of many (human) diseases. Because protein modifications such as glycosylation are not directly encoded in the genome, glycosylation in insects was studied at the proteome level. Recent developments in high-throughput technology for studying proteomes and the public availability of genome data for different insect species allowed a comparative study of the glycoproteins present in these species. Lectin affinity chromatography using the snowdrop lectin (*Galanthus nivalis* agglutinin, GNA) was used to selectively purify sets of mannosylated glycoproteins from each insect species. Subsequently, the purified glycoproteins were identified with LC-MS/MS and characterized according to biological or molecular function. To our knowledge, this is the first report that presents a comparative study of the glycoproteomes present in different insect species. Studying glycoproteomes in different insect species should ultimately yield a more holistic understanding of the importance of glycobiology in insects. Results {#s2} ======= Purification and identification of glycoproteins from insects {#s2a} ------------------------------------------------------------- To study the functional differences in glycoprotein sets derived from insect species belonging to different insect orders, glycoproteins were captured using lectin affinity chromatography based on the snowdrop lectin GNA ([Figure S1](#pone.0016682.s001){ref-type="supplementary-material"}). As shown by the glycan microarray experiments conducted by the Consortium for Functional Glycomics, GNA has a high selectivity for oligomannose *N*-glycans [@pone.0016682-Fouquaert1], which were previously shown to be the most abundant class of *N*-glycans present in insects. The percentage of proteins retained on the GNA column was less than 5% of the total amount of protein (based on protein concentration estimates using the Bradford method) for all five insect species. Peptide identification using LC-MS/MS resulted in 161, 64, 116, 142 and 245 unique (glyco)proteins for *T. castaneum*, *B. mori*, *A. mellifera*, *D. melanogaster* and *A. pisum*, respectively ([Table 1](#pone-0016682-t001){ref-type="table"}). Putative *N*-glycosylation sites were present on 81%, 77%, 75%, 83% and 89% of the glycoproteins from *T. castaneum*, *B. mori*, *A. mellifera*, *D. melanogaster* and *A. pisum*, respectively ([Table 1](#pone-0016682-t001){ref-type="table"}). This suggests that for all insect species at least 11% of the glycoproteins were purified in an *N*-glycan-independent way. 10.1371/journal.pone.0016682.t001 ###### Number of glycoproteins purified by GNA affinity chromatography for the different insect species.
![](pone.0016682.t001){#pone-0016682-t001-1}

| Insect species | Insect order | No. of proteins in database | No. of generated spectra | No. of peptides identified | No. of identified proteins | No. of putative *N*-glycosylated proteins |
|---|---|---|---|---|---|---|
| *Tribolium castaneum* | Coleoptera | 16,645 | 3744 | 572 | 161 | 130 |
| *Bombyx mori* | Lepidoptera | 14,623 | 3749 | 118 | 64 | 49 |
| *Apis mellifera* | Hymenoptera | 10,157 | 3496 | 381 | 116 | 87 |
| *Drosophila melanogaster* | Diptera | 21,317 | 3744 | 655 | 142 | 118 |
| *Acyrthosiphon pisum* | Hemiptera | 34,821 | 3745 | 788 | 245 | 218 |

After identification of the different sets of glycoproteins, InterProScan was used to detect functional domains, protein regions or protein signatures in the individual polypeptides for further annotation ([Tables S1](#pone.0016682.s004){ref-type="supplementary-material"}, [S2](#pone.0016682.s005){ref-type="supplementary-material"}, [S3](#pone.0016682.s006){ref-type="supplementary-material"}, [S4](#pone.0016682.s007){ref-type="supplementary-material"}, [S5](#pone.0016682.s008){ref-type="supplementary-material"}). Subsequently, a protein abundance index (emPAI) was calculated to detect the polypeptide sequences that were highly abundant among the captured glycoproteins ([Tables S1](#pone.0016682.s004){ref-type="supplementary-material"}, [S2](#pone.0016682.s005){ref-type="supplementary-material"}, [S3](#pone.0016682.s006){ref-type="supplementary-material"}, [S4](#pone.0016682.s007){ref-type="supplementary-material"}, [S5](#pone.0016682.s008){ref-type="supplementary-material"}). Among the identified glycoproteins, typical membrane proteins such as laminin, cadherin, contactin, chaoptin and C-type lectins were found to be abundantly present ([Tables S1](#pone.0016682.s004){ref-type="supplementary-material"}, [S2](#pone.0016682.s005){ref-type="supplementary-material"}, [S3](#pone.0016682.s006){ref-type="supplementary-material"}, [S4](#pone.0016682.s007){ref-type="supplementary-material"}, [S5](#pone.0016682.s008){ref-type="supplementary-material"}). Many leucine-rich repeat transmembrane proteins, which are known to carry several glycans on their extracellular part, were also detected. Transport proteins included lipoproteins, hemocyanin and ferritin. Vitellogenin, a known glycolipoprotein present in the fat body of adult insects and important for reproduction, was also detected in *T. castaneum*, *D. melanogaster* and *A. pisum*. Besides the typical receptor or secreted proteins, many GNA-captured glycoproteins were identified as metabolic enzymes (e.g. dehydrogenases, proteases and amylases), ribosomal proteins or intracellular structural proteins (e.g. actin, tubulin). Because many of these proteins are synthesized on free ribosomes and, consequently, do not enter the ER-Golgi pathway, oligomannosidic *N*-glycans are thought to be absent from these proteins. Therefore, the putative *N*-glycosylation sites found on the peptide backbone of these proteins ([Tables S1](#pone.0016682.s004){ref-type="supplementary-material"}, [S2](#pone.0016682.s005){ref-type="supplementary-material"}, [S3](#pone.0016682.s006){ref-type="supplementary-material"}, [S4](#pone.0016682.s007){ref-type="supplementary-material"}, [S5](#pone.0016682.s008){ref-type="supplementary-material"}) may not be functional.
Comparing the insect-specific glycoprotein sets, major differences in both glycoprotein diversity and quantity were observed ([Table S6](#pone.0016682.s009){ref-type="supplementary-material"}). When a particular protein annotation, such as the leucine-rich transmembrane proteins, was compared between the different insect species, 15, 4 and 1 glycoprotein(s) were detected for *A. pisum*, *T. castaneum* and *D. melanogaster*, respectively, while for *A. mellifera* and *B. mori* no leucine-rich membrane proteins were found ([Table 2](#pone-0016682-t002){ref-type="table"}). Of the 260 different protein annotations found across the different sets of insect-specific glycoproteins, 62% (161 protein annotations) were associated with only one particular insect species, while only 1.5% (4 protein annotations) were detected for all five insect species ([Tables 2](#pone-0016682-t002){ref-type="table"} and [S6](#pone.0016682.s009){ref-type="supplementary-material"}). This remarkable diversity in glycoproteome profiles between insect species may reveal underlying differences that can influence certain biological processes. 10.1371/journal.pone.0016682.t002 ###### Summary table for the number of distinct (glyco)proteins found in at least three different insect species. ![](pone.0016682.t002){#pone-0016682-t002-2}

| No. | Protein description | *A. pisum* | *D. melanogaster* | *A. mellifera* | *B. mori* | *T. castaneum* |
|---|---|---|---|---|---|---|
| 1 | 2-OXOGLUTARATE DEHYDROGENASE | 0 | 1 | 1 | 0 | 1 |
| 2 | 3-HYDROXYACYL-COA DEHYDROGENASE | 1 | 1 | 1 | 0 | 1 |
| 3 | ACETYL-COA C-ACYLTRANSFERASE | 1 | 0 | 1 | 0 | 1 |
| 4 | ACTIN | 2 | 1 | 2 | 2 | 1 |
| 5 | ALDEHYDE DEHYDROGENASE | 2 | 1 | 0 | 0 | 2 |
| 6 | ALPHA-AMYLASE | 3 | 0 | 1 | 1 | 2 |
| 7 | ALPHA-GALACTOSIDASE | 5 | 1 | 0 | 0 | 1 |
| 8 | ALPHA-MANNOSIDASE | 4 | 2 | 0 | 0 | 1 |
| 9 | AMINOPEPTIDASE | 4 | 5 | 1 | 1 | 1 |
| 10 | DIPEPTIDYL CARBOXYPEPTIDASE | 0 | 2 | 0 | 1 | 1 |
| 11 | ARGININE KINASE | 0 | 1 | 1 | 0 | 1 |
| 12 | ASPARTATE AMMONIA LYASE | 0 | 0 | 1 | 1 | 1 |
| 13 | ATP SYNTHASE SUBUNIT | 4 | 2 | 6 | 2 | 5 |
| 14 | BETA-HEXOSAMINIDASE | 3 | 1 | 2 | 0 | 1 |
| 15 | CADHERIN | 1 | 0 | 1 | 0 | 1 |
| 16 | CARBOXYLESTERASE | 4 | 1 | 0 | 1 | 1 |
| 17 | CATHEPSIN | 1 | 0 | 1 | 0 | 1 |
| 18 | CONTACTIN | 1 | 1 | 1 | 0 | 1 |
| 19 | ELONGATION FACTOR 1-ALPHA | 1 | 0 | 1 | 0 | 1 |
| 20 | GLYCERALDEHYDE 3-PHOSPHATE DEHYDROGENASE | 1 | 0 | 1 | 0 | 1 |
| 21 | GLYCOGEN DEBRANCHING ENZYME | 2 | 1 | 1 | 0 | 0 |
| 22 | GLYCOGEN PHOSPHORYLASE | 1 | 0 | 1 | 1 | 1 |
| 23 | HEAT SHOCK PROTEINS | 5 | 0 | 4 | 1 | 7 |
| 24 | HEMOCYANIN | 0 | 1 | 2 | 2 | 6 |
| 25 | ISOCITRATE DEHYDROGENASE | 1 | 2 | 1 | 0 | 0 |
| 26 | LAMININ | 5 | 4 | 0 | 1 | 4 |
| 27 | LEUCINE-RICH TRANSMEMBRANE PROTEIN | 15 | 1 | 0 | 0 | 4 |
| 28 | LOW DENSITY LIPOPROTEIN RECEPTOR | 1 | 1 | 1 | 0 | 2 |
| 29 | MUCIN | 0 | 1 | 2 | 2 | 0 |
| 30 | MYOSIN | 1 | 1 | 0 | 2 | 0 |
| 31 | PEROXIREDOXIN | 0 | 0 | 1 | 1 | 1 |
| 32 | PHOSPHOFRUCTOKINASE | 1 | 1 | 1 | 0 | 0 |
| 33 | PROTEASE S28 PRO-X CARBOXYPEPTIDASE | 2 | 1 | 0 | 0 | 1 |
| 34 | PROTEIN DISULFIDE ISOMERASE | 3 | 1 | 0 | 1 | 1 |
| 35 | RIBOSOMAL PROTEINS | 18 | 14 | 10 | 5 | 22 |
| 36 | SERINE CARBOXYPEPTIDASE | 2 | 1 | 0 | 0 | 1 |
| 37 | SERINE PROTEASE INHIBITOR, SERPIN | 4 | 1 | 0 | 1 | 4 |
| 38 | SERINE PROTEASE-RELATED | 2 | 7 | 0 | 7 | 0 |
| 39 | TRANSKETOLASE | 1 | 0 | 1 | 0 | 1 |
| 40 | TREHALOSE-6-PHOSPHATE SYNTHASE | 1 | 1 | 1 | 0 | 0 |
| 41 | TROPOMYOSIN | 1 | 0 | 0 | 1 | 1 |
| 42 | TUBULIN | 4 | 4 | 4 | 1 | 0 |
| 44 | VITELLOGENIN-RELATED | 1 | 2 | 0 | 0 | 4 |

Functional classification of glycoproteins from insects {#s2b}
-------------------------------------------------------

After identification and annotation of the different polypeptides, the different sets of glycoproteins were classified according to biological process and molecular function using the web-based WEGO plotting tool ([Figures 1](#pone-0016682-g001){ref-type="fig"} and [2](#pone-0016682-g002){ref-type="fig"}).
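As a rough illustration of what this classification step computes, the sketch below tallies the share of glycoproteins per GO category, which is how the relative amounts quoted in the next paragraph are obtained. It is a minimal sketch written for this text: the protein identifiers and GO assignments are placeholders, not data from the study, and because one protein may carry several terms the shares need not sum to 100%.

```python
from collections import defaultdict

# Placeholder GO annotations (biological process) per identified glycoprotein;
# the real study derives these from InterProScan/GO and plots them with WEGO.
go_terms = {
    "protein_001": ["transport"],
    "protein_002": ["cell adhesion", "transport"],
    "protein_003": ["metabolic process"],
}

# Count how many glycoproteins carry each GO term.
counts = defaultdict(int)
for terms in go_terms.values():
    for term in terms:
        counts[term] += 1

total = len(go_terms)
for term, n in sorted(counts.items()):
    # Relative amount = share of identified glycoproteins carrying the term.
    print(f"{term}: {n}/{total} = {n / total:.0%}")
```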
These analyses made clear that glycoproteins captured by GNA are involved in a broad range of biological processes such as cell adhesion (GO: 0007155), cellular homeostasis (GO: 0019725), cell communication (GO: 0007154), stress response (GO: 0006950) and transmembrane transport (GO: 0055085), among others. However, for specific biological processes, relative differences can be found between insects belonging to different orders. For example, the relative amount of glycoproteins associated with transport (GO: 0006810) was 11%, 16%, 15%, 6% and 5% for the glycoproteins derived from *T. castaneum, B. mori*, *A. mellifera*, *D. melanogaster* and *A. pisum*, respectively. Between the highest and the lowest relative amount of glycoproteins for the category transport (GO: 0006810), a three-fold difference was observed (*B. mori* versus *A. pisum*). This illustrates a potentially different importance of glycosylation for a particular biological process between insect species belonging to different orders. In addition, it is striking that a large part of the glycoproteins was associated with several metabolic processes. ![Classification according to biological process of the GNA-binding glycoproteins from *A. pisum, D. melanogaster*, *A. mellifera, B. mori* and *T. castaneum* using the WEGO resources.](pone.0016682.g001){#pone-0016682-g001} ![Classification according to molecular function of the GNA-binding glycoproteins from *A. pisum, D. melanogaster*, *A. mellifera, B. mori* and *T. castaneum* using the WEGO resources.](pone.0016682.g002){#pone-0016682-g002} When glycoproteins were categorized according to molecular function, many of them were associated with hydrolase activity, accounting for 19%, 34%, 27%, 31% and 27% in *T. castaneum*, *B. mori, A. mellifera*, *D. melanogaster* and *A. pisum*, respectively. This is in agreement with the many proteases, esterases, glycoside hydrolases, lipases and phosphatases found in the different insect-specific glycoprotein sets ([Tables S1](#pone.0016682.s004){ref-type="supplementary-material"}, [S2](#pone.0016682.s005){ref-type="supplementary-material"}, [S3](#pone.0016682.s006){ref-type="supplementary-material"}, [S4](#pone.0016682.s007){ref-type="supplementary-material"}, [S5](#pone.0016682.s008){ref-type="supplementary-material"}). Remarkably, many glycoproteins were also associated with nucleotide binding in the different insects, especially in *A. mellifera* and *A. pisum*. This corresponded to 8%, 11%, 27%, 10% and 15% for *T. castaneum*, *B. mori*, *A. mellifera*, *D. melanogaster* and *A. pisum*, respectively, and correlates with the many ribosomal proteins found in the annotation lists ([Tables S1](#pone.0016682.s004){ref-type="supplementary-material"}, [S2](#pone.0016682.s005){ref-type="supplementary-material"}, [S3](#pone.0016682.s006){ref-type="supplementary-material"}, [S4](#pone.0016682.s007){ref-type="supplementary-material"}, [S5](#pone.0016682.s008){ref-type="supplementary-material"}, [S6](#pone.0016682.s009){ref-type="supplementary-material"}). Discussion {#s3} ========== One of the major findings in this paper is that very little overlap was observed between the glycoprotein sets derived from the different insect species. This was expected between insect species sampled at different developmental stages (e.g. *Bombyx* larvae and *Tribolium* adults) because glycosylation profiles change depending on reproductive and developmental stage. However, when comparing only adult insects (e.g.
*Tribolium* adults and *Drosophila* adults), the diversity in glycoproteins remained extremely high. Since glycosylation is a post-translational modification, changes in carbohydrate composition that proved useful during insect evolution can easily be introduced. Because *N*-glycosylation of proteins occurs in the endoplasmic reticulum (ER) and the Golgi apparatus, it was expected that most glycoproteins would be derived from the luminal part of the secretory pathway, such as plasma membrane proteins or secreted proteins. Therefore, glycoproteins involved in biological processes such as cell adhesion, cell communication and transmembrane transport were expected to be very dominant. Surprisingly, the cumulative percentage of glycoproteins associated with these processes never exceeded 12%. Moreover, it is striking that many glycoproteins were related to metabolic processes associated with certain intracellular compartments such as lysosomes. Many lysosomal enzymes are hydrolases such as proteases, lipases or phosphatases, which were found to occur very frequently in the different glycoprotein sets ([Tables S1](#pone.0016682.s004){ref-type="supplementary-material"}, [S2](#pone.0016682.s005){ref-type="supplementary-material"}, [S3](#pone.0016682.s006){ref-type="supplementary-material"}, [S4](#pone.0016682.s007){ref-type="supplementary-material"}, [S5](#pone.0016682.s008){ref-type="supplementary-material"}). These enzymes are synthesized by membrane-bound ribosomes on the ER and traverse the ER-Golgi pathway to leave the Golgi apparatus in transport vesicles that fuse with lysosomes. Moreover, in mammals the presence of mannose-containing *N*-glycans is crucial for lysosomal enzymes to be recognized for trafficking to lysosomes [@pone.0016682-Dahms1]. Recent evidence for a similar lysosomal protein-sorting machinery in *Drosophila* Schneider S2 cells has been found through the identification of a homolog of the mammalian mannose 6-phosphate receptor [@pone.0016682-Kametaka1]. Our findings support this hypothesis by demonstrating that many enzymes with hydrolytic activities, which are known to concentrate in lysosomes, contain oligomannosidic *N*-glycans. Another interesting observation was that at least 10--25% of the (glyco)proteins lacked a protein signature for the attachment of an *N*-glycan structure. This observation suggests that mannose-containing *O*-glycosylation may be abundantly present in insect species. To our knowledge, the presence of mannose-containing *O*-glycans in insects has only been described in *D. melanogaster*, for the dystroglycan protein [@pone.0016682-Ichimiya1], [@pone.0016682-Nakamura1]. Moreover, the *O*-mannosyltransferases responsible for this *O*-glycosylation were identified as POMT1 and POMT2 [@pone.0016682-Ichimiya1]. Recessive mutations in a *pomt* gene result in poorly viable flies with defects in muscle development, illustrating the influence of an aberration in *O*-mannosylation on normal development. Using the BLAST search algorithm (EMBL-EBI), we were able to detect predicted protein sequences that are highly homologous to POMT1 and POMT2, respectively, for *T. castaneum*, *B. mori, A. mellifera* as well as *A. pisum* ([Table S7](#pone.0016682.s010){ref-type="supplementary-material"}).
The construction of a phylogenetic tree for these predicted POMT proteins revealed that at least two distinct *O*-mannosyltransferases resembling POMT1 and POMT2 are conserved among the five insect species ([Figure S2](#pone.0016682.s002){ref-type="supplementary-material"}). Many proteins in the different glycoprotein sets, such as actin, tubulin and glycerol-3-phosphate dehydrogenase, have a known cytosolic localization. Since POMTs are located in the lumen of the Golgi apparatus, cytosolic proteins are not expected to be modified by glycan structures [@pone.0016682-Lommel1]. However, several reports have demonstrated the existence of a cellular system involving retrograde transport of proteins from the ER to the cytosol [@pone.0016682-Lehrman1]. A dynamic and abundant *O*-glycosylation of serine and threonine has been demonstrated for many cytoplasmic/nuclear proteins [@pone.0016682-Dehennaut1]--[@pone.0016682-Zeidan1]. For example, in *Drosophila*, post-translational *O*-GlcNAc modification was shown to be important for the regulation of Polycomb gene expression, while in vertebrates tubulin was even shown to contain sialyloligosaccharides [@pone.0016682-Gambetta1], [@pone.0016682-Hino1], [@pone.0016682-Sinclair1]. In addition, other types of cytoplasmic glycosylation may be present. Although the expression of a mannosyltransferase in the cytoplasm has never been shown, the addition of mannose residues or mannose-containing oligosaccharides to the peptide backbone of cytoplasmic/nuclear proteins may occur in insects. Apart from its use as a tool for affinity chromatography, the snowdrop (*Galanthus nivalis*) lectin has been reported to exert strong insecticidal activity against different insect orders [@pone.0016682-Fitches1]--[@pone.0016682-Li1]. Previously, midgut proteins such as ferritin, α-amylase or aminopeptidase were found to be targeted by mannose-binding plant lectins in several economically important pest insects [@pone.0016682-Du1]--[@pone.0016682-Sadeghi1]. Indeed, these three midgut proteins were also found among the GNA-binding glycoproteins in several insect species ([Table S6](#pone.0016682.s009){ref-type="supplementary-material"}). Moreover, this report provides clear supporting evidence for the hypothesis that plant lectins, and in particular GNA, act on pest insects through simultaneous interaction with multiple target glycoproteins. In this manuscript, the first comparative study of glycoprotein sets derived from five phylogenetically diverse insect species is presented. Since earlier reports [@pone.0016682-Gutternigg1]--[@pone.0016682-Schachter1] have shown that the dominant glycan structures in the model insect *D. melanogaster* are of the paucimannose *N*-glycan type, the mannose-binding lectin GNA was used in this study to capture insect glycoproteins. However, the percentage of proteins retained on the GNA column was found to be less than 5% of the total protein for the different insect species, suggesting that the number of identified glycoproteins is probably an underestimate of the actual number of glycoproteins. One important reason for the low percentage of glycoproteins may be that glycoproteins carrying complex glycan structures are more abundant in insects than currently believed, as was recently also shown for *Drosophila* [@pone.0016682-Vandenborre1]. In addition, the identification of glycoproteins also depends on the quality of the insect databases.
As illustrated in [Table 1](#pone-0016682-t001){ref-type="table"}, the number of putative protein sequences present in the different insect databases is highly variable, which may indicate differences in the degree of completeness of the insect databases. In turn, this will influence protein identification. Therefore, we want to emphasize that the data presented in this report are not intended to provide a complete catalog of the glycoproteins present in *T. castaneum*, *B. mori*, *A. mellifera*, *D. melanogaster* or *A. pisum*. The glycoprotein catalogs are snapshots of a dynamic glycoproteome during the specific developmental stage of the different insects. Materials and Methods {#s4} ===================== Insects and lectin purification {#s4a} ------------------------------- All insects were collected from laboratory colonies kept under standard conditions. All stages of *T. castaneum* were kept on wheat flour mixed with brewer's yeast (10/1, w/w) [@pone.0016682-Zapata1]. Silkworm *B. mori* (Daizo) larvae were raised on a mulberry-based artificial diet at 25°C (Yakuroto Co., Japan) [@pone.0016682-Soin1]. After collection from hives of an experimental apiary in Ghent, honeybee workers (*A. mellifera*) were kept at 34°C and 70% relative humidity in laboratory cages and fed with sugar water [@pone.0016682-Scharlaken1]. A continuous colony of *D. melanogaster* was maintained on a corn meal-based diet, and the pea aphid *A. pisum* was reared on broad beans (*Vicia faba*) at 23--25°C and 65--70% relative humidity [@pone.0016682-Vandenborre1], [@pone.0016682-Christiaens1]. GNA was purified from the bulbs of snowdrop (*Galanthus nivalis*) using a combination of ion exchange chromatography and affinity chromatography [@pone.0016682-VanDamme1]. The carbohydrate-binding specificity of GNA was previously determined in detail using hapten inhibition assays, frontal affinity chromatography and the glycan array technology provided by the Consortium for Functional Glycomics (<http://www.functionalglycomics.org/>) [@pone.0016682-Fouquaert1]. These studies clearly showed that GNA specifically binds the terminal mannose residues of high-mannose and oligomannose *N*-glycans and does not react with complex *N*-glycans carrying terminal sugar residues other than mannose. Lectin affinity purification of glycoproteins from insect extracts {#s4b} ------------------------------------------------------------------ For the different protein extracts, adult insect bodies were used for the flour beetle *T. castaneum*, the worker honeybee *A. mellifera* and the fruit fly *D. melanogaster*. For the pea aphid *A. pisum*, a mix of nymphs and adults was collected, while for the silkworm *B. mori* only fifth-instar caterpillars were used for protein extraction. Insect bodies were crushed in liquid nitrogen using a chilled mortar and pestle, and extraction buffer (0.2 M phosphate buffer pH 7.6 containing 2 mM phenylmethanesulfonyl fluoride) was added at a ratio of 3 mL buffer per gram of insect powder. The different insect extracts were homogenized using a glass and Teflon homogenizer (10 strokes at 2,000 rpm) and subsequently centrifuged at 9,500 g for 1 h at 4°C. The supernatants were collected and protein concentrations were determined using the Bradford method (Coomassie Protein Assay kit, Thermo Scientific, Rockford, IL). A lectin affinity column (diameter 0.5 cm, height 2 cm) was prepared by coupling the purified GNA to Sepharose 4B using the divinylsulfone method [@pone.0016682-Pepper1].
Approximately 20 mg of total protein was loaded onto the GNA-Sepharose column to selectively purify the glycoproteins as described earlier [@pone.0016682-Vandenborre1]. To minimize co-purification of non-specifically bound proteins, peak fractions were pooled and re-chromatographed on the lectin column. Detailed information on the OD values of the elution fractions from the two subsequent GNA affinity purification steps can be found in [Figure S3A](#pone.0016682.s003){ref-type="supplementary-material"}--[S3E](#pone.0016682.s003){ref-type="supplementary-material"}. To specifically analyze the selectivity of the GNA affinity column, the binding of several protein extracts was analyzed by SDS-PAGE before and after chemical removal of the glycan structures from the glycoproteins. For the non-specific deglycosylation of the proteins, the trifluoromethanesulfonic acid (TFMS; Sigma-Aldrich) deglycosylation procedure was used [@pone.0016682-Egde1]. Preparation of peptides and LC-MS/MS analysis {#s4c} --------------------------------------------- Glycoproteins eluted from the GNA column were completely dried and re-dissolved in freshly prepared 50 mM ammonium bicarbonate buffer (pH 7.8). Prior to digestion, protein mixtures were boiled for 10 min at 95°C and then cooled on ice for 15 min. Sequencing-grade trypsin (Promega, Madison, WI, USA) was added at a 1:100 (trypsin:substrate) ratio (w/w) and digestion was allowed to proceed overnight at 37°C. The sample was acidified with 10% acetic acid (final concentration of 1% acetic acid) and loaded for RP-HPLC separation on a 2.1 mm internal diameter × 150 mm 300SB-C18 column (Zorbax®, Agilent Technologies, Waldbronn, Germany) using an Agilent 1100 Series HPLC system. Following a 10 min wash with 10 mM ammonium acetate (pH 5.5) in water/acetonitrile (98/2, v/v; both HPLC analyzed grade, Mallinckrodt Baker B.V., Deventer, the Netherlands), a linear gradient to 10 mM ammonium acetate (pH 5.5) in water/acetonitrile (30/70, v/v) was applied over 100 min at a constant flow rate of 80 µL/min. Eluting peptides were collected in 60 fractions between 20 and 80 min, and fractions separated by 15 min were pooled and vacuum-dried until further analysis. These pooled fractions were re-dissolved in 50 µL of 2.5% acetonitrile (HPLC solvent A). Eight µL of this peptide mixture were used for nanoLC-MS/MS analysis on an Ultimate system (Dionex, Amsterdam, the Netherlands) connected in-line to an Esquire HCT mass spectrometer (Bruker, Bremen, Germany). The sample was first trapped on a trapping column (PepMap™ C18, 0.3 mm I.D. × 5 mm; Dionex). After back-flushing from the trapping column, the sample was loaded on a 75 µm I.D. × 150 mm reverse-phase column (PepMap™ C18; Dionex). The peptides were eluted with a linear gradient of 3% HPLC solvent B (0.1% formic acid in water/acetonitrile, 3/7, v/v) increase per minute at a constant flow rate of 200 nL/min. Using data-dependent acquisition, multiply charged ions with intensities above threshold (adjusted for each sequence according to the noise level) were selected for fragmentation. During MS/MS analysis, a fragmentation amplitude of 0.7 V and a scan time of 40 ms were used.
Protein identification and bioinformatics {#s4d} ----------------------------------------- The fragmentation spectra were converted to mgf files using the Automation Engine software (version 3.2, Bruker) and searched with the MASCOT database search engine (version 2.2.0, Matrix Science, <http://www.matrixscience.com>) against the appropriate databases. In particular, Beetlebase (<http://www.Beetlebase.org>; release Glean.prot.51906), Silkbase (<http://silkworm.genomics.org.cn>; release Silkworm_glean_pep), Beebase (<http://genomes.arc.georgetown.edu>; release Amel_pre_release2_OGS_pep), Flybase (<http://flybase.org>; release FB2010_01) and Aphidbase (<http://www.aphidbase.com/aphidbase>; release ACYPproteins) were used to identify proteins from *T. castaneum*, *B. mori*, *A. mellifera*, *D. melanogaster* and *A. pisum*, respectively [@pone.0016682-Kaplan1]--[@pone.0016682-Legeai1]. Peptide mass tolerance and peptide fragment mass tolerance were both set at 0.5 Da, with ESI-IT selected as the instrument for peptide fragmentation rules. Peptide charge was set to 1+, 2+ and 3+. Variable modifications were set to methionine oxidation, pyroglutamate formation of amino-terminal glutamine, acetylation of the N-terminus, and deamidation of glutamine or asparagine. The enzyme was set to trypsin. Only peptides that were ranked one and scored above the threshold score, set at 95% confidence, were retained. The peptide identification results were made publicly accessible in the PRoteomics IDEntifications (PRIDE) database (experiment accession number 13290) (<http://www.ebi.ac.uk/pride>). Glycoproteins from the different insect species were annotated using the InterProScan tool available from the EBI website (<http://www.ebi.ac.uk/Tools/InterProScan>) [@pone.0016682-Hunter1]. The InterProScan tool is based on protein databases such as Panther, Pfam and TIGR, which use hidden Markov model methodology to identify functional protein domains/motifs in the primary amino acid sequence. The InterProScan output files for *T. castaneum*, *B. mori*, *A. mellifera*, *D. melanogaster* and *A. pisum* can be found in [Output File S1](#pone.0016682.s011){ref-type="supplementary-material"}, [S2](#pone.0016682.s012){ref-type="supplementary-material"}, [S3](#pone.0016682.s013){ref-type="supplementary-material"}, [S4](#pone.0016682.s014){ref-type="supplementary-material"}, [S5](#pone.0016682.s015){ref-type="supplementary-material"}. To quantify the presence of certain proteins, an established label-free method was used based on the exponentially modified protein abundance index (emPAI) [@pone.0016682-Ishihama1], [@pone.0016682-Vaudel1]. The emPAI estimates the abundance of a specific glycoprotein based on the number of identified tryptic peptides. In addition, the number of predicted *N*-glycosylation sites present on the polypeptide backbone was calculated using the NetNGlyc 1.0 server (<http://www.cbs.dtu.dk/services/NetNGlyc>). Only Asn-X-Ser/Thr sequences (where X is any amino acid except proline) with a prediction score \>0.5 were retained as potential *N*-glycosylation sites.
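Two of the rules just described are compact enough to illustrate in code. The sketch below (Python, written for this text; it is not the authors' pipeline) implements the Asn-X-Ser/Thr sequon scan, without NetNGlyc's neural-network scoring step, and the emPAI formula of Ishihama et al., emPAI = 10^(observed/observable) − 1:

```python
import re

def n_glycosylation_sequons(seq: str) -> list[int]:
    """Return 0-based positions of Asn in Asn-X-Ser/Thr sequons (X != Pro).
    NetNGlyc additionally requires a neural-network score > 0.5, which is
    not modeled here; this reproduces only the sequon rule."""
    # Zero-width lookahead so overlapping sequons (e.g. NNST) are all found.
    return [m.start() for m in re.finditer(r"(?=N[^P][ST])", seq)]

def empai(observed_peptides: int, observable_peptides: int) -> float:
    """Exponentially modified protein abundance index (Ishihama et al.):
    emPAI = 10^(observed/observable) - 1."""
    return 10 ** (observed_peptides / observable_peptides) - 1

print(n_glycosylation_sequons("MKNASGNVTQ"))  # -> [2, 6]
print(round(empai(4, 10), 3))                 # -> 1.512
```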
Afterwards, the annotated glycoproteins were categorized according to biological process or molecular function using the Web Gene Ontology Annotation Plot (WEGO) software (<http://wego.genomics.org.cn/cgi-bin/wego/index.pl>). The WEGO software is a widely used and freely available tool for visualizing, plotting and comparing annotation results based on classification terms provided by the Gene Ontology (GO) Consortium (<http://www.geneontology.org/>) [@pone.0016682-Ye1]. Supporting Information {#s5} ====================== ###### **Coomassie-stained SDS-PAGE of different elution or run-through fractions obtained after GNA chromatography of protein extracts from *T. castaneum* (T), *D. melanogaster* (D) and *A. pisum* (A).** Lane 0 was loaded with a protein marker (PageRuler™, prestained protein ladder, Fermentas), whereas lanes 1 to 3 were loaded with the peak elution fraction of the GNA chromatography of total protein extracts from *T. castaneum*, *D. melanogaster* and *A. pisum*, respectively. Lanes 4 to 6 were loaded with run-through samples of GNA chromatography of total protein extracts from *T. castaneum*, *D. melanogaster* and *A. pisum*, respectively. Lanes 7 to 9 were loaded with the peak elution fraction of the GNA chromatography of total protein extracts after chemical deglycosylation from *T. castaneum*, *D. melanogaster* and *A. pisum*, respectively. (TIF) ###### **Phylogenetic tree showing the evolutionary relationship between the homologous protein sequences for *O*-mannosyltransferase 1 and 2 in *D. melanogaster*, *T. castaneum*, *B. mori*, *A. mellifera* and *A. pisum*.** (TIF) ###### **Elution profiles of GNA affinity chromatography of total protein extracts from different insect species.** The eluted fractions from the first chromatography were pooled and rechromatographed on the same GNA column. The OD values of the eluted fractions from the two subsequent GNA affinity chromatography steps from *T. castaneum* (A), *B. mori* (B), *A. mellifera* (C), *D. melanogaster* (D) and *A. pisum* (E) are shown. (TIF) ###### **Annotation of the identified glycoproteins for *Tribolium castaneum*.** The list contains the accession number from Beetlebase, an abundance index (emPAI) and the putative number of *N*-glycosylation sites. (PDF) ###### **Annotation of the identified glycoproteins for *Bombyx mori*.** The list contains the accession number from Silkbase, an abundance index (emPAI) and the putative number of *N*-glycosylation sites. (PDF) ###### **Annotation of the identified glycoproteins for *Apis mellifera*.** The list contains the accession number from Beebase, an abundance index (emPAI) and the putative number of *N*-glycosylation sites. (PDF) ###### **Annotation of the identified glycoproteins for *Drosophila melanogaster*.** The list contains the accession number from Flybase, an abundance index (emPAI) and the putative number of *N*-glycosylation sites. (PDF) ###### **Annotation of the identified glycoproteins for *Acyrthosiphon pisum*.** The list contains the accession number from Aphidbase, an abundance index (emPAI) and the putative number of *N*-glycosylation sites. (PDF) ###### **Comparative analysis of the number of annotated glycoproteins according to protein description for *T. castaneum*, *B. mori*, *A. mellifera*, *D. melanogaster* and *A. pisum*.** (PDF)
###### **WU-BLAST analysis to search for proteins homologous to the *O*-mannosyltransferases from *Drosophila melanogaster*, POMT1 (GenBank accession no. NP_524025.2) and POMT2 (GenBank accession no. NP_569858.1).** (PDF) ###### **InterProScan output file for *Tribolium castaneum*.** (OUT) ###### **InterProScan output file for *Bombyx mori*.** (OUT) ###### **InterProScan output file for *Apis mellifera*.** (OUT) ###### **InterProScan output file for *Drosophila melanogaster*.** (OUT) ###### **InterProScan output file for *Acyrthosiphon pisum*.** (OUT) B.G. is a postdoctoral research fellow of the Fund for Scientific Research-Flanders (Belgium). **Competing Interests:** The authors have declared that no competing interests exist. **Funding:** This work was supported by the Research Council of Ghent University (projects BOF07/GOA/017 and BOF10/GOA/003) and the Fund for Scientific Research-Flanders (3G016306) to GS and EV. BG is a postdoctoral research fellow of the Fund for Scientific Research-Flanders (Belgium). The UGent/VIB lab acknowledges support by research grants from the Fund for Scientific Research-Flanders (Belgium) (project 3G028007), the Concerted Research Actions from Ghent University and the Inter University Attraction Poles (projects BOF07/GOA/012 and IUAP06). The authors also acknowledge the Consortium for Functional Glycomics for glycan array analyses. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. [^1]: Conceived and designed the experiments: GS BG RR KG EV. Performed the experiments: GV BG RR. Analyzed the data: GV BG GM. Contributed reagents/materials/analysis tools: GS KG EV. Wrote the paper: GV GS EV.
INTRODUCTION
============

The National School Feeding Program (PNAE) is considered an outstanding food and nutrition security policy in Brazil^[@B1]^. A recent change in the legal framework of the program included, as a strategy, the mandatory purchase of food from family farms, simultaneously stimulating food production and local sustainability and expanding the supply of healthy, *in natura* food in schools^[@B2]^. In addition, resolution no. 38/2009^[@B3]^ standardized the public call as a simplified procurement process for the public manager and the farmer, which waives the bureaucratic bidding chain that is ordinarily inaccessible to family farmers unfamiliar with bidding requirements^[@B4]^.

The connection of family agriculture with programs that affect food access and quality, such as the PNAE, especially in a context of systematic advancement of obesity, suggests a double potential of this policy design: to improve the quality of school feeding and to stimulate the production and local markets of family farming crops. This potential translates into the ability to counter the perverse consequences of the current food system, characterized by an exclusionary productive model marked by low diversity and by the increasing consumption of ultra-processed food items by the population, including schoolchildren exposed to an obesogenic environment^[@B5]^.

In Brazil, family agriculture has come to be considered a diverse and heterogeneous social category, conceived by government managers, social actors and organizations as strategic in the process of social and economic development. The profile of food produced by this segment of farmers can be quite diverse, and the demand for market expansion has contributed to the diversification of products with varying degrees of processing^[@B9]^. Thus, regarding food processing, food obtained from family-based sources may range from *in natura* crops to food with a high degree of processing and additions of densely caloric and sugary ingredients.

The enactment of law no. 11,947^[@B2]^ increased farmers' access to the institutional market through the PNAE. Some studies point to a positive relationship between the increase in income and the improvement of farmers' living conditions, diversifying and increasing their production and improving school meals, with a greater supply of fruits and vegetables^[@B9]-[@B11]^. Thus, the connection between family-based agriculture and the PNAE enhances changes in the local food system, with possible impacts on the quality of life of farmers and on the provision of healthy meals for schoolchildren. However, many challenges arise when organizing municipal administrations to meet the legal requirements of the program and to ensure the supply of healthier food items in schools through the local purchase of family-based crops. We highlight the complexity and diversity of Brazilian cities regarding the structural, political, social, and institutional aspects that can affect the expected potential of the strategy of regulating the public purchase profile. Brazilian capitals may have advantages and disadvantages compared with less populous and economically less developed municipalities, and these deserve to be better understood.
Besides, the heterogeneity of Brazilian regions and capitals regarding sociodemographic indicators, levels of development, and the number of schoolchildren assisted by the PNAE may represent different challenges and opportunities for the purchase of family-based food, still little explored in the literature. Thus, this study aimed to analyze how the food purchase profile of family agriculture relates to socioeconomic and demographic indicators in Brazilian capitals.

METHODS
=======

This is a cross-sectional, descriptive study based on secondary data for the years 2016 and 2017 available on the websites of the Brazilian Institute of Geography and Statistics (IBGE)^[@B12]^, the National Fund for the Development of Education (FNDE)^[@B13]^ and the Ministry of Agrarian Development^[@B14]^. The information regarding the capitals' demographic and socioeconomic profile comprised population, territorial area, human development index (HDI), and gross domestic product (GDP), obtained from the IBGE website. We also identified, on the fund's website, the total resource transferred by the FNDE and the percentage used by Brazilian capitals for purchases of food from family farming. The total value transferred by the FNDE was used as a proxy for the number of students enrolled in the education network, since this value is calculated according to the number of enrollments recorded in the year before the transfer. The public call notices were obtained from the ministry's website, in the "Monitoring System of Public Procurement Opportunities of Family Agriculture," and from the Transparency Portal of each capital. Data regarding sociodemographics, FNDE funding, and family agriculture refer to the year 2016, while the purchase notices refer to 2017. We also sought to identify the food requested by the school food services of Brazilian capitals through the analysis of the public calls. The requested items were listed and subsequently classified according to degree of processing, following the NOVA classification^[@B15]^. In the data analysis, family agriculture purchases were taken as the dependent variable and divided into two categories: purchases below 30% and purchases greater than or equal to 30% of the FNDE funding. The Kolmogorov-Smirnov test was applied to assess the normality of the distribution of the independent continuous variables (HDI, GDP, values transferred by the FNDE, number of inhabitants, and territorial area). The nonparametric variables identified were GDP and number of inhabitants, which were submitted to the Mann-Whitney test to identify differences in medians between the categories of family agriculture purchases. For the parametric variables, Student's t-test was used to identify differences between means according to the categories of family agriculture purchases. Parametric variables are expressed as mean and standard deviation and nonparametric variables as median and quartiles. The variables FNDE funding, GDP, HDI, number of inhabitants, and territorial area were organized into distribution quartiles and submitted to the chi-square test of association with the categories of purchase of family agriculture. Statistical analysis was performed in SPSS version 13, and, in all tests, the level of significance adopted was 5%.
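The same analysis can be sketched compactly with Python/SciPy in place of SPSS. This is a minimal, illustrative sketch: the per-capital HDI and population values below are placeholders, and only the contingency-table counts are taken from Table 3 of this study.

```python
import numpy as np
from scipy import stats

# Placeholder per-capital values for the two purchase categories
# (< 30% vs >= 30% of FNDE funding spent on family farming purchases).
hdi_low = np.array([0.82, 0.79, 0.81])
hdi_high = np.array([0.74, 0.77, 0.76])
pop_low = np.array([2.9e6, 1.6e6, 0.9e6])
pop_high = np.array([0.6e6, 0.4e6, 1.3e6])

# Parametric variable (e.g. HDI): Student's t-test comparing group means.
t_stat, p_t = stats.ttest_ind(hdi_low, hdi_high)

# Nonparametric variable (e.g. number of inhabitants): Mann-Whitney U test.
u_stat, p_u = stats.mannwhitneyu(pop_low, pop_high, alternative="two-sided")

# Association between funding quartiles and purchase category: chi-square on
# the 4x2 contingency table of counts from Table 3 (FNDE funding block).
table = np.array([[2, 5], [3, 4], [4, 2], [6, 1]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

# Exact p-values depend on the test variant and software (SPSS was used in
# the study), so these results are illustrative rather than a reproduction.
print(p_t, p_u, p_chi)
```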
RESULTS
=======

Public Purchases of Family Agriculture for the PNAE in Brazilian Capitals
-------------------------------------------------------------------------

In 2016, 12 Brazilian capitals spent more than the minimum required by the legislation, that is, more than 30% of the funding transferred by the FNDE, on the purchase of family-based crops through the PNAE. The capitals Boa Vista and Maceió used 100% of the funding, while Rio de Janeiro and Recife did not use any resources with family-based agriculture ([Table 1](#t1){ref-type="table"}). Only the Northern region of the country presented satisfactory results for the purchase of family-based crops, since all of its capitals met the legal requirement regarding the minimum value destined to this segment. The Southern region has only three capitals, with a smaller territorial area and a lower allocation of funding for the purchase of family-based crops ([Table 1](#t1){ref-type="table"}).

Table 1. Total funding transferred by the FNDE and percentage used for the purchase of family crops by Brazilian capitals in 2016.

| Region | State | Capital | Total funding transferred (R$) | % used with family agriculture |
|---|---|---|---|---|
| Midwest | GO | Goiânia | 13,892,920.76 | 41.10 |
| | MS | Campo Grande | 10,232,653.70 | 13.65 |
| | MT | Cuiabá | 6,999,630.59 | 28.24 |
| | DF | Brasília | 44,797,501.27 | 4.22 |
| North | TO | Palmas | 10,621,273.97 | 31.01 |
| | RO | Porto Velho | 4,832,239.52 | 39.02 |
| | AC | Rio Branco | 2,887,497.05 | 35.15 |
| | PA | Belém | 6,423,576.23 | 40.32 |
| | AM | Manaus | 22,193,813.59 | 53.60 |
| | AP | Macapá | 2,177,893.98 | 44.92 |
| | RR | Boa Vista | 2,392,953.95 | 100.00 |
| Northeast | PB | João Pessoa | 8,697,273.83 | 10.51 |
| | BA | Salvador | 17,015,380.67 | 1.62 |
| | SE | Aracaju | 995,592.74 | 84.12 |
| | AL | Maceió | 170,977.02 | 100.00 |
| | PE | Recife | 8,963,348.14 | 0.00 |
| | RN | Natal | 2,213,052.51 | 11.02 |
| | CE | Fortaleza | 24,438,057.85 | 9.00 |
| | PI | Teresina | 9,416,824.16 | 46.35 |
| | MA | São Luís | 19,808,714.75 | 27.22 |
| Southeast | SP | São Paulo | 79,616,147.11 | 10.75 |
| | RJ | Rio de Janeiro | 75,769,080.49 | 0.00 |
| | ES | Vitória | 6,116,291.54 | 32.62 |
| | MG | Belo Horizonte | 20,619,170.14 | 2.72 |
| South | RS | Porto Alegre | 9,593,249.88 | 22.48 |
| | SC | Florianópolis | 4,232,436.44 | 22.57 |
| | PR | Curitiba | 21,636,514.96 | 1.25 |

The capitals that used the most resources to purchase family crops (≥ 30%) presented lower mean and median values of FNDE funding (p = 0.038), HDI (p = 0.021) and number of inhabitants (p = 0.004) than those that used less than 30% of this resource ([Table 2](#t2){ref-type="table"}). The analysis of the association between the variables showed that the capitals belonging to the smallest quartiles of funding transferred by the FNDE (p = 0.023), HDI (p = 0.005) and number of inhabitants (p = 0.022) are those that buy the most from family-based agriculture (≥ 30%) ([Table 3](#t3){ref-type="table"}).
Table 2. Average and median values of socioeconomic and demographic variables according to the categories of purchase of family crops in 2016.

| Variable | Purchases < 30% (n = 15) | Purchases ≥ 30% (n = 12) | p |
|---|---|---|---|
| FNDE funding^a^ | 207,611,521.8 (20,933,662.9) | 6,843,487.9 (6,374,067.9) | 0.038 |
| HDI^a^ | 0.79 (0.30) | 0.76 (0.36) | 0.021 |
| GDP^b^ | 34,910.1 (24,029.2; 46,122.8) | 24,169.8 (20,520.4; 31,380.0) | 0.059 |
| Number of inhabitants^b^ | 1,633,697.0 (874,210.0; 2,953,986.0) | 584,771.0 (368,215.8; 1,346,488.5) | 0.004 |
| Territorial area^a^ | 1,757.4 (2,487.67) | 76,792.6 (243,744.1) | 0.278 |

^a^ Parametric variable: mean (standard deviation), Student's t-test. ^b^ Nonparametric variable: median (1st quartile; 3rd quartile), Mann-Whitney test.

Table 3. Distribution of municipalities according to the quartiles of socioeconomic and demographic variables and categories of purchase of family crops in 2016.

| Variable | Quartile | Purchases < 30% (n = 15) | Purchases ≥ 30% (n = 12) | p^a^ |
|---|---|---|---|---|
| FNDE funding | 1st | 2 (13.3%) | 5 (41.7%) | 0.023 |
| | 2nd | 3 (20.0%) | 4 (33.3%) | |
| | 3rd | 4 (26.7%) | 2 (16.7%) | |
| | 4th | 6 (40.0%) | 1 (8.3%) | |
| HDI | 1st | 0 (0%) | 6 (50.0%) | 0.005 |
| | 2nd | 5 (33.3%) | 3 (25.0%) | |
| | 3rd | 4 (26.7%) | 2 (16.7%) | |
| | 4th | 6 (40.0%) | 1 (8.3%) | |
| GDP | 1st | 2 (13.3%) | 5 (41.7%) | 0.164 |
| | 2nd | 3 (20.0%) | 4 (33.3%) | |
| | 3rd | 5 (33.3%) | 2 (16.7%) | |
| | 4th | 5 (33.3%) | 1 (8.3%) | |
| Number of inhabitants | 1st | 1 (6.7%) | 6 (50.0%) | 0.022 |
| | 2nd | 4 (26.7%) | 3 (25.0%) | |
| | 3rd | 4 (26.7%) | 3 (25.0%) | |
| | 4th | 6 (40.0%) | 0 (0%) | |
| Territorial area | 1st | 4 (26.7%) | 2 (16.7%) | 0.157 |
| | 2nd | 6 (40.0%) | 2 (16.7%) | |
| | 3rd | 4 (26.7%) | 3 (25.0%) | |
| | 4th | 1 (6.7%) | 5 (41.7%) | |

^a^ Chi-square test.

Public Call Notices and Food Classification
-------------------------------------------

Searches on the ministry's website and in the Transparency Portal of each municipality located 23 public call notices, totaling 376 requested items for ten Brazilian capitals during the year 2017. Among these capitals, only four had reached the 30% target for spending on family-based agriculture in the previous year, three of them located in the Northern region. Among the food items requested in the notices, 94.1% were classified as *in natura* or minimally processed, 4.0% as ultra-processed, and 1.9% as processed. The requested items with a higher degree of processing are primarily those destined for desserts or small meals, such as sweets and flavored dairy products ([Table 4](#t4){ref-type="table"}).

Table 4. Classification of food items requested in public calls from Brazilian capitals in 2017 according to degree of industrial processing.

| Degree of processing | Capital | No. of times food items were requested | Food items |
|---|---|---|---|
| *In natura* and minimally processed | Belém | 61 | Fruits and vegetables, cereals, eggs, pasteurized açaí, starchy goods, tucupi sauce, and others |
| | Boa Vista | 29 | Fruits and vegetables, cereals, fruit pulps, small bell peppers, tapioca and honey |
| | Campo Grande | 34 | Fruits, vegetables and cereals |
| | Fortaleza | 8 | Fruits, vegetables, cereals and fruit pulps |
| | João Pessoa | 38 | Fruits, vegetables, cereals, eggs, fruit pulps and mechanically separated fish meat, among others |
| | Palmas | 20 | Fruits, vegetables, cereals and beef |
| | Rio de Janeiro | 97 | Fruits, vegetables, cereals and bay leaves |
| | São Luís | 26 | Fruits, vegetables, cereals and fruit pulps |
| | São Paulo | 17 | Fruits, vegetables, cereals, frozen pork and whole grape juice, among others |
| | Teresina | 24 | Fruits, vegetables, cereals and fruit pulps |
| | **Total** | **354 (94.1%)** | |
| Processed | Fortaleza | 1 | Curd cheese |
| | João Pessoa | 2 | Curd cheese and mozzarella |
| | Palmas | 4 | Wheat-based wafers, *cuca* cake, homemade noodles with eggs and homemade bread |
| | **Total** | **7 (1.9%)** | |
| Ultra-processed | Belém | 3 | Yogurt and creamy fruit sweets |
| | Fortaleza | 1 | Yogurt |
| | João Pessoa | 6 | *Doce de leite*, milk-based drinks, light butter and light *requeijão* |
| | Palmas | 3 | Pumpkin jam, *doce de leite* and blackberry jam |
| | São Paulo | 2 | Milk-based drink, unsalted butter and yogurt |
| | **Total** | **15 (4.0%)** | |
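The shares reported above follow directly from the item counts in Table 4; a minimal arithmetic check (the counts are taken from the table itself):

```python
# Items requested across the 23 public call notices, by NOVA group (Table 4).
counts = {
    "in natura / minimally processed": 354,
    "processed": 7,
    "ultra-processed": 15,
}
total = sum(counts.values())  # 376 items in total
for group, n in counts.items():
    print(f"{group}: {n} ({n / total:.1%})")
# -> 94.1%, 1.9%, 4.0%, matching the percentages reported above.
```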
DISCUSSION
==========

The analysis of the purchasing profile of family-based crops by Brazilian capitals points to an asymmetry between capitals and regions, both in compliance with the current legislation of the PNAE and in the potential to stimulate local production and the supply of food *in natura* in schools. The analysis of territorial distribution by region, considering the differences in area and in the number of capitals and municipalities per region, reveals marked heterogeneity. The Southern region has only three capitals and the smallest territorial area, and it was the region whose capitals allocated the least resources to family farming in 2016. Nevertheless, the Southern region has a higher percentage of municipalities overall that meet the minimum criterion for use of the FNDE funding with family-based agriculture, according to different studies^[@B9],[@B10],[@B17],[@B18]^. This region has stood out for its rural tradition, which is supported by better organizational and management structures^[@B16]^. Studies conducted in municipalities in the three states of the region, Rio Grande do Sul (RS), Santa Catarina (SC) and Paraná (PR), showed that, on average, 70% of the municipalities analyzed in RS and SC used more than 30% of the funding with family farming^[@B17],[@B18]^. The Northern region has seven capitals, including Manaus, the region's most populous capital, and was the region that best employed the resource for the purchase of family-based crops. The unequal use of the funding throughout the country seems not to be related to territorial extension, but possibly to the administrative and management structures that characterize the metropoles. A study conducted in 2012 showed that large municipalities with mixed, decentralized, or outsourced school feeding management and without a nutritionist as the professional technically responsible presented a lower frequency of purchase of food from family agriculture^[@B10]^.
The characterization of the capitals' purchasing profile can point to distinct challenges in the institutional systems and processes required of metropolitan municipalities, which serve a higher number of schoolchildren and therefore need to mobilize resources on a large scale. It is possible to infer that purchasing family crops is additionally difficult in the capitals compared with smaller or less populous municipalities. This difference in purchasing profile between municipalities suggests that the institutional procedures and the bureaucratic network of the capitals may hinder the articulation between schools, secretariats, and the sectors responsible for executing the program^[@B19]^. The difficulty in allocating resources to family agriculture in more urbanized and developed cities has been highlighted in other studies^[@B16],[@B20]^. The specificity of the capital cities may impose difficulties on the purchase of family crops due to the greater distance from agricultural production. Besides, capitals historically have denser and more complex bureaucratic management structures, which can delay adaptation to the new requirements of the PNAE. Capitals mobilize substantial resources in the context of public procurement and, therefore, attract large companies as suppliers. These companies have experience with bidding processes and with political and institutional procedures, which may translate into resistance to the entry of new actors into the dispute for access to the public procurement market. Government sectors seem more resistant to changes in public procurement mechanisms, which require new logics and management criteria under the PNAE. In this context, open competition may jeopardize family farms because of institutional relations and the interests of companies and public sectors^[@B5],[@B21]^. A recurrent strategy is to invoke the concept of cost-effectiveness in bid-related legislation^[@B7],[@B22]^ to justify price definitions that hinder the participation of family farmers in public calls. It should be noted that the legislation regulating public calls for family agriculture provides for differentiated pricing criteria and allows the inclusion of the costs of packaging, charges, and logistics in the final price paid by the government^[@B7],[@B23]^. Nevertheless, misuse of the public policy can sometimes lead to opportunistic behavior on the part of social agents, either by simulating the condition of a family farmer or, in the case of agrarian cooperatives and associations, by appropriating the farmer's profit^[@B24]^. Capital cities have better management structures, greater political representativeness for the implementation of new actions, and enormous scope, which seems to be underutilized as a strategy to strengthen family agriculture and the potential supply of healthier food items^[@B25]^. Even capitals with extensive experience in the field of food and nutritional security, as is the case of Belo Horizonte, are still far below the legal requirements regarding the use of the FNDE funding^[@B5],[@B21]^. On the other hand, the capitals with the lowest FNDE funding, that is, those with the lowest number of enrolled students, can contribute the most to the purchase of family crops. They need to manage fewer resources and are sometimes heavily dependent on federal funding.
The capitals grouped in the highest quartile of the HDI variable were more associated with using less than 30% of the resource with family agriculture; therefore, supposedly more developed capitals are the ones that purchase the least from this segment. A study conducted in small municipalities in western Santa Catarina showed that those with a larger territorial area, population, HDI, number of schools, and school enrollments had more difficulty reaching the minimum of 30% in the use of the funding^[@B16]^. Another study indicates that oscillations in a municipality's capacity to comply with the legislation are related to farmers' production capacity, lack of documentation, and inability to meet the delivery logistics demanded^[@B26]^. It is essential to analyze the type of food primarily demanded in public call notices to understand how the regulation of public purchases impacts the quality of the food supply in schools^[@B7]^. Family farmers are a heterogeneous group in territorial distribution, management structures, and economic planning over time^[@B5]^. Moreover, public call notices are neither homogeneous nor standardized; therefore, although they are designed to facilitate farmers' access to the institutional market, depending on how they are drafted and disseminated, they may pose yet another obstacle for local farmers^[@B27]^. A study conducted in the municipality of Araripe, Ceará, found that the agricultural supply to the PNAE has been predominantly carried out by large companies. It is argued that seasonality, insufficient production volume, and difficulties in logistics due to lack of transportation make it impossible to meet the demands of the menus prepared for schools. Thus, most family crops in Araripe that meet the criteria of the public call notices are minimally processed or processed food items, with added sugar and fat. The authors highlight that the rural population of Ceará practices subsistence agriculture and is not able to adapt to the requirements of the PNAE^[@B26]^. The food items prioritized in the analyzed public call notices were classified as *in natura*, and are therefore favorable to the provision of a healthier diet in schools. However, sugary and highly processed food items were also ordered, in smaller quantities, by five of the capitals, three of them with low spending on family agriculture. It is noteworthy that the legislation of the PNAE does not prohibit the supply of this type of food, although it does limit it^[@B2],[@B23]^. Crop perishability and logistical difficulties, as well as a higher chance of price increases^[@B7]^, can sometimes favor the selection of processed or longer shelf-life food items, such as sweets, to ensure compliance with the legislation, since financial penalties are foreseen for states and municipalities that do not meet the legal requirements without justification^[@B28]^. However, the manager should consider the existence of specific health legislation of public agencies for the purchase of processed or ultra-processed food items^[@B23]^. The PNAE, although very promising and marked by significant advances, still represents only an alternative market for the family farmer^[@B29],[@B30]^. The institutionalization of the purchase of food *in natura*, primarily supplied by family-based farmers, needs to be given meaning within the public management of the financial resources allocated to the PNAE by the FNDE, and the agencies responsible for public purchases must understand the purposes and principles that guide law no.
11,947/2009^[@B2]^, especially in the Brazilian capitals. Metropolises such as the capitals have specificities that require additional investment in infrastructure to meet the logistics demand, as well as intersectoral articulation strategies involving the sectors responsible for public procurement, policy managers, nutritionists, and farmers, along with the technical assistance agencies focused on rural extension throughout the process. The success and full development of this public policy can yield various social benefits, whether by strengthening local food production and markets based on family farming or by providing fresher and healthier food for schoolchildren. The qualification of this process may make it possible to reorient the logic of the sectors responsible for public purchases towards new principles that go beyond the strictly economic perspective in favor of valuing social gains.

CONCLUSION ==========

The purchase of family crops for the PNAE has advanced in the country; however, it still occurs unevenly in the Brazilian capitals, and the funding is used irregularly and unsatisfactorily in most regions. Compliance with the minimum criteria established in the legislation on the use of resources for family agriculture is inversely related to the socioeconomic and demographic indicators of the metropolitan municipalities. The number of public calls available for consultation is small considering the total resources transferred to the capitals, and is therefore insufficient to meet the supply demands of the schools and the goals regarding the inclusion of family farmers in the PNAE. It is noteworthy that the disclosure of public calls is still limited, even in municipalities with greater institutional and financial resources. The predominance of food items *in natura* or minimally processed may represent the fulfillment of the potential for promoting adequate and healthy food in schools envisaged for the PNAE, strengthening it as an essential strategy for health promotion in the school context.

We highlight the limitations of a study based on secondary data, which, although offering a national overview of how socioeconomic and demographic indicators relate to the execution of institutional purchases from family agriculture for the PNAE, lacks analyses of the specificities and institutional characteristics that may facilitate or hinder compliance with the legislation in force in the capitals of the country. Therefore, an important research agenda in this area of public policy is suggested.

Funding: Scientific initiation scholarship -- FAPERJ -- *Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro.*
[^1]: **Authors' Contribution:** Study conception and planning; data collection, analysis, and interpretation; writing and review of the manuscript: PCD. Data collection, bibliographic research, writing of the article: IROB, KCBS. Data interpretation; writing and final review: RMSB, DMF, DSBS, PH, LB. All authors approved the final version of the manuscript.

[^2]: **Conflict of Interests:** The authors declare no conflict of interest.

[^3]: FNDE: National Fund for the Development of Education; HDI: human development index; GDP: gross domestic product

[^4]: ^a^ Parametric variables expressed as mean (standard deviation); Student's *t*-test.

[^5]: ^b^ Nonparametric variables expressed as median (p25; p75); Mann-Whitney test.

[^6]: FNDE: National Fund for the Development of Education; HDI: human development index; GDP: gross domestic product

[^7]: ^a^ Chi-squared test.
Q: How to change questions in a multiple choice activity and pass the score to the next activity I'm working on a quiz application that uses Retrofit to parse an array of questions. There are 10 questions; each question has 4 choices (radio buttons) that should change when the "next" button is clicked. When the button is clicked, the app should save the user's answer, and if the answer is right, the score should increase by 10 points. The total score should be shown in the next activity after the user finishes answering all the questions. I've already looked for references, but I am still confused about how to set the new question text when the "next" button is clicked while, at the same time, storing the user's answer and counting the score. Here's my JSON response: { "error": false, "status": "success", "result": [ { "id": 96, "description": "Meyakini dalam hati, mengucapkan dengan lisan, dan mengamalkan dalam kehidupan sehari-hari adalah arti dari . . . .", "A": "iman", "B": "islam", "C": "ihsan", "D": "takwa", "Answer": "iman", "discussion": "Iman kepada Allah Swt. adalah percaya dengan sepenuh hati bahwa Dia itu ada, diucapkan dengan lisan, dan diamalkan dalam perbuatan sehari-hari." }, { "id": 97, "description": "Fatimah disuruh membeli minyak goreng di sebuah warung. Ketika menerima uang kembalian, ia tahu bahwa jumlahnya lebih dari seharusnya, lalu ia mengembalikannya. Ia sadar bahwa Allah Swt. selalu mengawasi perbuatannya, karena Allah Swt. bersifat . . . .", "A": "al-'Aliim", "B": "al-Khabiir", "C": "as-Samii'", "D": "al-Basiir", "Answer": "al-Basiir", "discussion": "Allah Maha Mengawasi yang berarti juga Allah Maha Melihat (al_Basiir)." }, { "id": 98, "description": "Subhanallah, indahnya alam semesta dengan segala isinya. Semuanya tercipta dengan teratur dan seimbang. Fenomena alam tersebut merupakan bukti bahwa Allah Maha . . . .", "A": "mengetahui", "B": "teliti", "C": "mendengar", "D": "melihat", "Answer": "teliti", "discussion": "Semuanya tercipta dengan teratur dan seimbang yang berarti Allah Maha Teliti." }, { "id": 99, "description": "Hasan selalu berhati-hati dalam setiap ucapan dan perbuatannya, karena ia yakin bahwa Allah Swt. senantiasa mendengarnya. Perbuatan tersebut merupakan pengamalan dari keyakinannya bahwa Allah Swt. bersifat . . . .", "A": "al-'Aliim", "B": "al-Khabiir", "C": "as-Samii'", "D": "al-Basiir", "Answer": "as-Samii'", "discussion": "Allah Swt. senantiasa mendengarnya yang berarti Allah Maha Mendengar (as-Samii')." }, { "id": 100, "description": "Di antara bentuk pengamalan dari keyakinan terhadap al-'Aliim adalah . . . .", "A": "rajin dalam menimba ilmu", "B": "berusaha menghindari kemungkaran", "C": "bersikap dermawan kepada sesama", "D": "bersikap pemaaf kepada sesama", "Answer": "rajin dalam menimba ilmu", "discussion": "Allah Swt. sangat menyukai orang yang rajin mencari ilmu pengetahuan dan mengamalkannya" }, { "id": 101, "description": "Allah Swt. sendirilah yang mengetahui kapan terjadinya hari kiamat, mengetahui apa yang terkandung di dalam rahim, mengetahui kapan akan turun hujan. Allah Swt. Maha Mengetahui merupakan makna dari . . . .", "A": "al-'Aliim", "B": "al-Khabiir", "C": "as-Samii'", "D": "al-Basiir", "Answer": "al-'Aliim", "discussion": "Dari kasus di atas berarti Allah Maha Mengetahui (al-'Aliim)." }, { "id": 102, "description": "Di antara bentuk pengamalan dari keyakinan terhadap al-Khabiir adalah . . .
.", "A": "suka berbagi pengalaman dan pengetahuan", "B": "senang menolong orang yang sedang susah", "C": "menjadi suri teladan bagi orang lain", "D": "bersemangat dan kreatif dalam segala hal", "Answer": "bersemangat dan kreatif dalam segala hal", "discussion": "Allah Swt. menciptakan milyaran makhluk dengan berbagai ragamnya. Semuanya diketahui oleh Allah dengan detail, penuh kecermatan dan kewaspadaan, baik secara lahir maupun batin." }, { "id": 103, "description": "Allah Swt. Maha Mendengar suara apa pun yang ada di alam semesta ini. Pendengaran Allah tidak terbatas, tidak ada satu pun suara yang lepas dari pendengaran-Nya. Allah Swt. Maha Mendengar merupakan makna dari . . . .", "A": "al-'Aliim", "B": "al-Khabiir", "C": "as-Samii'", "D": "al-Basiir", "Answer": "as-Samii'", "discussion": "Allah Maha Mendengar atau disebut juga dengan as-Samii'." }, { "id": 104, "description": "Allah Swt. Maha Melihat segala sesuatu walaupun lembut dan kecil. Allah Swt. pun melihat apa yang ada di bumi dan di langit. Allah Maha Melihat merupakan makna . . . .", "A": "al-'Aliim", "B": "al-Khabiir", "C": "as-Samii'", "D": "al-Basiir", "Answer": "al-Basiir", "discussion": "Allah Maha Melihat atau disebut juga dengan al-Basiir." }, { "id": 105, "description": "Di antara bentuk pengamalan dari keyakinan terhadap al-Basiir adalah . . . .", "A": "introspeksi diri untuk kebaikan", "B": "introspeksi diri untuk kebaikan", "C": "amar ma’ruf nahi munkar", "D": "menjadi suri tauladan bagi orang lain", "Answer": "introspeksi diri untuk kebaikan", "discussion": "Kita diharuskan selalu introspeksi diri untuk melihat kelebihan dan kekurangan kita sendiri agar hidup menjadi lebih terarah, ini merupakan salah satu pengalaman dari al-Basiir" } ] } My corresponding model class public class Task { @SerializedName("id") @Expose private int id_soal; @SerializedName("description") @Expose private String soal; @SerializedName("A") @Expose private String option_A; @SerializedName("B") @Expose private String option_B; @SerializedName("C") @Expose private String option_C; @SerializedName("D") @Expose private String option_D; @SerializedName("Answer") @Expose private String jawaban; @SerializedName("discussion") @Expose private String pembahasan; public Task(int id_soal, String soal, String option_A, String option_B, String option_C, String option_D, String jawaban, String pembahasan) { this.id_soal = id_soal; this.soal = soal; this.option_A = option_A; this.option_B = option_B; this.option_C = option_C; this.option_D = option_D; this.jawaban = jawaban; this.pembahasan = pembahasan; } And my TaskActivity public class TaskActivity extends AppCompatActivity { private ArrayList<Task> tasks; TextView task_question; RadioGroup choices_group; RadioButton choice_A, choice_B, choice_C, choice_D; Button next, previous; ProgressDialog loading; Token auth = PreferencesConfig.getInstance(this).getToken(); String token = "Bearer " + auth.getToken(); int score; protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_banksoal_test); task_question = findViewById(R.id.pertanyaan); choices_group = findViewById(R.id.rg_question); choice_A = findViewById(R.id.option_A); choice_B = findViewById(R.id.option_B); choice_C = findViewById(R.id.option_C); choice_D = findViewById(R.id.option_D); next = findViewById(R.id.bNext); previous = findViewById(R.id.bPrevious); next.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { //???????? 
            }
        });
    }

    @Override
    protected void onResume() {
        super.onResume();
        alert_start();
    }

    public void alert_start() {
        AlertDialog.Builder alertDialog = new AlertDialog.Builder(this);
        alertDialog.setMessage("Mulai?");
        alertDialog.setNegativeButton("Jangan dulu, saya belum siap!", new DialogInterface.OnClickListener() {
            @Override
            public void onClick(DialogInterface dialog, int which) {
                Intent intent = new Intent(TaskActivity.this, BanksoalShelvesActivity.class);
                startActivity(intent);
            }
        });
        alertDialog.setPositiveButton("Ayo, dimulai!", new DialogInterface.OnClickListener() {
            @Override
            public void onClick(DialogInterface dialog, int which) {
                task();
                dialog.dismiss();
            }
        });
        AlertDialog alert = alertDialog.create();
        alert.show();
    }

    public void task() {
        loading = ProgressDialog.show(this, null, "Please wait...", true, false);
        Intent intent = getIntent();
        final int task_id = intent.getIntExtra("task_id", 0);
        int classes = intent.getIntExtra("task_class", 0);
        Call<ResponseTask> call = RetrofitClient
                .getInstance()
                .getApi()
                .taskmaster_task(token, task_id, classes);
        call.enqueue(new Callback<ResponseTask>() {
            @Override
            public void onResponse(Call<ResponseTask> call, Response<ResponseTask> response) {
                loading.dismiss();
                ResponseTask responseTask = response.body();
                Log.d("TAG", "Response " + response.body());
                if (response.isSuccessful()) {
                    if (responseTask.getStatus().equals("success")) {
                        Log.i("debug", "onResponse : SUCCESSFUL");
                        tasks = responseTask.getTasks();
                        showQuestion();
                    } else {
                        Log.i("debug", "onResponse : FAILED");
                    }
                }
            }

            @Override
            public void onFailure(Call<ResponseTask> call, Throwable t) {
                Log.e("debug", "onFailure: ERROR > " + t.getMessage());
                loading.dismiss();
                Toast.makeText(TaskActivity.this, "Kesalahan terjadi.", Toast.LENGTH_LONG).show();
            }
        });
    }

    public void showQuestion() {
        // Note: this loop sets the text for every task in turn, so only the last one remains on screen
        for (int i = 0; i < tasks.size(); i++) {
            task_question.setText(tasks.get(i).getSoal());
            choice_A.setText(tasks.get(i).getOption_A());
            choice_B.setText(tasks.get(i).getOption_B());
            choice_C.setText(tasks.get(i).getOption_C());
            choice_D.setText(tasks.get(i).getOption_D());
        }
    }
}

A: First of all, you need to store the current task index:

private int currentTaskId = 0;

Then you should load the received tasks into your variable:

tasks = responseTask.getTasks();
loadQuestion();

And somewhere in TaskActivity you need to write this method, which shows one question to the user:

private void loadQuestion() {
    Task task = tasks.get(currentTaskId);
    task_question.setText(task.getSoal());
    choice_A.setText(task.getOption_A());
    choice_B.setText(task.getOption_B());
    choice_C.setText(task.getOption_C());
    choice_D.setText(task.getOption_D());

    // (these listeners could also be registered once in onCreate instead of on every call)
    next.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            // check the answer; if it is correct, then score += 10
            if (currentTaskId < tasks.size() - 1) {   // -1 so we never index past the last task
                currentTaskId++;
                loadQuestion();
            } else {
                // open the next activity
            }
        }
    });

    previous.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            if (currentTaskId > 0) {
                currentTaskId--;
                loadQuestion();
            }
            // else: do nothing, we are already on the first question
        }
    });
}
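To fill in the two comments above (checking the answer and passing the score), here is a minimal sketch. The getJawaban() getter, the "score" extra key, and ResultActivity are illustrative assumptions, not names from the original post; any activity that reads the extra will do:

private void checkAnswer() {
    int checkedId = choices_group.getCheckedRadioButtonId();
    if (checkedId == -1) return; // no choice selected
    RadioButton selected = findViewById(checkedId);
    // the JSON's "Answer" field repeats the text of the correct option,
    // so comparing the selected radio button's text against it is enough
    if (selected.getText().toString().equals(tasks.get(currentTaskId).getJawaban())) {
        score += 10;
    }
}

Inside the next button's onClick, call it before advancing:

checkAnswer();
choices_group.clearCheck();
if (currentTaskId < tasks.size() - 1) {
    currentTaskId++;
    loadQuestion();
} else {
    // all questions answered: pass the score to the next activity
    Intent intent = new Intent(TaskActivity.this, ResultActivity.class); // ResultActivity is hypothetical
    intent.putExtra("score", score);
    startActivity(intent);
}

In the receiving activity, read the value back with getIntent().getIntExtra("score", 0).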
Welcome to LLVM! In order to get started, you first need to know some basic information.

First, LLVM comes in three pieces. The first piece is the LLVM suite. This contains all of the tools, libraries, and header files needed to use LLVM. It contains an assembler, disassembler, bitcode analyzer and bitcode optimizer. It also contains basic regression tests that can be used to test the LLVM tools and the Clang front end. The second piece is the Clang front end. This component compiles C, C++, Objective C, and Objective C++ code into LLVM bitcode. Once compiled into LLVM bitcode, a program can be manipulated with the LLVM tools from the LLVM suite. There is a third, optional piece called Test Suite. It is a suite of programs with a testing harness that can be used to further test LLVM's functionality and performance.

The LLVM Getting Started documentation may be out of date. So, the Clang Getting Started page might also be a good place to start.

Here's the short story for getting up and running quickly with LLVM:

Read the documentation. Read the documentation. Remember that you were warned twice about reading the documentation. In particular, the relative paths specified are important.

Checkout LLVM:

  cd where-you-want-llvm-to-live
  svn co http://llvm.org/svn/llvm-project/llvm/trunk llvm

Checkout Clang:

  cd where-you-want-llvm-to-live
  cd llvm/tools
  svn co http://llvm.org/svn/llvm-project/cfe/trunk clang

Checkout Extra Clang Tools [Optional]:

  cd where-you-want-llvm-to-live
  cd llvm/tools/clang/tools
  svn co http://llvm.org/svn/llvm-project/clang-tools-extra/trunk extra

Checkout LLD linker [Optional]:

  cd where-you-want-llvm-to-live
  cd llvm/tools
  svn co http://llvm.org/svn/llvm-project/lld/trunk lld

Checkout Polly Loop Optimizer [Optional]:

  cd where-you-want-llvm-to-live
  cd llvm/tools
  svn co http://llvm.org/svn/llvm-project/polly/trunk polly

Checkout Compiler-RT (required to build the sanitizers) [Optional]:

  cd where-you-want-llvm-to-live
  cd llvm/projects
  svn co http://llvm.org/svn/llvm-project/compiler-rt/trunk compiler-rt

Checkout Libomp (required for OpenMP support) [Optional]:

  cd where-you-want-llvm-to-live
  cd llvm/projects
  svn co http://llvm.org/svn/llvm-project/openmp/trunk openmp

Checkout libcxx and libcxxabi [Optional]:

  cd where-you-want-llvm-to-live
  cd llvm/projects
  svn co http://llvm.org/svn/llvm-project/libcxx/trunk libcxx
  svn co http://llvm.org/svn/llvm-project/libcxxabi/trunk libcxxabi

Get the Test Suite Source Code [Optional]:

  cd where-you-want-llvm-to-live
  cd llvm/projects
  svn co http://llvm.org/svn/llvm-project/test-suite/trunk test-suite

Configure and build LLVM and Clang:

Warning: Make sure you've checked out all of the source code before trying to configure with cmake. cmake does not pick up newly added source directories in incremental builds.

The build uses CMake. LLVM requires CMake 3.4.3 to build. It is generally recommended to use a recent CMake, especially if you're generating Ninja build files. This is because the CMake project is constantly improving the quality of the generators, and the Ninja generator gets a lot of attention. To use LLVM modules on a Win32-based system, you may configure LLVM with -DBUILD_SHARED_LIBS=On. (On ARM, MCJIT does not work well pre-v7, and the old JIT engine is no longer supported.)

Note that Debug builds require a lot of time and disk space. An LLVM-only build will need about 1-3 GB of space. A full build of LLVM and Clang will need around 15-20 GB of disk space. The exact space requirements will vary by system.
(It is so large because of all the debugging information and the fact that the libraries are statically linked into multiple tools.) If you are space-constrained, you can build only selected tools or only selected targets. The Release build requires considerably less space.

The LLVM suite may compile on other platforms, but it is not guaranteed to do so. If compilation is successful, the LLVM utilities should be able to assemble, disassemble, analyze, and optimize LLVM bitcode. Code generation should work as well, although the generated native code may not work on your platform.

Compiling LLVM requires that you have several software packages installed. The table below lists those required packages. The Package column is the usual name for the software package that LLVM depends on. The Version column provides "known to work" versions of the package. The Notes column describes how LLVM uses the package and provides other details.

LLVM is very demanding of the host C++ compiler, and as such tends to expose bugs in the compiler. We are also planning to follow improvements and developments in the C++ language and library reasonably closely. As such, we require a modern host C++ toolchain, both compiler and standard library, in order to build LLVM. For the most popular host toolchains we check for specific minimum versions in our build systems:

Clang 3.1
GCC 4.8
Visual Studio 2015 (Update 3)

Anything older than these toolchains may work, but will require forcing the build system with a special option and is not really a supported host platform. Also note that older versions of these compilers have often crashed or miscompiled LLVM. For less widely used host toolchains such as ICC or xlC, be aware that a very recent version may be required to support all of the C++ features used in LLVM.

We track certain versions of software that are known to fail when used as part of the host toolchain. These even include linkers at times.

GNU ld 2.16.X: Some 2.16.X versions of the ld linker will produce very long warning messages complaining that some ".gnu.linkonce.t.*" symbol was defined in a discarded section. You can safely ignore these messages as they are erroneous and the linkage is correct. These messages disappear using ld 2.17.

GNU binutils 2.17: Binutils 2.17 contains a bug which causes huge link times (minutes instead of seconds) when building LLVM. We recommend upgrading to a newer version (2.17.50.0.4 or later).

GNU Binutils 2.19.1 Gold: This version of Gold contained a bug which causes intermittent failures when building LLVM with position independent code. The symptom is an error about cyclic dependencies. We recommend upgrading to a newer version of Gold.

This section mostly applies to Linux and older BSDs. On Mac OS X, you should have a sufficiently modern Xcode, or you will likely need to upgrade until you do. Windows does not have a "system compiler", so you must install either Visual Studio 2015 or a recent version of mingw64. FreeBSD 10.0 and newer have a modern Clang as the system compiler. However, some Linux distributions and some other or older BSDs sometimes have extremely old versions of GCC. These steps attempt to help you upgrade your compiler even on such a system. However, if at all possible, we encourage you to use a recent version of a distribution with a modern system compiler that meets these requirements.
Note that it is tempting to install a prior version of Clang and libc++ to be the host compiler; however, libc++ was not well tested or set up to build on Linux until relatively recently. As a consequence, this guide suggests just using libstdc++ and a modern GCC as the initial host in a bootstrap, and then using Clang (and potentially libc++).

The first step is to get a recent GCC toolchain installed. The most common distribution on which users have struggled with the version requirements is Ubuntu Precise, 12.04 LTS. For this distribution, one easy option is to install the toolchain testing PPA and use it to install a modern GCC. There is a really nice discussion of this on the Ask Ubuntu Stack Exchange. However, not all users can use PPAs and there are many other distributions, so it may be necessary (or just useful, if you're here you are doing compiler development after all) to build and install GCC from source. It is also quite easy to do these days. For more details, check out the excellent GCC wiki entry, where I got most of this information from.

Once you have a GCC toolchain, configure your build of LLVM to use the new toolchain for your host compiler and C++ standard library. Because the new version of libstdc++ is not on the system library search path, you need to pass extra linker flags so that it can be found at link time (-L) and at runtime (-rpath). If you are using CMake, an invocation along the lines of the sketch below should produce working binaries. If you fail to set rpath, most LLVM binaries will fail on startup with a message from the loader similar to libstdc++.so.6: version `GLIBCXX_3.4.20' not found. This means you need to tweak the -rpath linker flag.
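As a sketch of such an invocation, assuming the new GCC was installed into $HOME/toolchains (both paths are placeholders for your own prefix):

  % mkdir build
  % cd build
  % CC=$HOME/toolchains/bin/gcc CXX=$HOME/toolchains/bin/g++ \
    cmake .. -DCMAKE_CXX_LINK_FLAGS="-Wl,-rpath,$HOME/toolchains/lib64 -L$HOME/toolchains/lib64"

The -L flag lets the linker find the new libstdc++ at link time, and -Wl,-rpath records its location in the binaries so they also find it at runtime.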
When you build Clang, you will need to give it access to a modern C++11 standard library in order to use it as your new host in part of a bootstrap. There are two easy ways to do this: either build (and install) libc++ along with Clang and then use it with the -stdlib=libc++ compile and link flag, or install Clang into the same prefix ($HOME/toolchains above) as GCC. Clang will look within its own prefix for libstdc++ and use it if found. You can also add an explicit prefix for Clang to look in for a GCC toolchain with the --gcc-toolchain=/opt/my/gcc/prefix flag, passing it to both compile and link commands when using your just-built-Clang to bootstrap.

Each file is a TAR archive that is compressed with the gzip program. This will create an 'llvm' directory in the current directory and fully populate it with the LLVM source code, Makefiles, test directories, and local copies of documentation files.

If you want to get a specific release (as opposed to the most recent revision), you can check it out from the 'tags' directory (instead of 'trunk'). The following releases are located in the following subdirectories of the 'tags' directory:

Release 3.5.0 and later: RELEASE_350/final and so on
Release 2.9 through 3.4: RELEASE_29/final and so on
Release 1.1 through 2.8: RELEASE_11 and so on
Release 1.0: RELEASE_1

If you would like to get the LLVM test suite (a separate package as of 1.4), you get it from the Subversion repository, as in the test-suite checkout shown earlier.

Git mirrors are available for a number of LLVM subprojects. These mirrors sync automatically with each Subversion commit and contain all necessary git-svn marks (so, you can recreate git-svn metadata locally). Note that right now mirrors reflect only trunk for each project.

Note: On Windows, first you will want to do git config --global core.autocrlf false before you clone. This goes a long way toward ensuring that line-endings will be handled correctly (the LLVM project mostly uses Linux line-endings).

You can do the read-only Git clone of LLVM via:

% git clone https://git.llvm.org/git/llvm.git/

If you want to check out clang too, run:

% cd llvm/tools
% git clone https://git.llvm.org/git/clang.git/

If you want to check out compiler-rt (required to build the sanitizers), run the analogous clone into llvm/projects, as in the sketch below.

Since the upstream repository is in Subversion, you should use git pull --rebase instead of git pull to avoid generating a non-linear history in your clone. To configure git pull to pass --rebase by default on the master branch, run the configuration command shown in the sketch below. This leaves your working directories on their master branches, so you'll need to checkout each working branch individually and rebase it on top of its parent branch.
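Sketches of the two commands referred to above; the compiler-rt URL follows the same pattern as the llvm and clang mirrors, and the config line only affects the master branch:

  % cd llvm/projects
  % git clone https://git.llvm.org/git/compiler-rt.git/

  % git config branch.master.rebase true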
For those who wish to be able to update an llvm repo/revert patches easily using git-svn, please look in the directory for the scripts git-svnup and git-svnrevert. To perform the aforementioned update steps, go into your source directory and just type git-svnup or git svnup and everything will just work. If one wishes to revert a commit with git-svn but does not want the git hash to escape into the commit message, one can use the script git-svnrevert or git svnrevert, which will take in the git hash for the commit you want to revert, look up the appropriate svn revision, and output a message where all references to the git hash have been replaced with the svn revision.

To commit back changes via git-svn, use git svn dcommit:

% git svn dcommit

Note that git-svn will create one SVN commit for each Git commit you have pending, so squash and edit each commit before executing dcommit to make sure they all conform to the coding standards and the developers' policy. On success, dcommit will rebase against the HEAD of SVN, so to avoid conflict, please make sure your current branch is up-to-date (via fetch/rebase) before proceeding.

The git-svn metadata can get out of sync after you mess around with branches and dcommit. When that happens, git svn dcommit stops working, complaining about files with uncommitted changes. The fix is to rebuild the metadata:

% rm -rf .git/svn
% git svn rebase -l

Please refer to the Git-SVN manual (man git-svn) for more information. While this is using SVN under the hood, it does not require any interaction from you with git-svn. After a few minutes, git pull should get back the changes as they were committed. Note that a current limitation is that git does not directly record file renames, and thus a rename is propagated to SVN as a delete-add combination instead of a file rename.

The SVN revision of each monorepo commit can be found in the commit notes. git does not fetch notes by default; you can fetch them, and configure git to fetch future notes, by adding refs/notes/commits to the remote's fetch refspec and running git fetch. Use git notes show $commit to look up the SVN revision of a git commit. The notes show up in git log, and searching the log is currently the recommended way to look up the git commit for a given SVN revision.

Once checked out from the Subversion repository, the LLVM suite source code must be configured before being built. This process uses CMake. Unlike the normal configure script, CMake generates the build files in whatever format you request, as well as various *.inc files and llvm/include/Config/config.h.

Variables are passed to cmake on the command line using the format -D<variablename>=<value>. The following variables are some common options used by people developing LLVM.

CMAKE_C_COMPILER: Tells cmake which C compiler to use. By default, this will be /usr/bin/cc.

CMAKE_CXX_COMPILER: Tells cmake which C++ compiler to use. By default, this will be /usr/bin/c++.

CMAKE_BUILD_TYPE: Tells cmake what type of build you are trying to generate files for. Valid options are Debug, Release, RelWithDebInfo, and MinSizeRel. Default is Debug.

CMAKE_INSTALL_PREFIX: Specifies the install directory to target when running the install action of the build files.

LLVM_TARGETS_TO_BUILD: A semicolon-delimited list controlling which targets will be built and linked into llc. This is equivalent to the --enable-targets option in the configure script. The default list is defined as LLVM_ALL_TARGETS, and can be set to include out-of-tree targets. The default value includes: AArch64, AMDGPU, ARM, BPF, Hexagon, Mips, MSP430, NVPTX, PowerPC, Sparc, SystemZ, X86, XCore.

LLVM_ENABLE_DOXYGEN: Build doxygen-based documentation from the source code. This is disabled by default because it is slow and generates a lot of output.

LLVM_ENABLE_SPHINX: Build sphinx-based documentation from the source code. This is disabled by default because it is slow and generates a lot of output. Sphinx version 1.5 or later recommended.

LLVM_BUILD_LLVM_DYLIB: Generate libLLVM.so. This library contains a default set of LLVM components that can be overridden with LLVM_DYLIB_COMPONENTS. The default contains most of LLVM and is defined in tools/llvm-shlib/CMakelists.txt.

LLVM_OPTIMIZED_TABLEGEN: Builds a release tablegen that gets used during the LLVM build. This can dramatically speed up debug builds.

Unlike with autotools, with CMake your build type is defined at configuration time. If you want to change your build type, you can re-run cmake with the following invocation:

% cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=type SRC_ROOT

Between runs, CMake preserves the values set for all options. CMake has the following build types defined:

Debug: These builds are the default. The build system will compile the tools and libraries unoptimized, with debugging information, and with asserts enabled.

Release: For these builds, the build system will compile the tools and libraries with optimizations enabled and will not generate debug info. CMake's default optimization level is -O3. This can be configured by setting the CMAKE_CXX_FLAGS_RELEASE variable on the CMake command line.

RelWithDebInfo: These builds are useful when debugging. They generate optimized binaries with debug information. CMake's default optimization level here is -O2. This can be configured by setting the CMAKE_CXX_FLAGS_RELWITHDEBINFO variable on the CMake command line.
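As an illustration of how these variables combine on one command line (the install prefix and target list are placeholders, not recommendations):

  % cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release \
      -DCMAKE_INSTALL_PREFIX=$HOME/llvm-install \
      -DLLVM_TARGETS_TO_BUILD="X86;ARM" \
      SRC_ROOT

This configures a Release build that installs under $HOME/llvm-install and builds only the X86 and ARM backends.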
Once you have LLVM configured, you can build it by entering the OBJ_ROOT directory and issuing the following command:

% make

If the build fails, please check here to see if you are using a version of GCC that is known not to compile LLVM. If you have multiple processors in your machine, you may wish to use some of the parallel build options provided by GNU Make. For example, you could use the command:

% make -j2

There are several special targets which are useful when working with the LLVM source code:

make clean: Removes all files generated by the build. This includes object files, generated C/C++ files, libraries, and executables.
make install: Installs LLVM header files, libraries, tools, and documentation in a hierarchy under $PREFIX, specified with CMAKE_INSTALL_PREFIX, which defaults to /usr/local.
make docs-llvm-html: If configured with -DLLVM_ENABLE_SPHINX=On, this will generate a directory at OBJ_ROOT/docs/html which contains the HTML formatted documentation.

It is possible to cross-compile LLVM itself. That is, you can create LLVM executables and libraries to be hosted on a platform different from the platform where they are built (a Canadian Cross build). To generate build files for cross-compiling, CMake provides a variable CMAKE_TOOLCHAIN_FILE which can define compiler flags and variables used during the CMake test operations. The result of such a build is executables that are not runnable on the build host but can be executed on the target. As an example, an invocation along the following lines can generate build files targeting iOS, and will work on Mac OS X with the latest Xcode (the exact architecture and runtime flags vary; see the toolchain files under cmake/platforms/ in the LLVM source tree):

% cmake -G "Ninja" -DCMAKE_TOOLCHAIN_FILE=<PATH_TO_LLVM_SOURCE>/cmake/platforms/iOS.cmake -DCMAKE_BUILD_TYPE=Release <PATH_TO_LLVM_SOURCE>

The LLVM build system is capable of sharing a single LLVM source tree among several LLVM builds. Hence, it is possible to build LLVM for several different platforms or configurations using the same source tree. Change directory to where the LLVM object files should live:

% cd OBJ_ROOT

Run cmake:

% cmake -G "Unix Makefiles" SRC_ROOT

The LLVM build will create a structure underneath OBJ_ROOT that matches the LLVM source tree. At each level where source files are present in the source tree there will be a corresponding CMakeFiles directory in the OBJ_ROOT. Underneath that directory there is another directory with a name ending in .dir under which you'll find object files for each source.

If you're running on a Linux system that supports the binfmt_misc module, and you have root access on the system, you can set your system up to execute LLVM bitcode files directly, using commands like the ones sketched below (the first command may not be required if you are already using the module).
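The following is a minimal sketch of that registration, assuming lli is installed at /path/to/lli (adjust the path for your install); the echo line tells the kernel to hand any file beginning with the bitcode magic bytes 'BC' to lli:

% mount -t binfmt_misc none /proc/sys/fs/binfmt_misc
% echo ':llvm:M::BC::/path/to/lli:' > /proc/sys/fs/binfmt_misc/register
% chmod u+x hello.bc
% ./hello.bc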
Public header files exported from the LLVM library. The three main subdirectories:

llvm/include/llvm: All LLVM-specific header files, and subdirectories for different portions of LLVM: Analysis, CodeGen, Target, Transforms, etc…
llvm/include/llvm/Support: Generic support libraries provided with LLVM but not necessarily specific to LLVM. For example, some C++ STL utilities and a Command Line option processing library store header files here.
llvm/include/llvm/Config: Header files configured by the configure script. They wrap "standard" UNIX and C header files. Source code can include these header files which automatically take care of the conditional #includes that the configure script generates.

A comprehensive correctness, performance, and benchmarking test suite for LLVM. Comes in a separate Subversion module because not every LLVM user is interested in such a comprehensive suite. For details see the Testing Guide document.

Executables built out of the libraries above, which form the main part of the user interface. You can always get help for a tool by typing "tool_name -help". The following is a brief introduction to the most important tools (a short end-to-end example using some of them appears at the end of this section). More detailed information is in the Command Guide.

bugpoint: bugpoint is used to debug optimization passes or code generation backends by narrowing down the given test case to the minimum number of passes and/or instructions that still cause a problem, whether it is a crash or miscompilation. See HowToSubmitABug.html for more information on using bugpoint.
llvm-ar: The archiver produces an archive containing the given LLVM bitcode files, optionally with an index for faster lookup.
llvm-link: llvm-link, not surprisingly, links multiple LLVM modules into a single program.
lli: lli is the LLVM interpreter, which can directly execute LLVM bitcode (although very slowly…). For architectures that support it (currently x86, Sparc, and PowerPC), by default, lli will function as a Just-In-Time compiler (if the functionality was compiled in), and will execute the code much faster than the interpreter.
llc: llc is the LLVM backend compiler, which translates LLVM bitcode to a native code assembly file.
opt: opt reads LLVM bitcode, applies a series of LLVM to LLVM transformations (which are specified on the command line), and outputs the resultant bitcode. "opt -help" is a good way to get a list of the program transformations available in LLVM. opt can also run a specific analysis on an input LLVM bitcode file and print the results. Primarily useful for debugging analyses, or familiarizing yourself with what an analysis does.

Utilities for working with LLVM source code; some are part of the build process because they are code generators for parts of the infrastructure.

codegen-diff: codegen-diff finds differences between code that LLC generates and code that LLI generates. This is useful if you are debugging one of them, assuming that the other generates correct output. For the full user manual, run "perldoc codegen-diff".
emacs/: Emacs and XEmacs syntax highlighting for LLVM assembly files and TableGen description files. See the README for information on using them.
getsrcs.sh: Finds and outputs all non-generated source files, useful if one wishes to do a lot of development across directories and does not want to find each file. One way to use it is to run, for example, "xemacs `utils/getsrcs.sh`" from the top of the LLVM source tree.
llvmgrep: Performs an "egrep -H -n" on each source file in LLVM and passes to it a regular expression provided on llvmgrep's command line. This is an efficient way of searching the source base for a particular regular expression.
makellvm: Compiles all files in the current directory, then compiles and links the tool that is the first argument. For example, assuming you are in llvm/lib/Target/Sparc, if makellvm is in your path, running "makellvm llc" will make a build of the current directory, switch to directory llvm/tools/llc and build it, causing a re-linking of LLC.
TableGen/: Contains the tool used to generate register descriptions, instruction set descriptions, and even assemblers from common TableGen description files.
vim/: vim syntax-highlighting for LLVM assembly files and TableGen description files. See the README for how to use them.
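To see how a few of these tools fit together, here is a minimal sketch, assuming clang was built or installed alongside LLVM and that a trivial hello.c exists in the current directory; the file names are illustrative:

% clang -O2 -emit-llvm -c hello.c -o hello.bc   # compile C into LLVM bitcode
% lli hello.bc                                  # execute the bitcode directly (JIT where supported)
% opt -O2 hello.bc -o hello.opt.bc              # apply LLVM-to-LLVM optimization passes
% llc hello.opt.bc -o hello.s                   # lower the bitcode to native assembly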
This document is just an introduction on how to use LLVM to do some simple things… there are many more interesting and complicated things that you can do that aren’t documented here (but we’ll gladly accept a patch if you want to write something up!). For more information about LLVM, check out:
Re-inflating the Ball: The Intel Approach to Loan Sales

Charles T. Marshall | Jun 20, 2016

The world's largest microprocessor maker may have a thing or two to teach commercial mortgage lenders about loan sales. Andy Grove, the colorful management guru of Intel in its formative years who died earlier this year, once described the secret to the company's success as more than just taking the ball and running with it. "At Intel," he explained, "you take the ball, let the air out, fold it up and put it in your pocket. Then you take another ball and run with it. When you've crossed the goal you take the first ball out of your pocket, re-inflate it and score two touchdowns instead of one."

In the secondary market game, loan sales have been largely reserved for failed banks and non-performing loans or as an end-of-the-bench substitute for securitization. Facing formidable macroeconomic and regulatory defenses as the current real estate cycle ages, however, lenders are increasingly learning that loan sales offer additional scoring opportunities. The market dislocation beginning in late 2015 and early 2016 caused, according to Gerard Sansosti and Daniel O'Donnell of the debt placement and loan sale advisory groups at HFF, "an uptick in the sale of performing loans, particularly floating rate, as portfolio lenders prune their loan holdings for exposure issues, scratch and dent sales, and attractive pricing." Unacceptable concentrations of loans to one sponsor, in limited or challenged geographic areas or asset types, or with similar loan maturities, are among the exposure issues motivating loan sales.

Portfolio lenders have not been the only loan sellers running with the ball. Confronted with widening spreads and less frequent securitizations, CMBS lenders too are pursuing loan sales as a cost-effective alternative. Private equity lenders, for example, often face both the holding and opportunity cost of having capital tied up in loans subject to increased warehouse interest charges and uncertain exits. Regulatory requirements such as risk retention and increased capital reserves offer additional motivation for banks and CMBS lenders to consider loan sale exits.

Whether to sell, hold or securitize a mortgage loan, and similarly whether to originate or purchase, involves the interplay of a comparative matrix of factors. Transaction costs, retained liability, speed and certainty of exit, and pricing (and whether premium or discount to par) may vary significantly between a securitization and a loan sale exit. The roster of CMBS issuers is substantially smaller than that of potential loan purchase investors, and each has different tolerances for cash flow, location and stabilization of properties, and loan terms. Among investors, institutional purchasers may have more rigorous requirements than high yield investors. And depending on business objectives, each player may read the macroeconomic tea leaves differently. To maximize exit options, loan originators must also consider whether and how to tailor loan terms and underwriting to match the demands of these different purchasers. Non-institutional investors must evaluate whether they have the expertise (and stomach) for hands-on asset management, a cost that also impacts pricing.
Whatever the game plan or the player, whole loan sellers and purchasers must address numerous documentation and legal issues.

Sale Agreement. Loan sales have historically been documented like real estate transactions, with agreements requiring earnest money, a defined diligence period, some negotiated level of loan and property representations, physical delivery of loan documents generally via escrow closings with a title company and damages as remedy for termination and breach. While many transactions still employ this documentation approach, CMBS technology has refined the loan sale process. Industry-accepted representations and warranties and remedies, standard mortgage file documentation, advanced loan data capture and presentation, and custodial possession of documents have streamlined transactions. The loan sale agreement may mirror the terms of a mortgage loan purchase agreement (MLPA) executed in connection with a securitization, but increasingly transactions are documented as simply as a trade confirmation, going directly to an assignment and assumption agreement executed on the date of closing. Additional terms can mimic the scope of an MLPA, including for loans intended for securitization, post-closing securitization cooperation covenants.

Identity of Loan Seller. Purchasers will insist on a parent company as loan seller to backstop loan representations and warranties. If loans have been assigned to affiliates, for example in connection with a loan warehouse financing, the loans will need to be reassigned as of the closing date. Warehouse financing, however, creates additional closing issues.

Representations and Warranties. Securitization reps and warranties represent the standard for loan and property diligence and are obviously the expectation for CMBS loans, so dialing back could impact pricing. More conservatively, representations can be subject to post-closing adjustment to pick up any variation in those actually required for the securitization. In CMBS securitizations, SEC regulations require a certification of loan information by, and imposition of personal liability on, the chief executive officer of the issuer, which is often in turn required from each loan seller. In loan sale agreements, on the other hand, reps and warranties are made by the loan seller only, with no officer-level certification or liability. Portfolio, bridge and seasoned loan sales typically do not include such rigorous reps and warranties. Such loan sellers take the position that no rep or warranty should be provided to the extent a loan or property feature can be determined by due diligence review and that representations should be limited to seller's knowledge or lack of receipt of written notice of an event and subject to all matters contained in the diligence file. The promulgated FDIC loan sale agreement, for example, conveys loans "as-is" with no reps and warranties, but provides for seller repurchase of a loan for certain specified material issues. Robust diligence information is the best answer for limited reps and warranties and enhances loan pricing. Prudent lenders are scrubbing loans as they close in order to ensure compliance with the most stringent reps and warranties and to anticipate either loan sale or securitization exit so that curative items such as missing documents or documentation errors can be corrected and loan and property level information and disclosures can be compiled in advance.

Survival Period. The survival period of reps and warranties is negotiable.
For a non-securitized pool of loans with pricing discount, a survival period of three to six months would be customary. Sales of loans intended for securitization often have a survival period of 18–24 months. As a reference point, absent parties' contractual agreement otherwise, New York law, the typical governing law for loan sale transactions, applies a six-year statute of limitations on alleged breaches of reps and warranties that accrue from the date made (i.e., the date of loan sale closing). Purchasers may start with a "life-of-loan" ask, but that's not market. Property-related conditions that existed at the time of sale can generally be determined within a short period of time, and causation for loan loss lessens with passage of time anyway. Loan document defects can often be determined by pre-closing diligence or during post-closing servicing (though of course the ultimate test for document defects is when remedies are exercised). Leverage in times of volatility may favor CMBS loan purchasers who insist on aligning rep and warranty survivability with their securitization exposure.

Loan Document Schedule. Completeness of the loan document schedule in a loan sale agreement is important and should reference any sourcing/servicing interest strips or rights that may have been previously assigned by separate agreements and survive the sale. If documents are held by a custodian, the custodial receipt and loan document schedule should conform. Unless all such items are held by a custodian, the transfer of a "mortgage file" is generally contemplated promptly after the closing. It is typical for CMBS loans to use an industry-standard mortgage file definition; for other loans, either a list of loan-specific mortgage file contents or a generic definition of items in the seller's possession or control is used.

Remedies. The typical CMBS loan sale remedial formula is a cure period and, if not cured, either an agreed-value settlement or a repurchase obligation. A cure period and cause of action for damages (actual damages, not speculative, punitive, or contingent) in lieu of repurchase obligations is also seen. Market volatility may dictate that the loan sale be documented more conservatively and aligned with securitization risks: the purchase price could be subject to reduction based on a recalculation of value at the purchaser's ultimate securitization to take into account the pricing exposure an actual loan seller faces in the securitization, such as price concessions demanded by a B-piece buyer, allocable securitization transaction expenses and any subordination level required by rating agencies generally or for the specific loan. The repurchase remedy could apply also if, through no fault of the purchaser, the loan is kicked out of the purchaser's intended securitization or if the securitization has not occurred by a specified date. The parties might include a negotiated right for the seller to substitute a separate loan if the purchased loan is deemed defective or kicked out of the securitization.

Closing. Loan sale closings are simplest if loan documents are held by a custodian, in which case only original assignment documents need to be physically delivered at closing. This is particularly true when the seller, purchaser, or both have obtained warehouse/purchase financing secured by pledged loans.
Rather than physical review and delivery of original documents at closing, custodians provide trust receipts and exception reports confirming the documents held, status of whether original or copy/recorded or unrecorded, and any exceptions noted in their review for approval or curative repair. The loan document transfer can be achieved by delivery of transfer documents to the custodian and simple bailee or escrow letters by which the custodian holds the documents for each party pending the closing. If one or more transaction parties have warehouse/purchase financing, a multiparty escrow agreement among loan facility lender(s), custodian(s), seller, purchaser, and escrow agent(s) will be needed to accommodate the assignment and purchase price payment and/or loan facility payoff, which will require lead time due to multiparty signoff.

Assignment Documents. It is important to agree upon the form of assignment documents early in the process, particularly if multiple states and mortgages are involved (which is magnified in single-family rental loans) and/or one or more parties has a loan purchase facility. With loan facility parties as seller and/or purchaser, tiers of assignments to SPE affiliates may be required. This is one of the most time-consuming parts of the loan sale process, so planning and lead time are required.

Whether performing loan sales are just a late-game Hail Mary pass or a permanent addition to a sophisticated secondary market playbook remains to be seen. Several factors may provide the answer: the increasing number of yield-chasing debt funds, with nimble, opportunistic business models, filling the capital gap in commercial real estate financing; the cresting wave of maturing loans originated at the peak of the prior real estate cycle; and streamlining of loan sale transactions with broadly accepted information gathering and legal process technologies developed for CMBS transactions. Just as single-family rental loans emerged as a new asset class in response to the Great Recession, performing loan sales may emerge from the current market volatility and regulation as a vital business option available throughout the game. Reinflating the ball and scoring with performing loan sales has never been easier, or more timely.
While my reflection on the first half of my term as moderator is in the works, I wanted to first offer up some thoughts that I am SURE will get some comments and I hope some good discussion: membership decline.

As you know, the most recent membership numbers were just released and, for various arguable reasons, the PC(USA) declined in membership by 69,381 members. As we see these numbers announced each year, the theorizing and punditry around the decline is nothing new and I suspect it will continue as long as there are people with opinions and who care about the church. The prevailing reasons that are usually sent my way are basically three:

1. We are in decline because we are too liberal, having stopped being a people of The Book and are caving to cultural trends especially around homosexuality.
2. We are in decline because we are far too conservative, no longer live the love that Christ calls us to and the world no longer sees us as a place of welcome.
3. Our 1960s membership trends were but a blip in our history for churches in the United States . . . so numbers need to be taken in context.

Now it is obviously easy to assign blame for our decline in membership, often falling into a far too simple rhetoric that there is indeed only ONE reason for our decline. Regardless of how you value the use of numbers as a measure of worth, I think that we are more nuanced than that and that if we really think about it there are probably multiple reasons for our decline.

Now as a new church development pastor, I have never been solely driven by numbers. Not surprisingly, like most things, I find God speaking to me in the gray, somewhere between only finding worth in numbers and thinking that numbers are silly and irrelevant. I think numbers are an important measurement that can give us some useful indications of trends and developments, but we can also get into trouble when our ONLY drive is numerical. In the end, I want us to impact lives that in turn impact the world and believe that if we are faithful to God's calling upon our lives, we will grow to the size that God hopes us to be. Still, our decline may give us some indications of our life together and I am not immune from offering some thoughts on the issue.

Now I have written upon this before ("Number 1 reason why PC(USA) churches are dying a slow, painful, sad, drawn-out death and other happy thoughts"), but let me add something more as I have continued to listen to and reflect on what I am hearing as Moderator. I believe that one of the main factors in our failure to grow is that we still operate with an institutional worldview that is not built for the fluid, adaptive and complex nature of the world today. Theological and ideological perspectives aside, we – at all levels of our church life – still operate with a 1960's worldview that simply does not speak to the world today. We spoke well to the United States culture during a long stretch of our denominational life, but we have forgotten how to speak to the world in a way that offers a transformational experience of the Gospel life in a Presbyterian context. I grieve this because I have been so fed and formed by my Presbyterian heritage and deep theological history that I am compelled to find ways to meaningfully pass this rich tradition on to my kids. But sadly, as I look around the church, those under 35 are painfully absent. And while many of us would like to hold onto our youthful spirits for as long as we can, 60 is not the new 50 and 40 is not young.
We who hold power and influence in the church must stop pretending that we are the future. We are not. In fact, as those with power and influence in the church, if we do not joyfully embrace our changing roles in our institutional life, we will die with no reason to expect resurrection. Simply put, we must ask ourselves hard questions and learn to adapt if we are to impact the world as Presbyterians for any length of time into the future. To get things started, here are some of the questions I think we need to address:

Is Jesus enough? What ARE our essentials and non-negotiables as we gather as a denominational gathering of the Body of Christ?
Do we live the Trinity? Do we fully understand the nature of living in community and living out our understanding of the Triune God?
Are we committed to connectionalism, and if so, how committed are we to creating healthy Presbyteries? Because unless we have Presbyteries that are vibrant and at the heart of our lives together, we are no longer Presbyterian.
Can we handle an abundance of manifestations of the Presbyterian family where congregations look, feel and operate in drastically different ways?
Can we fathom the idea of the death of some parts of our structural and institutional life together, trusting that where resurrection is to happen it will happen?
Are those who hold power and authority willing to create space for those who are not part of our life but will best be able to help us navigate our way into their world?
Can we find a way for an institution to live the peace of Christ in a world of chaos?
Will we be able to respond well even if the answer is, "We do not have the capacity to adapt; the time of our current way of being is done"?
Can we truly embrace the unknown, but yet joyfully strive to seek God's intentions?

These are obviously not all the questions that we need to ask of ourselves, and as hard as it may be to believe, I would not want to place values on the answers to these questions. But if we do not venture onto some deeper questions about our future, we will never fully be able to navigate our way into who God hopes us to become as a Presbyterian people. So . . . there you have it. What other questions do we need to ask of ourselves? What are more reasons for our decline? Does it even matter? What say ye, Presby blogosphere?

Comments

Jesus Christ demands we measure Medicare overhead as a % of dollars and not per patient (see article below)? Jesus Christ demands we ignore Medicare fraud, when things like preventing fraud are the reasons for 'overhead'? Jesus Christ teaches us that Medicaid and the VA are efficient? Verse, please? Jesus Christ demands we ignore the question of whether it is profits in the US that drive medical innovation, and the lives we may save now will be more than offset by lives lost in the future due to undeveloped medicines? Jesus demands we ignore the question of other nations' lower costs being freeloading on the US, who pays for innovation? Jesus Christ demands we ignore the idea of "regulatory capture", where the platonic ideal of a hypothetical perfect system meets lobbyists for entities to be impacted (or Jesus demands campaign finance 'reform' to make sure the wrong people don't have enough political clout to stop the initiatives Jesus endorses)? Jesus Christ says this is constitutional? You consider that a settled issue?
Does Jesus address the 9th and 10th amendments to the US Constitution, or was he silent about them? http://www.layman.org/news.aspx?article=26247

Presbyterian Church (USA) Stated Clerk Gradye Parsons weighed in on the national health care reform debate Friday by reiterating a resolution from the denomination's 2008 General Assembly that demeans the quality of American health care, condemns profits earned by private insurance companies, dedicates mission money to lobbying efforts and supports the call for a government-run, single-payer system… …A resolution approved by the 218th General Assembly, which met last summer in San Jose, Calif., outlines the denomination's support of a single-payer system of health insurance for the country's uninsured. The action also routed $25,000 from the denomination's mission budget to a political action network called Presbyterian Health, Education and Welfare Association for the purpose of hosting 10 regional, one-day seminars supporting universal health care….

One more for Kevin. An article in Saturday's Tennessean caught my eye. They talked about some of the churches in Middle TN presbytery, including Jim Kitchens of Second Pres.: http://www.tennessean.com/article/20090801/NEWS06/908010329/1017/NEWS03/Plun

"Local churches have grown by doing the basics right, said the Rev. Jim Kitchens, pastor of Second Presbyterian in South Nashville. If they do a good job taking care of youth and children, that can draw in young families. Second Presbyterian has started several initiatives that bring in newcomers. This year, eight young adult volunteers are taking part in the Nashville Epiphany Project. They'll attend the church, live in a Christian community, and volunteer in programs like the Martha O'Bryan Center. The church also is up front about being a more progressive church, and welcoming to people who are gay and lesbian. It's part of the Faith and Justice Congregational Network, with ties to Sojourners, a progressive Christian magazine. That more inclusive view helped bring Kim Huguley and her husband to Second Presbyterian. "We were looking for a church with a bigger view of who is in God's family," she said. "We'd been looking for a church for several months. The first time we walked in, we knew it was a good match." Kitchens tries not to worry about the future of his denomination. Less than half his congregation grew up Presbyterian, and most newcomers come from a number of denominational backgrounds. For most people, he said, denominational ties matter less than making sure a local church is a good fit."

A good example of a Reciprocating Church. Best, john

Now we are talking. Excellent comment, Kevin. There is a lot to go into reasons why we are not affiliating with volunteer organizations. Most of these organizations had their big day post-WW2 as our greatest generation was rebuilding society. Times have changed. Most people I know my age (48 and younger) are so strung out with suburbia, two jobs, who all knows. The last thing my folks see as relevant is the presbytery connection (this goes for all three congregations I have pastored), even as I have been quite involved and always urging and trying to find ways to connect. It isn't that the presbytery is doing anything wrong, nor the people in the congregations. They are just on different paths. You are exactly right about getting involved as a citizen and how the church can facilitate that. For me it is about encouraging involvement, and giving people permission (and the property) to be creative and do stuff.
Frankly, all the worry over this is moot. We are facing major, major changes caused by energy and environment. In the coming years (less than a decade), we will desperately need social networks for basic needs. The church will be re-created again, as it has been throughout history.

Hi Bruce, I appreciate your views and especially the ideas around creating conversational space and a consideration of the death of some parts of our structural and institutional life together. Yes. Resurrection can occur. But it is not likely to occur without our being acted on by an outside force. It occurs to me that God (outside force) often intervenes with honest, authentic, reliable data. In my view, it is precisely this lack of data that keeps any remedy to the membership decline shrouded. I offer what I hope is a shroud-lifting comment that may be resurrection material. You decide.

News of the steepest membership loss in twenty-five years comes as no surprise to Newark Presbytery in New Jersey. We address the evidence of these statistics every day. I am proud of our growing effort to build collaborative energy to increase the capacity of every one of our congregations to be viable, healthy, and effective. Each of our churches is a delivery station of the Good News. Moving forward is a challenge. How we respond to our membership decline is important. I continue to listen and engage in conversation with our denominational upstream in Louisville about our decline. The PC(USA) messages have included: Try harder at what you have been doing; Try something new; Invite neighbors to church; Blend your worship; Become multicultural; Support General Assembly Mission directly; Apply for grants; and in the meantime, Louisville will downsize the denominational structure (again). We still decline.

The reason that these directives often fail to alter our experience of institutional trauma or the congregational outcomes from decades of decline is that Louisville attributes the decline, at least in part, to death, people being removed from the rolls, and to a "gradual" drifting away from our congregations. Gradual drifting? What's gradual about twenty-five years of consistent decline? Even the Pew Forum, whose research was referenced by denominational execs, seems more like a distraction than a reason, as it identifies why people change religious affiliation rather than addressing the real reasons people do not affiliate at all. North Americans have consistently withdrawn their volunteer association affiliation for more than thirty years.

The questions we ask define our assumptions. In this case, the PC(USA) and Pew ask the question: "How do our neighbors choose between Protestant or Roman Catholic affiliation?" suggesting that the focus of our concern is religious affiliation. The critical question is not Protestant v Catholic, or Christian v Muslim v Jewish, etc. The critical, core question we must consider together is: "Why do people fail to affiliate with volunteer associations at all, church or otherwise?" Almost every volunteer association in America has been in decline for decades. From the Boy Scouts, Girl Scouts, AMA, PTA, Elks, Lions, etc., to the political, civic, religious, and professional groups, membership is down. There is a direct correlation between the membership decline of volunteer associations in North America and the associations' lack of community engagement.
Even more consequential, corresponding benefits from these association networks to influence reciprocal behaviors (doing things for each other) have diminished. It has been documented that Americans have steadily reduced their investment in "outside the family" activities. Our North American cultural milieu has normalized self-engagement and isolation. Our increasingly time-shifted ways to connect have corresponded to the rise of social media sites and technologies. We no longer derive value from connecting in person. In short, the church has experienced a reduction in its membership. However, the reduction in membership corresponds to the church's prior failure to return sufficient value to the community outside itself which could have sustained the community gathering "at the church." This destructive cycle has been perpetuated over the decades.

As Presbyterians, we have focused on ourselves, mistakenly believing that our "decline" was a Presbyterian one. We seemed to think it was our problem. How many curriculums, conferences, coaching sessions, and action plans directed us to do something within ourselves and our space without realizing it was our almost narcissistic framing of the problem and our solution that made the situation worse? As a denomination, we missed opportunities to lead a revival of the re-investment of social capital and volunteerism, and instead, with little reflection, followed the status quo.

The good news is that our decline can be reversed by swift and decisive realignment of our congregational resources to tangibly benefit the communities we are located in. Our disconnect from the community reduced the community's connection to us. Instead of merely asking our congregants to bring a friend to church (a fine but insufficient remedy), we must ask our congregants to re-engage in their communities. We need to invite our congregants back into their communities. The Church is peculiarly well-suited for this transformational mandate of re-engaging communities since God has sent the Church into the world, not to be served, but to serve. We can lead our congregations as servants, empowering them to become a Reciprocating Church. A Reciprocating Church is a church that reinvests its experience of God's love into the world, so that their community knows God loves it, too. A Reciprocating Church will ensure congruence between its congregation and building capacities and, by God's grace, be a healthy and effective demonstration of the Christian gospel in the Church and the world. The opportunities to be a Reciprocating Church are huge. Let's explore them, transforming together.

Kevin
Dr. Kevin Yoho, General Presbyter, Newark Presbytery, PC(USA)
kevin@newarkpresbytery.org | http://www.newarkpresbytery.org | http://www.kevinyoho.com | Twitter: @kevinyoho

Susan, Birthrate? That is assuming we don't evangelize and only grow by current members having babies—who then must stay in the PCUSA, a rather big assumption in today's world where there isn't much "brand loyalty" out there. The sort of thinking reflected in that Presbyterians Today bit is part of why we are where we are. Think about this—in 1969 the U.S. population was right at 200 million with reported church attendance of 46%. That would mean 92 million in church during an average week. In 2009 the U.S. population is 306 million and even with the drop of church attendance to 40% by some studies, that means 122 million in church during an average week.
This means that while we were undergoing this long-term loss of members and shrinking in size, the number of people in church increased by 30 million. Let that soak in for a bit and you get a better picture of how bad things are going for us. We shrink while there is great growth in numbers going on. God's blessings to you, Matt Ferguson, Hillsboro, IL

A couple of years ago an article in Presbyterians Today claimed that, sociologically speaking, 70% of our denominational decline since the 1960's was due to decreased birthrates (see July/Aug 2007 here: http://www.pcusa.org/research/gofigure/index.htm). I see this in the small town in which I serve, where many friends in conservative / non-denom churches have three or four children. I showed the article to our Session, but only one Elder and I were of "child-bearing age" and neither of us were interested. Good job Bruce for increasing our odds — I'm just hoping to replace myself. I heard Stacey Johnson (I hope I'm remembering this right, Stacey) suggest last year how amazing it would be for the powers-that-be in the institution to simply hand over the resources, investments, trusts to the young and see what kind of ministry might happen. Who knows what G-d might do?

I am late to this conversation, but thought I would offer a few thoughts anyway. In the history of Christendom, people went to church (or were church members) because they were forced to by either political or social pressure. Now churches find that they need to market their wares. I appreciate your thoughtful questions but I wonder if the reasons aren't due to our ineptitude but to social factors beyond our control? There is an assumption that church is good for people whether they think so or not. We just need to show these boneheads that we can overcome their superficial objections and meet their needs. It could be that folks simply aren't interested and are living perfectly happy and fulfilled lives without us.

Hey Bruce, I'll be thinking on your questions for some time. It looks as though books like "The Starfish and the Spider" should be mainstay guides for church leaders at the moment. Tod Bolsinger spent quite a lot of time reflecting on Presbyterian leadership in this starfish paradigm (http://bolsinger.blogs.com/weblog/starfish-and-the-spider/). It's worth the time!

I'd like to speak to Martha's comment re: why we exist. For our children? For ourselves? For the glory of God (whatever that means? My hunch is that we differ on what this means; back to the liberal/conservative debate). If we take seriously The Great Commission, I believe we are supposed to exist to make disciples of all nations. In other words, we are to exist for those who do not yet follow Jesus. The problem is that we tend to exist for ourselves – to comfort our own harried souls, to serve our own needs and personal preferences, to have a place to be married/buried. The basic conflict in the congregation I serve seems to be this one: some believe the church exists for them and others believe the church exists for those "out there."

Good post, Bruce, as always. I know whenever these numbers come out, people tend to blame the loss on whatever it is that we don't like about the church and then espouse as the answer whatever we think the church should be. A few observations:

1. Certainly you have to see some of this loss as due to issues related to how we are dealing with the homosexuality issue.
A good portion of the loss was from congregations leaving and moving to more theologically conservative denominations. I don't see how you can debate that, and I suspect these kinds of losses will only continue.

2. If someone in business sees a company losing market share, the first thing they do is to look at other companies that are gaining market share. I suspect the main reason we're unwilling to do that is because most of the churches/denominations that are growing are theologically conservative (even if they are progressive and innovative in their programs or worship). Southern Baptists are probably the best example. But look also at Calvary Chapel, Willow Creek, Saddleback, and the wonderfully Calvinistic Mars Hill in Seattle. All rapidly growing while we shrink. All theologically classically orthodox. Why not see what makes them successful and then see how that can apply to us?

3. Bruce, I know there is much to admire in your list and it mirrors a lot of what I hear in the "emergent church" circles. But to be honest, most of the people I hear talking about this haven't really shown it in action (with the exception of Erwin McManus at Mosaic). It's often presented by people who have little real experience in congregations that have actually grown to significant numbers and stayed there. I see even your congregation has shrunk to the point where there are financial difficulties. These are all great theories, I just don't see them working anywhere.

There are some good general "my leaving the church" stories on exchristian.net if you skip over the atheist advocacy.

"Yet, look at the impact and change they were able to bring about." You mean by becoming the State Church after an emperor thought Jesus took his side in one of Rome's civil wars because he had a "vision", with said emperor then ordering the church around and having quite a say in the development of Christianity? 🙂 (Or more likely, a would-be emperor thinking the early church was a big, untapped source of political support for his becoming a military dictator over a big chunk of the world's population, and the church compromising itself to gain political influence over a new military dictator.)

I just don't think Jesus said one word about, for example, whether a national healthcare system will save money thru efficiency or thru arbitrary rationing, or whether it will trade access by having money for access by having political connections. Not to start a debate on that specific topic, but I'm just saying no matter how convinced you are that Jesus supports your end goal, that doesn't mean that Jesus thinks your means will work or agrees with your method. All government activities are coercive – "Jesus would support this goal" doesn't necessarily mean "Jesus would support having armed agents of The State make people act in accordance with this goal thru explicit and implicit threats of violence".

To bring this back to the topic, with fewer people being straight-line political party supporters, maybe the key to church recovery ISN'T becoming indistinguishable from a political party with a laundry list of political actions to support. Do you want to become an organization that circles the wagons around something like Clinton's perjury about adultery or W's questionable use of WMD intelligence (or outright lies, take your pick) to get us into a war just to maintain its power and influence?
That is the mindset that leads to a church circling the wagons around pedophile priests (that's a reference to what's been in the news, not a vague accusation that isn't already in the public sphere).

It's not even a matter of agreement or disagreement. A local Big Baptist church pastor writes for the local paper as a pastor every now and then. His last two articles have been on lowering taxes and teaching creationism in schools. I agreed with the politics of the first and not of the second, but had the same reaction to BOTH when I saw them. My reaction was "you're a minister, shut up about that".

Mark, Many young folks outside the church may think we are too conservative but, if you care to take a look, many of those young folks are searching for something solid and are finding their ways to conservative churches. Just take a look at Tim Keller in the PCA and what he is spreading through his work (recent Christianity Today feature on him), or Mark Driscoll and the whole Acts 29 group, and that the Roman Catholic Church returned to growth under Pope JP 2 and his strong movement back to a more conservative way, and that the Southern Baptist Church has, for the most part, been one of the few growing denominations, and . . . We have a growing number of younger folks here (19 years ago we were 80% over 65; now we likely have the reverse / are far, far younger). And yes, I do think we are politically liberal and that is a problem, because the majority in the pew and on Session and even in the pulpit disagree with those pronouncements and it causes more division.

JS Howard, You have a good viewpoint to consider. I heard Larry Osbourn (Northcoast church) make the observation that the early church didn't have political rallies, etc., and was living in a society worse than ours. Yet, look at the impact and change they were able to bring about. Maybe the link will take you there. If so, Osbourne's comments are on track about 3 minutes in: http://www.northcoastchurch.com/fileadmin/audios/Simple/cs04/cs04player.html If that doesn't work, his sermon from February 2 – 3, track 4, titled Ministry Made Simple.

Bruce, I should say I like your first question on essentials. After all these years (I have written and spoken on this topic for nearly 20 years now—shows you how much I impact): "#1. in essentials, unity; #2. in non-essentials, liberty; #3. in all things, charity." But the key to part 2 in that observation is doing #1. Until we do #1, we will never be able to allow for #2, because we will see everything as part of the debate to getting to #1, and we will then debate and argue over way too many things to make sure something is or isn't included in #1 or doesn't impact things when we finally try to define #1. Define the essentials, keep them few, and then we can more readily move on to liberty in non-essentials. God's blessings to you all, Matt Ferguson, Hillsboro, IL

I have a question for you. Pretend God Himself descends from Heaven and tells you that, no matter what, no politician or arm of government will ever listen to anything the church has to say, so political activity on the part of the church (for "family values" or for "social justice") is pointless. Where would you have the church spend the time, money, and energy that would be freed up from political activity, pronouncements on proposed laws, and other attempts to gain power to make others do what you believe Jesus wants them to do?
Matt, While you may find the PCUSA liberal, I think that if you ask the unchurched – particularly the young unchurched – you'll find that in many ways the PCUSA is considered conservative. The political positions may be liberal, but compared to no faith at all, Robert's Rules and sitting in rows listening to organ music with a bunch of gray-haired folks in suits is very conservative. That's not all the PCUSA is, but that's what it looks like to those who haven't experienced it.

Thanks Bruce for your words. It was very timely for me to read. I serve a small church that is declining, and recently they were celebrating that the reason for existing as a church is "for our children." These children are 20-somethings who are grown, have moved out of town and come home maybe once or twice a year. As a pastor, it made my blood boil. This church (and I suspect it isn't a unique situation) has forgotten why we have a church. I think you are correct that it has nothing to do with being too liberal or conservative. As someone who leans one way or the other depending on the issue, I know that I have a lot of company in the church.

Thank you all for your wonderfully thoughtful responses and interactions. I know it seems like I say that all the time, but hey, truth hurts 😉 I actually have not felt the need to respond to folks on this one because of the depth at which folks have obviously been thinking about this. Just a few thoughts . . .

GEOFF – Thanks for your notes as always. First, as you know I come out of campus ministry and have always been an advocate of campus ministry. At the same time, I think there are some challenges that face ministries that have for so long been so tied into denominational support. I REALLY think that we all need to stop giving so much worth to the institution and stop seeing our only future to be lived with their support, especially fiscally. I think that if there is passionate ministry happening, we need to be able to find those who will support it. Yes, the denomination will need to be supportive as it can, but the realities of the future, like NCD's and other like ministries, are that the very assumption that a national body will or should have "control" over the entities it supports is changing. Campus ministry is the most scrappy and I think most prepared for what is happening in the world as a whole, and some need to embrace the opportunity to see new ways of building a ministry presence on our campuses.

TALITHA – YES, great questions. I guess I don't really care about preserving THIS particular institution but about finding ways that core values of being Presbyterian may be preserved. Now if our connectional nature is over, that is one thing, but if we believe that the nature of our governance is important, there is a need for some kind of institutional structure.

STUSHIE – I never said post-modernism is THE ANSWER, but if we can't wrap our heads and hearts around the very nature of it we will never know what we stand against and what we embrace. I know you may think so, but I am NOT an "everything goes" kind of guy. Now we may disagree about how we interpret our understandings of Jesus and the trinity in our lives, but I can firmly say that in the midst of all of this I DO have absolute faith in Christ.

Post-modernism is not the answer, Bruce. Absolute faith in Christ is. It's as simple as that, but too many people don't want to read or hear the truth. We have strayed from Christ and syncretized our beliefs to fit in with a world view.
We have forgotten that Christ is Sovereign of the World. Our loyalty needs to be re-aligned to Him.

Hey Bruce – Always a thoughtful, reflective post, and I thank you for raising these important questions. Without belaboring what's been said above, mainly because I agree with the majority of the comments as to why we are declining (and it has nothing to do with being too liberal/conservative…), I would simply offer that we need to have some serious conversation and come to some serious conclusions as to what is essential and what is non-essential. I think this conversation particularly needs to take place on the local, congregational level. The young people I am around and in relationship with are seeking inspiration. They want to make a difference. They want to know if the church is a competent and credible vehicle to pour their resources into to make that difference. Because we cannot articulate with any kind of clarity the Gospel or the essentials of our faith, we are unable to present a compelling and inspiring vision to anyone, much less young people who still believe (thankfully!) that the world can change. It would be a truly courageous step if you, as the moderator, would press for this conversation around essentials. Surely we can come to some agreement on some basic tenets…like the Nicene Creed/Apostles' Creed for instance? From there we could engage congregations in a deep study of their identity in light of these "essential tenets", which in turn could lead to a fruitful clarification/articulation of God's mission for them. Peace, Doug

Thanks Bruce for this and your earlier blog. At a small membership church meeting yesterday I stated that the first place we need to start, in terms of understanding and moving through these times of our collective lives, is that God has a future in mind for our church. Unless we grasp the depth of that, then we're only reacting to the chaos around us, instead of engaging (an overused word, admittedly) it. When tectonic plates collide, new worlds are formed. Peace

Bruce, If those are the only 3 theories that you're hearing, then you have to go incognito and visit some churches without a schedule or an invitation. I'm afraid that you may be getting insulated within the institution. Those who are speaking of the 4th theory – that we're shrinking because we are failing to include/inspire the new generations – are right on the money. And more than that … one thing that I learned at the Princeton seminar on Emerging Adulthood (18-29) was that we need to give young people real responsibility AND let them fail a bit while backstopping them. We have become risk-averse and afraid to try new things – or we try them with a top-down huge investment in a "program". This is why things like Beau Weston's "Re-building the Presbyterian Establishment" paper give me fits. They are trying to solve the opposite problem to the one that we have.

1 and 2 (too conservative and too liberal) aren't mutually exclusive. They are both two sides of "Jesus died to give me political power – I know what Jesus wants and I'm going to use The State to make you obey him". I left because of the combination of a too liberal denomination and too conservative churches. Some of us don't like having political positions shoved down our throats by either side.

And another question: how much does it matter that we preserve our institution? Can we be a movement
(instead of, or at least alongside, our institutionality)? If we "die" to the outer observer, have we planted seeds that will spring up in new ways? Bob Coote brought my attention to the fact that the mustard plant (to which Jesus likens the kingdom; now, the kingdom =/= PC(USA), but we can use the same metaphor) is an ANNUAL plant, not a permanent tree. It dies and re-grows every season. Can we be brave enough to die and re-grow every generation? Along the same line, I know a pastor who is thinking she's ready for a change, for a new job… she's thinking of sending out a PIF. Knowing her awesome church, I did slip her something to think about — maybe she'll get a new job, a new position, a new activity description, at the same place! Could it happen on a wider scale? Grace and Peace, Bruce.

Thanks for your timely blog on the church and membership. If the "Generation Theory" writers are correct, we have entered a new generation cycle. This is the time for creating new organizational structures to serve the new generation cycle. The organizations created in the 1940s and 1950s have served well. Now it is time to think about new ways of being church. Personally, I believe that the decline is a positive sign as well as a sign of this period of transition. We might begin with asking how the church can better reflect the radical teachings of the prophets of Israel, Jesus and Paul. We may need to lose our life in order to let God show us our new life. I have further thoughts on my blog: http://www.saltandlightpages.com.

Bruce, thank you for your very timely blog on numbers. If the "Generation Theory" folks are accurate, we are in fact in a period of time when new institutional structures are needed to meet the life of the new generation cycle. The ones created in the 1950s and 60s served well the needs of the generations at the time. Perhaps we need to look at how the decline is an opportunity to follow God's spirit into a new generation cycle that creates a church structure that reflects in a new way the radical teachings of the prophets, Jesus and Paul. A faithful time if we choose to let God make us faithful. It may be time to lose our life in order to find it. I have a few thoughts about a different way of being church on my blog: www.saltandlightpages.com

Good post, Bruce. I think underlying our malaise, as well as that of other mainline churches in North America, is the assumption that we have or even know what "church" is. I don't mean the shape of the church: traditional, emerging/emergent, etc. I mean church in a really basic sense. One way I try to get at that is to ask folks to read Matthew 18:15-20 and then ask them if they can honestly, with a straight face, and no fingers crossed behind their backs, say they are in a church or know of a church that could and would stand for or even see a need for that level of accountability. I have not been in or known of any in my 30 years of ministry in the PCUSA. I believe the acids of individualism and choice and a consumeristic mindset make it nearly impossible for folks to even imagine the kind of commitment and accountability Jesus envisions for his people. And without that, everything else is pretty much window dressing! Peace, Lee

Bruce, not sure where to post, here or FB. Thank you for great reflection! In serving small rural churches I would say that the keyword is Institution. Most of our battles are about controlling or changing the constitution, the safeguard of the Institution.
As you clearly point out, death is the future and the willingness to engage/travel with the Spirit through and into resurrection is our challenge. I believe the PCUSA focuses more on engaging and protecting the institution than on following God. In seminary our church history prof. talked about how the church is the oldest human institution on the planet. I realize now I should have wept at those words. Will we recognize that Calvin and much of the creeds speak to yesterday’s world and not today? Will we be able to stop protecting turf and declare the core essentials? Mine? God: Salvation in Christ Jesus: Led by Spirit. Relationship is more important than knowledge. Rules kill us. Buildings are a close second. Community is not about control but blessing and healing. Worship without unity feels hollow to me. We are still wrapped up in the “if we build it they will come” mentality AND what I hear sometimes from my dear brothers and sisters is “we built it, so they should be coming! What’s wrong with THEM?!” Yet when I read the NT I see disciples who were going OUT into the culture and meeting people and serving them where they were both physically and spiritually. Sadly I think we are too much like the rich, young ruler who could not give up all his possessions to follow Jesus. We in the North American Church have too many things to give up – land, buildings, budgets, programs, traditions, egos. Could we do as Jesus asked his disciples, to go out into all the various towns with no money or provisions? Trusting that someone will feed us and give us a place to sleep? All to perform miraculous healings and share the Good News? Which begs another question: has our church actually produced disciples who would be willing to give everything up to represent Jesus out in the world? Bruce, I believe the short reason for our decline is that we are creating more barriers to Jesus Christ than we are opening doors. I think this is the reason for decline in Christianity throughout the Western world, regardless of denomination. People don’t see Christ in the institutional heritages, occasional political infighting, and well-maintained properties. I don’t think most people see Christ in professional preaching, high-quality music, or well-crafted worship centers, either. And those who do come to know Christ don’t aspire to committee participation, endless intellectual learning, or “friendly family” churches. We, individually and collectively, must reclaim a passionate, outrageous love affair with Christ. We need to grab hold of the experience of Christ that apocalyptically changed our lives, that motivates us to the point of martyrdom, that is the reason and source of new life. I suspect many of our brothers and sisters don’t even know Christ. I don’t say this to slight anyone, but to lift up a very real illness in the Body. That anyone could participate in a church and not intimately know our Lord is lamentable. We must cling to Christ and express him throughout our lives. If we cannot we really should just close the doors. Great post, Geoff. Thank you, thank you, thank you. I am a mother of three young adults who grew up in the PCUSA, and I get really angry sometimes because I feel like the church has utterly let them down. I was just in a meeting last night where the youth program at my church was discussed, and I was a little disturbed that we were patting ourselves on the back for having a basically good youth program.
But at age 18 we dump these students by the wayside, right at the time many are still struggling to understand what they believe in the face of a culture that tells them religion is irrelevant, and – in the case of strident atheists – dangerous. So absolutely, we as a denomination have to address this big time. (And while I’m venting here, I had one Session member tell me, when I suggested we needed to figure out how to reach 18-25 year olds, that “we don’t do enough to reach people our own (middle) age” which is true, but it was a total smokescreen, because ultimately what she was suggesting was to do nothing at all in terms of reaching out to anyone.) Hello Bruce, Thanks for raising this important issue for our church. This is the subject that I wanted to talk with you about in relation to campus ministry because I think it tells a lot about the future of the church. I work at Stanford as a campus minister and as you may know, our denomination along with other mainline denominations, began to cut back on funding of campus ministry back in the 60s through the 80s. So in my capacity as the campus minister for United Campus Christian Ministry (UCCM) at Stanford, I represent the Presbyterians, Methodists, UCC, American Baptists, & Disciples of Christ denominations. All of those denominations together are supporting part of my single half-time position. By comparison, the Catholic community at Stanford has 8 staff positions and the Jewish community has 9 staff! What do you think that tells us about the future of the church? One of the examples I use when I talk to congregations is I explain to them that major companies like Apple, Microsoft, HP, Dell, etc. spend millions of dollars in subsidizing the software and hardware purchases of students on campus for two reasons: 1) they know that this is an unparalleled opportunity to reach their prospective customers and 2) they know that if they can get their product into the hands of these prospective customers now, they will be much more likely to use their product after they leave the university. I believe it is the same situation that we have for communicating the value and importance of the church in the lives of the students. If we think that they will return to the church after they leave, we are making the wrong bet. Thanks much, Geoff Great post, for which I have no answers. The question about why we are declining is too complex to sum up neatly. The answers I may have given a year ago while serving a suburban church are vastly different than the answers I find now that I am serving a rural church. I serve a church that I am dragging back from the edge of becoming a statistic. It is in a town that time has forgotten. The youngest people in our church are 8 and 14, and the next youngest is their mother, 39. Most young adults leave town for college and never come back, or they leave to find jobs. I am knocking myself out to help them let go of past mind-sets and practices that no longer serve them, trying to give them hope that if they are willing to embrace a new vision and mission for themselves in this small, depressed town, they will once again have a vital and transforming ministry. I’ve got key leaders on board, excited about a new future, but is that enough? They can only afford me for about 2 more months, after which, they will, hopefully, continue the work but with a new structure of leadership. The questions you ask, and the answers people give, are good, but they don’t work here where people are leaving town in droves.
I feel that the key here is economic development. That, paired with their new energy and vision, just might keep them from becoming another denominational statistic, and might even help them grow. So the questions about post modernity and young adults and cultural shifts are all contextual. For some of us, the issues are more basic. Bruce… thanks for your thoughtful set of questions regarding the Presbyterian Church which apply to Christianity in general and other traditions as well… first off, while some focus/worry about numbers and size, I am way more concerned about what Kind of Church we are than what Size of Church we were, we are, or will become. Second, if we embraced the mysteries of God, life, love and each other more and let go of thinking we have it all figured out, and let go of dogma and law…then God’s love could flow more naturally in and through all of us. As a gay man and Christian navigating our Church, it is my hope and prayer that we would stop throwing sticks in each others’ path and let God’s love be the bridge among all of God’s children… Bruce… I struggle with those who refuse to admit times are actually changing, as if that devalues the world of our past. I am constantly searching for ways to validate the modern values while straddling the ways of postmodernity. I most appreciate the way you have embraced moving beyond the world of young adults. As an “insider” spokesperson for young adults in the PC(USA) for so long, it is important to recognize the young, too, grow up. While I will still hold onto my young adult status for a few more years, I have put myself in check quite a few times recently. I am no longer the youngest voice or the only young adult voice. It’s a great reminder for those who have been paving the way… we might need to get out of the way to allow even newer ideas to have space… and to meet them with hospitality. Good post Bruce. I enjoyed your article and thought you made some excellent points. Mainly, our culture continues to change around us and as our denomination ages, our overall interest and enthusiasm to stay connected to that change decreases (not in all but in many). I believe we must seek God’s creation of a church within a church. We need to comfort and minister to those who enjoy the status quo but also remember we have a calling to build up the church for new and future generations. Christ is before us and we need to follow him. All the best and in Christ, Tom Bruce Reyes-Chow One of those “consultant” types who spends his time blogging, teaching, speaking and writing. He also happens to be a Presbyterian Teaching Elder, father to three daughters, smug San Franciscan and FANatic of the Oakland Athletics Baseball club. Thanks for reading.
Who Pulls John Gray’s Strings? John Gray, emeritus professor of European Thought at the London School of Economics, is an enigma. He began his intellectual life on the left but moved right in the late 1970s, becoming a fan of Nobel Prize-winning free-market economist F.A. Hayek. Gray’s libertarianism was tempered, however, by studying British philosopher Michael Oakeshott’s critique of “rationalism in politics.” During the 1990s, Gray was associated with New Labour—the center-left ideology that brought Tony Blair to power in Westminster—and he became a prominent critic of global capitalism with his 1998 book False Dawn. Recently he appears to have embraced something of a nihilistic stoicism, whose spirit suffuses The Soul of the Marionette. In these pages he undertakes a sort of jazz improvisation on the theme of human freedom, surveying an omnium-gatherum of earlier writers’ and cultures’ thoughts on the topic from the point of view of a “freedom-skeptic.” Gray sees the modern, supposedly secular belief in human freedom as a creed that will not admit its character: “Throughout much of the world … the Gnostic faith that knowledge can give humans a freedom that no other creature can possess has become the predominant religion.” Gray finds the Gnostic frame of mind even among “hard-headed” scientists: The crystallographer J. D. Bernal … envisioned ‘an erasure of individuality and mortality’ in which human beings would cease to be distinct physical entities … ‘consciousness itself might end or vanish … becoming masses of atoms in space communicating by radiation, and ultimately perhaps resolving itself entirely into light.’ In another vignette of a thinker he finds relevant to his inquiry, Gray discusses the philosophy of the 19th-century Italian writer Giacomo Leopardi, most famous for penning the classic poem “L’Infinito.” Leopardi was a staunch materialist who nevertheless found religion to be a necessary illusion. He understood Christianity as an essential response to the rise of skepticism in Greco-Roman culture; in Leopardi’s view, “What was destroying the [ancient] world was the lack of illusion.” Christianity had now gone into decline, but this was not to be celebrated; as Gray quotes Leopardi, “There is no doubt that the progress of reason and the extinction of illusions produce barbarism.” What was arising from the “secular creeds” of his time was only “the militant evangelism of Christianity in a more dangerous form.” Gray finds Edgar Allan Poe’s vision of a world where “human reason could never grasp the nature of things” congenial and devotes several pages to the American poet. He also takes up the trope of the golem as evinced in Mary Shelley’s Frankenstein, declaring “Humans have too little self-knowledge to be able to fashion a higher version of themselves”—a view on the surface at odds with his later proclamations about the coming age of artificial intelligence. Continuing his odyssey, Gray arrives at the isle—or rather, planet—of Stanislaw Lem’s novel Solaris (which was made into a 2002 movie starring George Clooney). It features a water-covered world involved in “ontological auto-metamorphosis.” According to the “heretical” scientific theories its discovery spawned, the planet has a “sentient ocean”: Lem was prefiguring something like the Gaia hypothesis of James Lovelock that Gray has invoked favorably here and in earlier works. Gray also takes interest in the work of renowned American science fiction writer Philip K.
Dick, who wrote a series of novels that advanced one of the most compelling paranoid metaphysics of our time. Gray notes that Dick is an archetypal Gnostic, as shown by lines like “Behind the counterfeit universe lies God … it is not man who is estranged from God; it is God who is estranged from God.” For Dick, it is unlikely that anyone can ever penetrate to a “true” reality through the veil of illusion: “were we to penetrate [that veil] for any reason, this strange, veil-like dream would reinstate itself retroactively, in terms of our perceptions and in terms of our memory. The mutual dreaming would resume as before…” Dick ultimately concluded that the flawed world he lived in was just a costume concealing the good world that is the true reality. But if this is so, Gray asks, how did this veil come into being? If an all-powerful God created it, then He must have wanted the veil to exist. But if it is the creation of some sub-deity, a Demiurge, then the “top” God is not all-powerful since he could not prevent the veil from coming into being. Of course, this is the ancient problem of theodicy restated in different terms, but it is to Gray’s credit that he recognizes it at play in Dick’s oeuvre. And as Gray notes, Dick was a very modern Gnostic in that he incorporated into his philosophy the idea of an evolution towards higher states of being taking place over time. In fact, it is “not least when it is intensely hostile to religion” that modern thought most embraces tales of the historical redemption of humanity. Gray argues that “All modern philosophies in which history is seen as a process of human emancipation … are garbled versions of [the] Christian narrative.” The next section of the book, called “In the puppet theatre,” begins with a look at the Aztec penchant for mass ritual killing. He quotes anthropologist Inga Clendinnen at length on the gruesome nature of the practice, including descriptions like: “On high occasions warriors carrying gourds of human blood or wearing the dripping skins of their captives ran through the streets … the flesh of their victims seethed in domestic cooking pots; human thighbones, scraped and dried, were set up in the courtyard of the households…” Gray contends the Aztecs were superior to modern state-based killers in that their victims were not “seen as less than human.” But only two pages later he claims, “In the ritual killings, nothing was left of human pride. If they were warriors, the victims were denied any status they had in society” and were “trussed like deer,” which certainly makes it sound as though they were seen as less than human. In any case, Gray views Aztec society as a lesson in the inevitability of human violence. We tamp it down in one place, only to see it pop back up in another. He is skeptical of statistics that seem to show a long-term decline in violence. He cites violence-caused famines and epidemics, deaths in labor camps, the gigantic U.S. prison population, the revival of torture in the most “civilized” societies, and other modern atrocities to call these figures into doubt. And he sees the false sense that we have overcome this human tendency to violence in “enlightened” Western societies as connected to our arrogant approach in dealing with “unenlightened” societies: By intervening in societies of which they know nothing, western elites are advancing a future they believe is prefigured in themselves—a new world based on freedom, democracy and human rights. 
The results are clear—failed states, zones of anarchy and new and worse tyrannies; but in order that they may see themselves as world-changing figures, our leaders have chosen not to see what they have done. Gray turns his attention to French Marxist Guy Debord, finding “nothing of interest” in his standard Marxist schema but noting that Debord was ahead of his time in analyzing celebrity. With work no longer giving life meaning, it is necessary that our “culture of celebrity” offers everyone “fifteen minutes of fame” to reconcile us to the “boredom of the rest of [our] lives.” He quotes Debord on the rising social importance of “media status”: “Where ‘media status’ has acquired infinitely more importance than the value of anything one might actually be capable of doing, it is normal for this status to be readily transferable…” This quote gets at the heart of why in 2015 we see headline coverage of a dispute between singer Elton John and fashion designers Domenico Dolce and Stefano Gabbana on the proper form for the family. Are fashion designers or pop songwriters experts on child development or the ethics of the family? If not, why is anyone paying any attention to this feud? Well, because they are celebrities with high “media status,” and that status is “readily transferable” to any other field whatsoever. Bored modern individuals are also rootless. Gray sees the rise of the surveillance state as tied to that condition: When people are locked into local communities they are subject to continuous informal monitoring of their behaviour. Modern individualism tends to condemn these communities because they repress personal autonomy … The informal controls on behavior that exist in a world of many communities are unworkable in a world of highly mobile individuals, so … near-ubiquitous technological monitoring is a consequence of the decline of cohesive societies that has occurred alongside the rising demand for individual freedom. As The Soul of the Marionette draws to a close, Gray heads off into a sort of nature mysticism where his thinking is—to me, at least—at its most obscure. Considering climate change, he claims: “Whatever is done now, human expansion has triggered a shift that will persist for thousands of years. A sign of the planet healing itself, climate change will continue regardless of its impact on humankind.” But how does Gray know climate change is a “sign of the planet healing itself,” rather than, say, a sign of its decline or something the planet itself is completely indifferent to? Gray’s gloomy vision seeps through in his prognosis for the human race too: “However it ends, the Anthropocene”—the epoch of humanity’s rule—“will be brief.” Again, I wonder how Gray knows this? Here he appears as the anti-Hegel, somehow sussing out the future of man much like the German philosopher, but from pessimistic rather than optimistic presuppositions. Although Gray is an atheist and a materialist of some sort or another, he correctly understands what science can and can’t tell us: Nothing carries so much authority today as science, but there is actually no such thing as ‘the scientific world-view.’ Science is a method of inquiry, not a view of the world. Knowledge is growing at accelerating speed; but no advance in science will tell us whether materialism is true or false, or whether humans possess free will. He also gets at the deep meaning behind religious stories: “being divided from yourself goes with being self-aware. 
This is the truth in the Genesis myth: the Fall is not an event at the beginning of history, but the intrinsic condition of self-conscious beings.” (Albert Camus, like Gray a nonbeliever, understood this very well: see his novel The Fall.) Yet there is a problem with the coherence of Gray’s outlook. He urges us to adopt a stoical attitude towards our predicament as marionettes. But if we are free to choose our attitude, why are we not also free to make other choices about our lives? Then again, perhaps Gray isn’t really to blame for this incoherence: it could be that some unknown puppeteer, pulling on Gray’s strings, made him write this book. 11 Responses to “Who Pulls John Gray’s Strings?” Most of this is way, way over my head. I guess that’s why I’m so delighted to learn that even an acclaimed modern thinker like John Gray sometimes writes simple things that I can actually understand – and agree with: “By intervening in societies of which they know nothing, western elites are advancing a future they believe is prefigured in themselves — a new world based on freedom, democracy and human rights. The results are clear — failed states, zones of anarchy and new and worse tyrannies; but in order that they may see themselves as world-changing figures, our leaders have chosen not to see what they have done.” Nothing carries so much authority today as science, but there is actually no such thing as ‘the scientific world-view.’ Science is a method of inquiry, not a view of the world. Knowledge is growing at accelerating speed; but no advance in science will tell us whether materialism is true or false, or whether humans possess free will. Gray seems to do that annoying thing here where academics project their own ignorance. Neuroscience does, in fact, have a lot to say on the topic of free will (or lack thereof), and physics continues to do an excellent job of broadening our scope of the physical world, thus making words like “materialism” kind of pointless. I’ve mostly read John Gray in short articles written for the London Review and other such periodicals, and he strikes me as just another in the long line of, if you like, ‘popular’ philosophers––’popular’ meaning not widely read, but more or less non-technical and not substantially engaged in ‘academic’ philosophy––who locate the faults of modern society in its ideals: in the main ‘progress’ and ‘rationalism,’ maybe ‘materialism’ as well. But they never manage to argue convincingly that ideas drive history––at best, a rationalist, progressive materialism is an incoherent ideological cocktail of ideas assimilated from different philosophical traditions which, very possibly, fit together only so as to serve the broader political, economic, and social interests of modern Western capitalist states. It reminds me of Bertrand Russell blaming Nazism and communism entirely on the German Romantic philosophers––those theorists had quite a bit of influence in the British universities (F.H. Bradley, Bernard Bosanquet) prior to the Great War, at which point, very conveniently, Hegel and Nietzsche became the scapegoats du jour and ideas which had been entirely present in England without leading to British totalitarianism (Locke’s liberal empiricism was good enough for British imperialism anyhow) were suddenly explanations for German aggression in Europe. In sum: either address the peculiarities of how ideas serve history, or spare me the philosopher’s prejudice about ideas determining history.
And on a pedantic note: Stanislaw Lem’s “Solaris” was first adapted into a film by Andrei Tarkovsky in 1972. Sorry, as a film buff, I couldn’t let that oversight slide––you can’t mention the 2002 Soderbergh film and leave the classic Tarkovsky by the wayside (much better, much more famous). Good piece. I look forward to reading John Gray’s latest. One nitpick, however: Avoid at all costs the 2002 film version of Solaris and check out instead Andrei Tarkovsky’s 1972 classic, which is a far more realistic meditation on space travel and its effects on the human psyche. Science cannot ever answer the question of being: “why is there anything at all?” And neuroscience can never transcend the ego, properly understood. Phenomenology retains its legitimacy as a method no matter how refined the wielders of scalpels can become. John Gray would find an ill omen in a rainbow. One of the first documents found (Egyptian, 3000 BCE) talked about the tragedy (and absurdity) of life. Humanity has been going to hell for a long, long time. We ain’t going nowhere. Gray should not only stop to smell roses, he should plant a few for himself and others. Human history makes it easy to think any positive action is naive futility. It is also a hollow excuse for inaction. Even the smallest of good deeds makes the world a little bit better, if only for a moment. Mele, of course, demonstrates that misinterpretations of findings in neuroscience combined with philosophical ignorance on the part of neuroscientists do not demonstrate the absence of free will, but the lack of a basic education in the humanities on the part of neuroscientists. For those, contra Einstein, who want to keep it simpler than is possible, I recommend avoiding Mele’s book at all costs. There is also Denis Noble’s book, The Music of Life, addressing the failure of reductionism, and the increasing empirical case for holism as demonstrated by the use of mathematical complex systems for biological modeling. And then there is E.O. Wilson and others reviving group selection in evolution, and people like Peter Turchin who are using ideas like group selection and complex systems to model the development of human society. Needless to say, the underlying anthropology is much closer to Aristotle than Locke (which to the philosophically astute means the underlying ontology is not only post-Newtonian but also pre-Modern). “…This is the truth in the Genesis myth: the Fall is not an event at the beginning of history, but the intrinsic condition of self-conscious beings.” Both are true. The Fall is an event at the beginning of history AND the intrinsic condition of self-conscious beings. It might be that the Fall is the outcome of creating self-conscious beings and is thus intrinsic to them. We inherit the condition by being human; the Fall captures that reality.
= Post Release (Successful)
:Notice: Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at. http://www.apache.org/licenses/LICENSE-2.0 . Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
:page-partial:

The release process consists of:

* the release manager xref:comguide:ROOT:cutting-a-release.adoc[cutting the release]
* members of the Apache Isis PMC xref:comguide:ROOT:verifying-releases.adoc[verifying] and voting on the release
* the release manager performing post-release tasks, for either a successful or an xref:comguide:ROOT:post-release-unsuccessful.adoc[unsuccessful] vote (the former is documented below)

For a vote to succeed, there must be at least three +1 votes from PMC members, and the vote must have been open at least 72 hours. If there are not three +1 votes after this time, then it is perfectly permissible to keep the vote open longer.

This section describes the steps to perform if the vote has been successful.

== Inform dev ML

Post the results to the `dev@isis.a.o` mailing list:

[source,subs="attributes+"]
----
[RESULT] [VOTE] Apache Isis Core release {page-isisrel}
----

using the body (alter last line as appropriate):

[source]
----
The vote has completed with the following result :

+1 (binding): ... list of names ...
+1 (non binding): ... list of names ...

-1 (binding): ... list of names ...
-1 (non binding): ... list of names ...

The vote is SUCCESSFUL.

I'll now go ahead and complete the post-release activities.
----

== Release to Maven Central

CAUTION: We release to Maven Central before anything else; we don't want to push the git tags (an irreversible action) until we know that this has worked ok.

From the http://repository.apache.org[ASF Nexus repository], select the staging repository and select 'release' from the top menu.

image::release-process/nexus-release-1.png[width="600px",link="{imagesdir}/release-process/nexus-release-1.png"]

This moves the release artifacts into an Apache releases repository; from there they will be automatically moved to the Maven repository.

== Set environment variables

As we did for the cutting of the release, we set environment variables to parameterize the following steps:

[source,bash,subs="attributes+"]
----
export ISISJIRA=ISIS-9999                    # <.>
export ISISTMP=/c/tmp                        # <.>
export ISISREL={page-isisrel}                # <.>
export ISISRC=RC1                            # <.>
export ISISBRANCH=release-$ISISREL-$ISISRC
export ISISART=isis
env | grep ISIS | sort
----
<.> set to an "umbrella" ticket for all release activities. (One should exist already, xref:comguide:ROOT:post-release-successful.adoc#create-new-jira[created at] the beginning of the development cycle that is now completing.)
<.> adjust by platform
<.> adjust as required
<.> adjust as necessary if there was more than one attempt to release

Open up a terminal, and switch to the correct release branch:

[source,bash,subs="attributes+"]
----
git checkout $ISISBRANCH
----

== Update tags

Replace the `-RCn` tag with another without the qualifier.
You can do this using the `scripts/promoterctag.sh` script; for example:

[source,bash,subs="attributes+"]
----
sh scripts/promoterctag.sh $ISISART-$ISISREL $ISISRC
----

This script pushes the tag under `refs/tags/rel`. As per Apache policy (communicated on 10th Jan 2016 to Apache PMCs), this path is 'protected' and is unmodifiable (guaranteeing the provenance that the ASF needs for releases).

== Update JIRA

=== Close tickets

Close all JIRA tickets for the release, or move them to future releases if not yet addressed. Any tickets that were partially implemented should be closed, and new tickets created for the functionality on the ticket not yet implemented.

=== Generate Release Notes

From the root directory, generate the release notes for the current release, in Asciidoc format; eg:

[source,bash,subs="attributes+"]
----
sh scripts/jira-release-notes.sh ISIS $ISISREL > /tmp/1
----

[NOTE]
====
This script uses 'jq' to parse JSON. See the script itself for details of how to install this utility.
====

=== Mark the version as released

In JIRA, go to the link:https://issues.apache.org/jira/plugins/servlet/project-config/ISIS/versions[administration section] for the Apache Isis project and update the version as being released.

In the link:https://issues.apache.org/jira/secure/RapidBoard.jspa?rapidView=87[Kanban view] this will have the effect of marking all tickets as released (clearing the "done" column).

[#create-new-jira]
=== Create new JIRA

Create a new JIRA ticket as a catch-all for the _next_ release.

== Update Release Notes

In the main `isis` repo (ie containing the asciidoc source):

* Create a new `relnotes.adoc` file to hold the JIRA-generated release notes generated above.
+
This should live in `antora/components/relnotes/modules/ROOT/pages/yyyy/vvv/relnotes.adoc`
** where `yyyy` is the year
** where `vvv` is the version number
* Update the `nav.adoc` file to reference these release notes
+
In `antora/components/relnotes/ROOT/nav.adoc`
* Update the table in the `about.adoc` summary
+
In `antora/components/relnotes/ROOT/pages/about.adoc`
* update the `doap_isis.rdf` file (which provides a machine-parseable description of the project) with details of the new release. Validate using the http://www.w3.org/RDF/Validator/[W3C RDF Validator] service.
+
TIP: For more on DOAP files, see these link:http://projects.apache.org/doap.html[Apache policy docs].
* Update the link:https://github.com/apache/isis/blob/master/STATUS[STATUS] file (in the root of Apache Isis' source) with details of the new release.
* commit the changes
+
[source,bash,subs="attributes+"]
----
git add .
git commit -m "$ISISJIRA: updates release notes, STATUS and doap_isis.rdf"
----

== Release Source Zip

As described in the link:http://www.apache.org/dev/release-publishing.html#distribution_dist[Apache documentation], each Apache TLP has a `release/TLP-name` directory in the distribution Subversion repository at link:https://dist.apache.org/repos/dist[https://dist.apache.org/repos/dist]. Once a release vote passes, the release manager should `svn add` the artifacts (plus signature and hash files) into this location. The release is then automatically pushed to http://www.apache.org/dist/[http://www.apache.org/dist/] by `svnpubsub`. Only the most recent release of each supported release line should be contained here; old versions should be deleted.

Each project is responsible for the structure of its directory.
The directory structure of Apache Isis reflects the directory structure in our git source code repo:

[source]
----
isis/
  core/
----

If necessary, checkout this directory structure:

[source,bash]
----
svn co https://dist.apache.org/repos/dist/release/isis isis-dist
----

Next, add the new release into the appropriate directory, and delete any previous release. The `upd.sh` script can be used to automate this:

[source,bash]
----
old_ver=$1
new_ver=$2

# constants
repo_root=https://repository.apache.org/content/repositories/releases/org/apache/isis
zip="source-release.zip"
asc="$zip.asc"
md5="$zip.md5"

#
# isis-core
#
type="core"
fullname="isis-parent"

pushd isis-core
curl -O $repo_root/$type/$fullname/$new_ver/$fullname-$new_ver-$asc
svn add $fullname-$new_ver-$asc
curl -O $repo_root/$type/$fullname/$new_ver/$fullname-$new_ver-$md5
svn add $fullname-$new_ver-$md5
curl -O $repo_root/$type/$fullname/$new_ver/$fullname-$new_ver-$zip
svn add $fullname-$new_ver-$zip
svn delete $fullname-$old_ver-$asc
svn delete $fullname-$old_ver-$md5
svn delete $fullname-$old_ver-$zip
popd
----

[source,bash,subs="attributes+"]
----
sh upd.sh [previous_release] {page-isisrel}
----

The script downloads the artifacts from the Nexus release repository, adds the artifacts to subversion and deletes the previous version.

Double check that the files are correct; there is sometimes a small delay in the files becoming available in the release repository. It should be sufficient to check just the `.md5` or `.asc` files, to confirm that they look valid (ie aren't HTML 404 error pages):

[source,bash,subs="attributes+"]
----
vi `find . -name '*.md5'`
----

Assuming all is good, commit the changes:

[source,subs="attributes+"]
----
svn commit -m "publishing isis source releases to dist.apache.org"
----

If the files are invalid, then revert using `svn revert . --recursive` and try again in a little while.

== Final website updates

Apply any remaining documentation updates:

* If there have been documentation changes made in other branches since the release branch was created, then merge these in.
* If there have been updates to any of the schemas, copy them over:
** copy the new schema(s) from
+
`api/schema/src/main/resources/o.a.i.s.xxx`
+
to its versioned:
+
`antora/supplemental-ui/schema/xxx/xxx-ver.xsd`
** ensure the non-versioned is same as the highest versioned
+
`antora/supplemental-ui/schema/xxx/xxx.xsd`
* Commit the changes:
+
[source,bash,subs="attributes+"]
----
git add .
git commit -m "$ISISJIRA: merging in final changes to docs"
----

We are now ready to xref:#generate-website[generate the website].

[#generate-website]
== Generate website

We use Antora to generate the site, not only for the version being released but also for any previous versions listed in `site.yml`. This is done using the `content.sources.url[].branches` properties.

We use branches for all cases - note that the branch name appears in the generated UI. If there are patches to the documentation, we move the branches.

We therefore temporarily modify all of the `antora.yml` files (and update the `index.html` file) and create a branch for this change; then we update `site.yml` with a reference to that new branch. All of this is changed back afterwards.

=== Create doc branch

First, we prepare a doc branch to reference:

* Update all `antora.yml` files, eg using an IDE:
+
** `version: latest` -> `version: {page-isisrel}`
* Commit all these changes:
+
[source,bash,subs="attributes+"]
----
git add .
git commit -m "$ISISJIRA: bumps antora.yml and index.html to $ISISREL"
----

We now create a branch to reference in the `site.yml`, later on.

* We create the `{page-isisrel}` branch.
+
This mirrors the "rel/isis-{page-isisrel}" used for the formal (immutable) release tag, but is a branch because it allows us to move it, and must have this simplified name as it is used in the "edit page" link of the site template.
+
[source,bash,subs="attributes+"]
----
git branch {page-isisrel}
git push origin {page-isisrel}
----

Finally, revert the last commit (backing out changes to `antora.yml` files):

[source,bash,subs="attributes+"]
----
git revert HEAD
----

=== Update `index.html` & `site.yml` & generate

Lastly, we update `index.html` and then `site.yml`:

* Update the home page of the website, `antora/supplemental-ui/index.html`
+
Note that this isn't performed in the docs branch (xref:#create-doc-branch[previous section]) because the supplemental files are _not_ versioned as a doc component:
** update any mention of `master` -> `{page-isisrel}`
+
This should be the two sets of starter app instructions for helloworld and simpleapp.
** update any mention of `latest` -> `{page-isisrel}`
+
This should be in hyperlinks, `<a href="docs/...">`
* Now update `site.yml`
+
This will reference the new branch (and any previous branches). Every content source needs to be updated:
+
** `branches: HEAD` -> `branches: {page-isisrel}`
* commit this change, too (there's no need to push):
+
[source,bash,subs="attributes+"]
----
git add .
git commit -m "$ISISJIRA: adds tag to site.yml"
----

We are now in a position to actually generate the Antora website:

* generate the website:
+
[source,bash,subs="attributes+"]
----
sh preview.sh
----
+
This will write to `antora/target/site`; we'll use the results in the xref:#publish-website[next section].

Finally, revert the last commit (backing out changes to `site.yml`):

[source,bash,subs="attributes+"]
----
git revert HEAD
----

[#update-the-algolia-search-index]
== Update the Algolia search index

=== Index the site

Create an `algolia.env` file holding the `APP_ID` and the admin `API_KEY`, in the root of `isis-site`:

[source,ini]
.algolia.env
----
APPLICATION_ID=...
API_KEY=...
----

CAUTION: This file should not be checked into the repo, because the API_KEY allows the index to be modified or deleted.

We use the Algolia-provided link:https://hub.docker.com/r/algolia/docsearch-scraper[docker image] for the crawler to perform the crawl (as per the link:https://docsearch.algolia.com/docs/run-your-own/#run-the-crawl-from-the-docker-image[docs]):

[source,bash]
----
cd content
docker run -it --env-file=../algolia.env -e "CONFIG=$(cat ../algolia-config.json | jq -r tostring)" algolia/docsearch-scraper
----

This posts the index up to the link:https://algolia.com[Algolia] site.

NOTE: Additional config options for the crawler can be found link:https://www.algolia.com/doc/api-reference/crawler/[here].
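For orientation, the `CONFIG` value handed to the scraper is the JSON crawler configuration kept in `algolia-config.json`. The sketch below is illustrative only (the index name, start URL and selectors are assumptions; the `algolia-config.json` checked into the repo is authoritative):

[source,json]
----
{
  "index_name": "apache-isis",
  "start_urls": ["https://isis.apache.org/"],
  "selectors": {
    "lvl0": "h1",
    "lvl1": "h2",
    "lvl2": "h3",
    "text": "p, li"
  }
}
----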
[#publish-website]
== Publish website

We now copy the results of the Antora website generation over to the `isis-site` repo:

* in the `isis-site` repo, check out the `asf-site` branch:
+
[source,bash,subs="attributes+"]
----
cd ../isis-site

git checkout asf-site
git pull --ff-only
----
* still in the `isis-site` repo, delete all the files in `content/` _except_ for the `schema` and `versions` directories:
+
[source,bash,subs="attributes+"]
----
pushd content
for a in $(ls -1 | grep -v schema | grep -v versions)
do
  rm -rf $a
done
popd
----
* Copy the generated Antora site to the `isis-site` repo's `content/` directory:
+
[source,bash,subs="attributes+"]
----
cd ../isis
cp -Rf antora/target/site/* ../isis-site/content/.
----
* Back in the `isis-site` repo, commit the changes and preview:
+
[source,bash,subs="attributes+"]
----
cd ../isis-site
git add .
git commit -m "$ISISJIRA: production changes to website"
sh preview.sh
----
* If everything looks ok, then push the changes to make live, and switch back to the `isis` repo:
+
[source,bash,subs="attributes+"]
----
git push origin asf-site
cd ../isis
----

== Merge in release branch

Because we release from a branch, the changes made in the branch should be merged from the release branch back into the `master` branch.

In the `isis` repo (adjust if not on RC1):

[source,bash,subs="attributes+"]
----
git checkout master                                   # update master with latest
git pull
git merge release-{page-isisrel}-RC1                  # merge branch onto master
git push origin --delete release-{page-isisrel}-RC1   # remote branch no longer needed
git branch -d release-{page-isisrel}-RC1              # branch no longer needed
----

== Bump \{page-isisrel} in `site.yml`

In the `site.yml` file, bump the version of `\{page-isisrel}`, and commit.

== Update the ASF Reporter website

Log the new release in the link:https://reporter.apache.org/addrelease.html?isis[ASF Reporter website].

== Announce the release

Announce the release to the link:mailto:users@isis.apache.org[users mailing list]. For example, for a release of Apache Isis Core, use the following subject:

[source,subs="attributes+"]
----
[ANN] Apache Isis version {page-isisrel} Released
----

And use the following body (summarizing the main points as required):

[source,subs="attributes+"]
----
The Apache Isis team is pleased to announce the release of Apache Isis {page-isisrel}.

New features in this release include:

* ...

Full release notes are available on the Apache Isis website at [1].

You can access this release directly from the Maven central repo [2].
Alternatively, download the release and build it from source [3].

Enjoy!

--The Apache Isis team

[1] http://isis.apache.org/relnotes/{page-isisrel}/about.html
[2] https://search.maven.org
[3] https://isis.apache.org/docs/{page-isisrel}/downloads/how-to.html
----

== Blog post

link:https://blogs.apache.org/roller-ui/login.rol[Log onto] the http://blogs.apache.org/isis/[Apache blog] and create a new post. Copying-and-pasting the above mailing list announcement should suffice.

== Update dependencies

With the release complete, now is a good time to bump versions of dependencies (so that there is a full release cycle to identify any possible issues).

You will probably want to create a new JIRA ticket for these updates (or, if minor, then use the "catch-all" JIRA ticket raised earlier for the next release).
=== Merge in any changes from `org.apache:apache`

Check (via link:http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache%22%20a%3A%22apache%22[search.maven.org]) whether there is a newer version of the Apache parent `org.apache:apache`. If there is, merge these changes into the `isis-parent` POM.

=== Update plugin versions

The `versions-maven-plugin` should be used to determine if there are newer versions of any of the plugins used to build Apache Isis. Since this goes off to the internet, it may take a minute or two to run:

[source,bash]
----
mvn versions:display-plugin-updates > /tmp/foo
grep "\->" /tmp/foo | /bin/sort -u
----

Review the generated output and make updates as you see fit. (However, if updating, please check by searching for known issues with newer versions.)

=== Update dependency versions

The `versions-maven-plugin` should be used to determine if there are newer versions of any of Isis' dependencies. Since this goes off to the internet, it may take a minute or two to run:

[source,bash]
----
mvn versions:display-dependency-updates > /tmp/foo
grep "\->" /tmp/foo | /bin/sort -u
----

Update any of the dependencies that are out-of-date. That said, do note that some dependencies may be reported as having a newer version when in fact the "newer" version is just an old, badly named release. Also, there may be newer versions that you do not wish to move to, eg release candidates or milestones. For example, here is a report showing both of these cases:

[source,bash]
----
[INFO]   asm:asm ..................................... 3.3.1 -> 20041228.180559
[INFO]   commons-httpclient:commons-httpclient .......... 3.1 -> 3.1-jbossorg-1
[INFO]   commons-logging:commons-logging ......... 1.1.1 -> 99.0-does-not-exist
[INFO]   dom4j:dom4j ................................. 1.6.1 -> 20040902.021138
[INFO]   org.datanucleus:datanucleus-api-jdo ................ 3.1.2 -> 3.2.0-m1
[INFO]   org.datanucleus:datanucleus-core ................... 3.1.2 -> 3.2.0-m1
[INFO]   org.datanucleus:datanucleus-jodatime ............... 3.1.1 -> 3.2.0-m1
[INFO]   org.datanucleus:datanucleus-rdbms .................. 3.1.2 -> 3.2.0-m1
[INFO]   org.easymock:easymock ................................... 2.5.2 -> 3.1
[INFO]   org.jboss.resteasy:resteasy-jaxrs ............. 2.3.1.GA -> 3.0-beta-1
----

For these artifacts you will need to search the http://search.maven.org[Maven central repo] directly yourself, to confirm whether or not a genuine newer version exists.
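If the same false positives keep reappearing, one way to filter them out (a sketch, not part of the Isis build: the file name and regexes here are assumptions) is to give the `versions-maven-plugin` a ruleset that ignores date-stamped and milestone versions, referenced via the plugin's `maven.version.rules` property:

[source,xml]
----
<!-- rules.xml: hypothetical ruleset; adjust the regexes to taste -->
<ruleset comparisonMethod="maven"
         xmlns="http://mojo.codehaus.org/versions-maven-plugin/rule/2.0.0">
  <ignoreVersions>
    <!-- date-stamped uploads such as 20041228.180559 -->
    <ignoreVersion type="regex">\d{8}\.\d{6}</ignoreVersion>
    <!-- milestones, betas and release candidates -->
    <ignoreVersion type="regex">.*[-.](m|M|beta|rc|RC)-?\d*</ignoreVersion>
  </ignoreVersions>
</ruleset>
----

and then:

[source,bash]
----
mvn versions:display-dependency-updates -Dmaven.version.rules=file:///$PWD/rules.xml
----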
/*
 * Copyright 2015 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      https://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.springframework.social.connect.web;

import static java.util.Arrays.*;

import java.util.List;
import java.util.Map.Entry;

import javax.servlet.http.HttpServletRequest;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.social.connect.Connection;
import org.springframework.social.connect.ConnectionFactory;
import org.springframework.social.connect.support.OAuth1ConnectionFactory;
import org.springframework.social.connect.support.OAuth2ConnectionFactory;
import org.springframework.social.oauth1.AuthorizedRequestToken;
import org.springframework.social.oauth1.OAuth1Operations;
import org.springframework.social.oauth1.OAuth1Parameters;
import org.springframework.social.oauth1.OAuth1Version;
import org.springframework.social.oauth1.OAuthToken;
import org.springframework.social.oauth2.AccessGrant;
import org.springframework.social.oauth2.OAuth2Operations;
import org.springframework.social.oauth2.OAuth2Parameters;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;
import org.springframework.web.client.HttpClientErrorException;
import org.springframework.web.context.request.NativeWebRequest;
import org.springframework.web.context.request.WebRequest;

/**
 * Provides common connect support and utilities for Java web/servlet environments.
 * Used by {@link ConnectController} and {@link ProviderSignInController}.
 * @author Keith Donald
 */
public class ConnectSupport {

    private final static Log logger = LogFactory.getLog(ConnectSupport.class);

    private boolean useAuthenticateUrl;

    private String applicationUrl;

    private String callbackUrl;

    private SessionStrategy sessionStrategy;

    public ConnectSupport() {
        this(new HttpSessionSessionStrategy());
    }

    public ConnectSupport(SessionStrategy sessionStrategy) {
        this.sessionStrategy = sessionStrategy;
    }

    /**
     * Flag indicating if this instance will support OAuth-based authentication instead of the traditional user authorization.
     * Some providers expose a special "authenticateUrl" the user should be redirected to as part of an OAuth-based authentication attempt.
     * Setting this flag to true has {@link #buildOAuthUrl(ConnectionFactory, NativeWebRequest) oauthUrl} return this authenticate URL.
     * @param useAuthenticateUrl whether to use the authenticate URL or not
     * @see OAuth1Operations#buildAuthenticateUrl(String, OAuth1Parameters)
     * @see OAuth2Operations#buildAuthenticateUrl(OAuth2Parameters)
     */
    public void setUseAuthenticateUrl(boolean useAuthenticateUrl) {
        this.useAuthenticateUrl = useAuthenticateUrl;
    }

    /**
     * Configures the base secure URL for the application this controller is being used in e.g. <code>https://myapp.com</code>. Defaults to null.
     * If specified, will be used to generate OAuth callback URLs.
     * If not specified, OAuth callback URLs are generated from {@link HttpServletRequest HttpServletRequests}.
     * You may wish to set this property if requests into your application flow through a proxy to your application server.
     * In this case, the HttpServletRequest URI may contain a scheme, host, and/or port value that points to an internal server not appropriate for an external callback URL.
     * If you have this problem, you can set this property to the base external URL for your application and it will be used to construct the callback URL instead.
     * @param applicationUrl the application URL value
     */
    public void setApplicationUrl(String applicationUrl) {
        this.applicationUrl = applicationUrl;
    }

    /**
     * Configures a specific callback URL that is to be used instead of calculating one based on the application URL or current request URL.
     * When set this URL will override the default behavior where the callback URL is derived from the current request and/or a specified application URL.
     * When set along with applicationUrl, the applicationUrl will be ignored.
     * @param callbackUrl the callback URL to send to providers during authorization. Default is null.
     */
    public void setCallbackUrl(String callbackUrl) {
        this.callbackUrl = callbackUrl;
    }

    /**
     * Builds the provider URL to redirect the user to for connection authorization.
     * @param connectionFactory the service provider's connection factory e.g. FacebookConnectionFactory
     * @param request the current web request
     * @return the URL to redirect the user to for authorization
     * @throws IllegalArgumentException if the connection factory is neither OAuth1 nor OAuth2 based.
     */
    public String buildOAuthUrl(ConnectionFactory<?> connectionFactory, NativeWebRequest request) {
        return buildOAuthUrl(connectionFactory, request, null);
    }

    /**
     * Builds the provider URL to redirect the user to for connection authorization.
     * @param connectionFactory the service provider's connection factory e.g. FacebookConnectionFactory
     * @param request the current web request
     * @param additionalParameters parameters to add to the authorization URL.
     * @return the URL to redirect the user to for authorization
     * @throws IllegalArgumentException if the connection factory is neither OAuth1 nor OAuth2 based.
     */
    public String buildOAuthUrl(ConnectionFactory<?> connectionFactory, NativeWebRequest request, MultiValueMap<String, String> additionalParameters) {
        if (connectionFactory instanceof OAuth1ConnectionFactory) {
            return buildOAuth1Url((OAuth1ConnectionFactory<?>) connectionFactory, request, additionalParameters);
        } else if (connectionFactory instanceof OAuth2ConnectionFactory) {
            return buildOAuth2Url((OAuth2ConnectionFactory<?>) connectionFactory, request, additionalParameters);
        } else {
            throw new IllegalArgumentException("ConnectionFactory not supported");
        }
    }

    /**
     * Complete the connection to the OAuth1 provider.
     * @param connectionFactory the service provider's connection factory e.g. FacebookConnectionFactory
     * @param request the current web request
     * @return a new connection to the service provider
     */
    public Connection<?> completeConnection(OAuth1ConnectionFactory<?> connectionFactory, NativeWebRequest request) {
        String verifier = request.getParameter("oauth_verifier");
        AuthorizedRequestToken requestToken = new AuthorizedRequestToken(extractCachedRequestToken(request), verifier);
        OAuthToken accessToken = connectionFactory.getOAuthOperations().exchangeForAccessToken(requestToken, null);
        return connectionFactory.createConnection(accessToken);
    }

    /**
     * Complete the connection to the OAuth2 provider.
     * @param connectionFactory the service provider's connection factory e.g. FacebookConnectionFactory
     * @param request the current web request
     * @return a new connection to the service provider
     */
    public Connection<?> completeConnection(OAuth2ConnectionFactory<?> connectionFactory, NativeWebRequest request) {
        if (connectionFactory.supportsStateParameter()) {
            verifyStateParameter(request);
        }
        String code = request.getParameter("code");
        try {
            AccessGrant accessGrant = connectionFactory.getOAuthOperations().exchangeForAccess(code, callbackUrl(request), null);
            return connectionFactory.createConnection(accessGrant);
        } catch (HttpClientErrorException e) {
            logger.warn("HttpClientErrorException while completing connection: " + e.getMessage());
            logger.warn("      Response body: " + e.getResponseBodyAsString());
            throw e;
        }
    }

    private void verifyStateParameter(NativeWebRequest request) {
        String state = request.getParameter("state");
        String originalState = extractCachedOAuth2State(request);
        if (state == null || !state.equals(originalState)) {
            throw new IllegalStateException("The OAuth2 'state' parameter is missing or doesn't match.");
        }
    }

    protected String callbackUrl(NativeWebRequest request) {
        if (callbackUrl != null) {
            return callbackUrl;
        }
        HttpServletRequest nativeRequest = request.getNativeRequest(HttpServletRequest.class);
        if (applicationUrl != null) {
            return applicationUrl + connectPath(nativeRequest);
        } else {
            return nativeRequest.getRequestURL().toString();
        }
    }

    // internal helpers

    private String buildOAuth1Url(OAuth1ConnectionFactory<?> connectionFactory, NativeWebRequest request, MultiValueMap<String, String> additionalParameters) {
        OAuth1Operations oauthOperations = connectionFactory.getOAuthOperations();
        MultiValueMap<String, String> requestParameters = getRequestParameters(request);
        OAuth1Parameters parameters = getOAuth1Parameters(request, additionalParameters);
        parameters.putAll(requestParameters);
        if (oauthOperations.getVersion() == OAuth1Version.CORE_10) {
            parameters.setCallbackUrl(callbackUrl(request));
        }
        OAuthToken requestToken = fetchRequestToken(request, requestParameters, oauthOperations);
        sessionStrategy.setAttribute(request, OAUTH_TOKEN_ATTRIBUTE, requestToken);
        return buildOAuth1Url(oauthOperations, requestToken.getValue(), parameters);
    }

    private OAuth1Parameters getOAuth1Parameters(NativeWebRequest request, MultiValueMap<String, String> additionalParameters) {
        OAuth1Parameters parameters = new OAuth1Parameters(additionalParameters);
        parameters.putAll(getRequestParameters(request));
        return parameters;
    }

    private OAuthToken fetchRequestToken(NativeWebRequest request, MultiValueMap<String, String> requestParameters, OAuth1Operations oauthOperations) {
        if (oauthOperations.getVersion() == OAuth1Version.CORE_10_REVISION_A) {
            return oauthOperations.fetchRequestToken(callbackUrl(request), requestParameters);
        }
        return oauthOperations.fetchRequestToken(null, requestParameters);
    }

    private String buildOAuth2Url(OAuth2ConnectionFactory<?> connectionFactory, NativeWebRequest request, MultiValueMap<String, String> additionalParameters) {
        OAuth2Operations oauthOperations = connectionFactory.getOAuthOperations();
        String defaultScope = connectionFactory.getScope();
        OAuth2Parameters parameters = getOAuth2Parameters(request, defaultScope, additionalParameters);
        String state = connectionFactory.generateState();
        parameters.add("state", state);
        sessionStrategy.setAttribute(request, OAUTH2_STATE_ATTRIBUTE, state);
        if (useAuthenticateUrl) {
            return oauthOperations.buildAuthenticateUrl(parameters);
        } else {
            return oauthOperations.buildAuthorizeUrl(parameters);
        }
    }

    private OAuth2Parameters getOAuth2Parameters(NativeWebRequest request, String defaultScope, MultiValueMap<String, String> additionalParameters) {
        OAuth2Parameters parameters = new OAuth2Parameters(additionalParameters);
        parameters.putAll(getRequestParameters(request, "scope"));
        parameters.setRedirectUri(callbackUrl(request));
        String scope = request.getParameter("scope");
        if (scope != null) {
            parameters.setScope(scope);
        } else if (defaultScope != null) {
            parameters.setScope(defaultScope);
        }
        return parameters;
    }

    private String connectPath(HttpServletRequest request) {
        String pathInfo = request.getPathInfo();
        return request.getServletPath() + (pathInfo != null ? pathInfo : "");
    }

    private String buildOAuth1Url(OAuth1Operations oauthOperations, String requestToken, OAuth1Parameters parameters) {
        if (useAuthenticateUrl) {
            return oauthOperations.buildAuthenticateUrl(requestToken, parameters);
        } else {
            return oauthOperations.buildAuthorizeUrl(requestToken, parameters);
        }
    }

    private OAuthToken extractCachedRequestToken(WebRequest request) {
        OAuthToken requestToken = (OAuthToken) sessionStrategy.getAttribute(request, OAUTH_TOKEN_ATTRIBUTE);
        sessionStrategy.removeAttribute(request, OAUTH_TOKEN_ATTRIBUTE);
        return requestToken;
    }

    private String extractCachedOAuth2State(WebRequest request) {
        String state = (String) sessionStrategy.getAttribute(request, OAUTH2_STATE_ATTRIBUTE);
        sessionStrategy.removeAttribute(request, OAUTH2_STATE_ATTRIBUTE);
        return state;
    }

    private MultiValueMap<String, String> getRequestParameters(NativeWebRequest request, String... ignoredParameters) {
        List<String> ignoredParameterList = asList(ignoredParameters);
        MultiValueMap<String, String> convertedMap = new LinkedMultiValueMap<String, String>();
        for (Entry<String, String[]> entry : request.getParameterMap().entrySet()) {
            if (!ignoredParameterList.contains(entry.getKey())) {
                convertedMap.put(entry.getKey(), asList(entry.getValue()));
            }
        }
        return convertedMap;
    }

    private static final String OAUTH_TOKEN_ATTRIBUTE = "oauthToken";

    private static final String OAUTH2_STATE_ATTRIBUTE = "oauth2State";

}
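/*
 * Usage sketch (not part of the library; the class name and URLs below are
 * hypothetical). It shows how the three configuration properties interact:
 * callbackUrl, when set, is used verbatim; otherwise applicationUrl plus the
 * connect path is used; otherwise the callback URL is derived from the
 * current HttpServletRequest.
 */
import org.springframework.social.connect.web.ConnectSupport;

public class ConnectSupportConfig {

    public static ConnectSupport connectSupport() {
        ConnectSupport support = new ConnectSupport();
        // Behind a proxy: external base URL from which callback URLs are built
        // (callback = applicationUrl + servlet path + path info).
        support.setApplicationUrl("https://myapp.example.com");
        // Uncomment to pin one fixed callback URL instead (applicationUrl is then ignored):
        // support.setCallbackUrl("https://myapp.example.com/signin/twitter");
        // Redirect to the provider's "authenticate" URL for sign-in style flows.
        support.setUseAuthenticateUrl(true);
        return support;
    }
}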
---
abstract: 'We prove the full range of estimates for a five-linear singular integral of Brascamp-Lieb type. The study is methodology-oriented, with the goal of developing a sufficiently general technique to estimate singular integral variants of Brascamp-Lieb inequalities that are not of Hölder type. The methodology constructs localized analysis on the entire space from local information on its subspaces of lower dimensions, and combines such tensor-type arguments with generic localized analysis. A direct consequence of the boundedness of the five-linear singular integral is a Leibniz rule which captures nonlinear interactions of waves from transversal directions.'
address:
- 'Department of Mathematics, Cornell University, Ithaca, NY '
- 'Laboratoire de Mathématiques, Université de Nantes, Nantes'
author:
- Camil Muscalu
- Yujia Zhai
title: 'Five-Linear Singular Integral Estimates of Brascamp-Lieb Type'
---

Introduction
============

Background and Motivation
-------------------------

Brascamp-Lieb inequalities refer to inequalities of the form
$$\begin{aligned} \label{classical_bl} \displaystyle \int_{\mathbb{R}^n} \big|\prod_{j=1}^{m}F_j(L_j(x))\big| dx \leq \text{BL}(\textbf{L,p})\prod_{j=1}^{m}\left(\int_{\mathbb{R}^{k_j}}|F_j|^{p_j}\right)^{\frac{1}{p_j}},\end{aligned}$$
where $\text{BL}(\textbf{L,p})$ represents the Brascamp-Lieb constant depending on $\textbf{L} := (L_j)_{j=1}^m$ and $\textbf{p} := (p_j)_{j=1}^m$. For each $1 \leq j \leq m$, $L_j: \mathbb{R}^{n} \rightarrow \mathbb{R}^{k_j}$ is a linear surjection and $p_j \geq 1$. One equivalent formulation of (\[classical\_bl\]) is
$$\begin{aligned} \label{classical_bl_exp} \displaystyle \bigg(\int_{\mathbb{R}^n} \big|\prod_{j=1}^{m}F_j(L_j(x))\big|^r dx\bigg)^{\frac{1}{r}} \leq \text{BL}(\textbf{L},r\textbf{p})\prod_{j=1}^{m}\left(\int_{\mathbb{R}^{k_j}}|F_j|^{rp_j}\right)^{\frac{1}{rp_j}},\end{aligned}$$
for any $r > 0$. Brascamp-Lieb inequalities have been extensively developed in [@bl], [@bcct], [@bbcf], [@bbbf], [@chv]. Examples of Brascamp-Lieb inequalities include Hölder's inequality and the Loomis-Whitney inequality. Singular integral estimates corresponding to Hölder's inequality have been studied extensively, including the boundedness of single-parameter paraproducts [@cm] and multi-parameter paraproducts [@cptt], [@cptt_2], single-parameter flag paraproducts [@c_flag], the bilinear Hilbert transform [@lt], multilinear operators of arbitrary rank [@mtt2002], etc. But it is of course natural to ask if there are similar singular integral estimates corresponding to Brascamp-Lieb inequalities that are not necessarily of Hölder type. This question was posed to us by Jonathan Bennett during a conference in Matsumoto, Japan, in February 2016. Since then, we have adopted the informal definition of a *singular integral estimate of Brascamp-Lieb type* as a singular integral estimate which reduces to a classical Brascamp-Lieb inequality when the kernels are replaced by Dirac distributions. For readers familiar with the recent expository work of Durcik and Thiele in [@dt2], this is similar to the generic estimate (2.3) from [@dt2]. So far, to the best of our knowledge, the only research article in the literature where the term "singular Brascamp-Lieb" has been used is the recent work by Durcik and Thiele [@dt].
However, we would like to emphasize that the basic inequalities[^1] corresponding to the "cubic singular expressions" considered in [@dt] are still of Hölder type, and the term "singular Brascamp-Lieb" was used to underline that the necessary and sufficient boundedness condition (1.6) of [@dt] is of the same flavor as the one for classical Brascamp-Lieb inequalities stated as (8) in [@bcct]. Techniques to tackle multilinear singular integral operators corresponding to Hölder's inequality [@cm], [@cptt], [@cptt_2], [@c_flag], [@lt], [@mtt2002] usually involve localizations on phase space subsets of full dimension. In contrast, the understanding of singular integral estimates corresponding to Brascamp-Lieb inequalities with $k_j < n$ for some $k_j$ in (\[classical\_bl\_exp\]) (and thus not of Hölder scaling) is far from satisfactory. The ultimate goal would be to develop a general methodology to treat a large class of singular Brascamp-Lieb estimates that are not of Hölder type. It is natural to believe that such an approach would need to extract and integrate local information on subspaces of lower dimensions. Also, due to the multilinear structure, localizations on the entire space could be necessary as well, and a hybrid of both localized analyses would be required. The subject of our study in the present paper is one of the simplest multilinear operators whose complete understanding cannot be reduced to earlier results[^2] and which requires such a new type of analysis. More precisely, it is the five-linear operator defined by
$$\begin{aligned} \label{bi_flag_int} & T_{K_1K_2}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}) \nonumber \\ = & p.v. \displaystyle \int_{\mathbb{R}^{10}} K_1\big((t_1^1, t_1^2),(t_2^1,t_2^2)\big)K_2\big((s_1^1,s_1^2), (s_2^1,s_2^2), (s_3^1,s_3^2)\big) \cdot \nonumber \\ &\quad \quad \quad f_1(x-t_1^1-s_1^1)f_2(x-t_2^1-s_2^1)g_1(y-t_1^2-s_1^2)g_2(y-t_2^2-s_2^2)h(x-s_3^1,y-s_3^2) d\vec{t_1} d\vec{t_2} d \vec{s_1} d\vec{s_2} d\vec{s_3},\end{aligned}$$
where $\vec{t_i} = (t_i^1, t_i^2)$, $\vec{s_j} = (s_j^1,s_j^2)$ for $i = 1, 2$ and $j = 1,2,3$. In (\[bi\_flag\_int\]), $K_1$ and $K_2$ are Calderón-Zygmund kernels that satisfy
$$\begin{aligned} & |\nabla K_1(\vec{t_1}, \vec{t_2})| \lesssim \frac{1}{|(t_1^1,t_2^1)|^{3}}\frac{1}{|(t_1^2,t_2^2)|^{3}}, \nonumber \\ & |\nabla K_2(\vec{s_1}, \vec{s_2}, \vec{s_3})| \lesssim \frac{1}{|(s_1^1,s_2^1,s_3^1)|^{4}}\frac{1}{|(s_1^2,s_2^2, s_3^2)|^{4}} .\end{aligned}$$
As one can see, the operator $T_{K_1K_2}$ takes two functions depending on the $x$ variable ($f_1$ and $f_2$), two functions depending on the $y$ variable ($g_1$ and $g_2$) and one depending on both $x$ and $y$ (namely $h$) into another function of $x$ and $y$.
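To get a first feeling for the structure of $T_{K_1K_2}$, it may help to record a heuristic special case (for orientation only; it is not used in the sequel). If both kernels and the function $h$ happen to be of tensor type with respect to the two coordinate directions, say $K_1 = K_1^1 \otimes K_1^2$, $K_2 = K_2^1 \otimes K_2^2$ and $h = h_1 \otimes h_2$, then the ten-dimensional integral in (\[bi\_flag\_int\]) splits into two five-dimensional ones:
$$T_{K_1K_2}(f_1, f_2, g_1, g_2, h_1 \otimes h_2)(x,y) = T^1(f_1,f_2,h_1)(x) \cdot T^2(g_1,g_2,h_2)(y),$$
where
$$T^1(f_1,f_2,h_1)(x) = p.v. \int_{\mathbb{R}^5} K_1^1(t_1^1,t_2^1) K_2^1(s_1^1,s_2^1,s_3^1) f_1(x-t_1^1-s_1^1) f_2(x-t_2^1-s_2^1) h_1(x-s_3^1) \, dt_1^1 dt_2^1 ds_1^1 ds_2^1 ds_3^1$$
and $T^2$ is defined analogously in the $y$-variables. In this degenerate situation each factor is a one-dimensional trilinear operator with a flag-type kernel; the genuinely bi-parameter difficulty of (\[bi\_flag\_int\]) comes from general kernels $K_1, K_2$ and, above all, from a generic non-tensor function $h$.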
Our goal is to prove that $T_{K_1K_2}$ satisfies the mapping property
$$L^{p_1}(\mathbb{R}_x) \times L^{q_1}(\mathbb{R}_x) \times L^{p_2}(\mathbb{R}_y) \times L^{q_2}(\mathbb{R}_y) \times L^{s}(\mathbb{R}^2) \rightarrow L^{r}(\mathbb{R}^2)$$
for $1 < p_1, p_2, q_1, q_2, s \leq \infty$, $r >0$, $(p_1,q_1), (p_2,q_2) \neq (\infty, \infty)$ with
$$\label{bl_exp} \frac{1}{p_1} + \frac{1}{q_1} + \frac{1}{s} = \frac{1}{p_2} + \frac{1}{q_2} + \frac{1}{s} = \frac{1}{r}.$$
To verify that the boundedness of $T_{K_1K_2}$ qualifies as a singular integral estimate of Brascamp-Lieb type, one can remove the singularities by setting
$$\begin{aligned} & K_1(\vec{t_1}, \vec{t_2}) = \delta_{\textbf{0}}(\vec{t_1}, \vec{t_2}), \nonumber \\ & K_2(\vec{s_1}, \vec{s_2}, \vec{s_3}) = \delta_{\textbf{0}}(\vec{s_1}, \vec{s_2}, \vec{s_3}),\end{aligned}$$
and express its boundedness explicitly as
$$\begin{aligned} \label{flag_bl} \|f_1(x) f_2(x) g_1(y) g_2(y) h(x,y)\|_{r} \lesssim \|f_1\|_{L^{p_1}(\mathbb{R}_x)}\|f_2\|_{L^{q_1}(\mathbb{R}_x)}\|g_1\|_{L^{p_2}(\mathbb{R}_y)} \|g_2\|_{L^{q_2}(\mathbb{R}_y)}\|h\|_{L^{s}(\mathbb{R}^2)}.\end{aligned}$$
The above inequality follows from Hölder's inequality and the Loomis-Whitney inequality, which, in this simple two-dimensional case, is the same as Fubini's theorem. Clearly, it is an inequality of the same type as (\[classical\_bl\_exp\]), with a different homogeneity than Hölder. Moreover, this reduction shows that (\[bl\_exp\]) is indeed a necessary condition on the boundedness exponents of (\[flag\_bl\]) and thus of (\[bi\_flag\_int\]).

Connection with Other Multilinear Objects
-----------------------------------------

The connection with other well-established multilinear operators that we will describe next justifies that $T_{K_1K_2}$ defined in (\[bi\_flag\_int\]) is a reasonably simple and interesting operator to study, with the hope of inventing a general method that can handle a large class of singular integral estimates of Brascamp-Lieb type with non-Hölder scaling. Let $\mathcal{M}(\mathbb{R}^d)$ denote the set of all bounded symbols $m \in L^{\infty}(\mathbb{R}^d)$ smooth away from the origin and satisfying the Marcinkiewicz-Hörmander-Mihlin condition
$$\left|\partial^{\alpha} m(\xi) \right| \lesssim \frac{1}{|\xi|^{|\alpha|}}$$
for any $\xi \in \mathbb{R}^d \setminus \{0\}$ and sufficiently many multi-indices $\alpha$. The simplest singular integral operator which corresponds to the two-dimensional Loomis-Whitney inequality would be
$$\label{tensor_ht} T_{m_1m_2}(f^x, g^y)(x,y) := \int_{\mathbb{R}^2} m_1(\xi)m_2(\eta) {\widehat}{f}(\xi) {\widehat}{g}(\eta) e^{2 \pi i x \xi} e^{2\pi i y\eta}d\xi d\eta,$$
where $m_1, m_2 \in \mathcal{M}(\mathbb{R})$. (\[tensor\_ht\]) is a tensor product of Hilbert transforms whose boundedness is well-known. The bilinear variant of (\[tensor\_ht\]) can be expressed as
$$\begin{aligned} \label{tensor_para} &T_{m_1m_2}(f_1^x,f_2^x, g_1^y, g_2^y)(x,y) \nonumber \\ := & \int_{\mathbb{R}^4} m_1(\xi_1,\xi_2) m_2(\eta_1,\eta_2) {\widehat}{f_1}(\xi_1) {\widehat}{f_2}(\xi_2){\widehat}{g_1}(\eta_1) {\widehat}{g_2}(\eta_2)e^{2 \pi i x(\xi_1+\xi_2)}e^{2 \pi i y(\eta_1+\eta_2)} d\xi_1 d\xi_2 d\eta_1 d\eta_2,\end{aligned}$$
where $m_1, m_2 \in \mathcal{M}(\mathbb{R}^2)$. It can be separated as a tensor product of single-parameter paraproducts whose boundedness follows from Coifman-Meyer's theorem [@cm].
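Indeed, writing $\Pi_{m}(f_1,f_2)(x) := \int_{\mathbb{R}^2} m(\xi_1,\xi_2) {\widehat}{f_1}(\xi_1){\widehat}{f_2}(\xi_2) e^{2\pi i x(\xi_1+\xi_2)} d\xi_1 d\xi_2$ for the one-parameter paraproduct with symbol $m$, a direct application of Fubini to (\[tensor\_para\]) gives the factorization (a sketch of the standard computation):
$$T_{m_1m_2}(f_1^x,f_2^x, g_1^y, g_2^y)(x,y) = \Pi_{m_1}(f_1,f_2)(x) \cdot \Pi_{m_2}(g_1,g_2)(y),$$
so that
$$\|T_{m_1m_2}(f_1,f_2,g_1,g_2)\|_{L^r(\mathbb{R}^2)} = \|\Pi_{m_1}(f_1,f_2)\|_{L^r(\mathbb{R}_x)} \|\Pi_{m_2}(g_1,g_2)\|_{L^r(\mathbb{R}_y)} \lesssim \|f_1\|_{L^{p_1}}\|f_2\|_{L^{q_1}}\|g_1\|_{L^{p_2}}\|g_2\|_{L^{q_2}}$$
whenever $\frac{1}{p_1}+\frac{1}{q_1} = \frac{1}{p_2}+\frac{1}{q_2} = \frac{1}{r}$ and the exponents lie in the Coifman-Meyer range. Note that this is already the non-Hölder homogeneity from (\[bl\_exp\]), without the function $h$.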
To avoid trivial tensor products of single-parameter operators, one then completes (\[tensor\_para\]) by adding a generic function of two variables, thus obtaining
$$\begin{aligned} \label{bi_pp} &T_{b}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y})(x,y) \nonumber \\ :=& \int_{\mathbb{R}^6} b((\xi_1,\eta_1),(\xi_2,\eta_2),(\xi_3,\eta_3)) {\widehat}{f_1 \otimes g_1}(\xi_1, \eta_1) {\widehat}{f_2 \otimes g_2}(\xi_2, \eta_2) {\widehat}{h}(\xi_3,\eta_3) \nonumber \\ & \quad \quad \cdot e^{2 \pi i x(\xi_1+\xi_2+ \xi_3)}e^{2 \pi i y(\eta_1+\eta_2+ \eta_3)} d\xi_1 d\xi_2 d\xi_3 d\eta_1 d\eta_2 d\eta_3,\end{aligned}$$
where
$$\begin{aligned} & \left|\partial^{\alpha}_{(\xi_1,\xi_2,\xi_3)} \partial^{\beta}_{(\eta_1,\eta_2, \eta_3)} b \right| \lesssim \frac{1}{|(\xi_1,\xi_2,\xi_3)|^{|\alpha|}|(\eta_1,\eta_2,\eta_3)|^{|\beta|}}\end{aligned}$$
for sufficiently many multi-indices $\alpha$ and $\beta$. Such a multilinear operator is indeed a bi-parameter paraproduct whose theory has been developed by Muscalu, Pipher, Tao and Thiele [@cptt]. It also appears naturally in nonlinear PDEs, such as the Kadomtsev-Petviashvili equations studied by Kenig [@k]. To reach beyond bi-parameter paraproducts, one then replaces the singularity in each subspace by a flag singularity. In one dimension, the corresponding trilinear operator takes the form
$$\label{flag} T_{m_1m_2}(f_1,f_2,f_3)(x) := \int_{\mathbb{R}^3} m_1(\xi_1,\xi_2)m_2(\xi_1,\xi_2,\xi_3) {\widehat}{f_1}(\xi_1) {\widehat}{f_2}(\xi_2) {\widehat}{f_3}(\xi_3) e^{2 \pi i x(\xi_1+\xi_2+\xi_3)} d\xi_1 d\xi_2 d\xi_3,$$
where $m_1 \in \mathcal{M}(\mathbb{R}^2)$ and $m_2 \in \mathcal{M}(\mathbb{R}^3)$. The operator (\[flag\]) was studied by Muscalu [@c_flag] using time-frequency analysis which applies not only to the operator itself, but also to all of its adjoints. Miyachi and Tomita [@mt] extended the $L^p$-boundedness for $p>1$ established in [@c_flag] to all Hardy spaces $H^p$ with $p > 0$. The single-parameter flag paraproduct and its adjoints are closely related to various nonlinear partial differential equations, including nonlinear Schrödinger equations and water wave equations, as discovered by Germain, Masmoudi and Shatah [@gms]. Its bi-parameter variant is indeed related to the subject of our study and is equivalent to (\[bi\_flag\_int\]):
$$\begin{aligned} \label{bi_flag_mult} T_{ab}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}) := \int_{\mathbb{R}^6} & a((\xi_1,\eta_1),(\xi_2,\eta_2)) b((\xi_1,\eta_1),(\xi_2,\eta_2),(\xi_3,\eta_3)) {\widehat}{f_1 \otimes g_1}(\xi_1, \eta_1) {\widehat}{f_2 \otimes g_2}(\xi_2, \eta_2) \nonumber \\ & \cdot {\widehat}{h}(\xi_3,\eta_3) e^{2\pi i x(\xi_1+\xi_2+\xi_3)}e^{2\pi i y(\eta_1+\eta_2+\eta_3)} d \xi_1 d\xi_2 d\xi_3 d\eta_1 d\eta_2 d\eta_3,\end{aligned}$$
where
$$\begin{aligned} & \left|\partial^{\alpha_1}_{(\xi_1,\xi_2)} \partial^{\beta_1}_{(\eta_1,\eta_2)} a\right| \lesssim \frac{1}{|(\xi_1,\xi_2)|^{|\alpha_1|}|(\eta_1,\eta_2)|^{|\beta_1|}}, \nonumber \\ & \left|\partial^{\alpha_2}_{(\xi_1,\xi_2,\xi_3)} \partial^{\beta_2}_{(\eta_1,\eta_2, \eta_3)} b \right| \lesssim \frac{1}{|(\xi_1,\xi_2,\xi_3)|^{|\alpha_2|}|(\eta_1,\eta_2,\eta_3)|^{|\beta_2|}},\end{aligned}$$
for sufficiently many multi-indices $\alpha_1, \beta_1, \alpha_2$ and $\beta_2$.
The equivalence can be derived with
$$\begin{aligned} & a = {\widehat}{K_1}, \nonumber \\ & b = {\widehat}{K_2}.\end{aligned}$$
The general bi-parameter trilinear flag paraproduct is defined on larger function spaces where the tensor products are replaced by general functions in the plane.[^3] From this perspective, $T_{ab}$ or equivalently $T_{K_1K_2}$ defined in (\[bi\_flag\_mult\]) and (\[bi\_flag\_int\]) respectively can be viewed as a trilinear operator with the desired mapping property
$$T_{ab}: L^{p_1}_{x}(L^{p_2}_y) \times L^{q_1}_{x}(L^{q_2}_y) \times L^{s}(\mathbb{R}^2) \rightarrow L^{r}(\mathbb{R}^2)$$
for $ 1 < p_1, p_2, q_1, q_2, s \leq \infty$, $r > 0$, $(p_1, q_1), (p_2,q_2) \neq (\infty, \infty)$ and $\frac{1}{p_1} + \frac{1}{q_1} + \frac{1}{s} = \frac{1}{p_2} + \frac{1}{q_2} + \frac{1}{s} = \frac{1}{r}$, where the first two function spaces are restricted to be tensor-product spaces. The condition that $(p_1, q_1), (p_2,q_2) \neq (\infty, \infty)$ is inherited from single-parameter flag paraproducts and can be verified by the unboundedness of the operator when $f_1, f_2 \in L^{\infty}(\mathbb{R}_x)$ are constant functions. Lu, Pipher and Zhang [@lpz] showed that the general bi-parameter flag paraproduct can be reduced to an operator given by a symbol with a better singularity, using an argument inspired by Miyachi and Tomita [@mt]. The boundedness of the reduced multiplier operator still remains open. The reduction allows an alternative proof of the $L^p$-boundedness for (\[bi\_flag\_mult\]) as long as $p \neq \infty$. However, we emphasize again that we will not take this point of view now; instead, we treat our operator $T_{ab}$ as a five-linear operator.

Methodology
-----------

As one may notice from the last section, the five-linear operator $T_{ab}$ (or $T_{K_1K_2}$) contains the features of both the bi-parameter paraproduct defined in (\[bi\_pp\]) and the single-parameter flag paraproduct defined in (\[flag\]), which hints that the methodology should embrace localized analyses of both operators. Nonetheless, it is by no means a simple concatenation of two existing arguments. The methodology includes

1. **tensor-type stopping-time decomposition**, which refers to an algorithm that first implements a one-dimensional stopping-time decomposition for each variable and then combines information for different variables to obtain estimates for operators involving several variables;

2. **general two-dimensional level sets stopping-time decomposition**, which refers to an algorithm that partitions the collection of dyadic rectangles such that the dyadic rectangles in each sub-collection intersect with a certain level set non-trivially;

and the main novelty lies in (i) the construction of two-dimensional stopping-time decompositions from stopping-time decompositions on one-dimensional subspaces; (ii) the hybrid of tensor-type and general two-dimensional level sets stopping-time decompositions in a meaningful fashion. The methodology outlined above is considered to be robust in the sense that it captures all local behaviors of the operator. The robustness may also be verified by the entire range of estimates obtained. After a closer inspection of the technique, it is not surprising that it also gives estimates involving $L^{\infty}$-norms. In particular, the tensor-type stopping-time decompositions process information on each subspace independently.
As a consequence, when some function defined on some subspace lies in $L^{\infty}$, one simply "forgets" about that function and glues the information from the subspaces in an intelligent way specified later.

Structure
---------

The paper is organized as follows: the main theorems are stated in Chapter 2, followed by preliminary definitions and theorems introduced in Chapter 3. Chapter 4 describes the reduced discrete model operators and the estimates one needs to obtain for the model operators, while the reduction procedure is postponed to Appendix II. Chapter 5 gives the definitions and estimates for the building blocks of the argument - sizes and energies. Chapters 6-9 focus on estimates for the model operators in the Haar case. All four chapters start with a specification of the stopping-time decompositions used. Chapter 10 extends all the estimates in the Haar setting to the general Fourier case. It is also important to notice that Chapter 6 develops an argument for one of the simpler model operators with emphasis on the key geometric feature implied by a stopping-time decomposition, that is, the sparsity condition. Chapter 7 focuses on a more complicated model which requires not only the sparsity condition, but also a Fubini-type argument which is discussed in detail. Chapters 8 and 9 are devoted to estimates involving $L^{\infty}$-norms, and the arguments for those cases are similar to the ones in Chapter 6, in the sense that the sparsity condition is sufficient to obtain the results.

Acknowledgements.
-----------------

We thank Jonathan Bennett for the inspiring conversation we had in Matsumoto, Japan, in February 2016, that triggered our interest in considering and understanding singular integral generalizations of Brascamp-Lieb inequalities, and, in particular, the study of the present paper. We also thank Guozhen Lu, Jill Pipher and Lu Zhang for discussions about their recent work in [@lpz]. Finally, we thank Polona Durcik and Christoph Thiele for the recent conversation which clarified the similarities and differences between the results in [@dt] and those in our paper and [@bm2]. The first author was partially supported by a Grant from the Simons Foundation. The second author was partially supported by the ERC Project FAnFArE no. 637510.

Main Results
============

We state the main results in Theorems \[main\_theorem\] and \[main\_thm\_inf\]. Theorem \[main\_theorem\] proves the boundedness when the $p_i, q_i$ are strictly between $1$ and infinity, whereas Theorem \[main\_thm\_inf\] deals with the case when $p_i = \infty$ or $q_j = \infty$ for some $i\neq j$.
\[main\_theorem\] Suppose $a \in L^{\infty}(\mathbb{R}^4)$, $b\in L^{\infty}(\mathbb{R}^6)$, where $a$ and $b$ are smooth away from $\{(\xi_1,\xi_2) = 0 \} \cup \{(\eta_1,\eta_2) = 0 \}$ and $\{(\xi_1, \xi_2,\xi_3) = 0 \} \cup \{(\eta_1,\eta_2,\eta_3) = 0\}$ respectively and satisfy the following Marcinkiewicz conditions:
$$\begin{aligned} & |\partial^{\alpha_1}_{\xi_1} \partial^{\alpha_2}_{\eta_1} \partial^{\beta_1}_{\xi_2} \partial^{\beta_2}_{\eta_2} a(\xi_1,\eta_1, \xi_2,\eta_2)| \lesssim \frac{1}{|(\xi_1,\xi_2)|^{\alpha_1 + \beta_1}} \frac{1}{|(\eta_1,\eta_2)|^{\alpha_2+\beta_2}}, \nonumber \\ & |\partial^{\bar{\alpha_1}}_{\xi_1} \partial^{\bar{\alpha_2}}_{\eta_1} \partial^{\bar{\beta_1}}_{\xi_2} \partial^{\bar{\beta_2}}_{\eta_2}\partial^{\bar{\gamma_1}}_{\xi_3} \partial^{\bar{\gamma_2}}_{\eta_3}b(\xi_1,\eta_1, \xi_2,\eta_2, \xi_3, \eta_3)| \lesssim \frac{1}{|(\xi_1,\xi_2, \xi_3)|^{\bar{\alpha_1} + \bar{\beta_1}+\bar{\gamma_1}}} \frac{1}{|(\eta_1,\eta_2, \eta_3)|^{\bar{\alpha_2}+\bar{\beta_2}+ \bar{\gamma_2}}}\end{aligned}$$
for sufficiently many multi-indices $\alpha_1,\alpha_2,\beta_1,\beta_2, \bar{\alpha_1}, \bar{\alpha_2},\bar{\beta_1},\bar{\beta_2}, \bar{\gamma_1}, \bar{\gamma_2} \geq 0$. For $f_1, f_2, g_1,g_2 \in \mathcal{S}(\mathbb{R})$ and $h \in \mathcal{S}(\mathbb{R}^2)$, where $\mathcal{S}(\mathbb{R})$ and $\mathcal{S}(\mathbb{R}^2)$ denote the Schwartz spaces, define
$$\begin{aligned} \label{bi_flag} \displaystyle T_{ab}(f^x_1, f^x_2, g_1^y ,g^y_2,h^{x,y}) := \int_{\mathbb{R}^6} & a(\xi_1,\eta_1,\xi_2,\eta_2) b(\xi_1,\eta_1,\xi_2,\eta_2,\xi_3,\eta_3) \nonumber \\ & \hat{f_1}(\xi_1)\hat{f_2}(\xi_2)\hat{g_1}(\eta_1)\hat{g_2}(\eta_2)\hat{h}(\xi_3,\eta_3) \nonumber \\ & e^{2\pi i x(\xi_1+\xi_2+\xi_3)}e^{2\pi i y(\eta_1+\eta_2+\eta_3)} d \xi_1 d\xi_2 d\xi_3 d\eta_1 d\eta_2 d\eta_3.\end{aligned}$$
Then for $1< p_1, p_2, q_1, q_2 < \infty$, $1 < s \leq \infty$, $r > 0$, $\frac{1}{p_1} + \frac{1}{q_1} + \frac{1}{s} =\frac{1}{p_2} + \frac{1}{q_2} + \frac{1}{s} = \frac{1}{r}$, $T_{ab}$ satisfies the following mapping property
$$T_{ab}: L^{p_1}(\mathbb{R}_x) \times L^{q_1}(\mathbb{R}_x) \times L^{p_2}(\mathbb{R}_y) \times L^{q_2}(\mathbb{R}_y) \times L^{s}(\mathbb{R}^2) \rightarrow L^{r}(\mathbb{R}^2).$$

\[main\_thm\_inf\] Let $T_{ab}$ be defined as in (\[bi\_flag\]). Then for $1< p < \infty$, $1 < s \leq \infty$, $r >0$, $\frac{1}{p} + \frac{1}{s} = \frac{1}{r}$, $T_{ab}$ satisfies the following mapping property
$$T_{ab}: L^{p}(\mathbb{R}_x) \times L^{\infty}(\mathbb{R}_x) \times L^{p}(\mathbb{R}_y) \times L^{\infty}(\mathbb{R}_y) \times L^{s} \rightarrow L^{r},$$
where $p_1 = p_2 = p$ as imposed by (\[bl\_exp\]). The cases $(i)$ $q_1 = q_2 < \infty$ and $p_1= p_2= \infty$, $(ii)$ $p_1 = q_2 < \infty$ and $p_2 = q_1 = \infty$, $(iii)$ $q_1 = p_2 < \infty$ and $p_1 = q_2 = \infty$ follow from the same argument by symmetry.

Restricted Weak-Type Estimates
------------------------------

For the Banach estimates when $r > 1$, Hölder's inequality involving hybrid square and maximal functions is sufficient. The argument resembles the Banach estimates for the single-parameter flag paraproduct. The quasi-Banach estimates when $r < 1$ are trickier and require a careful treatment. In this case, we use multilinear interpolations and reduce the desired estimates specified in Theorem \[main\_theorem\] and Theorem \[main\_thm\_inf\] to the following restricted weak-type estimates for the associated multilinear form[^4].
\[thm\_weak\] Let $T_{ab}$ denote the operator defined in (\[bi\_flag\]). Suppose that $1< p_1, p_2, q_1, q_2 < \infty$, $1 < s <2$, $0 < r <1$, $\frac{1}{p_1} + \frac{1}{q_1} + \frac{1}{s} =\frac{1}{p_2} + \frac{1}{q_2} + \frac{1}{s} = \frac{1}{r}$. Then for any measurable sets $F_1 \subseteq \mathbb{R}_{x} , F_2 \subseteq \mathbb{R}_{x}, G_1\subseteq \mathbb{R}_y, G_2\subseteq \mathbb{R}_y, E \subseteq \mathbb{R}^2$ of positive and finite measure and any measurable functions $|f_1(x)| \leq \chi_{F_1}(x)$, $|f_2(x)| \leq \chi_{F_2}(x)$, $|g_1(y)| \leq \chi_{G_1}(y)$, $|g_2(y)| \leq \chi_{G_2}(y)$, $h \in L^{s}(\mathbb{R}^2)$, there exists $E' \subseteq E$ with $|E'| > |E|/2$ such that the multilinear form associated to $T_{ab}$ satisfies
$$\label{thm_weak_explicit} |\Lambda(f_1^x, f_2^x, g_1^y, g_2^y, h^{xy},\chi_{E'}) | \lesssim |F_1|^{\frac{1}{p_1}} |G_1|^{\frac{1}{p_2}} |F_2|^{\frac{1}{q_1}} |G_2|^{\frac{1}{q_2}} \|h\|_{L^{s}(\mathbb{R}^2)}|E|^{\frac{1}{r'}}.$$

\[thm\_weak\_inf\] Let $T_{ab}$ denote the operator defined in (\[bi\_flag\]). Suppose that $1< p < \infty$, $1 < s < 2$, $0 < r < 1$, $\frac{1}{p} + \frac{1}{s} = \frac{1}{r}$. Then for any measurable sets $F_1\subseteq \mathbb{R}_{x}$, $G_1 \subseteq \mathbb{R}_y$, $E \subseteq \mathbb{R}^2$ of positive and finite measure and any measurable functions $|f_1(x)| \leq \chi_{F_1}(x)$, $|g_1(y)| \leq \chi_{G_1}(y)$, $f_2 \in L^{\infty}(\mathbb{R}_x)$, $g_2 \in L^{\infty}(\mathbb{R}_y)$, $h \in L^{s}(\mathbb{R}^2)$, there exists $E' \subseteq E$ with $|E'| > |E|/2$ such that the multilinear form associated to $T_{ab}$ satisfies
$$\label{thm_weak_inf_explicit} |\Lambda(f_1^x, f_2^x, g_1^y, g_2^y, h^{xy},\chi_{E'}) | \lesssim |F_1|^{\frac{1}{p}} |G_1|^{\frac{1}{p}} \|f_2\|_{L_x^{\infty}} \|g_2\|_{L_y^{\infty}} \|h\|_{L^s(\mathbb{R}^2)}|E|^{\frac{1}{r'}}.$$

Theorems \[thm\_weak\] and \[thm\_weak\_inf\] hint at the necessity of localization: the major subset $E'$ of $E$ is constructed following the philosophy of localizing the operator where it is well-behaved. The reduction of Theorems \[main\_theorem\] and \[main\_thm\_inf\] to Theorems \[thm\_weak\] and \[thm\_weak\_inf\] respectively will be postponed to Appendix I. In brief, it depends on the interpolation of multilinear forms described in Lemma 9.6 of [@cw] and a tensor-product version of the Marcinkiewicz interpolation theorem.

Application - Leibniz Rule
--------------------------

A direct corollary of Theorem \[main\_theorem\] is a Leibniz rule which captures the nonlinear interaction of waves coming from transversal directions. In general, Leibniz rules refer to inequalities involving norms of derivatives, where the derivatives are defined in terms of Fourier transforms. More precisely, for $\alpha \geq 0$ and $f \in \mathcal{S}(\mathbb{R}^d)$ a Schwartz function in $\mathbb{R}^d$, define the homogeneous derivative of $f$ as
$$D^{\alpha}f := \mathcal{F}^{-1}\left(|\xi|^{\alpha}{\widehat}{f}(\xi)\right).$$
Leibniz rules are closely related to the boundedness of the multilinear operators discussed in Section 1.2. For example, the boundedness of one-parameter paraproducts gives rise to a Leibniz rule of Kato and Ponce [@kp].
For $f, g \in \mathcal{S}(\mathbb{R}^d)$ and $\alpha > 0$ sufficiently large,
$$\label{lb_para} \| D^{\alpha} (fg)\|_r \lesssim \|D^{\alpha} f \|_{p_1} \|g \|_{q_1} + \| f \|_{p_2} \|D^{\alpha}g \|_{q_2}$$
with $1 < p_i, q_i < \infty, \frac{1}{p_i}+ \frac{1}{q_i} = \frac{1}{r}, i= 1,2.$ The inequality in (\[lb\_para\]) generalizes the trivial and well-known Leibniz rule when $\alpha = 1$ and states that the derivative of a product of two functions can be dominated by terms in which the highest-order derivative hits one of the functions. The reduction of (\[lb\_para\]) to the boundedness of one-parameter paraproducts is routine (see Chapter 2 in [@cw] for details) and can be applied to other Leibniz rules with their corresponding multilinear operators, including the boundedness of our operator $T_{ab}$ and its Leibniz rule stated in Theorem \[lb\_main\] below. The Leibniz rule stated in Theorem \[lb\_main\] deals with partial derivatives, where the partial derivative of $f \in \mathcal{S}(\mathbb{R}^d)$ is defined, for $(\alpha_1,\ldots, \alpha_d)$ with $\alpha_1, \ldots, \alpha_d \geq 0$, as
$$D_1^{\alpha_1}\cdots D_d^{\alpha_d}f := \mathcal{F}^{-1}\left(|\xi_1|^{\alpha_1} \cdots |\xi_d|^{\alpha_d}{\widehat}{f}(\xi_1,\ldots, \xi_d)\right).$$

\[lb\_main\] Suppose $f_1, f_2 \in \mathcal{S}(\mathbb{R}_x)$, $ g_1, g_2 \in \mathcal{S}(\mathbb{R}_y)$ and $h \in \mathcal{S}(\mathbb{R}^2).$ Then for $\beta_1, \beta_2, \alpha_1, \alpha_2 > 0$ sufficiently large and $1 < p^j_1, p^j_2, q^j_1, q^j_2, s^j \leq \infty$, $r >0$, $(p^j_1, q^j_1), (p^j_2, q^j_2) \neq (\infty, \infty)$, $\frac{1}{p^j_1} + \frac{1}{q^j_1} + \frac{1}{s^j}= \frac{1}{p^j_2} + \frac{1}{q^j_2} + \frac{1}{s^j}= \frac{1}{r} $ for each $j = 1, \ldots, 16 $,
$$\begin{aligned} & \|D_1^{\beta_1} D_2^{\beta_2}(D_1^{\alpha_1}D_2^{\alpha_2}(f_1^x f_2^x g_1^y g_2^y) h^{x,y})\|_{L^r(\mathbb{R}^2)} \nonumber \\ \lesssim & \ \ \text{sum of \ \ }16 \text{\ \ terms of the form: \ \ } \nonumber \\ & \|D_1^{\alpha_1+\beta_1}f_1\|_{L^{p^1_1}(\mathbb{R})} \|f_2\|_{L^{q^1_1}(\mathbb{R})} \|D_2^{\alpha_2 + \beta_2}g_1\|_{L^{p^1_2}(\mathbb{R})} \|g_2\|_{L^{q^1_2}(\mathbb{R})} \|h\|_{L^{s^1}(\mathbb{R}^2)} + \nonumber \\ & \|f_1\|_{L^{p^2_1}(\mathbb{R})} \|D_1^{\alpha_1+\beta_1}f_2\|_{L^{q^2_1}(\mathbb{R})} \|D_2^{\alpha_2 + \beta_2}g_1\|_{L^{p^2_2}(\mathbb{R})} \|g_2\|_{L^{q^2_2}(\mathbb{R})} \|h\|_{L^{s^2}(\mathbb{R}^2)} + \nonumber \\ & \|D_1^{\alpha_1+\beta_1}f_1\|_{L^{p^3_1}(\mathbb{R})} \|f_2\|_{L^{q^3_1}(\mathbb{R})} \|D_2^{\alpha_2}g_1\|_{L^{p^3_2}(\mathbb{R})} \|g_2\|_{L^{q^3_2}(\mathbb{R})} \|D_2^{\beta_2}h\|_{L^{s^3}(\mathbb{R}^2)} + \ldots\end{aligned}$$
The reasoning for the number "16" is that (i) for $\alpha_1$, there are $2$ possible distributions of the highest-order derivative, thus yielding 2 terms; (ii) for $\alpha_2$, there are $2$ terms for the same reason as in (i); (iii) for $\beta_1$, it can hit $h$ or one of the functions coming from the dominant terms of $D_1^{\alpha_1}(f_1 f_2)$, which offer two choices as illustrated in (i), thus generating $2 \times 2 = 4$ terms; (iv) for $\beta_2$, there are $4$ terms for the same reason as in (iii). Summarizing (i)-(iv), one obtains the count $4 \times 4 = 16$. As commented in the beginning of this section, $f_1$ and $f_2$ in Theorem \[lb\_main\] can be viewed as waves coming from one direction while $g_1$ and $g_2$ are waves from the orthogonal direction. The presence of $h$, as a generic wave in the plane, makes the interaction nontrivial.
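Before moving on, we record a compact schematic bookkeeping of the sixteen terms enumerated in (i)-(iv) above (Lebesgue exponents suppressed; $i'$ denotes the index complementary to $i$ in $\{1,2\}$, and similarly for $j'$):
$$\|D_1^{\beta_1} D_2^{\beta_2}\big(D_1^{\alpha_1}D_2^{\alpha_2}(f_1 f_2 g_1 g_2)\, h\big)\|_{r} \lesssim \sum_{i,j \in \{1,2\}} \ \sum_{\varepsilon, \delta \in \{0,1\}} \|D_1^{\alpha_1 + \varepsilon \beta_1} f_i\| \, \|f_{i'}\| \, \|D_2^{\alpha_2 + \delta \beta_2} g_j\| \, \|g_{j'}\| \, \|D_1^{(1-\varepsilon)\beta_1} D_2^{(1-\delta)\beta_2} h\|,$$
where $\varepsilon = 1$ (resp. $\delta = 1$) records that $\beta_1$ (resp. $\beta_2$) falls on the function already carrying $\alpha_1$ (resp. $\alpha_2$), and $\varepsilon = 0$ (resp. $\delta = 0$) records that it falls on $h$. The four parameters $(i, j, \varepsilon, \delta)$ produce the $2^4 = 16$ terms, and the three terms displayed in Theorem \[lb\_main\] correspond to $(1,1,1,1)$, $(2,1,1,1)$ and $(1,1,1,0)$.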
Preliminaries
=============

Terminology
-----------

We will first introduce some notation which will be useful throughout the paper. Suppose $I \subseteq \mathbb{R}$ is an interval. Then we say a smooth function $\phi$ is *adapted to $I$* if
$$|\phi^{(l)}(x)| \leq C_l C_M \frac{1}{|I|^l} \frac{1}{\big(1+\frac{|x-x_I|}{|I|}\big)^M}$$
for sufficiently many derivatives $l$, where $x_I$ denotes the center of the interval $I$.

\[bump\] Suppose $\mathcal{I}$ is a collection of dyadic intervals. Then a family of $L^2$-normalized bump functions $(\phi_I)_{I \in \mathcal{I}}$ is *lacunary* if and only if for every $ I \in \mathcal{I}$,
$$\text{supp}\ \ {\widehat}{\phi_I} \subseteq [-4|I|^{-1}, -\frac{1}{4}|I|^{-1}] \cup [\frac{1}{4}|I|^{-1}, 4|I|^{-1}].$$
A family of $L^2$-normalized bump functions $(\phi_I)_{I \in \mathcal{I}}$ is *non-lacunary* if and only if for every $ I \in \mathcal{I}$,
$$\text{supp}\ \ {\widehat}{\phi_I} \subseteq [-4|I|^{-1}, 4|I|^{-1}].$$
We usually denote bump functions in a lacunary family by $(\psi_I)_I$ and those in a non-lacunary family by $({\varphi}_I)_I$. A simplified variant of the bump functions given in Definition \[bump\] is specified as follows: Haar wavelets correspond to a lacunary family of bump functions, and $L^2$-normalized indicator functions are analogous to a non-lacunary family of bump functions.

\[bump\_walsh\] Define
$$\psi^H(x) := \begin{cases} 1 \ \ \text{for}\ \ x \in [0,\frac{1}{2})\\ -1 \ \ \text{for}\ \ x \in [\frac{1}{2},1).\\ \end{cases}$$
Let $I := [n2^{k},(n+1)\cdot2^k)$ denote a dyadic interval. Then the Haar wavelet on $I$ is defined as
$$\psi^H_I(x) := 2^{-\frac{k}{2}}\psi^H(2^{-k}x-n).$$
The $L^2$-normalized indicator function on $I$ is expressed as
$${\varphi}^H_I(x) := |I|^{-\frac{1}{2}}\chi_{I}(x).$$
We shall remark that the boundedness of the multilinear forms described in Theorems \[thm\_weak\] and \[thm\_weak\_inf\] can be reduced to estimates for discrete model operators which are defined in terms of bump functions of the form specified in Definition \[bump\]. The precise statements are included in Theorems \[thm\_weak\_mod\] and \[thm\_weak\_inf\_mod\] and the proof is discussed in Appendix II. However, we will first study the simplified model operators with the general bump functions replaced by the Haar wavelets and indicator functions defined in Definition \[bump\_walsh\]. The arguments for the simplified models capture the main challenges while avoiding some technical aspects. We will leave the generalization and the treatment of the technical details to Chapter 10. The simplified models will be referred to as Haar models and we will highlight the occasions when the Haar models are considered.

Useful Operators - Definitions and Theorems
-------------------------------------------

We also give explicit definitions for the Hardy-Littlewood maximal function, the discretized Littlewood-Paley square function and the hybrid square-and-maximal functions that will appear naturally in the argument. The *Hardy-Littlewood maximal operator* $M$ is defined as
$$Mf(\vec{x}) = \sup_{\vec{x} \in B}\frac{1}{|B|}\int_{B}|f(\vec{u})|d\vec{u},$$
where the supremum is taken over all open balls $B \subseteq \mathbb{R}^d$ containing $\vec{x}$. Suppose $\mathcal{I}$ is a finite family of dyadic intervals and $(\psi_I)_I$ a lacunary family of $L^2$-normalized bump functions.
The *discretized Littlewood-Paley square function operator* $S$ is defined as
$$Sf(x) = \bigg(\sum_{I \in \mathcal{I}}\frac{|\langle f, \psi_I\rangle|^2 }{|I|}\chi_{I}(x)\bigg)^{\frac{1}{2}}.$$
Suppose $\mathcal{R}$ is a finite collection of dyadic rectangles. Let $(\phi_R)_{R \in \mathcal{R}}$ denote the family of $L^2$-normalized bump functions with $\phi_R = \phi_I \otimes \phi_J$ where $R= I \times J$.

1. the *double square function operator* $SS$ is defined as $$\displaystyle SSh(x,y) = \bigg(\sum_{I \times J } \frac{|\langle h, \psi_{I} \otimes \psi_J \rangle|^2 }{|I||J|} \chi_{I \times J} (x,y)\bigg)^{\frac{1}{2}};$$

2. the *hybrid maximal-square operator* $MS$ is defined as $$MSh(x,y) = \sup_{I}\frac{1}{|I|^{\frac{1}{2}}} \bigg(\sum_{J} \frac{|\langle h, {\varphi}_I \otimes \psi_J \rangle|^2}{|J|} \chi_{J}(y)\bigg)^{\frac{1}{2}}\chi_I(x);$$

3. the *hybrid square-maximal operator* $SM$ is defined as $$\displaystyle SMh(x,y) = \bigg(\sum_{I} \frac{\big(\sup_{J}\frac{|\langle h,\psi_I \otimes {\varphi}_J \rangle|^2}{|J|}\chi_J(y) \big)}{|I|}\chi_{I}(x)\bigg)^{\frac{1}{2}};$$

4. the *double maximal function* $MM$ is defined as $$MM h(x,y) = \sup_{(x,y) \in R} \frac{1}{|R|}\int_{R}|h(s,t)| ds dt,$$ where the supremum is taken over all dyadic rectangles in $\mathcal{R}$ containing $(x,y)$.

The following theorem about the operators defined above is used frequently in the argument. The proof of the theorem and other contexts where the hybrid operators appear can be found in [@cw], [@cf] and [@fs].

\[maximal-square\]

1. $M$ is bounded in $L^{p}(\mathbb{R}^{d})$ for $1< p \leq \infty$ and $M: L^{1} \longrightarrow L^{1,\infty}$.

2. $S$ is bounded in $L^{p}(\mathbb{R})$ for $1< p < \infty$.

3. The hybrid operators $SS, MS, SM, MM$ are bounded in $L^{p}(\mathbb{R}^2)$ for $1 < p < \infty$.

Discrete Model Operators
========================

In this chapter, we will introduce the discrete model operators whose boundedness implies the estimates specified in Theorems \[thm\_weak\] and \[thm\_weak\_inf\]. The reduction procedure follows from a routine treatment which has been discussed in [@cw]. The details will be included in Appendix II for the sake of completeness. The model operators are usually more desirable because they are more "localizable". The discrete model operators are defined as follows.

\[discrete\_model\_op\] Suppose $\mathcal{I}, \mathcal{J}, \mathcal{K}$, $\mathcal{L}$ are finite collections of dyadic intervals. Suppose $\displaystyle(\phi^i_I)_{I\in \mathcal{I}}$, $ (\phi^j_J)_{J \in \mathcal{J}}$, $(\phi^k_K)_{K \in \mathcal{K}}$, $(\phi^{l}_L)_{L \in \mathcal{L}}$, $i, j, k, l = 1, 2, 3$ are families of $L^2$-normalized bump functions adapted to $I, J, K, L$ respectively. We further assume that at least two of the families $(\phi^i_I)_{I\in \mathcal{I}}, i = 1, 2, 3, $ are lacunary. The same conditions are assumed for the families $ (\phi^j_J)_{J \in \mathcal{J}}$, $(\phi^k_K)_{K \in \mathcal{K}} $ and $(\phi^l_L)_{L \in \mathcal{L}} $. In some models, we specify the lacunary and non-lacunary families by explicitly denoting the functions in the lacunary family as $\psi$ and those in the non-lacunary family as ${\varphi}$. Let $\#_1, \#_2$ denote some positive integers. Define

1.
$$\Pi_{\text{flag}^0 \otimes \text{paraproduct}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}) := \displaystyle \sum_{I \times J \in \mathcal{I} \times \mathcal{J}} \frac{1}{|I|^{\frac{1}{2}} |J|} \langle B_I(f_1,f_2),{\varphi}_I^1 \rangle \langle g_1,\phi^1_J \rangle \langle g_2, \phi^2_J \rangle \langle h, \psi_I^{2} \otimes \phi_{J}^2 \rangle \psi_I^{3} \otimes \phi_{J}^3$$ where $$B_I(f_1,f_2)(x) := \displaystyle \sum_{K \in \mathcal{K}:|K| \geq |I|} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, \phi_K^1 \rangle \langle f_2, \phi_K^2 \rangle \phi_K^3(x).$$ 2. $$\Pi_{\text{flag}^{\#_1} \otimes \text{paraproduct}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}) := \displaystyle \sum_{I \times J \in \mathcal{I} \times \mathcal{J}} \frac{1}{|I|^{\frac{1}{2}} |J|} \langle B^{\#_1}_I(f_1,f_2),{\varphi}_I^1 \rangle \langle g_1,\phi^1_J \rangle \langle g_2, \phi^2_J \rangle \langle h, \psi_I^{2} \otimes \phi_{J}^2 \rangle \psi_I^{3} \otimes \phi_{J}^3$$ where $$B^{\#_1}_I(f_1,f_2)(x) := \displaystyle \sum_{K \in \mathcal{K}:|K| \sim 2^{\#_1} |I|} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, \phi_K^1 \rangle \langle f_2, \phi_K^2 \rangle \phi_K^3(x).$$ 3. $$\Pi_{\text{flag}^0 \otimes \text{flag}^0}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}) := \displaystyle \sum_{I \times J \in \mathcal{I} \times \mathcal{J}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I(f_1,f_2),{\varphi}_I^1 \rangle \langle \tilde{B_J}(g_1, g_2), {\varphi}_J^1 \rangle \langle h, \psi_I^{2} \otimes \psi_J^{2} \rangle \psi_I^{3} \otimes \psi_J^{3}$$ where $$B_I(f_1,f_2)(x) := \displaystyle \sum_{K \in \mathcal{K}:|K| \geq |I|} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, \phi_K^1 \rangle \langle f_2, \phi_K^2 \rangle \phi_K^3(x),$$ $$\tilde{B}_J(g_1,g_2)(y) := \displaystyle \sum_{L \in \mathcal{L}:|L| \geq |J|} \frac{1}{|L|^{\frac{1}{2}}}\langle g_1, \phi_L^1 \rangle \langle g_2, \phi_L^2 \rangle \phi_L^3(y).$$ 4. $$\Pi_{\text{flag}^0 \otimes \text{flag}^{\#_2}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}) := \displaystyle \sum_{I \times J \in \mathcal{I} \times \mathcal{J}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I(f_1,f_2),{\varphi}_I^1 \rangle \langle \tilde{B}_J^{\#_2}(g_1, g_2), {\varphi}_J^1 \rangle \langle h, \psi_I^{2} \otimes \psi_J^{2} \rangle \psi_I^{3} \otimes \psi_J^{3}$$ where $$B_I(f_1,f_2)(x) := \displaystyle \sum_{K \in \mathcal{K}:|K| \geq |I|} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, \phi_K^1 \rangle \langle f_2, \phi_K^2 \rangle \phi_K^3(x),$$ $$\tilde{B}_J^{\#_2}(g_1,g_2)(y) := \displaystyle \sum_{L \in \mathcal{L}:|L| \sim 2^{\#_2}|J|} \frac{1}{|L|^{\frac{1}{2}}}\langle g_1, \phi_L^1 \rangle \langle g_2, \phi_L^2 \rangle \phi_L^3(y).$$ 5. 
$$\Pi_{\text{flag}^{\#_1}\otimes \text{flag}^{\#_2}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}) := \displaystyle \sum_{I \times J \in \mathcal{I} \times \mathcal{J}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B^{\#_1}_I(f_1,f_2),{\varphi}_I^1 \rangle \langle \tilde{B}^{\#_2}_J(g_1, g_2), {\varphi}_J^1 \rangle \langle h, \psi_I^{2} \otimes \psi_J^{2} \rangle \psi_I^{3} \otimes \psi_J^{3}$$
where
$$B^{\#_1}_I(f_1,f_2)(x) := \displaystyle \sum_{K \in \mathcal{K}:|K| \sim 2^{\#_1} |I|} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, \phi_K^1 \rangle \langle f_2, \phi_K^2 \rangle \phi_K^3(x),$$
$$\tilde{B}_J^{\#_2}(g_1,g_2)(y) := \displaystyle \sum_{L \in \mathcal{L}:|L| \sim 2^{\#_2} |J|} \frac{1}{|L|^{\frac{1}{2}}}\langle g_1, \phi_L^1 \rangle \langle g_2, \phi_L^2 \rangle \phi_L^3(y).$$

\[thm\_weak\_mod\] Let $\Pi_{\text{flag}^0 \otimes \text{paraproduct}}$, $ \Pi_{\text{flag}^{\#_1} \otimes \text{paraproduct}}$, $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$, $\Pi_{\text{flag}^0 \otimes \text{flag}^{\#_2}}$ and $\Pi_{\text{flag}^{\#_1}\otimes \text{flag}^{\#_2}}$ be the multilinear operators specified in Definition \[discrete\_model\_op\]. Then all of them satisfy the mapping property stated in Theorem \[thm\_weak\], where the constants are independent of $\#_1,\#_2$ and the cardinalities of the collections $\mathcal{I}, \mathcal{J}, \mathcal{K}$ and $\mathcal{L}$.

\[thm\_weak\_inf\_mod\] Let $\Pi_{\text{flag}^0 \otimes \text{paraproduct}}$, $ \Pi_{\text{flag}^{\#_1} \otimes \text{paraproduct}}$, $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$, $\Pi_{\text{flag}^0 \otimes \text{flag}^{\#_2}}$ and $\Pi_{\text{flag}^{\#_1}\otimes \text{flag}^{\#_2}}$ be the multilinear operators specified in Definition \[discrete\_model\_op\]. Then all of them satisfy the mapping property stated in Theorem \[thm\_weak\_inf\], where the constants are independent of $\#_1,\#_2$ and the cardinalities of the collections $\mathcal{I}, \mathcal{J}, \mathcal{K}$ and $\mathcal{L}$.

The following chapters are devoted to the proofs of Theorems \[thm\_weak\_mod\] and \[thm\_weak\_inf\_mod\], which imply Theorems \[thm\_weak\] and \[thm\_weak\_inf\]. We will mainly focus on the discrete model operators defined in $(3)$ (Chapter 7) and $(5)$ (Chapter 6), whose arguments contain all the essential tools needed for the other discrete models.

Sizes and Energies
==================

The notion of sizes and energies appears first in [@mtt] and [@mtt2]. Since they will play important roles in the main arguments, the explicit definitions of sizes and energies are introduced and some useful properties are highlighted in this chapter. Let $\mathcal{I}$ be a finite collection of dyadic intervals. Let $(\psi_I)_{I \in \mathcal{I}}$ denote a lacunary family of $L^2$-normalized bump functions and $({\varphi}_I)_{I \in \mathcal{I}}$ a non-lacunary family of $L^2$-normalized bump functions.
Define

(1) $$\text{size}_{\mathcal{I}}((\langle f, {\varphi}_I \rangle)_{I \in \mathcal{I}}) := \sup_{I \in \mathcal{I}} \frac{|\langle f, {\varphi}_I\rangle|}{|I|^{\frac{1}{2}}};$$

(2) $$\text{size}_{\mathcal{I}}((\langle f, \psi_I \rangle)_{I \in \mathcal{I}}) := \sup_{I_0 \in \mathcal{I}} \frac{1}{|I_0|}\left\Vert \bigg(\sum_{\substack{I \subseteq I_0 \\ I \in \mathcal{I}}} \frac{|\langle f, \psi_I \rangle|^2}{|I|} \chi_{I}\bigg)^{\frac{1}{2}}\right\Vert_{1,\infty};$$

(3) $$\text{energy} _{\mathcal{I}}((\langle f, {\varphi}_I \rangle)_{I \in \mathcal{I}}) := \sup_{n \in \mathbb{Z}} 2^{n} \sup_{\mathbb{D}_n} \sum_{I \in \mathbb{D}_n} |I|$$ where $\mathbb{D}_n$ ranges over all collections of disjoint dyadic intervals in $\mathcal{I}$ satisfying $$\frac{|\langle f,{\varphi}_I \rangle|}{|I|^{\frac{1}{2}}} > 2^n;$$

(4) $$\text{energy} _{\mathcal{I}}((\langle f, \psi_I \rangle)_{I \in \mathcal{I}}) := \sup_{n \in \mathbb{Z}} 2^{n} \sup_{\mathbb{D}_n} \sum_{I \in \mathbb{D}_n} |I|$$ where $\mathbb{D}_n$ ranges over all collections of disjoint dyadic intervals in $\mathcal{I}$ satisfying $$\frac{1}{|I|}\left\Vert \bigg(\sum_{\substack{\tilde{I} \subseteq I \\ \tilde{I} \in \mathcal{I}}} \frac{|\langle f, \psi_{\tilde{I}} \rangle|^2}{|\tilde{I}|} \chi_{\tilde{I}}\bigg)^{\frac{1}{2}}\right\Vert_{1,\infty} > 2^{n};$$

(5) For $t>1$, define $$\text{energy}^{t}_{\mathcal{I}}((\langle f, {\varphi}_I \rangle)_{I \in \mathcal{I}}) := \left(\sum_{n \in \mathbb{Z}}2^{tn}\sup_{\mathbb{D}_n}\sum_{I \in \mathbb{D}_n}|I| \right)^{\frac{1}{t}}$$ where $\mathbb{D}_n$ ranges over all collections of disjoint dyadic intervals in $\mathcal{I}$ satisfying $$\frac{|\langle f,{\varphi}_I \rangle|}{|I|^{\frac{1}{2}}} > 2^n.$$

Useful Facts about Sizes and Energies
-------------------------------------

The following propositions describe facts about sizes and energies which will be heavily employed later on. Propositions \[JN\] and \[size\] are routine and the proofs can be found in Chapter 2 of [@cw]. Proposition \[energy\_classical\] consists of two parts - the first part is discussed in [@cw] while the second part is less standard. We will include the proof of both parts in Section 5.3 for the sake of completeness. Proposition \[size\_cor\], Proposition \[B\_en\_global\] and Proposition \[B\_en\] highlight the useful size and energy estimates involving the operators $B$ and $\tilde{B}$ in the Haar model. The emphasis on the Haar model assumption keeps track of the arguments we need to modify for the general Fourier case. It is noteworthy that Proposition \[B\_en\_global\] describes a "global" energy estimate, while Proposition \[size\_cor\] and Proposition \[B\_en\] take into consideration that the operators $B$ and $\tilde{B}$ are localized to intersect certain level sets which carry crucial information for the estimates of the sizes and energies for $B$ and $\tilde{B}$. While the proof of Proposition \[B\_en\_global\] follows from the boundedness of paraproducts ([@cm], [@cw]), the arguments for Proposition \[size\_cor\] and Proposition \[B\_en\] require localizations and more careful treatment, which will be discussed in subsequent sections.

\[JN\] Let $\mathcal{I}$ be a finite collection of dyadic intervals.
For any sequence $(a_I)_{I \in \mathcal{I}}$ and $r > 0$, define the BMO-norm of the sequence as
$$\|(a_I)_I\|_{\text{BMO}(r)} := \sup_{I_0 \in \mathcal{I}}\frac{1}{|I_0|^{\frac{1}{r}}} \left\Vert \left(\sum_{I \subseteq I_0} \frac{|a_I|^2}{|I|}\chi_{I}(x)\right)^{\frac{1}{2}}\right\Vert_r.$$
Then for any $0 < p < q < \infty$,
$$\|(a_I)_I\|_{\text{BMO}(p)} \simeq \|(a_I)_I \|_{\text{BMO}(q)}.$$

\[size\] Suppose $f \in L^1(\mathbb{R})$. Then
$$\text{size}_{\mathcal{I}}\big((\langle f,{\varphi}_I \rangle)_I\big), \text{size}_{\mathcal{I}}\big((\langle f,\psi_I \rangle)_I \big) \lesssim \sup_{I \in \mathcal{I}}\int_{\mathbb{R}}|f|\tilde{\chi}_I^M dx$$
for $M > 0$, where the implicit constant depends on $M$ and $\tilde{\chi}_I$ is an $L^{\infty}$-normalized bump function adapted to $I$.

\[energy\_classical\]

1. Suppose $f \in L^1(\mathbb{R})$. Then $$\text{energy}_{\mathcal{I}}((\langle f, {\varphi}_I \rangle)_I), \text{energy}_{\mathcal{I}}((\langle f, \psi_I \rangle)_I) \lesssim \|f\|_1.$$

2. Suppose $f \in L^t(\mathbb{R})$ for $t >1$. Then $$\text{energy}^t_{\mathcal{I}}((\langle f, {\varphi}_I \rangle)_I) \lesssim \|f\|_t.$$

\[B\_en\_global\] Suppose that $F_1, F_2 \subseteq \mathbb{R}_x$ and $G_1, G_2 \subseteq \mathbb{R}_y$ are sets of finite measure and $|f_i| \leq \chi_{F_i}$, $|g_j| \leq \chi_{G_j}$, $i, j = 1,2$. Suppose that $\mathcal{K}$ and $\mathcal{L}$ are finite collections of dyadic intervals. Define
$$\begin{aligned} & B(f_1,f_2)(x):= \sum_{K \in \mathcal{K}} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, \phi_K^1\rangle \langle f_2, \phi_K^2 \rangle \phi_K^3 (x), \nonumber \\ & \tilde{B}(g_1,g_2)(y):= \sum_{L \in \mathcal{L}} \frac{1}{|L|^{\frac{1}{2}}}\langle g_1, \phi_L^1\rangle \langle g_2, \phi_L^2 \rangle \phi_L^3 (y).\end{aligned}$$

1. Then for any $0 < \rho,\rho'<1$, one has $$\begin{aligned} & \text{energy}_{\mathcal{I}}((\langle B(f_1,f_2), {\varphi}_I \rangle)_{I \in \mathcal{I}}) \lesssim |F_1|^{\rho}|F_2|^{1-\rho}, \nonumber \\ & \text{energy}_{\mathcal{J}}((\langle \tilde{B}(g_1,g_2), {\varphi}_J \rangle)_{J \in \mathcal{J}}) \lesssim |G_1|^{\rho'}|G_2|^{1-\rho'}. \nonumber \\\end{aligned}$$

2. Suppose that $t,s >1$. Then for any $0 \leq \theta_1, \theta_2, \zeta_1, \zeta_2 <1$, with $\theta_1 + \theta_2 = \frac{1}{t}$ and $\zeta_1 + \zeta_2 = \frac{1}{s}$, one has $$\begin{aligned} & \text{energy}^t_{\mathcal{I}}((\langle B(f_1,f_2), {\varphi}_I \rangle)_{I \in \mathcal{I}}) \lesssim |F_1|^{\theta_1}|F_2|^{\theta_2}, \nonumber \\ & \text{energy}^s_{\mathcal{J}}((\langle \tilde{B}(g_1,g_2), {\varphi}_J \rangle)_{J \in \mathcal{J}}) \lesssim |G_1|^{\zeta_1}|G_2|^{\zeta_2}. \nonumber \\\end{aligned}$$

It is not difficult to observe that Proposition \[B\_en\_global\] follows immediately from Proposition \[energy\_classical\] and the following lemma.

\[B\_global\_norm\] Suppose that $1 < p_1,p_2 \leq \infty $ and $ 1 < q_1,q_2 \leq \infty$ with $(p_i, q_i) \neq (\infty, \infty)$ for $i = 1, 2$. Further assume that $\frac{1}{t} := \frac{1}{p_1}+ \frac{1}{q_1} <1$ and $\frac{1}{s} := \frac{1}{p_2}+ \frac{1}{q_2} <1$. Then for any $f_1 \in L^{p_1}$, $f_2 \in L^{q_1}$, $g_1 \in L^{p_2}$ and $g_2 \in L^{q_2}$,
$$\begin{aligned} & \|B(f_1,f_2)\|_{t} \lesssim \|f_1\|_{L^{p_1}} \|f_2\|_{L^{q_1}}, \nonumber \\ & \|\tilde{B}(g_1,g_2)\|_{s} \lesssim \|g_1\|_{L^{p_2}} \|g_2\|_{L^{q_2}}.\end{aligned}$$

By identifying $B$ and $\tilde{B}$ as one-parameter paraproducts, Lemma \[B\_global\_norm\] is a restatement of Coifman-Meyer's theorem on the boundedness of paraproducts [@cm].
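To make the observation concrete, here is the chain of inequalities behind the first estimate in part (1) of Proposition \[B\_en\_global\] (a sketch; the endpoint exponent $t = 1$ is again covered by Coifman-Meyer's theorem):
$$\text{energy}_{\mathcal{I}}((\langle B(f_1,f_2), {\varphi}_I \rangle)_{I \in \mathcal{I}}) \lesssim \|B(f_1,f_2)\|_1 \lesssim \|f_1\|_{\frac{1}{\rho}} \|f_2\|_{\frac{1}{1-\rho}} \leq |F_1|^{\rho} |F_2|^{1-\rho},$$
where the first inequality is Proposition \[energy\_classical\](1), the second is the paraproduct estimate with $\frac{1}{p_1} = \rho$ and $\frac{1}{q_1} = 1-\rho$, and the last one uses $|f_i| \leq \chi_{F_i}$. Part (2) is identical, with $\|\cdot\|_1$ replaced by $\|\cdot\|_t$ and $(\rho, 1-\rho)$ by $(\theta_1, \theta_2)$.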
We will now turn our attention to local size estimates for $(\langle B_I^{\#_1,H}, {\varphi}_I \rangle)_I$ and $(\langle \tilde{B}_J^{\#_2,H}, {\varphi}_J \rangle)_J$ and local energy estimates for $(\langle B_I^H, {\varphi}_I \rangle )_I$ and $(\langle \tilde{B}_J^H, {\varphi}_J \rangle)_J$ in the Haar model. The precise definitions of the operators $B_I^{\#_1,H}, \tilde{B}_J^{\#_2,H}, B_I^H$ and $\tilde{B}_J^H$ are stated as follows.

\[B\_def\] Suppose that $I$ and $J$ are fixed dyadic intervals and $\mathcal{K}$ and $\mathcal{L}$ are finite collections of dyadic intervals. Suppose that $(\phi_{K}^{i})_{K \in \mathcal{K}}, (\phi_{L}^{j})_{L \in \mathcal{L}}$ for $i, j = 1,2$ are families of $L^2$-normalized bump functions. Further assume that $(\phi_K^{3,H})_{K \in \mathcal{K}}$ and $(\phi_L^{3,H})_{L \in \mathcal{L}} $ are families of Haar wavelets or $L^2$-normalized indicator functions. A family of Haar wavelets is considered to be a lacunary family and a family of $L^2$-normalized indicator functions to be a non-lacunary family. Suppose that at least two of the families $(\phi_{K}^{1})_K, (\phi_{K}^2)_K$ and $(\phi_{K}^{3,H})_K$ are lacunary and that at least two of the families $(\phi_{L}^{1})_L, (\phi_{L}^2)_L$ and $(\phi_{L}^{3,H})_L$ are lacunary. Let

(i) $$\begin{aligned} \label{B_size_haar} & B_I^{\#_1,H}(f_1, f_2)(x) := \displaystyle \sum_{K \in \mathcal{K}:|K| \sim 2^{\#_1} |I|} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, \phi_K^1 \rangle \langle f_2, \phi_K^2 \rangle \phi_K^{3,H}(x), \nonumber \\ & \tilde{B}_J^{\#_2,H}(g_1,g_2) (y) := \displaystyle \sum_{L \in \mathcal{L}:|L| \sim 2^{\#_2} |J|} \frac{1}{|L|^{\frac{1}{2}}}\langle g_1, \phi_L^1 \rangle \langle g_2, \phi_L^2 \rangle \phi_L^{3,H}(y);\end{aligned}$$

(ii) $$\begin{aligned} & B_{I}^H(f_1, f_2)(x) := \sum_{K: |K| \geq |I|} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, \phi_K^1\rangle \langle f_2, \phi_K^2 \rangle \phi_K^{3,H} (x), \nonumber \\ & \tilde{B}_{J}^H(g_1, g_2)(y) := \sum_{L: |L| \geq |J|} \frac{1}{|L|^{\frac{1}{2}}}\langle g_1, \phi_L^1\rangle \langle g_2, \phi_L^2 \rangle \phi_L^{3,H} (y).\end{aligned}$$

In the Haar model, for any fixed dyadic intervals $I$ and $K$, the only non-degenerate case, i.e. $\langle \phi_K^{3,H}, {\varphi}_I^H \rangle \neq 0$, occurs when $K \supseteq I$. This observation provides natural localizations for the sequence $(\langle B_I^{\#_1,H}, {\varphi}^H_I \rangle)_{I \in \mathcal{I}'}$ and thus for the sequences $(\langle f_1, \phi_K^1 \rangle)_{K}$ and $(\langle f_2, \phi_K^2 \rangle)_{K}$, as explicitly stated in the following lemma.

\[B\_size\] Suppose that $S$ is a measurable subset of $\mathbb{R}_{x}$ and $S'$ a measurable subset of $\mathbb{R}_{y}$.
If $\mathcal{I}', \mathcal{J}' $ are finite collections of dyadic intervals such that $I \cap S \neq \emptyset$ for any $I \in \mathcal{I}'$ and $J \cap S' \neq \emptyset$ for any $J \in \mathcal{J}'$, then
$$\text{size}_{\mathcal{I'}}((\langle B_I^{\#_1,H}, {\varphi}^H_I \rangle)_{I \in \mathcal{I}'}) \lesssim \sup_{K \cap S \neq \emptyset}\frac{|\langle f_1, \phi_K^1 \rangle|}{|K|^{\frac{1}{2}}} \sup_{K \cap S \neq \emptyset}\frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}},$$
$$\text{size}_{\mathcal{J}'}((\langle \tilde{B}_J^{\#_2,H}, {\varphi}^H_J \rangle)_{J \in \mathcal{J}'}) \lesssim \sup_{L \cap S' \neq \emptyset}\frac{|\langle g_1, \phi_L^1 \rangle|}{|L|^{\frac{1}{2}}} \sup_{L \cap S' \neq \emptyset}\frac{|\langle g_2, \phi_L^2 \rangle|}{|L|^{\frac{1}{2}}}.$$
The localization generates more quantitative and useful estimates for the sizes involving $B_I^{\#_1,H}$ and $\tilde{B}_J^{\#_2,H}$ when $S$ and $S'$ are level sets of the Hardy-Littlewood maximal functions $Mf_1$ and $Mg_1$, as elaborated in the following proposition. One notational comment is that $C_1, C_2$ and $C_3$ used throughout the paper denote some sufficiently large constants greater than 1.

\[Local Size Estimates in the Haar Model\]\[size\_cor\] Suppose that $F_1, F_2 \subseteq \mathbb{R}_x$ and $G_1, G_2 \subseteq \mathbb{R}_y$ are sets of finite measure and $|f_i| \leq \chi_{F_i}$, $|g_j| \leq \chi_{G_j}$, $i, j = 1,2$. Let $n_1, m_1, n_2, m_2$ denote some integers. Let $\mathcal{U}_{n_1,m_1}:=\{x: Mf_1(x) \leq C_12^{n_1} |F_1|\} \cap \{ x: Mf_2(x) \leq C_1 2^{m_1} |F_2|\}$ and $\mathcal{U}'_{n_2,m_2} := \{y: Mg_1(y) \leq C_2 2^{n_2} |G_1|\} \cap \{y: Mg_2(y) \leq C_2 2^{m_2} |G_2|\}$. If $ \mathcal{I}', \mathcal{J}' $ are finite collections of dyadic intervals such that $I \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$ for any $I \in \mathcal{I}'$ and $J \cap \mathcal{U}'_{n_2,m_2} \neq \emptyset$ for any $J \in \mathcal{J}'$, then
$$\text{size}_{\mathcal{I'}}((\langle B_I^{\#_1,H}, {\varphi}^H_I \rangle)_{I \in \mathcal{I}'}) \lesssim (C_1 2^{n_1}|F_1|)^{\alpha_1} (C_1 2^{m_1}|F_2|)^{\beta_1},$$
$$\text{size}_{\mathcal{J}'}((\langle \tilde{B}_J^{\#_2,H}, {\varphi}^H_J \rangle)_{J \in \mathcal{J}'}) \lesssim (C_2 2^{n_2}|G_1|)^{\alpha_2} (C_2 2^{m_2}|G_2|)^{\beta_2},$$
for any $ 0 \leq \alpha_1, \alpha_2, \beta_1, \beta_2 \leq 1$.

The proof of the proposition follows directly from Lemma \[B\_size\] and the trivial estimates
$$\sup_{K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset}\frac{|\langle f_1, \phi_K^1 \rangle|}{|K|^{\frac{1}{2}}} \lesssim \min(C_1 2^{n_1}|F_1|,1),$$
$$\sup_{K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset}\frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}} \lesssim \min(C_1 2^{m_1}|F_2|,1),$$
$$\sup_{L \cap \mathcal{U}'_{n_2,m_2} \neq \emptyset}\frac{|\langle g_1, \phi_L^1 \rangle|}{|L|^{\frac{1}{2}}} \lesssim \min(C_2 2^{n_2}|G_1|,1),$$
$$\sup_{L \cap \mathcal{U}'_{n_2,m_2} \neq \emptyset}\frac{|\langle g_2, \phi_L^2 \rangle|}{|L|^{\frac{1}{2}}} \lesssim \min(C_2 2^{m_2}|G_2|,1),$$
which hold because each such $K$ (resp. $L$) intersects the corresponding level set non-trivially, so that, for instance, $\frac{|\langle f_1, \phi_K^1 \rangle|}{|K|^{\frac{1}{2}}} \lesssim \inf_{z \in K} Mf_1(z) \leq C_1 2^{n_1}|F_1|$, while $|f_1| \leq \chi_{F_1}$ yields the bound by a constant. We will also explore the local energy estimates, which are "stronger" than the global energy estimates. Heuristically, in the case when $f_1 \in L^{p_1}$ and $ f_2 \in L^{q_1}$ with $|f_1| \leq \chi_{F_1}$ and $|f_2| \leq \chi_{F_2}$ for $p_1,q_1>1$ and close to $1$, the global energy estimates would not yield the desired boundedness exponents for $|F_1|$ and $|F_2|$, whereas one could take advantage of the local energy estimates to obtain the result.
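The exponent bookkeeping behind this heuristic can be made precise as follows. For $|F_1|, |F_2| \leq 1$, the global bound $|F_1|^{\rho}|F_2|^{1-\rho}$ from Proposition \[B\_en\_global\] implies the desired bound $|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}$ only if
$$\rho \geq \frac{1}{p_1} \quad \text{and} \quad 1-\rho \geq \frac{1}{q_1}$$
(recall that $|F|^{a} \leq |F|^{b}$ for $|F| \leq 1$ precisely when $a \geq b$), which is impossible as soon as $\frac{1}{p_1}+\frac{1}{q_1} > 1$: the global energy estimates cap the total exponent at $\rho + (1-\rho) = 1$. The local estimates recover the missing exponents from the level-set information available on the intervals intersecting $\mathcal{U}_{n_1,m_1}$, at the price of factors which are powers of $2^{n_1}$ and $2^{m_1}$ and which are summed over the stopping-time parameters in the end.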
In the Haar model, a perfect localization can be achieved for the energy estimates involving the bilinear operators $B^H_{I}$ and $\tilde{B}^H_J$ specified in Definition \[B\_def\](ii). In particular, the corresponding energy estimates can be compared to the energy estimates for $(\langle B^{n_1,m_1}_0, {\varphi}_I \rangle )_{I \in \mathcal{I}'}$ and $(\langle \tilde{B}^{n_2,m_2}_0, {\varphi}_J \rangle )_{J \in \mathcal{J}'}$, where $B^{n_1,m_1}_0$ and $\tilde{B}^{n_2,m_2}_0$ are localized operators defined as follows. Let $\mathcal{U}_{n_1,m_1}, \mathcal{U}'_{n_2,m_2}$ be the level sets described in Proposition \[size\_cor\]. And suppose that $\mathcal{I}', \mathcal{J}' $ are finite collections of dyadic intervals such that $I \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$ for any $I \in \mathcal{I}'$ and $J \cap \mathcal{U}'_{n_2,m_2} \neq \emptyset$ for any $J \in \mathcal{J}'$. Define
$$B^{n_1,m_1}_0(f_1,f_2)(x):= \begin{cases} \displaystyle \sum_{K: K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset} \frac{1}{|K|^{\frac{1}{2}}}|\langle f_1, \psi_K^1\rangle| |\langle f_2, \psi_K^2 \rangle| |{\varphi}_K^{3,H} (x)| \ \ \text{if}\ \ \phi_K^{3,H} \ \ \text{is}\ \ L^2 \text{-normalized indicator func.} \\ \displaystyle \sum_{K: K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1\rangle \langle f_2, \psi_K^2 \rangle \psi_K^{3,H} (x) \quad \ \ \ \text{if}\ \ \phi_K^{3,H} \ \ \text{is Haar wavelet}, \\ \end{cases}$$
$$\tilde{B}^{n_2,m_2}_0(g_1,g_2)(y):= \begin{cases} \displaystyle \sum_{L: L \cap \mathcal{U}'_{n_2,m_2} \neq \emptyset} \frac{1}{|L|^{\frac{1}{2}}}|\langle g_1, \psi_L^1\rangle| |\langle g_2, \psi_L^2 \rangle| |{\varphi}_L^{3,H} (y)| \ \ \text{if}\ \ \phi_L^{3,H} \ \ \text{is}\ \ L^2 \text{-normalized indicator func.} \\ \displaystyle \sum_{L: L \cap \mathcal{U}'_{n_2,m_2} \neq \emptyset} \frac{1}{|L|^{\frac{1}{2}}}\langle g_1, {\varphi}_L^1\rangle \langle g_2, \psi_L^2 \rangle \psi_L^{3,H} (y) \quad \ \ \ \text{if}\ \ \phi_L^{3,H} \ \ \text{is Haar wavelet}. \\ \end{cases}$$
We would like to emphasize that $B^{n_1,m_1}_0$ and $\tilde{B}^{n_2,m_2}_0$ are localized to intersect the level sets $\mathcal{U}_{n_1,m_1}$ and $\mathcal{U}'_{n_2,m_2}$ nontrivially. It is not difficult to imagine that the energy estimates for $(\langle B^{n_1,m_1}_0, {\varphi}_I \rangle )_{I \in \mathcal{I}'}$ and $(\langle \tilde{B}^{n_2,m_2}_0, {\varphi}_J \rangle )_{J \in \mathcal{J}'}$ would be better than the "global" energy estimates ($i.e. \ \ \text{energy}(\langle B(f_1, f_2), {\varphi}_I\rangle_{I} )$ and $\text{energy}(\langle \tilde{B}(g_1, g_2), {\varphi}_J\rangle_{J} )$), since one can now employ the information about intersections with the level sets to control
$$\frac{|\langle f_1, \phi_K \rangle|}{|K|^{\frac{1}{2}}}, \frac{|\langle f_2, \phi_K \rangle|}{|K|^{\frac{1}{2}}}, \frac{|\langle g_1, \phi_L \rangle|}{|L|^{\frac{1}{2}}}, \frac{|\langle g_2, \phi_L \rangle|}{|L|^{\frac{1}{2}}}.$$
The energy estimates for $(\langle B^H_I, {\varphi}_I \rangle )_{I \in \mathcal{I}'}$ and $(\langle \tilde{B}^H_J, {\varphi}_J \rangle )_{J \in \mathcal{J}'}$ can indeed be reduced to the energy estimates for $(\langle B^{n_1,m_1}_0, {\varphi}_I \rangle )_{I \in \mathcal{I}'}$ and $(\langle \tilde{B}^{n_2,m_2}_0, {\varphi}_J \rangle )_{J \in \mathcal{J}'}$, as stated in Lemma \[localization\_haar\].
\[localization\_haar\] Suppose that $\mathcal{I}', \mathcal{J}' $ are finite collections of dyadic intervals such that $I \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$ for any $I \in \mathcal{I}'$ and $J \cap \mathcal{U}'_{n_2,m_2} \neq \emptyset$ for any $J \in \mathcal{J}'$. Then $$\begin{aligned} & \text{energy}_{\mathcal{I}'}((\langle B^H_I, {\varphi}^H_I \rangle)_{I \in \mathcal{I}'}) \leq \text{energy}_{\mathcal{I}'}((\langle B^{n_1,m_1}_0, {\varphi}^H_I \rangle)_{I \in \mathcal{I}'}), \nonumber \\ & \text{energy}_{\mathcal{J}'}((\langle \tilde{B}^H_J, {\varphi}^H_J \rangle)_{J \in \mathcal{J}'}) \leq \text{energy}_{\mathcal{J}'}((\langle \tilde{B}^{n_2,m_2}_0, {\varphi}_J^H \rangle)_{J \in \mathcal{J}'}).\end{aligned}$$ The following local energy estimates will play a crucial role in the proof of our main theorem. \[B\_en\] Suppose that $F_1, F_2 \subseteq \mathbb{R}_x$ and $G_1, G_2 \subseteq \mathbb{R}_y$ are sets of finite measure and $|f_i| \leq \chi_{F_i}$, $|g_j| \leq \chi_{G_j}$, $i, j = 1,2$. Assume that $\mathcal{I}', \mathcal{J}' $ are finite collections of dyadic intervals such that $I \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$ for any $I \in \mathcal{I}'$ and $J \cap \mathcal{U}'_{n_2,m_2} \neq \emptyset$ for any $J \in \mathcal{J}'$. Further assume that $\frac{1}{p_1} + \frac{1}{q_1}=\frac{1}{p_2}+ \frac{1}{q_2} > 1$. (i) Then for any $0 \leq \theta_1,\theta_2 <1$ with $\theta_1 + \theta_2 = 1$ and $0 \leq \zeta_1,\zeta_2 <1$ with $\zeta_1 + \zeta_2= 1$, one has $$\begin{aligned} &\text{energy}_{\mathcal{I}'}((\langle B^H_I, {\varphi}_I^H \rangle)_{I \in \mathcal{I}'}) \lesssim C_1^{\frac{1}{p_1}+ \frac{1}{q_1} - \theta_1 - \theta_2} 2^{n_1(\frac{1}{p_1} - \theta_1)} 2^{m_1(\frac{1}{q_1} - \theta_2)} |F_1|^{\frac{1}{p_1}} |F_2|^{\frac{1}{q_1}}, \nonumber \\ & \text{energy}_{\mathcal{J}'}((\langle \tilde{B}^H_J, {\varphi}_J^H \rangle)_{J \in \mathcal{J}'}) \lesssim C_2^{\frac{1}{p_2}+ \frac{1}{q_2} - \zeta_1 - \zeta_2} 2^{n_2(\frac{1}{p_2} - \zeta_1)} 2^{m_2(\frac{1}{q_2} - \zeta_2)} |G_1|^{\frac{1}{p_2}} |G_2|^{\frac{1}{q_2}}. \end{aligned}$$ (ii) Suppose that $t,s >1$. Then for any $0 \leq \theta_1,\theta_2, \zeta_1, \zeta_2 <1$ with $\theta_1 + \theta_2 = \frac{1}{t}$ and $\zeta_1 + \zeta_2= \frac{1}{s}$, one has $$\begin{aligned} & \text{energy}^{t} _{\mathcal{I}'}((\langle B^H_I, {\varphi}_I^H \rangle)_{I \in \mathcal{I}'}) \lesssim C_1^{\frac{1}{p_1}+ \frac{1}{q_1} - \theta_1 - \theta_2}2^{n_1(\frac{1}{p_1} - \theta_1)}2^{m_1(\frac{1}{q_1} - \theta_2)}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}, \nonumber \\ & \text{energy}^{s} _{\mathcal{J}'}((\langle \tilde{B}^H_J, {\varphi}_J^H \rangle)_{J \in \mathcal{J}'}) \lesssim C_2^{\frac{1}{p_2}+ \frac{1}{q_2} - \zeta_1 - \zeta_2}2^{n_2(\frac{1}{p_2} - \zeta_1)}2^{m_2(\frac{1}{q_2} - \zeta_2)}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}. \end{aligned}$$ The condition that $$\label{diff_exp} \frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2}+ \frac{1}{q_2} > 1$$ is required in the proof of the proposition. Moreover, the energy estimates in Proposition \[B\_en\] are useful precisely for the range of exponents specified in (\[diff\_exp\]).
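To fix ideas, here is an admissible instance of part (i) (an illustrative choice of exponents, not the one ultimately used in the argument): take $p_1 = q_1 = p_2 = q_2 = \frac{4}{3}$, so that $\frac{1}{p_1} + \frac{1}{q_1} = \frac{3}{2} > 1$, and $\theta_1 = \theta_2 = \frac{1}{2}$. Then the first estimate reads $$\text{energy}_{\mathcal{I}'}((\langle B^H_I, {\varphi}_I^H \rangle)_{I \in \mathcal{I}'}) \lesssim C_1^{\frac{1}{2}}\, 2^{\frac{n_1}{4}}\, 2^{\frac{m_1}{4}}\, |F_1|^{\frac{3}{4}} |F_2|^{\frac{3}{4}},$$ exhibiting the two features exploited later: the exponents $\frac{3}{4}$ on $|F_1|, |F_2|$ (rather than $1$) and the explicit powers of $2^{n_1}, 2^{m_1}$ coming from the localization.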
A simpler argument without the use of Proposition \[B\_en\] can be applied in the other case $$\frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2}+ \frac{1}{q_2} \leq 1.$$ \[loc\_easy\_haar\] Thanks to the localization specified in Lemma \[localization\_haar\], it suffices to prove that $$\text{energy}_{\mathcal{I}'}((\langle B^{n_1,m_1}_0, {\varphi}_I^H \rangle)_{I \in \mathcal{I}'}),$$ $$\text{energy}_{\mathcal{J}'}((\langle \tilde{B}^{n_2,m_2}_0, {\varphi}_J^H \rangle)_{J \in \mathcal{J}'})$$ satisfy the same estimates as the right-hand sides of the inequalities in Proposition \[B\_en\], namely:

(i') for any $0 \leq \theta_1,\theta_2 <1$ with $\theta_1 + \theta_2 = 1$ and $0 \leq \zeta_1,\zeta_2 <1$ with $\zeta_1 + \zeta_2= 1$, $$\begin{aligned} &\text{energy}_{\mathcal{I}'}((\langle B^{n_1,m_1}_0, {\varphi}_I^H \rangle)_{I \in \mathcal{I}'}) \lesssim C_1^{\frac{1}{p_1}+ \frac{1}{q_1} - \theta_1 - \theta_2} 2^{n_1(\frac{1}{p_1} - \theta_1)} 2^{m_1(\frac{1}{q_1} - \theta_2)} |F_1|^{\frac{1}{p_1}} |F_2|^{\frac{1}{q_1}},\nonumber \\ & \text{energy}_{\mathcal{J}'}((\langle \tilde{B}^{n_2,m_2}_0, {\varphi}_J^H \rangle)_{J \in \mathcal{J}'}) \lesssim C_2^{\frac{1}{p_2}+ \frac{1}{q_2} - \zeta_1 - \zeta_2} 2^{n_2(\frac{1}{p_2} - \zeta_1)} 2^{m_2(\frac{1}{q_2} - \zeta_2)} |G_1|^{\frac{1}{p_2}} |G_2|^{\frac{1}{q_2}}; \end{aligned}$$

(ii') for any $0 \leq \theta_1,\theta_2,\zeta_1, \zeta_2 <1$ with $\theta_1 + \theta_2 = \frac{1}{t}$ and $\zeta_1 + \zeta_2= \frac{1}{s}$, $$\begin{aligned} & \text{energy}^{t} _{\mathcal{I}'}((\langle B^{n_1,m_1}_0, {\varphi}_I^H \rangle)_{I \in \mathcal{I}'}) \lesssim C_1^{\frac{1}{p_1}+ \frac{1}{q_1} - \theta_1 - \theta_2}2^{n_1(\frac{1}{p_1} - \theta_1)}2^{m_1(\frac{1}{q_1} - \theta_2)}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}, \nonumber \\ & \text{energy}^{s} _{\mathcal{J}'}((\langle \tilde{B}^{n_2,m_2}_0, {\varphi}_J^H \rangle)_{J \in \mathcal{J}'}) \lesssim C_2^{\frac{1}{p_2}+ \frac{1}{q_2} - \zeta_1 - \zeta_2}2^{n_2(\frac{1}{p_2} - \zeta_1)}2^{m_2(\frac{1}{q_2} - \zeta_2)}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}. \end{aligned}$$

Due to Proposition \[energy\_classical\], the proofs of (i') and (ii'), and thus of (i) and (ii), can be reduced to verifying Lemma \[B\_loc\_norm\]. \[B\_loc\_norm\] Suppose that $F_1, F_2 \subseteq \mathbb{R}_x$ and $G_1, G_2 \subseteq \mathbb{R}_y$ are sets of finite measure and $|f_i| \leq \chi_{F_i}$, $|g_j| \leq \chi_{G_j}$, $i, j = 1,2$. Fix $t,s \geq 1$. Then for any $0 \leq \theta_1,\theta_2, \zeta_1, \zeta_2 <1$ with $\theta_1 + \theta_2 = \frac{1}{t}$ and $\zeta_1 + \zeta_2= \frac{1}{s}$, one has $$\begin{aligned} & \|B_0^{n_1,m_1}(f_1,f_2)\|_t \lesssim C_1^{\frac{1}{p_1}+ \frac{1}{q_1} - \theta_1 - \theta_2} 2^{n_1(\frac{1}{p_1} - \theta_1)} 2^{m_1(\frac{1}{q_1} - \theta_2)} |F_1|^{\frac{1}{p_1}} |F_2|^{\frac{1}{q_1}}, \nonumber \\ & \|\tilde{B}_0^{n_2,m_2}(g_1,g_2)\|_{s} \lesssim C_2^{\frac{1}{p_2}+ \frac{1}{q_2} - \zeta_1 - \zeta_2} 2^{n_2(\frac{1}{p_2} - \zeta_1)} 2^{m_2(\frac{1}{q_2} - \zeta_2)} |G_1|^{\frac{1}{p_2}} |G_2|^{\frac{1}{q_2}}.\end{aligned}$$
Proof of Proposition \[energy\_classical\]
------------------------------------------

(i) One notices that there exist an integer $n_0$ and a disjoint collection of intervals, denoted by $\mathbb{D}^{0}_{n_0}$, such that $$\text{energy} _{\mathcal{I}'}((\langle f, {\varphi}_I \rangle)_{I \in \mathcal{I}'}) = 2^{n_0} \sum_{\substack{I \in \mathbb{D}_{n_0}^{0}\\ I \in \mathcal{I'}}}|I|\label{energy_1}$$ where for any $I \in \mathbb{D}^0_{n_0}$, $$\frac{|\langle f, {\varphi}_I \rangle|}{|I|^{\frac{1}{2}}} > 2^{n_0}.$$ Meanwhile, for any $x \in I$, $$Mf(x) \geq \frac{|\langle f, {\varphi}_I \rangle|}{|I|^{\frac{1}{2}}}, $$ which implies that $$I \subseteq \{Mf > 2^{n_0}\}$$ for any $I \in \mathbb{D}^0_{n_0}$. Then by the disjointness of $\mathbb{D}^0_{n_0}$, one can estimate the energy as follows: $$\text{energy} _{\mathcal{I}'}((\langle f, {\varphi}_I \rangle)_{I \in \mathcal{I}'}) \leq 2^{n_0 } |\{Mf > 2^{n_0}\}| \leq \|Mf\|_{1,\infty} \lesssim \|f\|_1.$$ (ii) One observes that for each $n$, there exists a disjoint collection of intervals, denoted by $\mathbb{D}^{0}_n$, such that $$\text{energy}^{t} _{\mathcal{I}'}((\langle f, {\varphi}_I \rangle)_{I \in \mathcal{I}'}) = \bigg(\sum_{n}2^{tn} \sum_{\substack{I \in \mathbb{D}_n^{0}\\ I \in \mathcal{I'}}}|I|\bigg)^{\frac{1}{t}}\label{energy_p}$$ where for any $I \in \mathbb{D}^0_n$, $$\frac{|\langle f, {\varphi}_I \rangle|}{|I|^{\frac{1}{2}}} > 2^{n}.$$ By the same reasoning as in (i), $$I \subseteq \{Mf > 2^{n}\}.$$ Then by the disjointness of $\mathbb{D}^0_n$, one can estimate the energy as follows: $$\text{energy}^{t} _{\mathcal{I}'}((\langle f, {\varphi}_I \rangle)_{I \in \mathcal{I}'}) \leq \big(\sum_{n}2^{tn } |\{Mf > 2^{n}\}|\big)^{\frac{1}{t}} \lesssim \|Mf\|_{t}.$$ One can then apply the boundedness of the maximal operator $M: L^{t} \rightarrow L^{t}$ for $t >1$ to derive $$\|Mf\|_{t} \lesssim \|f\|_{t}.$$ Proof of Proposition \[B\_size\]
---------------------------------

We will prove the first size estimate; the second follows from the same argument. One recalls that, by the definition of the size, $$\text{size}_{\mathcal{I'}}((\langle B_I^{\#_1,H}, {\varphi}^H_I \rangle)_{I \in \mathcal{I}'}) = \frac{|\langle B^{\#_1,H}_{I_0}(f_1,f_2),{\varphi}_{I_0}^H \rangle|}{|I_0|^{\frac{1}{2}}}$$ for some $I_0 \in \mathcal{I}'$, which satisfies $I_0 \cap S \neq \emptyset$ by the assumption. Then $$\begin{aligned} \frac{|\langle B^{\#_1,H}_{I_0}(f_1,f_2),{\varphi}_{I_0}^H \rangle|}{|I_0|^{\frac{1}{2}}} \leq & \frac{1}{|I_0|}\sum_{K:|K|\sim 2^{\#_1}|I_0|}\frac{1}{|K|^{\frac{1}{2}}}|\langle f_1, \phi_K^1 \rangle| |\langle f_2, \phi_K^2 \rangle| |\langle |I_0|^{\frac{1}{2}}{\varphi}^H_{I_0},\phi_K^{3,H} \rangle| \nonumber \\ = & \frac{1}{|I_0|}\sum_{K:|K|\sim 2^{\#_1}|I_0|}\frac{|\langle f_1, \phi_K^1 \rangle|}{|K|^{\frac{1}{2}}} \frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}} |\langle |I_0|^{\frac{1}{2}}{\varphi}_{I_0}^H, |K|^{\frac{1}{2}}\phi_K^{3,H} \rangle|.\end{aligned}$$
Since $ {\varphi}_{I_0}^H$ and $\phi_K^{3,H}$ are compactly supported on $I_0$ and $K$ respectively with $|I_0| \leq |K|$, one has $$\langle |I_0|^{\frac{1}{2}}{\varphi}_{I_0}^H, |K|^{\frac{1}{2}}\phi_K^{3,H} \rangle \neq 0$$ only if $$I_0 \subseteq K.$$ By the hypothesis that $I_0 \cap S \neq \emptyset$, one derives that $K \cap S\neq \emptyset$ and $$\begin{aligned} \frac{|\langle B^{\#_1,H}_{I_0}(f_1,f_2),{\varphi}_{I_0}^H \rangle|}{|I_0|^{\frac{1}{2}}} \leq &\frac{1}{|I_0|} \sup_{K \cap S \neq \emptyset}\frac{|\langle f_1, \phi_K^1 \rangle|}{|K|^{\frac{1}{2}}} \sup_{K \cap S \neq \emptyset}\frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}}\sum_{K:|K|\sim 2^{\#_1}|I_0|}|\langle |I_0|^{\frac{1}{2}}{\varphi}^H_{I_0}, |K|^{\frac{1}{2}}\phi_K^{3,H} \rangle| \nonumber \\ \lesssim & \frac{1}{|I_0|} \sup_{K \cap S \neq \emptyset}\frac{|\langle f_1, \phi_K^1 \rangle|}{|K|^{\frac{1}{2}}} \sup_{K \cap S\neq \emptyset}\frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}} \cdot |I_0|,\end{aligned}$$ where the last inequality holds trivially given that $|I_0|^{\frac{1}{2}}{\varphi}^H_{I_0}$ is an $L^{\infty}$-normalized characteristic function of $I_0$ and $\big| |K|^{\frac{1}{2}}\phi_K^{3,H} \big| \leq \chi_K$. This completes the proof of the proposition. Proof of Lemma \[localization\_haar\]
-------------------------------------

Suppose that for any $I \in \mathcal{I}'$, $I \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$. By the definition of the energy, there exist $n \in \mathbb{Z}$ and a disjoint collection of dyadic intervals $\mathbb{D}^0_{n}$ such that $$\text{energy} _{\mathcal{I}'}((\langle B^H_I, {\varphi}^H_I \rangle)_{I \in \mathcal{I}'}) = 2^{n} \sum_{\substack{I \in \mathbb{D}^0_{n}\\ I \in \mathcal{I'}}}|I| \label{energy}$$ where for any $I \in \mathbb{D}^0_n$, $$\label{st_interval} \frac{|\langle B^H_I, {\varphi}^H_I \rangle|}{|I|^{\frac{1}{2}}} > 2^{n}.$$ **Case I. $(\phi^3_K)_K$ is lacunary.** One recalls that in the Haar model, $$\langle B^H_I, {\varphi}_I^H \rangle:= \frac{1}{|I|^{\frac{1}{2}}} \sum_{\substack{K \in \mathcal{K} \\ |K| \geq |I|}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi_K^1\rangle \langle f_2, \phi_K^2 \rangle \langle {\varphi}^H_I,\psi_K^{3,H} \rangle$$ where ${\varphi}^H_I$ is an $L^2$-normalized indicator function of $I$ and $\psi_K^{3,H}$ is a Haar wavelet on $K$. It is not difficult to observe that $$\label{haar_biest_cond} \langle {\varphi}^H_I,\psi_K^{3,H} \rangle \neq 0 \iff K \supsetneq I.$$ Given $I \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$, one can deduce that $K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$.
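For the reader's convenience, we verify (\[haar\_biest\_cond\]) by an elementary computation in the Haar model: if $K \supsetneq I$, then $I$ is contained in one of the two dyadic children of $K$, on which the Haar wavelet $\psi_K^{3,H}$ is constant and equal to $\pm |K|^{-\frac{1}{2}}$; hence $$\langle {\varphi}^H_I, \psi_K^{3,H} \rangle = |I|^{-\frac{1}{2}} \int_I \psi_K^{3,H} = \pm \Big(\frac{|I|}{|K|}\Big)^{\frac{1}{2}} \neq 0.$$ On the other hand, for $K = I$ the mean-zero property of $\psi_I^{3,H}$ gives $\langle {\varphi}^H_I, \psi_I^{3,H} \rangle = 0$, and for $K \cap I = \emptyset$ the supports are disjoint.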
As a consequence, $$\begin{aligned} \label{haar_biest} \langle B^H_I, {\varphi}^H_I \rangle =& \sum_{\substack{K \in \mathcal{K} \\ K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset \\ |K| \geq |I|}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi_K^1\rangle \langle f_2, \phi_K^2 \rangle \langle {\varphi}^H_I,\psi_K^{3,H} \rangle \nonumber \\ = &\sum_{\substack{K \in \mathcal{K} \\ K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi_K^1\rangle \langle f_2, \phi_K^2 \rangle \langle {\varphi}^H_I,\psi_K^{3,H} \rangle.\end{aligned}$$ Let $$B^{n_1,m_1}_0(f_1, f_2)(x) := \sum_{\substack{K \in \mathcal{K} \\ K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi_K^1\rangle \langle f_2, \phi_K^2 \rangle \psi_K^{3,H}(x).$$ Then $$\langle B^H_I, {\varphi}^H_I \rangle = \langle B^{n_1,m_1}_0, {\varphi}^H_I \rangle.$$ In the Haar model, equation (\[haar\_biest\]) holds trivially due to (\[haar\_biest\_cond\]). This technique of replacing the operator defined in terms of $I$ (namely $B_I^H$) by an operator independent of $I$ (namely $B^{n_1,m_1}_0$) is called the **biest trick**; it allows clean energy estimates for $$\begin{aligned} & \text{energy}((\langle B_0^{n_1,m_1}, {\varphi}_I \rangle)_{I \in \mathcal{I}}), \nonumber \\ & \text{energy}((\langle \tilde{B}_0^{n_2,m_2}, {\varphi}_J \rangle)_{J \in \mathcal{J}}), \nonumber \end{aligned}$$ and yields the local energy estimates described in Proposition \[B\_en\]. **Case II: $(\phi^3_K)_K$ is non-lacunary.** Since $\phi^{3,H}_K$ and ${\varphi}_I^H$ are $L^2$-normalized indicator functions of $K$ and $I$ respectively, a nonzero pairing forces $K \cap I \neq \emptyset$; together with $|K| \geq |I|$, dyadicity implies that $K \supseteq I$. As a result, $K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$ given $I \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$.
Then $$\begin{aligned} \frac{|\langle B_I^H, {\varphi}_I^H \rangle|}{|I|^{\frac{1}{2}}} = & \frac{1}{|I|^{\frac{1}{2}}} \bigg|\sum_{\substack{K \in \mathcal{K} \\ K \supseteq I \\ K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi_K^1\rangle \langle f_2, \phi_K^2 \rangle \langle {\varphi}^{H}_I,{\varphi}_K^{3,H} \rangle \bigg| \nonumber \\ \leq & \frac{1}{|I|^{\frac{1}{2}}} \sum_{\substack{K \in \mathcal{K} \\ K \supseteq I \\ K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset}} \frac{1}{|K|^{\frac{1}{2}}} |\langle f_1, \phi_K^1\rangle| |\langle f_2, \phi_K^2 \rangle| \langle |{\varphi}^{H}_I|,|{\varphi}_K^{3,H}| \rangle.\end{aligned}$$ One can drop the condition $K \supseteq I$ in the sum and bound the above expression by $$\frac{|\langle B_I^H, {\varphi}_I^H \rangle|}{|I|^{\frac{1}{2}}} \leq \frac{1}{|I|^{\frac{1}{2}}} \sum_{\substack{K \in \mathcal{K}\\ K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset}} \frac{1}{|K|^{\frac{1}{2}}} |\langle f_1, \phi_K^1\rangle| |\langle f_2, \phi_K^2 \rangle| \langle |{\varphi}_I^H|,|{\varphi}_K^{3,H}| \rangle.$$ One can define the localized operator in this case as $$B^{n_1,m_1}_0(f_1,f_2)(x) := \displaystyle \sum_{\substack{K \in \mathcal{K}\\ K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset}} \frac{1}{|K|^{\frac{1}{2}}} |\langle f_1, \phi_K^1\rangle| |\langle f_2, \phi_K^2 \rangle| |{\varphi}_K^{3,H}|(x).$$ The discussion above yields that $$\frac{|\langle B_I^H, {\varphi}_I^H \rangle|}{|I|^{\frac{1}{2}}} \leq \frac{|\langle B_0^{n_1,m_1}, {\varphi}_I^H \rangle|}{|I|^{\frac{1}{2}}}$$ and therefore $$\text{energy}_{\mathcal{I}'}((\langle B_I^H, {\varphi}_I^H \rangle)_{I \in \mathcal{I'}}) \leq \text{energy}_{\mathcal{I}'}((\langle B^{n_1,m_1}_0, {\varphi}_I^H \rangle)_{I \in \mathcal{I'}}).$$ This completes the proof of the lemma. $B^{n_1,m_1}_0$ is perfectly localized in the sense that the dyadic intervals that matter intersect $\mathcal{U}_{n_1,m_1}$ nontrivially, given that $I \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$. As will be seen from the proof of Lemma \[B\_loc\_norm\], such localization is essential in deriving the desired estimates. In the general Fourier case, more effort is needed to create similar localizations, as will be discussed in Chapter 10. Proof of Lemma \[B\_loc\_norm\]
-------------------------------

The estimates described in Lemma \[B\_loc\_norm\] can be obtained by an argument very similar to the proof of the boundedness of one-parameter paraproducts discussed in Chapter 2 of [@cw]. We include the customized proof here since the argument depends on a one-dimensional stopping-time decomposition which is also an important ingredient for the tensor-type stopping-time decompositions that will be introduced in later chapters. ### One-dimensional stopping-time decomposition - maximal intervals

Given the finiteness of the collection of dyadic intervals $\mathcal{K}$, there exists some $K_1 \in \mathbb{Z}$ such that $$\frac{|\langle f_1, {\varphi}^1_K \rangle|}{|K|^{\frac{1}{2}}} \leq C_1 2^{K_1} \text{energy}_{\mathcal{K}}((\langle f_1, {\varphi}_K \rangle)_{K \in \mathcal{K}})$$ for every $K \in \mathcal{K}$. We can pick the largest interval $K_{\text{max}}$ such that $$\frac{|\langle f_1, {\varphi}^1_{K_{\text{max}}} \rangle|}{|K_{\text{max}}|^{\frac{1}{2}}} > C_1 2^{K_1-1}\text{energy}_{\mathcal{K}}((\langle f_1, {\varphi}_K \rangle)_{K \in \mathcal{K}}).$$ Then we define a tree $$U:= \{K \in \mathcal{K}: K \subseteq K_{\text{max}}\},$$ and let $K_U := K_{\text{max}}$, usually called the tree-top.
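By the choice of $K_1$ and the selection criterion, the tree-top is pinned to a definite dyadic level of the energy (a restatement of the two displayed conditions, recorded for orientation): $$C_1 2^{K_1-1}\,\text{energy}_{\mathcal{K}}((\langle f_1, {\varphi}_K \rangle)_{K \in \mathcal{K}}) < \frac{|\langle f_1, {\varphi}^1_{K_U} \rangle|}{|K_U|^{\frac{1}{2}}} \leq C_1 2^{K_1}\,\text{energy}_{\mathcal{K}}((\langle f_1, {\varphi}_K \rangle)_{K \in \mathcal{K}}).$$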
Now we look at $\mathcal{K} \setminus U$ and repeat the above step to choose maximal intervals and collect their subintervals in the corresponding trees. Since $\mathcal{K}$ is finite, the process will eventually end. We then collect all the $U$'s in a set $\mathbb{U}_{K_1-1}$. Next we apply the above algorithm to $\displaystyle \mathcal{K} \setminus \bigcup_{U \in \mathbb{U}_{K_1-1}} U$. We thus obtain a decomposition $\displaystyle \mathcal{K} = \bigcup_{k}\bigcup_{U \in \mathbb{U}_{k}}U$. If instead the sequence is formed in terms of bump functions in a lacunary family, then the same procedure can be performed with the quantity $$\frac{1}{|K|} \left\Vert \bigg(\sum_{K' \subseteq K}\frac{|\langle f_2, \psi_{K'} \rangle|^2 }{|K'|}\chi_{K'}\bigg)^{\frac{1}{2}}\right\Vert_{1,\infty}$$ in place of $\frac{|\langle f_1, {\varphi}^1_K \rangle|}{|K|^{1/2}}$. The next proposition summarizes the information from the stopping-time decomposition; the details of the proof are included in Chapter 2 of [@cw]. \[st\_prop\] Suppose $\displaystyle \mathcal{K} = \bigcup_{k}\bigcup_{U \in \mathbb{U}_{k}}U$ is a decomposition obtained from the stopping-time algorithm specified above. Then for any $k \in \mathbb{Z}$, one has $$\displaystyle 2^{k-1}\text{energy}_{\mathcal{K}}((\langle f_1, \phi_K \rangle)_{K \in \mathcal{K}}) \leq \text{size}_{\bigcup_{U \in \mathbb{U}_k}U}((\langle f_1, \phi_K \rangle)_{K \in \mathcal{K}}) \leq \min(2^{k}\text{energy}_{\mathcal{K}}((\langle f_1, \phi_K \rangle)_{K \in \mathcal{K}}),\text{size}_{\mathcal{K}}((\langle f_1, \phi_K \rangle)_{K \in \mathcal{K}})).$$ In addition, $$\sum_{U \in \mathbb{U}_k} |K_{U}| \lesssim 2^{-k}.$$ The next lemma follows from the stopping-time decomposition, Proposition \[st\_prop\] and Proposition \[JN\]; its proof is discussed carefully in Chapter 2.9 of [@cw]. It plays an important role in proving Lemma \[B\_loc\_norm\], as can be seen in Section 5.2.2. \[s-e\] Suppose $\mathcal{K}$ is a finite collection of dyadic intervals. Then for any $0 \leq \theta_1,\theta_2, \theta_3 <1$ with $\theta_1 + \theta_2 + \theta_3 = 1$, $$\bigg|\sum_{K \in \mathcal{K}}\frac{1}{|K|}\langle f_1,\phi_K \rangle \langle f_2,\phi_K \rangle \langle f_3,\phi_K\rangle \bigg| \lesssim \prod_{i=1}^3 \text{size}_{\mathcal{K}} \big((\langle f_i, \phi_K \rangle)_{K \in \mathcal{K}} \big)^{1-\theta_i}\text{energy}_{\mathcal{K}}\big((\langle f_i, \phi_K \rangle)_{K \in \mathcal{K}} \big)^{\theta_i}.$$ ### Proof of Lemma \[B\_loc\_norm\]

1. **Estimate of $ \|B_0^{n_1,m_1}\|_1$.** For any $\eta \in L^{\infty}$ one has $$\begin{aligned} |\langle B_0^{n_1,m_1},\eta \rangle |\leq & \sum_{\substack{K \in \mathcal{K} \\ K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset}} \frac{1}{|K|^{\frac{1}{2}}} |\langle f_1, \phi_K^1\rangle| |\langle f_2, \phi_K^2 \rangle| |\langle \eta, \phi_K^{3} \rangle |, \nonumber \\\end{aligned}$$ where $$\phi^{3}_{K}:=\begin{cases} \psi^{3,H}_K \quad \quad \ \ \text{in Case}\ \ I\\ |{\varphi}^{3,H}_{K}| \quad \quad \text{in Case} \ \ II.\end{cases}$$
Let $\mathcal{K}'$ denote the sub-collection $$\mathcal{K'}:= \{K \in \mathcal{K}: K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset \}.$$ One can now apply Lemma \[s-e\] to obtain $$\begin{aligned} & |\langle B_0^{n_1,m_1}, \eta \rangle | \nonumber \\ \lesssim & \text{\ \ size}_{\mathcal{K}'} ((\langle f_1, \phi^1_K \rangle)_{K})^{1-\theta_1}\text{size}_{\mathcal{K}'}((\langle f_2, \phi^2_K \rangle)_{K})^{1-\theta_2} \text{size}_{\mathcal{K}'}((\langle \eta, \phi^3_K \rangle)_{K})^{1-\theta_3} \nonumber \\ & \text{\ \ energy} _{\mathcal{K}'}((\langle f_1, \phi^1_K\rangle)_{K})^{\theta_1}\text{energy} _{\mathcal{K}'}((\langle f_2, \phi^2_K\rangle)_{K})^{\theta_2} \text{energy} _{\mathcal{K}'}((\langle \eta, \phi^3_K\rangle)_{K})^{\theta_3},\end{aligned}$$ for any $0 \leq \theta_1,\theta_2, \theta_3 <1$ with $\theta_1 + \theta_2 + \theta_3 = 1$. By applying Proposition \[size\] and the fact that $K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset$ for any $K \in \mathcal{K}'$, one deduces that $$\begin{aligned} \label{f_size} & \text{size}_{\mathcal{K}'}((\langle f_1, \phi^1_K \rangle)_{K}) \lesssim \min(C_1 2^{n_1}|F_1|,1), \nonumber \\ & \text{size}_{\mathcal{K}'}((\langle f_2, \phi^2_K \rangle)_{K}) \lesssim \min(C_1 2^{m_1}|F_2|,1).\end{aligned}$$ One also recalls that $\eta \in L^{\infty}$, which gives $$\label{inf_size} \text{size}_{\mathcal{K}'}((\langle \eta, \phi^3_K \rangle)_{K}) \lesssim 1.$$ By choosing $\theta_3 = 0$ and combining the estimates (\[f\_size\]), (\[inf\_size\]) with the energy estimates described in Proposition \[energy\_classical\], one obtains $$\begin{aligned} |\langle B_0^{n_1,m_1}, \eta \rangle |\lesssim & (C_12^{n_1}|F_1|)^{\alpha_1(1-\theta_1)} (C_1 2^{m_1}|F_2|)^{\beta_1(1-\theta_2)} \|\eta\|_{L^{\infty}}|F_1|^{\theta_1}|F_2|^{\theta_2} \nonumber \\ = & C_1^{\alpha_1(1-\theta_1)+ \beta_1(1-\theta_2)}2^{n_1\alpha_1(1-\theta_1)}2^{m_1\beta_1(1-\theta_2)}|F_1|^{\alpha_1(1-\theta_1)+\theta_1}|F_2|^{\beta_1(1-\theta_2)+\theta_2} \|\eta\|_{L^{\infty}},\end{aligned}$$ where $\theta_1 + \theta_2 = 1$ and $ 0 \leq \alpha_1, \beta_1 \leq 1$. Therefore, one can conclude that $$\begin{aligned} & \|B_0^{n_1,m_1}\|_1 \lesssim C_1^{\alpha_1(1-\theta_1)+ \beta_1(1-\theta_2)}2^{n_1\alpha_1(1-\theta_1)}2^{m_1\beta_1(1-\theta_2)}|F_1|^{\alpha_1(1-\theta_1)+\theta_1}|F_2|^{\beta_1(1-\theta_2)+\theta_2}.\end{aligned}$$ By choosing $\alpha_1(1-\theta_1)+\theta_1 = \frac{1}{p_1}$ and $\beta_1(1-\theta_2)+\theta_2 = \frac{1}{q_1}$, which is possible given $\frac{1}{p_1} + \frac{1}{q_1} > 1$, one obtains the desired result.

2. **Estimate of $\| B_0^{n_1,m_1}\|_{t}$ for $t >1$.** We will first prove the restricted weak-type estimates for $B_0^{n_1,m_1}$ specified in Claim \[en\_weak\_p\]; the strong-type estimates in Claim \[en\_strong\_p\] then follow from the standard interpolation technique. \[en\_weak\_p\] $\| B_0^{n_1,m_1}(f_1,f_2)\|_{\tilde{t},\infty} \lesssim C_1^{\frac{1}{p_1} + \frac{1}{q_1}-\theta_1 -\theta_2}2^{n_1(\frac{1}{p_1}-\theta_1)}2^{m_1(\frac{1}{q_1}-\theta_2)}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}},$ where $\theta_1 + \theta_2 = \frac{1}{\tilde{t}}$ and $\tilde{t} \in (t-\delta, t+ \delta)$ for some $\delta > 0 $ sufficiently small. \[en\_strong\_p\] $\| B_0^{n_1,m_1}(f_1,f_2)\|_{t} \lesssim C_1^{\frac{1}{p_1} + \frac{1}{q_1}-\theta_1-\theta_2} 2^{n_1(\frac{1}{p_1}-\theta_1)}2^{m_1(\frac{1}{q_1}-\theta_2)}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}},$ where $\theta_1 + \theta_2 = \frac{1}{t}$.
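For the record, the passage from Claim \[en\_weak\_p\] to Claim \[en\_strong\_p\] uses only the following standard fact (a sketch, with the parameters $\theta_1, \theta_2$ and the exponents of $2^{n_1}, 2^{m_1}$ interpolated linearly alongside): if $t_- < t < t_+$ with $t_{\pm} \in (t-\delta, t+\delta)$ and $\frac{1}{t} = \frac{1-\lambda}{t_-} + \frac{\lambda}{t_+}$ for some $0 < \lambda < 1$, then $$\|B_0^{n_1,m_1}(f_1,f_2)\|_{t} \lesssim \|B_0^{n_1,m_1}(f_1,f_2)\|_{t_-,\infty}^{1-\lambda}\, \|B_0^{n_1,m_1}(f_1,f_2)\|_{t_+,\infty}^{\lambda},$$ so the strong-type estimate follows from Claim \[en\_weak\_p\] applied at the two auxiliary exponents.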
To prove Claim \[en\_weak\_p\], it suffices, by dualization, to show that for any set $S$ of finite measure (testing against $\chi_S \in L^{\tilde{t}'}$), $$|\langle B_0^{n_1,m_1}, \chi_S \rangle| \lesssim C_1^{\frac{1}{p_1} + \frac{1}{q_1}-\theta_1-\theta_2} 2^{n_1(\frac{1}{p_1}-\theta_1)}2^{m_1(\frac{1}{q_1}-\theta_2)}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|S|^{\frac{1}{\tilde{t}'}}$$ where $\theta_1 + \theta_2 = \frac{1}{\tilde{t}}$. The multilinear form can be estimated by an argument similar to the one described in part (1). In particular, let $$\mathcal{K}' := \{K \in \mathcal{K}: K \cap \mathcal{U}_{n_1,m_1} \neq \emptyset\}.$$ Then $$\begin{aligned} \label{linear_form_p} & |\langle B_0^{n_1,m_1}, \chi_S \rangle| \nonumber \\ \lesssim & \text{\ \ size}_{\mathcal{K}'}((\langle f_1, \phi^1_K \rangle)_{K})^{1-\theta_1}\text{size}_{\mathcal{K}'}((\langle f_2, \phi^2_K \rangle)_{K})^{1-\theta_2} \text{size}_{\mathcal{K}'}((\langle \chi_S, \phi^3_K \rangle)_{K})^{1-\theta_3} \nonumber \\ & \text{\ \ energy} _{\mathcal{K}'}((\langle f_1, \phi^1_K\rangle)_{K})^{\theta_1}\text{energy} _{\mathcal{K}'}((\langle f_2, \phi^2_K\rangle)_{K})^{\theta_2} \text{energy} _{\mathcal{K}'}((\langle \chi_S , \phi^3_K\rangle)_{K})^{\theta_3},\end{aligned}$$ for any $0 \leq \theta_1,\theta_2, \theta_3 <1$ with $\theta_1 + \theta_2 + \theta_3 = 1$. The size and energy estimates involving $f_1, f_2$ in part (1) are still valid. Here $\phi^3_K$ is defined differently in Cases I and II; however, one applies the same straightforward estimates $$\begin{aligned} \label{set_size_en} & \text{size}_{\mathcal{K}'}((\langle \chi_S , \phi^3_K \rangle)_{K}) \lesssim 1, \nonumber \\ & \text{energy} _{\mathcal{K}'}((\langle \chi_S , \phi^3_K\rangle)_{K}) \lesssim |S|. \end{aligned}$$ By plugging the estimates (\[set\_size\_en\]) and (\[f\_size\]) into (\[linear\_form\_p\]), one has $$\begin{aligned} |\langle B_0^{n_1,m_1}, \chi_S \rangle| \lesssim & (C_1 2^{n_1}|F_1|)^{\alpha_1(1-\theta_1)} (C_1 2^{m_1}|F_2|)^{\beta_1(1-\theta_2)}|F_1|^{\theta_1}|F_2|^{\theta_2}|S|^{\theta_3} \nonumber \\ = & C_1^{\alpha_1(1-\theta_1) + \beta_1(1-\theta_2)}2^{n_1\alpha_1(1-\theta_1)}2^{m_1\beta_1(1-\theta_2)}|F_1|^{\alpha_1(1-\theta_1) + \theta_1} |F_2|^{\beta_1(1-\theta_2)+ \theta_2} |S|^{\theta_3},\end{aligned}$$ for any $0 \leq \alpha_1, \beta_1 \leq 1$. Let $\theta_3 = \frac{1}{\tilde{t}'}$, so that $\theta_1 + \theta_2 = \frac{1}{\tilde{t}}$. One can then conclude $$\begin{aligned} \|B_0^{n_1,m_1}\|_{\tilde{t},\infty} \lesssim & C_1^{\alpha_1(1-\theta_1) + \beta_1(1-\theta_2)}2^{n_1\alpha_1(1-\theta_1)}2^{m_1\beta_1(1-\theta_2)}|F_1|^{\alpha_1(1-\theta_1) + \theta_1} |F_2|^{\beta_1(1-\theta_2)+ \theta_2}.\end{aligned}$$ Since $\frac{1}{p_1} + \frac{1}{q_1} >1$, one can choose $0 \leq \alpha_1, \beta_1 \leq 1$ and $\theta_1,\theta_2$ with $\theta_1 + \theta_2 = \frac{1}{\tilde{t}} \sim \frac{1}{t}$ such that $$\alpha_1(1-\theta_1) + \theta_1 = \frac{1}{p_1},$$ $$\beta_1(1-\theta_2)+ \theta_2 = \frac{1}{q_1};$$ the claim then follows.

Proof of Theorem \[thm\_weak\_mod\] for $\Pi_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}$ - Haar Model
========================================================================================================

In this chapter, we will first specify the localization for the discrete model $\Pi_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}$, which can be viewed as a starting point for the stopping-time decompositions. Then we will introduce the different stopping-time decompositions used in the estimates. Finally, we will discuss how to apply the information from the multiple stopping-time decompositions to obtain the estimates.
The organization of Chapters 7-9 will follow the same scheme. Localization
------------

The definition of the exceptional set, which sets the starting point for the stopping-time decompositions, is expected to be compatible with the stopping-time algorithms involved. There will be two types of stopping-time decompositions in the estimates of $\Pi_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}$ - one is the *tensor-type stopping-time decomposition* and the other is the *general two-dimensional level sets stopping-time decomposition*. While the second algorithm is related to a generic exceptional set (denoted by $\Omega^2$), the first algorithm aims to integrate information from two one-dimensional decompositions, which corresponds to the creation of a two-dimensional exceptional set (denoted by $\Omega^1$) as a union of Cartesian products of one-dimensional level sets. One defines the exceptional set, denoted by $\tilde{\Omega}$, as follows. Let $$\tilde{\Omega} := \{ M\chi_{\Omega} > \frac{1}{100}\}$$ with $$\Omega := \Omega^1 \cup \Omega^2,$$ where $$\begin{aligned} \displaystyle \Omega^1 := &\bigcup_{\tilde{n} \in \mathbb{Z}}\{Mf_1 > C_1 2^{\tilde{n}}|F_1|\} \times \{Mg_1 > C_2 2^{-\tilde{n}}|G_1|\}\cup \nonumber \\ & \bigcup_{\tilde{\tilde{n}} \in \mathbb{Z}}\{Mf_2 > C_1 2^{\tilde{\tilde{n}}}|F_2|\} \times \{Mg_2 > C_2 2^{-\tilde{\tilde{n}}}|G_2|\}\cup \nonumber \\ &\bigcup_{\tilde{\tilde{\tilde{n}}} \in \mathbb{Z}}\{Mf_1 > C_1 2^{\tilde{\tilde{\tilde{n}}}}|F_1|\} \times \{Mg_2 > C_2 2^{-\tilde{\tilde{\tilde{n}}}}|G_2|\}\cup \nonumber \\ & \bigcup_{\tilde{\tilde{\tilde{\tilde{n}}}} \in \mathbb{Z}}\{Mf_2 > C_1 2^{\tilde{\tilde{\tilde{\tilde{n}}}} }|F_2|\} \times \{Mg_1 > C_2 2^{-\tilde{\tilde{\tilde{\tilde{n}}}} }|G_1|\}, \nonumber \\ \Omega^2 := & \{SSh > C_3 \|h\|_{L^s(\mathbb{R}^2)}\}. \nonumber \\\end{aligned}$$ \[subset\] By the boundedness of the Hardy-Littlewood maximal operator and the double square function operator, it is not difficult to check that if $C_1, C_2, C_3 \gg 1$, then $ |\tilde{\Omega}| \ll 1. $ For different model operators, we will define different exceptional sets based on the different stopping-time decompositions to be employed. Nevertheless, their measures can be controlled similarly using the boundedness of the maximal operator and of the hybrid maximal-square operators. By scaling invariance, we will assume without loss of generality that $|E| = 1$ throughout the paper. Let $$\label{set_E'} E' := E \setminus \tilde{\Omega};$$ then $ |E'| \sim |E| $ and thus $|E'| \sim 1$. Our goal is to show that (\[thm\_weak\_explicit\]) holds with the corresponding subset $E' \subseteq E$ (which will be different for each discrete model operator). In the current setting, this is equivalent to proving that the multilinear form $$\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}, \chi_{E'}) := \langle \Pi_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}), \chi_{E'} \rangle$$ satisfies the restricted weak-type estimate $$|\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \lesssim |F_1|^{\frac{1}{p_1}} |G_1|^{\frac{1}{p_2}} |F_2|^{\frac{1}{q_1}} |G_2|^{\frac{1}{q_2}} \|h\|_{L^{s}(\mathbb{R}^2)}.$$ It is noteworthy that the discrete model operators are perfectly localized to $E'$ in the Haar model.
In particular, $$\begin{aligned} \label{haar_local} \langle \Pi_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}, \chi_{E'} \rangle := & \displaystyle \sum_{I \times J \in \mathcal{R}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I(f_1,f_2),\phi_I^{1,H} \rangle \langle \tilde{B_J}(g_1, g_2), \phi_J^{1,H} \rangle \langle h, \phi_I^{2,H} \otimes \phi_J^{2,H} \rangle \langle \chi_{E'}, \phi_I^{3,H} \otimes \phi_J^{3,H} \rangle \nonumber \\ = & \displaystyle \sum_{\substack{I \times J \in \mathcal{R} \\ I \times J \cap \tilde{\Omega}^c \neq \emptyset}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I(f_1,f_2),\phi_I^{1,H} \rangle \langle \tilde{B_J}(g_1, g_2), \phi_J^{1,H} \rangle \langle h, \phi_I^{2,H} \otimes \phi_J^{2,H} \rangle \langle \chi_{E'}, \phi_I^{3,H} \otimes \phi_J^{3,H} \rangle, \end{aligned}$$ because if $I \times J \cap \tilde{\Omega}^c = \emptyset$, then $I \times J \subseteq \tilde{\Omega}$, so $I \times J \cap E' = \emptyset$ and thus $ \langle \chi_{E'}, \phi_I^{3,H} \otimes \phi_J^{3,H} \rangle = 0; $ this means that dyadic rectangles satisfying $I \times J \cap \tilde{\Omega}^c = \emptyset$ do not contribute to the multilinear form. In the Haar model, we will rely heavily on the localization (\[haar\_local\]) and consider only the dyadic rectangles $I \times J \in \mathcal{R}$ such that $I \times J \cap \tilde{\Omega}^c \neq \emptyset$. Tensor-type stopping-time decomposition I - level sets
------------------------------------------------------

The first tensor-type stopping-time decomposition, denoted the *tensor-type stopping-time decomposition I*, will be performed to obtain estimates for $\Pi_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}$. It aims to recover intersections with two-dimensional level sets from intersections with one-dimensional level sets in each variable. Another tensor-type stopping-time decomposition, denoted the *tensor-type stopping-time decomposition II*, involves maximal intervals and plays an important role in the discussion of $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$. We will focus on the *tensor-type stopping-time decomposition I* in this chapter. ### One-dimensional stopping-time decompositions - level sets

One can perform a one-dimensional stopping-time decomposition on $\mathcal{I} := \{I: I \times J \in \mathcal{R}\}$. Let $$\Omega^{x}_{N_1} := \{ Mf_1 > C_1 2^{N_1}|F_1|\},$$ for some $N_1 \in \mathbb{Z}$ and $$\mathcal{I}_{N_1} := \{I \in \mathcal{I}: |I \cap \Omega^{x}_{N_1}| > \frac{1}{10}|I| \}.$$ Define $$\Omega^{x}_{N_1-1} := \{ Mf_1 > C_1 2^{N_1-1}|F_1|\},$$ and $$\mathcal{I}_{N_1-1} := \{I \in \mathcal{I} \setminus \mathcal{I}_{N_1}: |I \cap \Omega^{x}_{N_1-1}| > \frac{1}{10}|I| \}.$$ Iterating this procedure generates the sets $(\Omega^{x}_{n_1})_{n_1}$ and $(\mathcal{I}_{n_1})_{n_1}$. Independently, define $$\Omega^{x}_{M_1} := \{ Mf_2 > C_1 2^{M_1}|F_2|\},$$ for some $M_1 \in \mathbb{Z}$ and $$\mathcal{I}_{M_1} := \{I \in \mathcal{I}: |I \cap \Omega^{x}_{M_1}| > \frac{1}{10}|I| \}.$$ Define $$\Omega^{x}_{M_1-1} := \{ Mf_2 > C_1 2^{M_1-1}|F_2|\},$$ and $$\mathcal{I}_{M_1-1} := \{I \in \mathcal{I} \setminus \mathcal{I}_{M_1}: |I \cap \Omega^{x}_{M_1-1}| > \frac{1}{10}|I| \}.$$ Iterating this procedure generates the sets $(\Omega^{x}_{m_1})_{m_1}$ and $(\mathcal{I}_{m_1})_{m_1}$. Now define $\mathcal{I}_{n_1,m_1} := \mathcal{I}_{n_1} \cap \mathcal{I}_{m_1}$ and the decomposition $\displaystyle \mathcal{I} = \bigcup_{n_1,m_1}\mathcal{I}_{n_1,m_1}$.
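For later use, we record the defining property of the selection (a restatement of the algorithm, valid for any level $n_1$ below the top level $N_1$): $$I \in \mathcal{I}_{n_1} \implies |I \cap \Omega^{x}_{n_1}| > \frac{1}{10}|I| \quad \text{and} \quad |I \cap \Omega^{x}_{n_1+1}| \leq \frac{1}{10}|I|,$$ the second condition holding because $I$ was not selected at any higher level. In particular, $|I \cap \{Mf_1 \leq C_1 2^{n_1+1}|F_1|\}| \geq \frac{9}{10}|I|$, and similarly for $\mathcal{I}_{m_1}$ with $Mf_2$ in place of $Mf_1$; this is the form in which the decomposition is invoked when estimating the sizes later in this chapter.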
The same algorithm can be applied to $\mathcal{J}:= \{J: I \times J \in \mathcal{R}\}$ with respect to the level sets in terms of $Mg_1$ and $Mg_2$, which produces the sets (i) $(\Omega^{y}_{n_2})_{n_2}$ and $(\mathcal{J}_{n_2})_{n_2}$, where $$\Omega^{y}_{n_2} := \{ Mg_1 > C_2 2^{n_2}|G_1|\},$$ and $$\mathcal{J}_{n_2} := \{J \in \mathcal{J} \setminus \bigcup_{k > n_2}\mathcal{J}_{k}: |J \cap \Omega^{y}_{n_2}| > \frac{1}{10}|J| \}.$$ (ii) $(\Omega^{y}_{m_2})_{m_2}$ and $(\mathcal{J}_{m_2})_{m_2}$, where $$\Omega^{y}_{m_2} := \{ Mg_2 > C_2 2^{m_2}|G_2|\},$$ and $$\mathcal{J}_{m_2} := \{J \in \mathcal{J} \setminus \bigcup_{k > m_2}\mathcal{J}_{k}: |J \cap \Omega^{y}_{m_2}| > \frac{1}{10}|J| \}.$$ One thus obtains the decomposition $\displaystyle \mathcal{J} = \bigcup_{n_2, m_2} \mathcal{J}_{n_2,m_2}$, where $\mathcal{J}_{n_2,m_2} := \mathcal{J}_{n_2} \cap \mathcal{J}_{m_2}$. ### Tensor product of two one-dimensional stopping-time decompositions - level sets

If we assume that all dyadic rectangles satisfy $I \times J \cap \tilde{\Omega}^{c} \neq \emptyset$, as in the Haar model, then we have the following observation. \[obs\_indice\] If $I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2}$, then $n_1 , m_1 ,n_2, m_2 \in \mathbb{Z}$ satisfy $n_1+n_2 < 0$, $m_1 + m_2 < 0$, $n_1 + m_2 < 0$ and $m_1 + n_2 < 0$. (Equivalently, every $I \times J$ with $I \times J \cap \tilde{\Omega}^{c} \neq \emptyset$ satisfies $I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2}$ for some $n_2, m_2 \in \mathbb{Z}$ and $n, m > 0$.) The observation shows that the way a rectangle $I \times J$ intersects a two-dimensional level set is closely related to the way the corresponding intervals intersect one-dimensional level sets (namely $I \in \mathcal{I}_{n_1,m_1}$ and $J \in \mathcal{J}_{n_2,m_2}$ with $n_1 + n_2 < 0$ and $m_1 + m_2 < 0$), as commented at the beginning of the section. Given $I \in \mathcal{I}_{n_1}$, one has $|I \cap \{ Mf_1 > C_1 2^{n_1}|F_1|\}| > \frac{1}{10} |I|$; similarly, $J \in \mathcal{J}_{n_2}$ implies that $|J \cap \{ Mg_1 > C_2 2^{n_2}|G_1|\}| > \frac{1}{10}|J|$. If $n_1 + n_2 \geq 0$, then $\{ Mf_1 > C_1 2^{n_1}|F_1|\} \times\{ Mg_1 > C_2 2^{n_2}|G_1|\} \subseteq \Omega^1 \subseteq \Omega$. Then $|I \times J \cap \Omega| > \frac{1}{100}|I \times J|$, which implies that $I \times J \subseteq \tilde{\Omega}$ and contradicts the assumption. The same reasoning applies to the pairs $(m_1, m_2)$, $(n_1, m_2)$ and $(m_1, n_2)$. General two-dimensional level sets stopping-time decomposition
--------------------------------------------------------------

With the assumption that $R \cap \tilde{\Omega}^c \neq \emptyset$, one has that $$|R\cap \Omega^2| \leq \frac{1}{100}|R|,$$ where $$\Omega^2 = \{ SSh >C_3 \|h\|_s\}.$$ Then define $$\Omega^2_{-1}:= \{SSh > C_3 2^{-1}\|h\|_{L^s}\}$$ and $$\mathcal{R}_{-1} := \{R \in \mathcal{R}: |R \cap \Omega^2_{-1}| > \frac{1}{100}|R|\}.$$ Successively define $$\Omega^2_{-2}:= \{SSh > C_3 2^{-2}\|h\|_{L^s}\}$$ and $$\mathcal{R}_{-2} := \{R \in \mathcal{R} \setminus \mathcal{R}_{-1}: |R \cap \Omega^2_{-2}| > \frac{1}{100}|R|\}.$$ This two-dimensional stopping-time decomposition, iterated, generates the sets $(\Omega^2_{k_1})_{k_1 < 0}$ and $(\mathcal{R}_{k_1})_{k_1 < 0}$. Independently, one can apply the same algorithm involving $SS\chi_{E'}$, which generates $(\Omega^2_{k_2})_{k_2 \leq K}$ and $(\mathcal{R}_{k_2})_{k_2 \leq K}$, where $K$ can be arbitrarily large. The existence of $K$ is guaranteed by the finite cardinality of the collection of dyadic rectangles.
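As in the one-dimensional case, we record the defining property of this selection (a restatement used repeatedly below): $$R \in \mathcal{R}_{k_1} \implies |R \cap \Omega^2_{k_1}| > \frac{1}{100}|R| \quad \text{and} \quad |R \cap \Omega^2_{k_1+1}| \leq \frac{1}{100}|R|,$$ and analogously for $\mathcal{R}_{k_2}$ with $SS\chi_{E'}$. The first condition feeds the measure estimates for $\bigcup_{R \in \mathcal{R}_{k_1}} R$, while the second provides the pointwise control of $SSh$ on $(\Omega^2_{k_1+1})^c$.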
Sparsity condition
------------------

One important property following from the *tensor-type stopping-time decomposition I - level sets* is the sparsity of dyadic intervals at different levels. This geometric property plays an important role in the arguments for the main theorems. \[sparsity\] Suppose that $\displaystyle \mathcal{J} = \bigcup_{n_2 \in \mathbb{Z}} \mathcal{J}_{n_2}$ is a decomposition of dyadic intervals with respect to $Mg_1$ as specified in Section 6.3. For any fixed $n_2 \in \mathbb{Z}$, suppose that $J_0 \in \mathcal{J}_{n_2 - 10}$. Then $$\displaystyle \sum_{\substack{J \in \mathcal{J}_{n_2}\\J \cap J_0 \neq \emptyset}} |J| \leq \frac{1}{2}|J_0|.$$ To prove the proposition, one needs the following claim about pointwise estimates for $Mg_1$ on $J \in \mathcal{J}_{n_2}$: \[ptwise\] Suppose that $\bigcup_{n_2}\mathcal{J}_{n_2}$ is a partition of dyadic intervals generated from the stopping-time decomposition described above. If $J \in \mathcal{J}_{n_2}$, then for any $y \in J,$ $$ Mg_1(y)> 2^{-7} \cdot C_2 2^{n_2}|G_1|.$$ We will first explain why the proposition follows from the claim, and then prove the claim. One recalls that all the intervals are dyadic, which means that if $J \cap J_0 \neq \emptyset$, then either $$J \subseteq J_0$$ or $$J_0 \subseteq J.$$ If $J_0 \subseteq J$, then the claim implies that $$J_0 \subseteq J \subseteq \{ Mg_1 > C_2 2^{n_2-7}|G_1|\}.$$ But $J_0 \in \mathcal{J}_{n_2-10}$ implies that $$\big|J_0 \cap \{ Mg_1 > C_2 2^{n_2 - 7}|G_1|\}\big| \leq \frac{1}{10}|J_0|,$$ which is a contradiction. If $J \subseteq J_0$, suppose for the sake of contradiction that $$\displaystyle \sum_{\substack{J \in \mathcal{J}_{n_2}\\J \subseteq J_0}} |J| > \frac{1}{2}|J_0|.$$ Then one can derive from $J \in \mathcal{J}_{n_2}$ that $$\big|J\cap \{Mg_1 > C_2 2^{n_2}|G_1| \} \big| > \frac{1}{10}|J|.$$ Therefore $$\sum_{\substack{J \in \mathcal{J}_{n_2}\\J \subseteq J_0}} \big|J\cap \{Mg_1 > C_2 2^{n_2}|G_1| \} \big| > \frac{1}{10}\sum_{\substack{J \in \mathcal{J}_{n_2}\\J \subseteq J_0}}|J| > \frac{1}{20}|J_0|.$$ But by the disjointness of $(J)_{J \in \mathcal{J}_{n_2}}$, $$\sum_{\substack{J \in \mathcal{J}_{n_2}\\J \subseteq J_0}} \big|J\cap \{Mg_1 > C_2 2^{n_2}|G_1| \} \big| \leq \big|J_0\cap \{Mg_1 > C_2 2^{n_2}|G_1| \} \big|.$$ Thus $$\big|J_0\cap \{Mg_1 > C_2 2^{n_2}|G_1| \} \big| > \frac{1}{20}|J_0|.$$ Now the claim, with slight modifications, implies that $J_0 \subseteq \{Mg_1 > C_2 2^{n_2-8}|G_1| \}$. But $J_0 \in \mathcal{J}_{n_2-10}$, which gives the necessary condition $$\big|J_0\cap \{Mg_1 > C_2 2^{n_2-8}|G_1| \} \big| \leq \frac{1}{10}|J_0|,$$ and we reach a contradiction. We will now prove the claim. Without loss of generality, we assume that $g_1$ is non-negative: if it is not, we can always replace it by $|g_1|$, since $Mg_1 = M(|g_1|)$. We prove the claim case by case: Case (i): for every $y \in \{Mg_1 > C_2 2^{n_2}|G_1|\} \cap J$, there exists $J_{y} \subseteq J$ containing $y$ such that $\text{ave}_{J_y}(g_1) > C_2 2^{n_2}|G_1|;$ Case (ii): there exist $y_0 \in \{Mg_1 > C_2 2^{n_2}|G_1|\} \cap J$ and $J_{y_0} \ni y_0$ with $J_{y_0} \nsubseteq J$ such that $\text{ave}_{J_{y_0}}(g_1) > C_2 2^{n_2}|G_1|$, and Case (iia): $\frac{1}{40}|J| \leq |J_{y_0} \cap J|$ and $|J_{y_0}| \leq |J|$; Case (iib): $\frac{1}{40}|J| \leq |J_{y_0} \cap J|$ and $|J_{y_0}| > |J|$; Case (iic): $|J_{y_0} \cap J| < \frac{1}{40}|J|$. *Proof of (i):* In Case (i), one observes that $\{Mg_1 > C_2 2^{n_2}|G_1|\} \cap J$ can be rewritten as $\{M(g_1\cdot \chi_J) > C_2 2^{n_2}|G_1|\} \cap J$.
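Indeed, in Case (i) the witnessing interval satisfies $J_y \subseteq J$, so $\text{ave}_{J_y}(g_1) = \text{ave}_{J_y}(g_1\chi_J) \leq M(g_1\chi_J)(y)$ for every such $y$; the computation below is then the weak-type $(1,1)$ inequality $$\lambda\, |\{M\phi > \lambda\}| \lesssim \|\phi\|_1$$ applied with $\phi = g_1\chi_J$ and $\lambda = C_2 2^{n_2}|G_1|$.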
Thus $$C_2 2^{n_2}|G_1|\,|\{Mg_1 > C_2 2^{n_2}|G_1|\} \cap J| = C_2 2^{n_2}|G_1|\,|\{M(g_1\chi_J) > C_2 2^{n_2}|G_1|\} \cap J| \leq \|g_1\chi_J\|_1.$$ One recalls that $|\{Mg_1 > C_2 2^{n_2}|G_1|\} \cap J| > \frac{1}{10}|J|$, which implies that $$C_2 2^{n_2}|G_1|\cdot \frac{1}{10}|J| \leq \|g_1\chi_J\|_1,$$ or equivalently, $$\frac{\|g_1\chi_J\|_1}{|J|} \geq \frac{1}{10}C_2 2^{n_2}|G_1|.$$ Therefore $Mg_1 > 2^{-4} C_2 2^{n_2}|G_1|$ on $J$. *Proof of (ii)*: We will prove that if either (iia) or (iib) holds, then $Mg_1 > 2^{-7} C_2 2^{n_2}|G_1|$ on $J$; if neither (iia) nor (iib) occurs, then (iic) has to hold, and in this case as well $Mg_1 > 2^{-7} C_2 2^{n_2}|G_1|$ on $J$. If there exists $y_0 \in \{Mg_1 > C_2 2^{n_2}|G_1|\}$ such that (iia) holds, then $$\frac{\|g_1 \chi_{J_{y_0}} \|_1}{|J_{y_0}|} \leq \frac{\|g_1 \chi_{J_{y_0} \cup J} \|_1}{|J_{y_0}|} \leq \frac{\|g_1 \chi_{J_{y_0} \cup J} \|_1}{|J_{y_0}\cap J|} \leq \frac{\|g_1 \chi_{J_{y_0} \cup J} \|_1}{\frac{1}{40}|J|},$$ where the last inequality follows from $\frac{1}{40}|J| \leq |J_{y_0} \cap J|$. Moreover, $|J_{y_0}| \leq |J|$ and $y_0 \in J_{y_0} \cap J \neq \emptyset$ imply that $|J_{y_0} \cup J| \leq 2|J|$. Thus $$\frac{\|g_1 \chi_{J_{y_0} \cup J} \|_1}{\frac{1}{40}|J|} \leq \frac{\|g_1 \chi_{J_{y_0} \cup J} \|_1}{\frac{1}{40}\frac{1}{2}|J_{y_0} \cup J|},$$ which implies $$\frac{\|g_1 \chi_{J_{y_0} \cup J} \|_1}{|J_{y_0} \cup J|} > \frac{1}{80}C_2 2^{n_2}|G_1|,$$ and as a result $Mg_1 > 2^{-7} C_2 2^{n_2}|G_1|$ on $J$. If there exists $y_0 \in \{Mg_1 > C_2 2^{n_2}|G_1|\}$ such that (iib) holds, then $$\frac{\|g_1 \chi_{J_{y_0}} \|_1}{|J_{y_0}|} \leq \frac{\|g_1 \chi_{J_{y_0} \cup J} \|_1}{|J_{y_0}|} = \frac{2\|g_1 \chi_{J_{y_0} \cup J} \|_1}{2|J_{y_0}|} \leq \frac{2\|g_1 \chi_{J_{y_0} \cup J} \|_1}{|J_{y_0} \cup J|},$$ where the last inequality follows from $|J_{y_0}| > |J|$, so that $|J_{y_0} \cup J| \leq 2|J_{y_0}|$. As a consequence, $$\frac{2\|g_1 \chi_{J_{y_0} \cup J} \|_1}{|J_{y_0} \cup J|} > C_2 2^{n_2}|G_1|,$$ and $Mg_1 > 2^{-1} C_22^{n_2}|G_1|$ on $J$. If neither (i), (iia) nor (iib) happens, then for $\mathcal{S}_{(iic)} := \{y: Mg_1(y) > C_2 2^{n_2}|G_1| \text{\ \ and\ \ } (i) \text{\ \ does not hold at\ \ } y\}$, one direct geometric observation is that $|\mathcal{S}_{(iic)} \cap J| \leq \frac{1}{20}|J|$. In particular, suppose $y \in \mathcal{S}_{(iic)}$; then any $J_{y} \ni y$ with $\text{ave}_{J_{y}}(g_1) > C_2 2^{n_2}|G_1|$ has to contain the left endpoint or the right endpoint of $J$, which we denote by $J_{\text{left}}$ and $J_{\text{right}}$ respectively. If $J_{\text{left}} \in J_{y}$, then the assumption that neither (iia) nor (iib) holds implies that $$|J_{y} \cap J| < \frac{1}{40} |J|,$$ and thus $$|[J_{\text{left}}, y]| < \frac{1}{40}|J|.$$ The same implication holds for $y \in \mathcal{S}_{(iic)}$ with $J_{\text{right}} \in J_{y}$. Therefore, for any $y \in \mathcal{S}_{(iic)}$, $|[J_{\text{left}}, y]| < \frac{1}{40}|J|$ or $|[y, J_{\text{right}}]| < \frac{1}{40}|J|$, from which one concludes that $$\big|\mathcal{S}_{(iic)} \cap J\big| \leq \frac{1}{20}|J|.$$ Since $\big|\{Mg_1> C_2 2^{n_2}|G_1|\} \cap J\big| > \frac{1}{10}|J|$, $$\bigg|\big(\{Mg_1> C_2 2^{n_2}|G_1|\} \setminus \mathcal{S}_{(iic)}\big) \cap J \bigg| > \frac{1}{20}|J|,$$ in which case one can apply the argument for (i) with $\{Mg_1> C_2 2^{n_2}|G_1|\}$ replaced by $\{Mg_1> C_2 2^{n_2}|G_1|\} \setminus \mathcal{S}_{(iic)}$ to conclude that $$Mg_1 > 2^{-5}C_2 2^{n_2} |G_1| \quad \text{on } J.$$ This ends the proof of the claim. \[sp\_2d\] Let $\mathcal{R}_0$ be an arbitrary collection of dyadic rectangles.
Define $\mathcal{J}:= \{J: R = I \times J \in \mathcal{R}_0 \}$. Suppose that $\displaystyle \mathcal{J} = \bigcup_{n_2 \in \mathbb{Z}} \mathcal{J}_{n_2}$ is a decomposition of dyadic intervals with respect to $Mg_1$ as specified in Section 6.3, so that $\displaystyle \mathcal{R}_0 = \bigcup_{n_2 \in \mathbb{Z}} \bigcup_{\substack{R= I \times J \in \mathcal{R}_0 \\ J \in \mathcal{J}_{n_2} \\ }} R $ is a decomposition of the dyadic rectangles in $\mathcal{R}_0$. Then $$\sum_{n_2 \in \mathbb{Z}} \bigg|\bigcup_{\substack{R = I \times J \in \mathcal{R}_0 \\ J \in \mathcal{J}_{n_2}}}R\bigg| \lesssim \bigg|\bigcup_{R \in \mathcal{R}_0} R \bigg|. $$ Proposition \[sparsity\] gives a sparsity condition for intervals in the $y$-direction, which is sufficient to generate sparsity for dyadic rectangles in $\mathbb{R}^2$. In particular, $$\begin{aligned} \sum_{n_2 \in \mathbb{Z}} \bigg|\bigcup_{\substack{R= I \times J \in \mathcal{R}_0 \\ J \in \mathcal{J}_{n_2}}}R\bigg| =& \sum_{i = 0}^9 \sum_{n_2 \equiv i \ \ \text{mod} \ \ 10} \bigg|\bigcup_{\substack{R= I \times J \in \mathcal{R}_0 \\ J \in \mathcal{J}_{n_2}}}R\bigg| \nonumber \\ \lesssim & \sum_{i = 0}^9 \bigg|\bigcup_{n_2 \equiv i \ \ \text{mod} \ \ 10}\bigcup_{\substack{R= I \times J \in \mathcal{R}_0 \\ J \in \mathcal{J}_{n_2}}} R \bigg| \nonumber \\ \leq &10 \bigg|\bigcup_{n_2 \in \mathbb{Z}}\bigcup_{\substack{R= I \times J \in \mathcal{R}_0 \\ J \in \mathcal{J}_{n_2}}} R \bigg| \nonumber\\ = & 10 \big|\bigcup_{R \in \mathcal{R}_0} R \big|,\end{aligned}$$ where the second inequality follows from the sparsity condition in Proposition \[sparsity\]. The figure below illustrates from a geometric point of view why the two-dimensional sparsity condition (Proposition \[sp\_2d\]) follows naturally from the one-dimensional sparsity (Proposition \[sparsity\]). In the figure, $A_1, A_2 \in \mathcal{I} \times \mathcal{J}_{n_2+20}$, $B \in \mathcal{I} \times \mathcal{J}_{n_2+10}$ and $C \in \mathcal{I} \times \mathcal{J}_{n_2}$ for some $n_2 \in \mathbb{Z}$.

[Figure: four overlapping dyadic rectangles $A_1$, $A_2$, $B$ and $C$, drawn at the three levels indicated above.]

Summary of stopping-time decompositions
---------------------------------------

I. Tensor-type stopping-time decomposition I on $\mathcal{I} \times \mathcal{J}$ $\longrightarrow$ $I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2}$ ($n_1 + n_2 < 0$, $m_1 + m_2 < 0$, $n_1 + m_2 < 0$, $m_1 + n_2 < 0$).

II. General two-dimensional level sets stopping-time decomposition on $\mathcal{I} \times \mathcal{J}$ $\longrightarrow$ $I \times J \in \mathcal{R}_{k_1,k_2}$ ($k_1 < 0$, $k_2 \leq K$).
Application of stopping-time decompositions
-------------------------------------------

With the stopping-time decompositions specified above, one can rewrite the multilinear form as $$\begin{aligned} & |\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \nonumber \\ = &\bigg|\displaystyle \sum_{\substack{n_1 + n_2 < 0 \\ m_1 + m_2 < 0 \\ n_1 + m_2 < 0 \\ m_1 + n_2 < 0 \\ k_1 < 0 \\ k_2 \leq K}} \sum_{\substack{I \times J \in \mathcal{I}_{n_1, m_1} \times \mathcal{J}_{n_2, m_2}\\I \times J \in \mathcal{R}_{k_1,k_2}}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle \langle \tilde{B}_J^{\#_2,H}(g_1,g_2),{\varphi}_J^{1,H} \rangle \cdot\langle h, \psi_I^{2,H} \otimes \psi_J^{2,H} \rangle \langle \chi_{E'},\psi_I^{3,H} \otimes \psi_J^{3,H} \rangle \bigg| \nonumber \\ \leq & \sum_{\substack{n_1 + n_2 < 0 \\ m_1 + m_2 < 0 \\ n_1 + m_2 < 0 \\ m_1 + n_2 < 0 \\ k_1 < 0 \\ k_2 \leq K}} \sum_{\substack{I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2}\\I \times J \in \mathcal{R}_{k_1,k_2}}} \frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \frac{|\langle \tilde{B}_J^{\#_2,H}(g_1,g_2), {\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}} \cdot \frac{|\langle h, \psi_I^{2,H} \otimes \psi_J^{2,H} \rangle|}{|I|^{\frac{1}{2}}|J|^{\frac{1}{2}}} \frac{|\langle \chi_{E'},\psi_I^{3,H} \otimes \psi_J^{3,H} \rangle|}{|I|^{\frac{1}{2}}|J|^{\frac{1}{2}}} |I| |J|.\nonumber \end{aligned}$$ One recalls from the *general two-dimensional level sets stopping-time decomposition* that $I \times J \in \mathcal{R}_{k_1,k_2} $ only if $$|I\times J \cap (\Omega^2_{k_1+1})^c | \geq \frac{99}{100}|I\times J|,$$ $$|I\times J \cap (\Omega^2_{k_2+1})^c | \geq \frac{99}{100}|I \times J|,$$ since $I \times J$ was not selected at the levels $k_1+1$ and $k_2+1$, where $\Omega^2_{k_1} := \{ SSh > C_3 2^{k_1}\|h\|_s\}$ and $\Omega^2_{k_2}:= \{ SS\chi_{E'} > C_3 2^{k_2}\}$.
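Both conditions hold simultaneously on most of the rectangle: since each of the two complements covers at least $\frac{99}{100}|I \times J|$, their intersection covers at least $\frac{98}{100}|I \times J|$, that is, $$|I \times J \cap (\Omega^2_{k_1+1})^c \cap (\Omega^2_{k_2+1})^c| \geq \frac{98}{100}|I \times J|.$$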
As a result, $$|I \times J| \sim |I \times J \cap (\Omega^2_{k_1+1})^c \cap (\Omega^2_{k_2+1})^c|.$$ One can therefore rewrite the multilinear form as $$\begin{aligned} \label{form12} & |\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \nonumber \\ = & \sum_{\substack{n_1 + n_2 < 0 \\ m_1 + m_2 < 0 \\ n_1 + m_2 < 0 \\ m_1 + n_2 < 0 \\ k_1 < 0 \\ k_2 \leq K}} \sum_{\substack{I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2}\\I \times J \in \mathcal{R}_{k_1,k_2}}} \frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \frac{|\langle \tilde{B}_J^{\#_2,H}(g_1,g_2), {\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}} \cdot \frac{|\langle h, \psi_I^{2,H} \otimes \psi_J^{2,H} \rangle|}{|I|^{\frac{1}{2}}|J|^{\frac{1}{2}}} \frac{|\langle \chi_{E'},\psi_I^{3,H} \otimes \psi_J^{3,H} \rangle|}{|I|^{\frac{1}{2}}|J|^{\frac{1}{2}}} \nonumber \\ &\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \ \ \ \cdot |I\times J \cap (\Omega^2_{k_1+1})^c \cap (\Omega^2_{k_2+1})^c| \nonumber \\ \leq & \sum_{\substack{n_1 + n_2 < 0 \\ m_1 + m_2 < 0 \\ n_1 + m_2 < 0 \\ m_1 + n_2 < 0 \\ k_1 < 0 \\ k_2 \leq K}} \displaystyle \sup_{I \in \mathcal{I}_{n_1,m_1}} \frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \sup_{J \in \mathcal{J}_{n_2,m_2}} \frac{|\langle \tilde{B}_J^{\#_2,H}(g_1,g_2),{\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}}\cdot \nonumber \\ &\quad \quad \quad \quad \quad \int_{(\Omega^2_{k_1+1})^c \cap (\Omega^2_{k_2+1})^c} \sum_{\substack{I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2} \\I \times J \in \mathcal{R}_{k_1,k_2}}} \frac{|\langle h, \psi_I^{2,H} \otimes \psi_J^{2,H} \rangle|}{|I|^{\frac{1}{2}}|J|^{\frac{1}{2}}} \frac{|\langle \chi_{E'},\psi_I^{3,H} \otimes \psi_J^{3,H} \rangle|}{|I|^{\frac{1}{2}}|J|^{\frac{1}{2}}}\chi_{I}(x) \chi_{J}(y) \, dx dy. \end{aligned}$$ We will now estimate each component in (\[form12\]) separately for clarity. ### Estimate for the integral

One can apply the Cauchy-Schwarz inequality to the integrand and obtain $$\begin{aligned} \label{integral12} & \int_{(\Omega^2_{k_1+1})^c \cap (\Omega^2_{k_2+1})^c} \sum_{\substack{I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2} \\I \times J \in \mathcal{R}_{k_1,k_2}}} \frac{|\langle h, \psi_I^{2,H} \otimes \psi_J^{2,H} \rangle|}{|I|^{\frac{1}{2}}|J|^{\frac{1}{2}}} \frac{|\langle \chi_{E'},\psi_I^{3,H} \otimes \psi_J^{3,H} \rangle|}{|I|^{\frac{1}{2}}|J|^{\frac{1}{2}}}\chi_{I}(x) \chi_{J}(y) \, dx dy \nonumber \\ \leq & \int_{(\Omega^2_{k_1+1})^c \cap (\Omega^2_{k_2+1})^c} \bigg(\sum_{\substack{I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2} \\I \times J \in \mathcal{R}_{k_1,k_2}}}\frac{|\langle h, \psi_I^{2,H} \otimes \psi_J^{2,H} \rangle|^2}{|I||J|} \chi_I(x)\chi_J(y)\bigg)^{\frac{1}{2}} \nonumber \\ &\quad \quad \quad \quad \quad \quad \bigg(\sum_{\substack{I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2}\\I \times J \in \mathcal{R}_{k_1,k_2}}}\frac{|\langle \chi_{E'},\psi_I^{3,H} \otimes \psi_J^{3,H} \rangle|^2}{|I||J|}\chi_{I}(x) \chi_{J}(y)\bigg)^{\frac{1}{2}} dxdy \nonumber \\ \leq &\displaystyle \int_{(\Omega^2_{k_1+1})^c \cap (\Omega^2_{k_2+1})^c} SSh(x,y)\, SS\chi_{E'}(x,y) \cdot \chi_{\bigcup_{\substack{I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2} \\I \times J \in \mathcal{R}_{k_1,k_2}}}I \times J}(x,y) \, dxdy.\end{aligned}$$
Based on the *general two-dimensional level sets stopping-time decomposition*, the hybrid square functions are controlled pointwise on the domain of integration. In particular, for any $(x,y) \in (\Omega^2_{k_1+1})^c \cap (\Omega^2_{k_2+1})^c$, $$\begin{aligned} & SSh(x,y) \lesssim C_3 2^{k_1} \|h\|_s, \nonumber \\ & SS\chi_{E'}(x,y) \lesssim C_3 2^{k_2}.\end{aligned}$$ As a result, the integral can be estimated by $$\begin{aligned} C_3^2 2^{k_1}\|h\|_s 2^{k_2} \bigg| \bigcup_{\substack{I \times J \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2} \\I \times J \in \mathcal{R}_{k_1,k_2}}}I \times J \bigg|.\end{aligned}$$ ### Estimate for $ \sup_{I \in \mathcal{I}_{n_1,m_1}} \frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} $ and $ \sup_{J \in \mathcal{J}_{n_2,m_2}} \frac{|\langle \tilde{B}_J^{\#_2,H}(g_1,g_2),{\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}}$

One recalls the algorithm in the *tensor-type stopping-time decomposition I - level sets*, which incorporates the following information: $I \in \mathcal{I}_{n_1, m_1}$ implies that $$|I \cap \{ Mf_1 \leq C_1 2^{n_1+1}|F_1|\}| \geq \frac{9}{10}|I|,$$ $$|I \cap \{Mf_2 \leq C_1 2^{m_1+1}|F_2|\}| \geq \frac{9}{10}|I|,$$ which translates into $$I \cap \{ Mf_1 \leq C_1 2^{n_1+1}|F_1|\} \cap \{Mf_2 \leq C_1 2^{m_1+1}|F_2|\} \neq \emptyset.$$ Then one can recall Proposition \[size\_cor\] (applied with $\mathcal{U}_{n_1+1,m_1+1}$) to estimate $$\sup_{I \in \mathcal{I}_{n_1,m_1}} \frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \lesssim C_1^2 (2^{n_1}|F_1|)^{\alpha_1} (2^{m_1}|F_2|)^{\alpha_2},$$ for any $0 \leq \alpha_1,\alpha_2 \leq 1$. Similarly, one can apply Proposition \[size\_cor\] with $\mathcal{U}'_{n_2+1,m_2+1}:= \{ Mg_1 \leq C_2 2^{n_2+1}|G_1|\} \cap \{Mg_2 \leq C_2 2^{m_2+1}|G_2|\}$ to conclude that $$\sup_{J \in \mathcal{J}_{n_2,m_2}} \frac{|\langle \tilde{B}_J^{\#_2,H}(g_1,g_2),{\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}} \lesssim C_2^2 (2^{n_2}|G_1| )^{\beta_1}(2^{m_2}|G_2|)^{\beta_2},$$ for any $0 \leq \beta_1,\beta_2 \leq 1$. By choosing $\alpha_1 = \frac{1}{p_1}, \alpha_2 = \frac{1}{q_1}, \beta_1 = \frac{1}{p_2}, \beta_2 = \frac{1}{q_2}, $ the multilinear form can therefore be estimated by $$\begin{aligned} \label{linear_form_fixed_scale} & |\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \nonumber \\ \lesssim & C_1^2 C_2^2 C_3^2\sum_{\substack{n_1 + n_2 < 0 \\ m_1 + m_2 < 0 \\ n_1 + m_2 < 0 \\ m_1 + n_2 < 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{n_1 \frac{1}{p_1}}2^{m_1\frac{1}{q_1}}2^{n_2 \frac{1}{p_2}} 2^{m_2 \frac{1}{q_2}}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}\cdot 2^{k_1} \| h \|_{L^s} 2^{k_2} \cdot \bigg|\bigcup_{\substack{R\in \mathcal{R}_{k_1,k_2} \\ R \in \mathcal{I}_{n_1,m_1} \times \mathcal{J}_{n_2,m_2}}} R\bigg| \nonumber. \\\end{aligned}$$ One recalls that $$\frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2},$$ so that $$\begin{aligned} \label{exp_size} 2^{n_1 \frac{1}{p_1}}2^{m_1\frac{1}{q_1}}2^{n_2 \frac{1}{p_2}} 2^{m_2 \frac{1}{q_2}} = & 2^{n_1\frac{1}{p_2}} 2^{n_1(\frac{1}{q_2} - \frac{1}{q_1})}2^{m_1 \frac{1}{q_1}} 2^{n_2\frac{1}{p_2}}2^{m_2(\frac{1}{q_2} - \frac{1}{q_1})} 2^{m_2\frac{1}{q_1}} \nonumber \\ = & (2^{n_1 + n_2})^{\frac{1}{p_2}} (2^{n_1+m_2})^{\frac{1}{q_2} - \frac{1}{q_1}}(2^{m_1+m_2})^{\frac{1}{q_1}}.\end{aligned}$$ By Observation \[obs\_indice\], $ 2^{n_1 + n_2} \lesssim 1, 2^{n_1 + m_2} \lesssim 1, 2^{m_1 + n_2} \lesssim 1, 2^{m_1 + m_2} \lesssim 1 $.
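The regrouping in (\[exp\_size\]) uses only the rearrangement $\frac{1}{p_1} = \frac{1}{p_2} + \big(\frac{1}{q_2} - \frac{1}{q_1}\big)$ of (\[diff\_exp\]). As an illustrative sanity check (a special case, not needed for the argument): when $p_1 = q_1 = p_2 = q_2$, the middle factor disappears and $$2^{n_1 \frac{1}{p_1}}2^{m_1\frac{1}{q_1}}2^{n_2 \frac{1}{p_1}} 2^{m_2 \frac{1}{q_1}} = (2^{n_1 + n_2})^{\frac{1}{p_1}} (2^{m_1+m_2})^{\frac{1}{q_1}}.$$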
Then $$n := -(n_1 + n_2) > 0,$$ $$m := -(m_1 + m_2) > 0.$$ Without loss of generality, one further assumes that $\frac{1}{q_2} \geq \frac{1}{q_1}$ (with the roles of $q_1$ and $q_2$ swapped in the opposite case), which implies that $$(2^{n_1+m_2})^{\frac{1}{q_2}- \frac{1}{q_1}} \lesssim 1.$$ Now (\[linear\_form\_fixed\_scale\]) can be bounded by $$\begin{aligned} \label{linear_almost} & |\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \nonumber \\ \lesssim & C_1^2 C_2^2 C_3^2\sum_{\substack{n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}} 2^{-n \frac{1}{p_2}}2^{-m \frac{1}{q_1}}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}\cdot 2^{k_1} \| h \|_{L^s} 2^{k_2} \cdot \bigg|\bigcup_{\substack{R\in \mathcal{R}_{k_1,k_2} \\ R \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2}}} R\bigg|. \end{aligned}$$ With $k_1, k_2, n, m$ fixed, one can apply the sparsity condition (Proposition \[sp\_2d\]) repeatedly and obtain the following bound for the expression: $$\label{nested_area} \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}} \bigg|\bigcup_{\substack{R\in \mathcal{R}_{k_1,k_2} \\ R \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2}}} R \bigg| \lesssim \sum_{m_2 \in \mathbb{Z}} \bigg| \bigcup_{\substack{R \in \mathcal{R}_{k_1,k_2} \\ R \in \mathcal{I}_{-m-m_2} \times \mathcal{J}_{m_2}}} R \bigg| \lesssim \bigg|\bigcup_{R\in \mathcal{R}_{k_1,k_2}} R\bigg| \leq \min\bigg( \big|\bigcup_{R\in \mathcal{R}_{k_1}} R\big|, \big|\bigcup_{R\in \mathcal{R}_{k_2}} R\big|\bigg).$$ The arbitrariness of the collection of rectangles in Proposition \[sp\_2d\] provides the compatibility of the different stopping-time decompositions; in the current setting, the collection $\mathcal{R}_0$ in Proposition \[sp\_2d\] is chosen to be $\mathcal{R}_{k_1, k_2}$. The sparsity condition allows one to combine the *tensor-type stopping-time decomposition I* and the *general two-dimensional level sets stopping-time decomposition* and to obtain information from both stopping-time decompositions. The readers who are familiar with the proof for one-parameter paraproducts [@cw] or bi-parameter paraproducts [@cptt], [@cw] might notice that (\[nested\_area\]) employs a different argument from the previous ones [@cptt], [@cw]. In particular, by the previous reasoning, one would fix $n_2, m_2 \in \mathbb{Z}$ and obtain $$\label{old} \bigg|\bigcup_{\substack{R\in \mathcal{R}_{k_1,k_2} \\ R \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2}}} R \bigg| \lesssim \min\bigg( \big|\bigcup_{R\in \mathcal{R}_{k_1}} R\big|, \big|\bigcup_{R\in \mathcal{R}_{k_2}} R\big|\bigg).$$ However, the expression on the right-hand side of (\[old\]) is independent of $n_2$ and $m_2$, which gives a divergent series when the sum is taken over all $n_2, m_2 \in \mathbb{Z}$. This explains the novelty and necessity of the sparsity condition (Proposition \[sp\_2d\]) for our argument.
To estimate the right hand side of (\[nested\_area\]), one recalls from the *general two-dimensional level sets stopping-time decomposition* that $R \in \mathcal{R}_{k_1}$ implies $$\big|R \cap \Omega^2_{k_1-1} \big| > \frac{1}{100}|R|,$$ or equivalently $$\displaystyle \bigcup_{R\in \mathcal{R}_{k_1}} R \subseteq \{ M (\chi_{\Omega^2_{k_1-1}}) > \frac{1}{100}\}.$$ As a result, $$\begin{aligned} \label{rec_area_1} \bigg|\bigcup_{R\in \mathcal{R}_{k_1}} R\bigg| \leq & \big|\{ M (\chi_{\Omega^2_{k_1-1}}) > \frac{1}{100}\}\big| \lesssim |\Omega^2_{k_1-1}|=|\{ SSh > C_3 2^{k_1-1} \|h\|_s\}| \lesssim C_3^{-s}2^{-k_1s},\end{aligned}$$ where the last inequality follows from the boundedness of the double square function described in Proposition \[maximal-square\]. By a similar reasoning and the fact that $|E'| \sim 1$, $$\begin{aligned} \label{rec_area_2} \bigg|\bigcup_{R\in \mathcal{R}_{k_2}} R\bigg| \leq & \big|\{ M (\chi_{\Omega^2_{k_2-1}}) > \frac{1}{100}\}\big| \lesssim |\Omega^2_{k_2-1}|=|\{ SS(\chi_{E'}) > C_3 2^{k_2-1}\}| \lesssim C_3^{-\gamma}2^{-k_2\gamma},\end{aligned}$$ for any $\gamma >1$. Interpolation between (\[rec\_area\_1\]) and (\[rec\_area\_2\]), that is, the elementary inequality $\min(A,B) \leq A^{\frac{1}{2}}B^{\frac{1}{2}}$, yields $$\label{int_area} \bigg|\bigcup_{R\in \mathcal{R}_{k_1,k_2}} R\bigg| \lesssim 2^{-\frac{k_1s}{2}}2^{-\frac{k_2\gamma}{2}},$$ and by plugging (\[int\_area\]) into (\[nested\_area\]), one has $$\label{rec_area_hybrid} \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}} \bigg|\bigcup_{\substack{R\in \mathcal{R}_{k_1,k_2} \\ R \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2}}} R \bigg| \lesssim 2^{-\frac{k_1s}{2}}2^{-\frac{k_2\gamma}{2}},$$ for any $\gamma >1$. One combines the estimates (\[rec\_area\_hybrid\]) and (\[linear\_almost\]) to obtain $$\begin{aligned} |\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \lesssim &C_1^2 C_2^2 C_3^2 \sum_{\substack{n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{-n \frac{1}{p_2}}2^{-m \frac{1}{q_1}}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}\cdot 2^{k_1(1-\frac{s}{2})} \| h \|_{L^s} 2^{k_2(1-\frac{\gamma}{2})}. \nonumber \\\end{aligned}$$ The geometric series $\displaystyle \sum_{k_1<0}2^{k_1(1-\frac{s}{2})}$ is convergent given that $s <2$. For $\displaystyle\sum_{k_2 \leq K}2^{k_2(1-\frac{\gamma}{2})}$, one chooses $\gamma > 2$ for the range $0 \leq k_2 \leq K$, so that the exponent $1-\frac{\gamma}{2}$ is negative, and $\gamma > 1$ close to $1$ for $k_2 < 0$, so that the exponent is positive; this is legitimate since (\[rec\_area\_2\]) holds for every $\gamma > 1$. One thus concludes that $$\begin{aligned} |\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \lesssim &C_1^2 C_2^2 C_3^2 |F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}\| h \|_{L^s}.\end{aligned}$$ One important observation is that, thanks to Lemma \[B\_size\], the sizes $$\sup_{I \in \mathcal{I}_{n_1,m_1}} \frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}}$$ and $$\sup_{J \in \mathcal{J}_{n_2,m_2}} \frac{|\langle \tilde{B}_J^{\#_2,H}(g_1,g_2),{\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}}$$ can be estimated in exactly the same way as $$\text{size}_{\mathcal{I}_{n_1}}\big( (f_1,\phi_I)_I \big) \cdot \text{size}_{\mathcal{I}_{m_1}}\big( (f_2,\phi_I)_I \big)$$ and $$\text{size}_{\mathcal{J}_{n_2}}\big( (g_1,\phi_J)_J \big) \cdot \text{size}_{\mathcal{J}_{m_2}}\big( (g_2,\phi_J)_J \big)$$ respectively.
Based on this observation, it is not difficult to verify that the discrete models $\Pi_{\text{flag}^{\#1} \otimes \text{paraproduct}}$ and $\Pi_{\text{paraproduct}\otimes \text{paraproduct}}$ can be estimated by essentially the same argument as $\Pi_{\text{flag}^{\#_1}\otimes \text{flag}^{\#_2}}$. In addition, $\Pi_{\text{flag}^0 \otimes \text{flag}^{\#_2}}$ can be studied in the same way as $\Pi_{\text{flag}^0 \otimes \text{paraproduct}}$. Proof of Theorem \[thm\_weak\_mod\] for $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$ - Haar Model ================================================================================================ The argument in Chapter 6 is not sufficient for $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$ because the localized sizes $$\begin{aligned} & \sup_{I \cap S \neq \emptyset} \frac{|\langle B_I^H, {\varphi}^{1,H}_I \rangle |}{|I|^{\frac{1}{2}}}, \nonumber \\ & \sup_{J \cap S' \neq \emptyset} \frac{|\langle \tilde{B}_J^H, {\varphi}^{1,H}_J \rangle |}{|J|^{\frac{1}{2}}}\end{aligned}$$ cannot be controlled without information about the corresponding level sets. In particular, one needs to impose the additional assumption that $$\begin{aligned} &I \cap \{MB^H \leq C_1 2^{l_1}\|B^H\|_1\} \neq \emptyset, \nonumber \\ &J \cap \{M\tilde{B}^H \leq C_2 2^{l_2}\|\tilde{B}^H\|_1\} \neq \emptyset,\end{aligned}$$ where $$\begin{aligned} & B^H (f_1, f_2)(x):= \sum_{K \in \mathcal{K}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi_K^1\rangle \langle f_2, \phi_K^2\rangle \phi_{K}^{3,H}(x), \nonumber \\ &\tilde{B}^H (g_1, g_2)(y):= \sum_{L \in \mathcal{L}} \frac{1}{|L|^{\frac{1}{2}}} \langle g_1, \phi_L^1\rangle \langle g_2, \phi_L^2\rangle \phi_{L}^{3,H}(y).\end{aligned}$$ However, while the sizes of $B^H$ and $\tilde{B}^H$ can be controlled in this way, they lose the information from the localization (e.g. $K \cap \{ Mf_1 \leq C_1 2^{n_1}|F_1|\} \neq \emptyset$ for some $n_1 \in \mathbb{Z}$) and are thus far from satisfactory. It is indeed the energies which capture such local information and compensate for the loss from the size estimates in this scenario. Localization ------------ As one would expect from the definition of the exceptional set, the *tensor-type stopping-time decompositions* and the *general two-dimensional level sets stopping-time decomposition* are involved in the argument. We define the set $$\Omega := \Omega^1 \cup \Omega^2,$$ where $$\begin{aligned} \displaystyle \Omega^1 := &\bigcup_{n_1 \in \mathbb{Z}}\{Mf_1 > C_1 2^{n_1}|F_1|\} \times \{Mg_1 > C_2 2^{-n_1}|G_1|\}\cup \nonumber \\ & \bigcup_{m_1 \in \mathbb{Z}}\{Mf_2 > C_1 2^{m_1}|F_2|\} \times \{Mg_2 > C_2 2^{-m_1}|G_2|\}\cup \nonumber \\ &\bigcup_{l_1 \in \mathbb{Z}} \{MB^H > C_1 2^{l_1}\| B^H\|_1\} \times \{M\tilde{B}^H > C_2 2^{-l_1}\| \tilde{B}^H \|_1\},\nonumber \\ \Omega^2 := & \{SSh > C_3 \|h\|_{L^s}\}, \nonumber \\\end{aligned}$$ and $$\tilde{\Omega} := \{ M\chi_{\Omega} > \frac{1}{100}\}.$$ Let $$E' := E \setminus \tilde{\Omega}.$$ Then a similar argument to Remark \[subset\] yields that $|E'| \sim |E|$, where $|E|$ can be assumed to be 1 by scaling invariance.
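For the reader's convenience, we record a sketch of why $|E'| \sim |E|$ under these definitions (the full treatment, in particular of $|\Omega^1|$, is the one in Remark \[subset\]; the display below only records the two bounds it uses, namely the $L^s$-boundedness of $SS$ from Proposition \[maximal-square\], exactly as in (\[rec\_area\_1\]), and the weak-$(1,1)$ bound for $M$): $$|\tilde{\Omega}| \lesssim |\Omega| \leq |\Omega^1| + |\Omega^2|, \qquad |\Omega^2| = \big|\{ SSh > C_3 \|h\|_{L^s}\}\big| \lesssim C_3^{-s},$$ so that, once $|\Omega^1|$ is also made small by taking $C_1, C_2$ (and $C_3$) sufficiently large, one obtains $|\tilde{\Omega}| \leq \frac{1}{2}$ and hence $|E'| = |E \setminus \tilde{\Omega}| \sim |E| = 1$.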
We aim to prove that the multilinear form $$\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}, \chi_{E'}) := \langle \Pi_{\text{flag}^{0} \otimes \text{flag}^{0}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}), \chi_{E'} \rangle$$ satisfies the following restricted weak-type estimate $$|\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| \lesssim |F_1|^{\frac{1}{p_1}} |G_1|^{\frac{1}{p_2}} |F_2|^{\frac{1}{q_1}} |G_2|^{\frac{1}{q_2}} \|h\|_{L^{s}(\mathbb{R}^2)}.$$ Tensor-type stopping-time decomposition II - maximal intervals -------------------------------------------------------------- ### One-dimensional stopping-time decomposition - maximal intervals One applies the stopping-time decomposition described in Section 5.5.1 to the sequences $$\big(\frac{|\langle B^H_{I}(f_1,f_2), {\varphi}^{1,H}_I \rangle|}{|I|^{\frac{1}{2}}}\big)_{I \in \mathcal{I}}$$ and $$\big(\frac{|\langle \tilde {B}^H_{J}(g_1,g_2), {\varphi}^{1,H}_J \rangle|}{|J|^{\frac{1}{2}}}\big)_{J \in \mathcal{J}}.$$ We will briefly recall the algorithm and introduce some necessary notation for the sake of clarity. Since $\mathcal{I}$ is finite, there exists some $L_1 \in \mathbb{Z}$ such that $\frac{|\langle B^H_{I}(f_1,f_2), {\varphi}^{1,H}_I \rangle|}{|I|^{\frac{1}{2}}} \leq C_1 2^{L_1} \|B^H\|_1$ for every $I \in \mathcal{I}$. There exists a largest interval $I_{\text{max}}$ such that $$\frac{|\langle B^H_{I_{\text{max}}}(f_1,f_2), {\varphi}^{1,H}_{I_{\text{max}}} \rangle|}{|I_{\text{max}}|^{\frac{1}{2}}} \geq C_1 2^{L_1-1}\|B^H\|_1.$$ Then we define a *tree* $$T := \{I \in \mathcal{I}: I \subseteq I_{\text{max}}\},$$ and the corresponding *tree-top* $$I_T := I_{\text{max}}.$$ Now we repeat the above step on $\mathcal{I} \setminus T$ to choose maximal intervals and collect their subintervals in the corresponding sets; this procedure terminates thanks to the finiteness of $\mathcal{I}$. One then collects all such $T$'s in a set $\mathbb{T}_{L_1-1}$ and repeats the above algorithm on $\displaystyle \mathcal{I} \setminus \bigcup_{T \in \mathbb{T}_{L_1-1}} T$. Eventually the algorithm generates a decomposition $\displaystyle \mathcal{I} = \bigcup_{l_1}\bigcup_{T \in \mathbb{T}_{l_1}}T$. One simple observation is that the above procedure can be applied to general sequences indexed by dyadic intervals. One can thus apply the same algorithm to $\mathcal{J} := \{J: I \times J \in \mathcal{R}\}$. We denote the decomposition by $\displaystyle \mathcal{J} = \bigcup_{l_2}\bigcup_{S \in \mathbb{S}_{l_2}}S$ with respect to the sequence $\big(\frac{|\langle \tilde {B}^H_{J}(g_1,g_2), {\varphi}^{1,H}_J \rangle|}{|J|^{\frac{1}{2}}}\big)_{J \in \mathcal{J}}$, where $S$ is a collection of dyadic intervals analogous to $T$ and is also called a *tree*, and $J_S$ denotes the corresponding *tree-top* analogous to $I_T$. ### Tensor product of two one-dimensional stopping-time decompositions - maximal intervals If $I \times J \cap \tilde{\Omega}^{c} \neq \emptyset$ and $I \times J \in T \times S$ with $T \in \mathbb{T}_{l_1}$ and $S \in \mathbb{S}_{l_2}$, then $l_1, l_2 \in \mathbb{Z}$ satisfy $l_1 + l_2 < 0$. Equivalently, $I \times J \in T \times S$ with $T \in \mathbb{T}_{-l - l_2}$ and $S \in \mathbb{S}_{l_2}$ for some $l_2 \in \mathbb{Z}$, $l> 0$. Indeed, $I \in T$ with $T \in \mathbb{T}_{l_1}$ means that $I \subseteq I_T$ where $\frac{|\langle B^H_{I_T}(f_1,f_2), {\varphi}^{1,H}_{I_T} \rangle|}{|I_T|^{\frac{1}{2}}} > C_12^{l_1} \|B^H\|_1$.
By the biest trick, $\frac{|\langle B^H_{I_T}(f_1,f_2), {\varphi}^{1,H}_{I_T} \rangle|}{|I_T|^{\frac{1}{2}}} = \frac{|\langle B^H(f_1,f_2), {\varphi}^{1,H}_{I_T} \rangle|}{|I_T|^{\frac{1}{2}}} \leq MB^H(x)$ for any $x \in I_T$. Thus $I_T \subseteq \{ MB^H > C_12^{l_1} \|B^H\|_1\}$. By a similar reasoning, $J \in S$ with $S \in \mathbb{S}_{l_2}$ implies that $J \subseteq J_S \subseteq \{ M\tilde{B}^H > C_22^{l_2} \|\tilde{B}^H\|_1\}$. If $l_1 + l_2 \geq 0$, then $\{ MB^H > C_12^{l_1} \|B^H\|_1\} \times \{ M\tilde{B}^H > C_2 2^{l_2}\| \tilde{B}^H\|_1\} \subseteq \Omega^1 \subseteq \Omega$. As a consequence, $I \times J \subseteq \Omega \subseteq \tilde{\Omega}$, which is a contradiction. Summary of stopping-time decompositions --------------------------------------- The notions of *tensor-type stopping-time decomposition I* and *general two-dimensional level sets stopping-time decomposition* introduced in Chapter 6 will be applied without further specification. I. Tensor-type stopping-time decomposition I on $\mathcal{I} \times \mathcal{J}$ $\longrightarrow$ $I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2}$ $(n_2, m_2 \in \mathbb{Z},\ n, m > 0)$ II. Tensor-type stopping-time decomposition II on $\mathcal{I} \times \mathcal{J}$ $\longrightarrow$ $I \times J \in T \times S$, with $T \in \mathbb{T}_{-l-l_2}$, $S \in \mathbb{S}_{l_2}$ $(l_2 \in \mathbb{Z},\ l> 0)$ III. General two-dimensional level sets stopping-time decomposition on $\mathcal{I} \times \mathcal{J}$ $\longrightarrow$ $I \times J \in \mathcal{R}_{k_1,k_2}$ $(k_1 <0,\ k_2 \leq K)$ Application of stopping-time decompositions ------------------------------------------- One first rewrites the multilinear form with the partition of dyadic rectangles specified in the stopping-time algorithm: $$\begin{aligned} |\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| \lesssim &\displaystyle \sum_{\substack{n > 0 \\ m > 0 \\ l > 0 \\ k_1 < 0 \\ k_2 \leq K}} \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z} \\ l_2 \in \mathbb{Z}}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2} \\ S \in \mathbb{S}_{l_2}}}\sum_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n - n_2, -m - m_2} \times \mathcal{J}_{n_2, m_2} \\I \times J \in \mathcal{R}_{k_1,k_2}}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}}| \langle B_I^H(f_1,f_2),{\varphi}_I^{1,H} \rangle| |\langle \tilde{B}_J^H(g_1,g_2),{\varphi}_J^{1,H} \rangle| \nonumber \\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \cdot |\langle h, \psi_I^{2,H} \otimes \psi_J^{2,H} \rangle| |\langle \chi_{E'},\psi_I^{3,H} \otimes \psi_J^{3,H} \rangle|.
\nonumber \\\end{aligned}$$ One can now apply exactly the same argument as in Section 6.6.1 to estimate the multilinear form by $$\begin{aligned} \label{form_00} |\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| \lesssim \sum_{\substack{n > 0 \\ m > 0 \\ l > 0 \\ k_1 < 0 \\ k_2 \leq K}} \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z} \\ l_2 \in \mathbb{Z}}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \displaystyle & \sup_{I \in T} \frac{|\langle B_I^H(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \sup_{J \in S} \frac{|\langle \tilde{B}_J^H(g_1,g_2),{\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}} \cdot \nonumber \\ & 2^{k_1} \| h \|_{L^s} 2^{k_2} \cdot \bigg|\bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n - n_2, -m - m_2} \times \mathcal{J}_{n_2,m_2} \\I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J \bigg|.\end{aligned}$$ Fixing $-l-l_2$ and $T \in \mathbb{T}_{-l-l_2}$, one recalls the *tensor-type stopping-time decomposition II* to conclude that $$\label{ave_1} \sup_{I \in T } \frac{|\langle B_I^H(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \lesssim C_1 2^{-l-l_2} \|B^H\|_1.$$ By a similar reasoning, $$\label{ave_2} \sup_{J \in S } \frac{|\langle \tilde{B}^H_J(g_1,g_2),{\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}} \lesssim C_2 2^{l_2} \|\tilde{B}^H\|_1.$$ By applying the estimates (\[ave\_1\]) and (\[ave\_2\]) to (\[form\_00\]), $$\begin{aligned} \label{form00_set} |\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}} | & \lesssim C_1 C_2 C_3^2 \sum_{\substack{n > 0 \\ m > 0 \\ l > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{-l}\|B^H\|_1 \|\tilde{B}^H \|_1\cdot 2^{k_1} \| h \|_{L^s} 2^{k_2} \cdot \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z} \\ l_2 \in \mathbb{Z}}} \sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigg|\bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n - n_2, -m - m_2} \times \mathcal{J}_{n_2,m_2} \\I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J \bigg|. \end{aligned}$$ Estimate for nested sum of dyadic rectangles -------------------------------------------- One can estimate the nested sum (\[ns\]) in two ways: one via the sparsity condition, and the other via a Fubini-type argument which will be introduced in Section 7.5.2. $$\label{ns} \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z} \\ l_2 \in \mathbb{Z}}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigg|\bigcup_{\substack{I \times J \in T\times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J \bigg|.$$ Both arguments aim to combine different stopping-time decompositions and to extract useful information from them. Generically, the sparsity-condition argument employs the geometric property, namely Proposition \[sp\_2d\], of the *tensor-type stopping-time decomposition I* and applies the analytical implication of the *general two-dimensional level sets stopping-time decomposition*. Meanwhile, the Fubini-type argument focuses on the hybrid of the *tensor-type stopping-time decomposition I - level sets* and the *tensor-type stopping-time decomposition II - maximal intervals*.
As implied by the name, the Fubini-type argument estimates the measures of two-dimensional sets by the measures of their projected one-dimensional sets. The approaches to estimating the projected one-dimensional sets differ depending on which tensor-type stopping-time decomposition is under consideration. ### Sparsity condition. The first approach relies on the sparsity condition and mimics the argument in Chapter 6. In particular, fixing $n, m, l, k_1$ and $k_2$, one estimates (\[ns\]) as follows. $$\begin{aligned} & \sum_{l_2}\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigg|\bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J \bigg| \nonumber \\ \leq & \underbrace{ \sup_{l_2}\Bigg(\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigg|\bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J\in \mathcal{R}_{k_1,k_2}}} I \times J \bigg|\Bigg)^{\frac{1}{2}}}_{SC-I} \nonumber \\ & \cdot \underbrace{\sum_{l_2}\Bigg(\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}}\bigg|\bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2}\\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J\bigg| \Bigg)^{\frac{1}{2}}}_{SC-II}.\end{aligned}$$ **Estimate of $SC-I$.** Fix $l, n, m, k_1, k_2$ and $l_2$. Then by the *one-dimensional stopping-time decomposition - maximal intervals*, for any $I \in T$ and $I' \in T'$ such that $T, T' \in \mathbb{T}_{-l-l_2}$ and $T \neq T'$, one has $I \cap I' = \emptyset$. Hence for any fixed $n_2$ and $m_2$, one can rewrite $$\begin{aligned} \label{SC-I} \sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigg|\bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J \bigg|= & \bigg|\bigcup_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R} _{k_1,k_2} }} I \times J \bigg|, \nonumber \\\end{aligned}$$ where the right hand side of (\[SC-I\]) can be trivially bounded by $$\bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J \bigg|.$$ One can then recall the sparsity condition highlighted as Proposition \[sp\_2d\] and reduce the nested sum of measures of unions of rectangles to the measure of the corresponding union of rectangles.
More precisely, $$\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigg| \bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2} }} I \times J \bigg| \sim \bigg|\bigcup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J\bigg|,$$ where the right hand side can be estimated by $$\bigg|\bigcup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigcup_{\substack{ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J\bigg| \leq \bigg| \bigcup_{I \times J \in \mathcal{R}_{k_1,k_2}}I \times J\bigg| \lesssim \min(2^{-k_1s},2^{-k_2\gamma}),$$ for any $\gamma >1$. The last inequality follows directly from (\[rec\_area\_1\]) and (\[rec\_area\_2\]). Since the above estimates hold for any $l_2 \in \mathbb{Z}$, one can conclude that $$SC-I \lesssim \min(2^{-\frac{k_1s}{2}},2^{-\frac{k_2\gamma}{2}}).$$ **Estimate of $SC-II$.** One invokes (\[SC-I\]) and Proposition \[sp\_2d\] to obtain $$\label{SC-II} \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}}\bigg|\bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J\bigg| \sim \bigg|\bigcup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigcup_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J\bigg|.$$ One enlarges the collection of rectangles by dropping the restriction that the rectangles lie in $\mathcal{R}_{k_1,k_2}$ and estimates the right hand side of (\[SC-II\]) by $$\label{SC-II2} \bigg|\bigcup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigcup_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} }} I \times J\bigg|,$$ which is indeed the measure of the union of the rectangles collected in the *tensor-type stopping-time decomposition II - maximal intervals* at a certain level. In other words, $$\bigg|\bigcup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigcup_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigcup_{\substack{I \times J \in T \times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2}}} I \times J\bigg| = \bigg| \bigcup_{T \times S \in \mathbb{T}_{-l-l_2} \times \mathbb{S}_{l_2}} I_T \times J_S\bigg|.$$ Then $$\label{fb_simple} SC-II \leq \sum_{l_2 \in \mathbb{Z}}\bigg| \bigcup_{T \times S \in \mathbb{T}_{-l-l_2} \times \mathbb{S}_{l_2}} I_{T} \times J_{S}\bigg|^{\frac{1}{2}},$$ whose estimate follows from a Fubini-type argument that plays an important role in the proof. We will develop this Fubini-type argument in a separate section and discuss its applications to other estimates needed in the proof. ### Fubini argument. Alternatively, one can apply a Fubini-type argument to estimate (\[ns\]), in the sense that the measure of a two-dimensional set is estimated by the product of the measures of its projected one-dimensional sets.
To introduce this argument, we will first look into (\[fb\_simple\]), which requires a simpler version of the argument. **Estimate of (\[fb\_simple\]) - Introduction of Fubini argument.** As illustrated before, one first rewrites the measure of the two-dimensional sets in terms of the measures of two one-dimensional sets as follows. $$\begin{aligned} \label{fb_simple_2d} & \sum_{l_2 \in \mathbb{Z}}\bigg| \bigcup_{T \times S \in \mathbb{T}_{-l-l_2} \times \mathbb{S}_{l_2}} I_{T} \times J_{S}\bigg|^{\frac{1}{2}} \nonumber \\ \leq & \bigg( \sum_{l_2 \in \mathbb{Z}}\big|\bigcup_{T\in \mathbb{T}_{-l-l_2}} I_{T} \big|\bigg)^{\frac{1}{2}}\bigg( \sum_{l_2 \in \mathbb{Z}}\big|\bigcup_{S\in \mathbb{S}_{l_2}} J_{S} \big|\bigg)^{\frac{1}{2}},\end{aligned}$$ where the last step follows from the Cauchy-Schwarz inequality. To estimate the measures of the one-dimensional sets appearing above, one can convert them to the form of “global” energies and apply the energy estimates specified in Proposition \[B\_en\_global\]. In particular, inserting weights in the Cauchy-Schwarz step, the left hand side of (\[fb\_simple\_2d\]) can also be estimated by $$\begin{aligned} \label{SC-II-en} & \bigg( \sum_{l_2 \in \mathbb{Z}}(C_12^{-l-l_2} \|B^H\|_1)^{1+\delta}\big|\bigcup_{T\in \mathbb{T}_{-l-l_2}} I_{T} \big|\bigg)^{\frac{1}{2}}\bigg( \sum_{l_2 \in \mathbb{Z}}(C_22^{l_2} \|\tilde{B}^H\|_1)^{1+\delta}\big|\bigcup_{S\in \mathbb{S}_{l_2}} J_{S} \big|\bigg)^{\frac{1}{2}} \cdot 2^{l\frac{(1+\delta)}{2}}\|B^H\|_{1}^{-\frac{1+\delta}{2}}\|\tilde{B}^H\|_{1}^{-\frac{1+\delta}{2}}, \nonumber \\\end{aligned}$$ for any $\delta >0$. One notices that for fixed $l$ and $l_2$, $$\{I_T: T \in \mathbb{T}_{-l-l_2} \}$$ is a disjoint collection of dyadic intervals according to the *one-dimensional stopping-time decomposition - maximal intervals*. Thus $$\label{en_global} \sum_{l_2 \in \mathbb{Z}}(C_12^{-l-l_2} \|B^H\|_1)^{1+\delta}\big|\bigcup_{T\in \mathbb{T}_{-l-l_2}} I_{T} \big| = \sum_{l_2 \in \mathbb{Z}}(C_12^{-l-l_2} \|B^H\|_1)^{1+\delta}\sum_{T\in \mathbb{T}_{-l-l_2}}|I_{T}|$$ is indeed a “global” $L^{1+\delta}$-energy, to which one can apply the energy estimates to obtain the bound $$|F_1|^{\mu_1(1+\delta)}|F_2|^{\mu_2(1+\delta)},$$ where $\delta, \mu_1, \mu_2 >0$ with $\mu_1 + \mu_2 = \frac{1}{1+\delta}$. Similarly, one can apply the same reasoning to the measure of the set in the $y$-direction to derive $$\label{SC-II-y} \sum_{l_2 \in \mathbb{Z}}(C_22^{l_2} \|\tilde{B}^H\|_1)^{1+\delta}\big|\bigcup_{S\in \mathbb{S}_{l_2}} J_{S} \big| \lesssim |G_1|^{\nu_1(1+\delta)}|G_2|^{\nu_2(1+\delta)},$$ for any $\nu_1, \nu_2 > 0$ with $\nu_1 + \nu_2 = \frac{1}{1+\delta}$. By applying (\[en\_global\]) and (\[SC-II-y\]) in (\[SC-II-en\]), one derives that $$\sum_{l_2 \in \mathbb{Z}}\bigg| \bigcup_{T \times S \in \mathbb{T}_{-l-l_2} \times \mathbb{S}_{l_2}} I_{T} \times J_{S}\bigg|^{\frac{1}{2}} \lesssim 2^{l \frac{(1+\delta)}{2}} |F_1|^{\frac{\mu_1(1+\delta)}{2}}|F_2|^{\frac{\mu_2(1+\delta)}{2}}|G_1|^{\frac{\nu_1(1+\delta)}{2}}|G_2|^{\frac{\nu_2(1+\delta)}{2}}\|B^H\|_1^{-\frac{1+\delta}{2}}\|\tilde{B}^H\|_1^{-\frac{1+\delta}{2}},$$ for any $\delta,\mu_1,\mu_2,\nu_1,\nu_2 >0$ with $\mu_1+ \mu_2 = \nu_1+ \nu_2 = \frac{1}{1+\delta}$. The reason for leaving the expressions $\|B^H\|_1^{-\frac{1+\delta}{2}}$ and $\|\tilde{B}^H\|_1^{-\frac{1+\delta}{2}}$ untouched will become clear later. In short, $\|B^H\|_1$ and $\|\tilde{B}^H\|_1$ will appear in estimates for other parts. We will keep them as they are for the exponent-counting and then use the estimates for $\|B^H\|_1$ and $\|\tilde{B}^H\|_1$ at the end.
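The weight-insertion above is an instance of weighted Cauchy-Schwarz; in schematic form (with $a_{l_2}, b_{l_2} \geq 0$ standing for the measures of the one-dimensional sets and $w_{l_2} > 0$ an arbitrary weight), $$\sum_{l_2 \in \mathbb{Z}} a_{l_2}^{\frac{1}{2}} b_{l_2}^{\frac{1}{2}} = \sum_{l_2 \in \mathbb{Z}} (w_{l_2} a_{l_2})^{\frac{1}{2}} (w_{l_2}^{-1} b_{l_2})^{\frac{1}{2}} \leq \Big(\sum_{l_2 \in \mathbb{Z}} w_{l_2} a_{l_2}\Big)^{\frac{1}{2}} \Big(\sum_{l_2 \in \mathbb{Z}} w_{l_2}^{-1} b_{l_2}\Big)^{\frac{1}{2}},$$ applied with $w_{l_2} := (C_1 2^{-l-l_2}\|B^H\|_1)^{1+\delta}$. Since $w_{l_2}^{-1} = (C_2 2^{l_2}\|\tilde{B}^H\|_1)^{1+\delta} \cdot \big(C_1 C_2\, 2^{-l}\, \|B^H\|_1 \|\tilde{B}^H\|_1\big)^{-(1+\delta)}$ and the second factor does not depend on $l_2$, taking its square root out of the sum produces, up to constants, precisely the compensating term $2^{l\frac{(1+\delta)}{2}}\|B^H\|_{1}^{-\frac{1+\delta}{2}}\|\tilde{B}^H\|_{1}^{-\frac{1+\delta}{2}}$ appearing in (\[SC-II-en\]).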
By combining the estimates $SC-I$ and $SC-II$, one can conclude that (\[ns\]) is majorized by $$\label{ns_sp} 2^{-\frac{k_2\gamma}{2}}2^{l \frac{(1+\delta)}{2}} |F_1|^{\frac{\mu_1(1+\delta)}{2}}|F_2|^{\frac{\mu_2(1+\delta)}{2}}|G_1|^{\frac{\nu_1(1+\delta)}{2}}|G_2|^{\frac{\nu_2(1+\delta)}{2}}\|B^H\|_1^{-\frac{1+\delta}{2}}\|\tilde{B}^H\|_1^{-\frac{1+\delta}{2}},$$ for $\gamma >1 $, $\delta,\mu_1,\mu_2,\nu_1,\nu_2 >0$ with $\mu_1+ \mu_2 = \nu_1+ \nu_2 = \frac{1}{1+\delta}$. The framework of estimating the measure of a two-dimensional set by the measures of the corresponding projected one-dimensional sets, as illustrated by (\[fb\_simple\_2d\]), is the so-called “Fubini-type” argument, which we will employ heavily from now on. **Estimate of (\[ns\]) - Application of Fubini argument.** It is not difficult to observe that (\[ns\]) can also be estimated by $$\label{set_00} \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z} \\ l_2 \in \mathbb{Z}}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigg|\bigcup_{\substack{I \times J \in T\times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} }} I \times J \bigg|= \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z} \\ l_2 \in \mathbb{Z}}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2}}}\bigg|\bigcup_{\substack{I \in T \\ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg|\sum_{\substack{S \in \mathbb{S}_{l_2}}}\bigg|\bigcup_{\substack{J \in S \\ J \in \mathcal{J}_{n_2,m_2} }} J \bigg|.$$ One now rewrites the above expression and separates it into two parts. Both parts can be estimated by the Fubini-type argument, whereas the methodologies for estimating the projected one-dimensional sets are different. More precisely, (\[set\_00\]) can be dominated by $$\begin{aligned} &\underbrace{\sup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}} \sum_{l_2 \in \mathbb{Z}}\bigg(\sum_{\substack{T \in \mathbb{T}_{-l-l_2}}}\bigg|\bigcup_{\substack{I \in T \\ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg|\bigg)^{\frac{1}{2}}\bigg(\sum_{\substack{S \in \mathbb{S}_{l_2}}} \bigg|\bigcup_{\substack{J \in S \\ J \in \mathcal{J}_{n_2,m_2} }} J \bigg|\bigg)^{\frac{1}{2}}}_{\mathcal{A}} \times \nonumber \\ & \underbrace{\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\sup_{l_2 \in \mathbb{Z}}\bigg(\sum_{\substack{T \in \mathbb{T}_{-l-l_2}}}\bigg|\bigcup_{\substack{I \in T \\ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg|\bigg)^{\frac{1}{2}}\bigg(\sum_{\substack{S \in \mathbb{S}_{l_2}}}\bigg|\bigcup_{\substack{J \in S \\ J \in \mathcal{J}_{n_2,m_2} }} J \bigg|\bigg)^{\frac{1}{2}}}_{\mathcal{B}}.\nonumber \\\end{aligned}$$ To estimate $\mathcal{A}$, one first notices that for any fixed $n, m, n_2, m_2, l, l_2$ and a fixed tree $T \in \mathbb{T}_{-l-l_2}$, a dyadic interval $I \in T \cap \mathcal{I}_{-n-n_2,-m-m_2}$ means that (i) $I \subseteq I_T$ where $I_T$ is the tree-top interval as implied by the *one-dimensional stopping-time decomposition - maximal intervals*; (ii) $I \cap \{Mf_1 \leq C_1 2^{-n-n_2+1}|F_1| \} \cap \{Mf_2 \leq C_1 2^{-m-m_2+1}|F_2| \} \neq \emptyset$.
By (i) and (ii), one can deduce that $$I_T \cap \{Mf_1 \leq C_1 2^{-n-n_2+1}|F_1| \} \cap \{Mf_2 \leq C_1 2^{-m-m_2+1}|F_2| \} \neq \emptyset.$$ As a consequence, $$\label{a_x} \sum_{\substack{T \in \mathbb{T}_{-l-l_2}}}\bigg|\bigcup_{\substack{I \in T \\ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg| \leq \sum_{\substack{T \in \mathbb{T}_{-l-l_2} \\ I_T \cap (\Omega^{-n-n_2,-m-m_2}_x)^c \neq \emptyset}}|I_T|.$$ A similar reasoning applies to the term involving intervals in the $y$-direction and generates $$\label{a_y} \sum_{\substack{S \in \mathbb{S}_{l_2}}}\bigg|\bigcup_{\substack{J \in S \\ J \in \mathcal{J}_{n_2,m_2}}} J \bigg| \leq \sum_{\substack{S \in \mathbb{S}_{l_2} \\J_S \cap (\Omega^{n_2,m_2}_y)^c \neq \emptyset}}|J_S|.$$ By applying the Cauchy-Schwarz inequality together with (\[a\_x\]) and (\[a\_y\]), one obtains $$\begin{aligned} \label{a_pre_en} \mathcal{A} \leq & \sup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}} \bigg(\sum_{l_2 \in \mathbb{Z}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2} \\ I_T \cap (\Omega^{-n-n_2,-m-m_2}_x)^c \neq \emptyset}}|I_T|\bigg)^{\frac{1}{2}}\cdot \bigg(\sum_{l_2 \in \mathbb{Z}}\sum_{\substack{S \in \mathbb{S}_{l_2} \\J_S \cap (\Omega^{n_2,m_2}_y)^c \neq \emptyset}}|J_S|\bigg) ^{\frac{1}{2}}.\end{aligned}$$ One then “completes” the expression (\[a\_pre\_en\]) to produce localized energy-like terms as follows. $$\begin{aligned} & \sup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}} \underbrace{\bigg[\sum_{l_2 \in \mathbb{Z}}(C_1 2^{-l-l_2}\|B^H\|_1)^2\sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\ I_T \cap (\Omega^{-n-n_2,-m-m_2}_x)^c \neq \emptyset}}|I_{T}|\bigg]^{\frac{1}{2}}}_{\mathcal{A}^1}\cdot \underbrace{\bigg[\sum_{l_2 \in \mathbb{Z}}(C_2 2^{l_2}\|\tilde{B}^H\|_1)^{2} \sum_{\substack{S \in \mathbb{S}_{l_2}\\ J_S \cap (\Omega^{n_2,m_2}_y)^c \neq \emptyset}} |J_S |\bigg]^{\frac{1}{2}}}_{\mathcal{A}^2} \nonumber \\ &\cdot 2^{l}\|B^H\|_1^{-1}\|\tilde{B}^H\|_1^{-1}.\nonumber \\ \end{aligned}$$ It is not difficult to recognize that $\mathcal{A}^1$ and $\mathcal{A}^2$ are $L^2$-energies. Moreover, they satisfy the stronger local energy estimates described in Proposition \[B\_en\]. $\mathcal{A}^1$ is indeed an $L^2$-energy localized to $\{Mf_1 \leq C_1 2^{-n-n_2}|F_1| \} \cap \{Mf_2 \leq C_1 2^{-m-m_2}|F_2| \}$. Then Proposition \[B\_en\] gives the estimate $$\label{a_1} \mathcal{A}^1 \lesssim (C_1 2^{-n-n_2})^{\frac{1}{p_1}-\theta_1} (C_1 2^{-m-m_2})^{\frac{1}{q_1} - \theta_2}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}},$$ for any $0 \leq \theta_1, \theta_2 < 1$ satisfying $\theta_1 + \theta_2 = \frac{1}{2}$. One applies the same reasoning to $\mathcal{A}^2$ to deduce that $$\label{a_2} \mathcal{A}^2 \lesssim C_2^{2}2^{n_2(\frac{1}{p_2} - \zeta_1)}2^{m_2(\frac{1}{q_2}- \zeta_2)}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}},$$ where $0 \leq \zeta_1, \zeta_2 < 1$ and $\zeta_1 + \zeta_2 = \frac{1}{2}$.
One can now combine the estimates for $\mathcal{A}^1$ (\[a\_1\]) and $\mathcal{A}^2$ (\[a\_2\]) to derive $$\begin{aligned} \mathcal{A} \lesssim & C_1^{2}C_2^{2}\sup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}2^{(-n-n_2)(\frac{1}{p_1}- \theta_1)}2^{(-m-m_2)(\frac{1}{q_1} - \theta_2)}2^{n_2(\frac{1}{p_2} - \zeta_1)}2^{m_2(\frac{1}{q_2}- \zeta_2)} \cdot \nonumber \\ & |F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}\cdot 2^{l}\|B^H\|_1^{-1}\|\tilde{B}^H\|_1^{-1}.\end{aligned}$$ One observes that the following two conditions are equivalent: $$\label{exp_1} \frac{1}{p_1} - \theta_1 = \frac{1}{p_2} - \zeta_1 \iff \frac{1}{q_1} - \theta_2 = \frac{1}{q_2} - \zeta_2.$$ The equivalence follows from the fact that $$\begin{aligned} \label{exp_2} &\frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2}, \nonumber \\ &\theta_1 + \theta_2 = \zeta_1 + \zeta_2 = \frac{1}{2} .\end{aligned}$$ With the choice $0 \leq \theta_1, \zeta_1 < 1$ with $\theta_1- \zeta_1 = \frac{1}{p_1} - \frac{1}{p_2}$, one has $$\label{a_estimate} \mathcal{A} \lesssim C_1^2 C_2^{2}2^{-n(\frac{1}{p_1} - \theta_1)}2^{-m(\frac{1}{q_1} - \theta_2)} |F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}\cdot 2^{l}\|B^H\|_1^{-1}\|\tilde{B}^H\|_1^{-1}.$$ (\[exp\_1\]) and (\[exp\_2\]) together impose the condition $$\label{pair_exp} \left|\frac{1}{p_1} - \frac{1}{p_2}\right| = \left|\frac{1}{q_1} - \frac{1}{q_2}\right| < \frac{1}{2}.$$ Without loss of generality, one can assume that $\frac{1}{p_1} \geq \frac{1}{p_2}$ and $\frac{1}{q_1} \leq \frac{1}{q_2}$. Then either (\[pair\_exp\]) holds or $$\frac{1}{p_1} - \frac{1}{p_2} = \frac{1}{q_2} - \frac{1}{q_1} \geq \frac{1}{2},$$ which implies $$\left|\frac{1}{p_1} - \frac{1}{q_2}\right| = \left|\frac{1}{p_2} - \frac{1}{q_1}\right| < \frac{1}{2}.$$ Then one can switch the roles of $g_1$ and $g_2$ to “pair” the functions as $f_1$ with $g_2$ and $f_2$ with $g_1$. A parallel argument can be applied to obtain the desired estimates. One can apply another Fubini-type argument to estimate $\mathcal{B}$ with $l, n$ and $m$ fixed. Such an argument again relies heavily on the localization. First of all, for any fixed $l_2 \in \mathbb{Z}$, $$\{I: I \in T, T \in \mathbb{T}_{-l-l_2} \}$$ is a disjoint collection of dyadic intervals.
Thus $$\sum_{\substack{T \in \mathbb{T}_{-l-l_2}}}\bigg|\bigcup_{\substack{I \in T \\ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg| \leq \bigg|\bigcup_{\substack{ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg|.$$ One then recalls the point-wise estimate stated in Claim \[ptwise\] to deduce $$\bigcup_{\substack{ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \subseteq \{ Mf_1 > C_1 2^{-n-n_2-10}|F_1|\} \cap \{ Mf_2 > C_1 2^{-m-m_2-10}|F_2|\},$$ and, for arbitrary but fixed $l_2 \in \mathbb{Z}$, $$\label{b_x} \sum_{\substack{T \in \mathbb{T}_{-l-l_2}}}\bigg|\bigcup_{\substack{I \in T \\ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg|\leq \big|\{ Mf_1 > C_1 2^{-n-n_2-10}|F_1|\} \cap \{ Mf_2 > C_1 2^{-m-m_2-10}|F_2|\} \big|.$$ A similar reasoning applies to the intervals in the $y$-direction and yields that for any fixed $l_2 \in \mathbb{Z}$, $$\label{b_y} \sum_{\substack{S \in \mathbb{S}_{l_2}}}\bigg|\bigcup_{\substack{J \in S \\ J \in \mathcal{J}_{n_2,m_2} }} J \bigg| \leq \big|\{ Mg_1 > C_2 2^{n_2-10}|G_1|\} \cap \{ Mg_2 > C_2 2^{m_2-10}|G_2|\} \big|.$$ To apply the above estimates, one notices that there exists some $\tilde{l}_2 \in \mathbb{Z}$, possibly depending on $n, m, l, n_2, m_2$, such that $$\begin{aligned} \mathcal{B} =& \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigg(\sum_{T \in \mathbb{T}_{-l-\tilde{l}_2}}\bigg|\bigcup_{\substack{ I \in T \\ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg| \bigg)^{\frac{1}{2}}\bigg(\sum_{S \in \mathbb{S}_{\tilde{l}_2}}\bigg|\bigcup_{\substack{ J \in S \\ J \in \mathcal{J}_{n_2,m_2}}} J \bigg| \bigg)^{\frac{1}{2}}. \nonumber\end{aligned}$$ One can further “complete” $\mathcal{B}$ in the following manner for an appropriate use of the Cauchy-Schwarz inequality. $$\begin{aligned} \mathcal{B} = & \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigg((C_12^{-n-n_2}|F_1|)^{\mu(1+\epsilon)}(C_12^{-m-m_2}|F_2|)^{(1-\mu)(1+\epsilon)}\sum_{T \in \mathbb{T}_{-l-\tilde{l}_2}}\bigg|\bigcup_{\substack{ I \in T \\ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg| \bigg)^{\frac{1}{2}} \cdot \nonumber \\ & \quad \quad \ \ \bigg((C_22^{n_2}|G_1|)^{\mu(1+\epsilon)}(C_2 2^{m_2}|G_2|)^{(1-\mu)(1+\epsilon)}\sum_{S \in \mathbb{S}_{\tilde{l}_2}}\bigg|\bigcup_{\substack{ J \in S \\ J \in \mathcal{J}_{n_2,m_2}}} J \bigg|\bigg)^{\frac{1}{2}}\nonumber \\ &\quad \quad \ \ \cdot 2^{n\cdot\frac{1}{2}\mu(1+\epsilon)}2^{m\cdot \frac{1}{2}(1-\mu)(1+\epsilon)}|F_1|^{-\frac{1}{2}\mu(1+\epsilon)}|F_2|^{-\frac{1}{2}(1-\mu)(1+\epsilon)}|G_1|^{-\frac{1}{2}\mu(1+\epsilon)}|G_2|^{-\frac{1}{2}(1-\mu)(1+\epsilon)} \nonumber \\ \leq & \underbrace{\bigg[\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}(C_12^{-n-n_2}|F_1|)^{\mu(1+\epsilon)}(C_12^{-m-m_2}|F_2|)^{(1-\mu)(1+\epsilon)}\sum_{T \in \mathbb{T}_{-l-\tilde{l}_2}}\bigg|\bigcup_{\substack{ I \in T \\ I \in \mathcal{I}_{-n-n_2,-m-m_2}}} I \bigg|\bigg]^{\frac{1}{2}}}_{\mathcal{B}^1} \cdot \nonumber \\ &\underbrace{\bigg[\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}(C_22^{n_2}|G_1|)^{\mu(1+\epsilon)}(C_2 2^{m_2}|G_2|)^{(1-\mu)(1+\epsilon)}\sum_{S \in \mathbb{S}_{\tilde{l}_2}}\bigg|\bigcup_{\substack{ J \in S \\ J \in \mathcal{J}_{n_2,m_2}}} J \bigg|\bigg]^{\frac{1}{2}}}_{\mathcal{B}^2}\nonumber \\ &\ \ \cdot 2^{n\cdot\frac{1}{2}\mu(1+\epsilon)}2^{m\cdot \frac{1}{2}(1-\mu)(1+\epsilon)}|F_1|^{-\frac{1}{2}\mu(1+\epsilon)}|F_2|^{-\frac{1}{2}(1-\mu)(1+\epsilon)}|G_1|^{-\frac{1}{2}\mu(1+\epsilon)}|G_2|^{-\frac{1}{2}(1-\mu)(1+\epsilon)}, \nonumber \end{aligned}$$ for any $\epsilon > 0$, $0 < \mu <1$, where the second inequality follows from the
Cauchy-Schwarz inequality. To estimate $\mathcal{B}^1$, one recalls (\[b\_x\]) (which holds for any fixed $l_2 \in \mathbb{Z}$) to obtain $$\begin{aligned} \label{B_1} \mathcal{B}^1 \lesssim & \bigg[\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}(C_1 2^{-n-n_2}|F_1|)^{\mu(1+\epsilon)}(C_1 2^{-m-m_2}|F_2|)^{(1-\mu)(1+\epsilon)}\big| \{ Mf_1 > C_1 2^{-n-n_2}|F_1|\} \cap \{ Mf_2 > C_1 2^{-m-m_2}|F_2|\} \big|\bigg]^{\frac{1}{2}} \nonumber \\ \leq & \bigg[\int (Mf_1(x))^{\mu(1+\epsilon)}(Mf_2(x))^{(1-\mu)(1+\epsilon)} dx\bigg]^{\frac{1}{2}} \nonumber \\ \leq & \Bigg[\bigg(\int (Mf_1(x))^{\mu(1+\epsilon) \frac{1}{\mu}} dx\bigg)^{\mu}\bigg(\int (Mf_2(x))^{(1-\mu)(1+\epsilon) \frac{1}{1-\mu}} dx\bigg)^{1-\mu}\Bigg]^{\frac{1}{2}},\end{aligned}$$ where the last step follows from Hölder’s inequality. One can now use the mapping property of the Hardy-Littlewood maximal operator $M: L^{p} \rightarrow L^{p}$ for any $p >1$ to deduce that $$\begin{aligned} \label{piece} &\bigg(\int (Mf_1(x))^{1+\epsilon} dx\bigg)^{\mu} \lesssim \|f_1\|_{1+\epsilon}^{(1+\epsilon)\mu} = |F_1|^{\mu}, \nonumber \\ &\bigg(\int (Mf_2(x))^{1+\epsilon} dx\bigg)^{1-\mu} \lesssim \|f_2\|_{1+\epsilon}^{(1+\epsilon)(1-\mu)} = |F_2|^{1-\mu}.\end{aligned}$$ By plugging the estimate (\[piece\]) into (\[B\_1\]), $$\label{B_1_final} \mathcal{B}^1 \lesssim |F_1|^{\frac{\mu}{2}}|F_2|^{\frac{1-\mu}{2}}.$$ By the same argument with $-n-n_2$ and $-m-m_2$ replaced by $n_2$ and $m_2$ respectively, one obtains $$\label{B_2_final} \mathcal{B}^2 \lesssim |G_1|^{\frac{\mu}{2}}|G_2|^{\frac{1-\mu}{2}}.$$ Combining the estimates for $\mathcal{B}^1$ (\[B\_1\_final\]) and $\mathcal{B}^2$ (\[B\_2\_final\]) yields $$\label{b_estimate} \mathcal{B} \lesssim |F_1|^{-\frac{\mu}{2}\epsilon}|F_2|^{-\frac{1-\mu}{2}\epsilon}|G_1|^{-\frac{\mu}{2}\epsilon}|G_2|^{-\frac{1-\mu}{2}\epsilon}2^{n\cdot\frac{1}{2}\mu(1+\epsilon)}2^{m\cdot \frac{1}{2}(1-\mu)(1+\epsilon)}.$$ By applying the results for both $\mathcal{A}$ (\[a\_estimate\]) and $\mathcal{B}$ (\[b\_estimate\]), one concludes with the following estimate for (\[ns\]). $$\begin{aligned} \label{ns_fb} & \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z} \\ l_2 \in \mathbb{Z}}}\sum_{\substack{T \in \mathbb{T}_{-l-l_2}\\S \in \mathbb{S}_{l_2}}} \bigg|\bigcup_{\substack{I \times J \in T\times S \\ I \times J \in \mathcal{I}_{-n-n_2,-m-m_2} \times \mathcal{J}_{n_2,m_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}}} I \times J \bigg| \nonumber \\ \lesssim & C_1^{2} C_2^{2}2^{-n(\frac{1}{p_1} - \theta_1-\frac{1}{2}\mu(1+\epsilon))}2^{-m(\frac{1}{q_1}- \theta_2-\frac{1}{2}(1-\mu)(1+\epsilon))} \nonumber \\ & |F_1|^{\frac{1}{p_1}-\frac{\mu}{2}\epsilon}|F_2|^{\frac{1}{q_1}-\frac{1-\mu}{2}\epsilon}|G_1|^{\frac{1}{p_2}-\frac{\mu}{2}\epsilon}|G_2|^{\frac{1}{q_2}-\frac{1-\mu}{2}\epsilon}\cdot 2^{l}\|B^H\|_1^{-1}\|\tilde{B}^H\|_1^{-1},\end{aligned}$$ for any $0 \leq \theta_1, \theta_2 < 1$ with $\theta_1 + \theta_2 = \frac{1}{2}$, $0 <\mu<1$ and $\epsilon > 0$. One can now interpolate between the estimates obtained by the two different approaches, namely (\[ns\_sp\]) and (\[ns\_fb\]), to derive the following bound for (\[ns\]).
$$\begin{aligned} \label{ns_sum_result} &C_1^{2}C_2^{2} 2^{-\frac{k_2\gamma\lambda}{2}}2^{-n(\frac{1}{p_1}-\theta_1-\frac{1}{2}\mu(1+\epsilon))(1-\lambda)}2^{-m(\frac{1}{q_1} - \theta_2-\frac{1}{2}(1-\mu)(1+\epsilon))(1-\lambda)} \cdot \nonumber \\ & (2^{l})^{\lambda\frac{(1+\delta)}{2}+(1-\lambda)}\|B^H\|_1^{-\lambda\frac{(1+\delta)}{2}-(1-\lambda)}\|\tilde{B}^H\|_1^{-\lambda\frac{(1+\delta)}{2}-(1-\lambda)} \cdot \nonumber \\ & |F_1|^{\lambda \frac{\mu_1(1+\delta)}{2} + (1-\lambda)(\frac{1}{p_1}-\frac{\mu}{2}\epsilon)}|F_2|^{\lambda \frac{\mu_2(1+\delta)}{2} + (1-\lambda)(\frac{1}{q_1}-\frac{1-\mu}{2}\epsilon)}|G_1|^{\lambda \frac{\nu_1(1+\delta)}{2}+(1-\lambda)(\frac{1}{p_2}-\frac{\mu}{2}\epsilon)}|G_2|^{\lambda\frac{\nu_2(1+\delta)}{2}+(1-\lambda)(\frac{1}{q_2}-\frac{1-\mu}{2}\epsilon)},\end{aligned}$$ for some $0 \leq \lambda \leq 1$. By applying (\[ns\_sum\_result\]) to (\[form00\_set\]), one has $$\begin{aligned} & |\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| \nonumber \\ \lesssim & C_1^{3}C_2^{3} C_3^{3} \| h \|_{L^s}\cdot \nonumber \\ & \sum_{\substack{n > 0 \\ m > 0 \\ l > 0 \\ k_1 < 0 \\ k_2 \leq K}}2^{-l\lambda(1-\frac{1+\delta}{2})}2^{k_1}2^{k_2(1-\frac{\lambda\gamma}{2})}2^{-n(\frac{1}{p_1}-\theta_1-\frac{1}{2}\mu(1+\epsilon))(1-\lambda)}2^{-m(\frac{1}{q_1}-\theta_2-\frac{1}{2}(1-\mu)(1+\epsilon))(1-\lambda)} \nonumber \\ &\cdot |F_1|^{\lambda \frac{\mu_1(1+\delta)}{2} + (1-\lambda)(\frac{1}{p_1}-\frac{\mu}{2}\epsilon)}|F_2|^{\lambda \frac{\mu_2(1+\delta)}{2} + (1-\lambda)(\frac{1}{q_1}-\frac{1-\mu}{2}\epsilon)}|G_1|^{\lambda \frac{\nu_1(1+\delta)}{2}+(1-\lambda)(\frac{1}{p_2}-\frac{\mu}{2}\epsilon)}|G_2|^{\lambda\frac{\nu_2(1+\delta)}{2}+(1-\lambda)(\frac{1}{q_2}-\frac{1-\mu}{2}\epsilon)} \nonumber \\ & \cdot \|B^H\|_1^{\lambda(1-\frac{1+\delta}{2})}\|\tilde{B}^H\|_1^{\lambda(1-\frac{1+\delta}{2})}. \end{aligned}$$ One notices that there exist $\epsilon > 0$, $0 < \mu < 1$ and $0 <\theta_1<\frac{1}{2}$ such that $$\begin{aligned} \label{nec_condition} &\frac{1}{p_1}-\theta_1-\frac{1}{2}\mu(1+\epsilon) > 0, \nonumber \\ &\frac{1}{q_1} - \theta_2-\frac{1}{2}(1-\mu)(1+\epsilon) > 0.\end{aligned}$$ Observe that (\[nec\_condition\]) imposes a necessary condition on the range of exponents. In particular, $$\label{>1/2} \frac{1}{p_1} + \frac{1}{q_1} - (\theta_1 + \theta_2) > \frac{1}{2}\mu(1+ \epsilon) + \frac{1}{2}(1-\mu)(1+\epsilon).$$ Using the fact that $\theta_1 + \theta_2 = \frac{1}{2}$, one can rewrite (\[>1/2\]) as $$\frac{1}{p_1} + \frac{1}{q_1} > 1+ \frac{\epsilon}{2}.$$ As a consequence, the case $1< \frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2} < 2 $ can be treated by the current argument. Meanwhile, the case $0 < \frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2} \leq 1$ follows from a simpler argument which resembles the one for the estimates involving $L^{\infty}$-norms and will be postponed to Section 9. Under (\[nec\_condition\]), the geometric series in $2^{-n}$ and $2^{-m}$ are convergent. The convergence of the series in $2^{k_1}$ is trivial. One also observes that for any $0 < \lambda < 1$ and $0 < \delta < 1$, $$\lambda(1-\frac{1+\delta}{2}) > 0,$$ which implies that the series in $2^{-l}$ is convergent. One can separate the cases $k_2 > 0 $ and $k_2 \leq 0$ and select $\gamma >1$ in each case to make the series in $2^{k_2}$ convergent.
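Concretely, this is the same splitting device as in the previous chapter; in schematic form (the threshold $\gamma > \frac{2}{\lambda}$ below is only one admissible choice), $$\sum_{k_2 \leq K} 2^{k_2(1-\frac{\lambda\gamma}{2})} = \sum_{k_2 < 0} 2^{k_2(1-\frac{\lambda\gamma}{2})} + \sum_{0 \leq k_2 \leq K} 2^{k_2(1-\frac{\lambda\gamma}{2})},$$ where for the first sum one takes $\gamma > 1$ close to $1$ so that $1-\frac{\lambda\gamma}{2} > 0$ and the series is geometric and convergent, while for the second sum one takes $\gamma > \frac{2}{\lambda}$ so that the exponent is negative and the sum is bounded uniformly in $K$; this is legitimate since (\[rec\_area\_hybrid\]) holds for every $\gamma > 1$.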
Therefore, one can estimate the multilinear form by $$\begin{aligned} & |\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| \nonumber \\ \lesssim & C_1^3 C_2^3 C_3^{3}\| h \|_{L^s} \|B^H\|_1^{\lambda(1-\frac{1+\delta}{2})}\|\tilde{B}^H\|_1^{\lambda(1-\frac{1+\delta}{2})} \nonumber \\ &\cdot |F_1|^{\lambda \frac{\mu_1(1+\delta)}{2} + (1-\lambda)(\frac{1}{p_1}-\frac{\mu}{2}\epsilon)}|F_2|^{\lambda \frac{\mu_2(1+\delta)}{2} + (1-\lambda)(\frac{1}{q_1}-\frac{1-\mu}{2}\epsilon)}|G_1|^{\lambda \frac{\nu_1(1+\delta)}{2}+(1-\lambda)(\frac{1}{p_2}-\frac{\mu}{2}\epsilon)}|G_2|^{\lambda\frac{\nu_2(1+\delta)}{2}+(1-\lambda)(\frac{1}{q_2}-\frac{1-\mu}{2}\epsilon)}, \nonumber \\\end{aligned}$$ where one can apply Proposition \[B\_global\_norm\] to derive $$\begin{aligned} & \|B^H\|_1 \lesssim |F_1|^{\rho}|F_2|^{1-\rho}, \nonumber \\ & \|\tilde{B}^H\|_1 \lesssim |G_1|^{\rho'}|G_2|^{1-\rho'},\end{aligned}$$ with the corresponding exponents positive, as guaranteed by the fact that $0 < \lambda,\delta < 1$. One thus obtains $$\begin{aligned} \label{exp00} |\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}|\lesssim& C_1^3 C_2^3 C_3^{3}\|h\|_s |F_1|^{\lambda \frac{\mu_1(1+\delta)}{2} + (1-\lambda)(\frac{1}{p_1}-\frac{\mu}{2}\epsilon) + \rho\lambda(1-\frac{1+\delta}{2})}|F_2|^{\lambda \frac{\mu_2(1+\delta)}{2} + (1-\lambda)(\frac{1}{q_1}-\frac{1-\mu}{2}\epsilon) + (1-\rho)\lambda(1-\frac{1+\delta}{2})} \nonumber \\ & \cdot |G_1|^{\lambda \frac{\nu_1(1+\delta)}{2}+(1-\lambda)(\frac{1}{p_2}-\frac{\mu}{2}\epsilon)+ \rho'\lambda(1-\frac{1+\delta}{2})}|G_2|^{\lambda\frac{\nu_2(1+\delta)}{2}+(1-\lambda)(\frac{1}{q_2}-\frac{1-\mu}{2}\epsilon)+(1-\rho')\lambda(1-\frac{1+\delta}{2})}.\end{aligned}$$ With a slight abuse of notation, we use $\tilde{p_i}$ and $\tilde{q_i}$, $i = 1,2$, to denote the exponents $p_i$ and $q_i$ appearing in the argument above; from now on, $p_i$ and $q_i$ stand for the boundedness exponents specified in the main theorem. One has the freedom to choose $1 < \tilde{p_i}, \tilde{q_i}< \infty$, $0 < \mu,\lambda < 1$ and $\epsilon > 0 $ such that $$\begin{aligned} \label{exp_tilde} & \lambda \frac{\mu_1(1+\delta)}{2} + (1-\lambda)(\frac{1}{\tilde{p_1}}-\frac{\mu}{2}\epsilon) + \rho\lambda(1-\frac{1+\delta}{2}) = \frac{1}{p_1} \nonumber \\ & \lambda \frac{\mu_2(1+\delta)}{2} + (1-\lambda)(\frac{1}{\tilde{q_1}}-\frac{1-\mu}{2}\epsilon) + (1-\rho)\lambda(1-\frac{1+\delta}{2}) = \frac{1}{q_1} \nonumber \\ & \lambda \frac{\nu_1(1+\delta)}{2}+(1-\lambda)(\frac{1}{\tilde{p_2}}-\frac{\mu}{2}\epsilon)+ \rho'\lambda(1-\frac{1+\delta}{2}) = \frac{1}{p_2} \nonumber \\ & \lambda\frac{\nu_2(1+\delta)}{2}+(1-\lambda)(\frac{1}{\tilde{q_2}}-\frac{1-\mu}{2}\epsilon)+(1-\rho')\lambda(1-\frac{1+\delta}{2}) = \frac{1}{q_2}.\end{aligned}$$ To see that the above equations can hold, one can view the parts without $\tilde{p_i}$ and $\tilde{q_i}$ as perturbations which can be made small. More precisely, when $0 < \delta < 1$ is close to $1$, $$\lambda(1-\frac{1+\delta}{2}) \ll 1.$$ When $0 < \lambda < 1$ is close to $0$, one has $$\lambda \frac{\mu_1(1+\delta)}{2}, \lambda \frac{\mu_2(1+\delta)}{2}, \lambda \frac{\nu_1(1+\delta)}{2}, \lambda\frac{\nu_2(1+\delta)}{2} \ll 1$$ and $$\begin{aligned} &\frac{1}{p_i} - (1-\lambda)(\frac{1}{\tilde{p_i}}-\frac{\mu}{2}\epsilon) \ll 1,\nonumber \\ & \frac{1}{q_i} - (1-\lambda)(\frac{1}{\tilde{q_i}}-\frac{1-\mu}{2}\epsilon)\ll 1,\end{aligned}$$ for $ i = 1,2$.
It is also necessary to check that $\tilde{p_i}$ and $\tilde{q_i}$ satisfy the conditions which have been used to obtain (\[exp00\]), namely $$\begin{aligned} \frac{1}{\tilde{p_1}} + \frac{1}{\tilde{q_1}} = & \frac{1}{\tilde{p_2}} + \frac{1}{\tilde{q_2}} > 1.\end{aligned}$$ One can easily verify the first equation and the second inequality by manipulating (\[exp\_tilde\]). As a result, we have derived that $$|\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| \lesssim C_1^3 C_2^3 C_3^{3}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}\|h\|_{L^s}.$$ Proof of Theorem \[thm\_weak\_inf\_mod\] for $\Pi_{\text{flag}^{\#_1} \otimes \text{flag}^{\#_2}}$ - Haar Model =============================================================================================================== One can mimic the proof in Chapter 6 with a change of perspective on the size estimates. More precisely, one applies trivial size estimates for the functions $f_2$ and $g_2$ lying in $L^{\infty}$, while respecting the fact that $f_1$ and $g_1$ may lie in $L^p$ for any $p >1$. This perspective is reflected in the stopping-time decomposition and in the definition of the exceptional set. Localization ----------- One defines $$\Omega := \Omega^1 \cup \Omega^2,$$ where $$\begin{aligned} \Omega^1 := &\bigcup_{n_1 \in \mathbb{Z}}\{Mf_1 > C_1 2^{n_1}\|f_1\|_{p}\} \times \{Mg_1 > C_2 2^{-n_1}\|g_1\|_{p}\}, \nonumber \\ \Omega^2 := & \{SSh > C_3 \|h\|_{L^s}\}, \nonumber \\\end{aligned}$$ and $$\tilde{\Omega} := \{ M\chi_{\Omega} > \frac{1}{100}\}.$$ Let $$E' := E \setminus \tilde{\Omega}.$$ It is not difficult to check that, provided $C_1, C_2$ and $C_3$ are sufficiently large, $|E'| \sim |E|$, where $|E|$ can be assumed to be 1. It suffices to prove that the multilinear form defined as $$\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}, \chi_{E'}) := \langle \Pi_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}), \chi_{E'} \rangle$$ satisfies the restricted weak-type estimate $$|\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \lesssim |F_1|^{\frac{1}{p_1}} |G_1|^{\frac{1}{p_2}} |F_2|^{\frac{1}{q_1}} |G_2|^{\frac{1}{q_2}} \|h\|_{L^{s}(\mathbb{R}^2)}.$$ Summary of stopping-time decompositions. ---------------------------------------- I. Tensor-type stopping-time decomposition I on $\mathcal{I} \times \mathcal{J}$ $\longrightarrow$ $I \times J \in \mathcal{I}'_{-n-n_2} \times \mathcal{J}'_{n_2}$ $(n_2 \in \mathbb{Z},\ n > 0)$
II. General two-dimensional level sets stopping-time decomposition on $\mathcal{I} \times \mathcal{J}$ $\longrightarrow$ $I \times J \in \mathcal{R}_{k_1,k_2}$ $(k_1 <0,\ k_2 \leq K)$, where $$\begin{aligned} \mathcal{I}'_{-n-n_2} := & \{ I \in \mathcal{I} \setminus \mathcal{I}'_{-n-n_2+1}: \left| I \cap \Omega'^x_{-n-n_2}\right| > \frac{1}{10}|I| \}, \nonumber \\ \mathcal{J}'_{n_2} := & \{ J \in \mathcal{J} \setminus \mathcal{J}'_{n_2+1}: \left| J \cap \Omega'^y_{n_2}\right| > \frac{1}{10}|J| \},\end{aligned}$$ with $$\begin{aligned} \Omega'^x_{-n-n_2} := &\{Mf_1> C_1 2^{-n-n_2}\|f_1\|_p \}, \nonumber \\ \Omega'^y_{n_2} := & \{Mg_1> C_2 2^{n_2} \|g_1\|_p \}.\end{aligned}$$ Application of stopping-time decompositions ------------------------------------------- One can now apply the stopping-time decompositions and follow the same argument as in Chapter 6 to deduce that $$\begin{aligned} \label{form11_inf} & |\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \nonumber\\ \lesssim &\bigg|\displaystyle \sum_{\substack{n> 0 \\ k_1 < 0 \\ k_2 \leq K}} \sum_{n_2 \in \mathbb{Z}}\sum_{\substack{I \times J \in \mathcal{I}'_{-n-n_2} \times \mathcal{J}'_{n_2}\\I \times J \in \mathcal{R}_{k_1,k_2}}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle \langle \tilde{B}_J^{\#_2,H} (g_1,g_2),{\varphi}_J^{1,H} \rangle \langle h, \psi_I^{2,H} \otimes \psi_J^{2,H} \rangle \langle \chi_{E'},\psi_I^{3,H} \otimes \psi_J^{3,H} \rangle \bigg| \nonumber \\\nonumber \\ \lesssim & \sum_{\substack{n> 0 \\ k_1 < 0 \\ k_2 \leq K}} \sum_{n_2 \in \mathbb{Z}}\sup_{I \in \mathcal{I}'_{-n-n_2}} \frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \cdot \sup_{J \in \mathcal{J}'_{n_2}}\frac{| \langle \tilde{B}_J^{\#_2,H} (g_1,g_2),{\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}}\cdot C_3 2^{k_1} \| h \|_{L^s} 2^{k_2} \cdot \nonumber \\ &\quad \quad \quad \bigg|\big(\bigcup_{R\in \mathcal{R}_{k_1,k_2}} R\big) \cap \big(\bigcup_{I \in \mathcal{I}'_{-n-n_2}} I \times \bigcup_{J \in \mathcal{J}'_{n_2}}J\big)\bigg|.
\end{aligned}$$ To estimate $\displaystyle \sup_{I \in \mathcal{I}'_{-n-n_2}} \frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} $, one can now apply Lemma \[B\_size\] with $S:= \{Mf_1 \leq C_1 2^{-n-n_2}\|f_1\|_{p} \}$ and obtain $$\sup_{I \in \mathcal{I}'_{-n-n_2}}\frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \lesssim \sup_{K \cap S \neq \emptyset}\frac{|\langle f_1, {\varphi}^1_K \rangle|}{|K|^{\frac{1}{2}}} \sup_{K \cap S \neq \emptyset} \frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}},$$ where by the definition of $S$, $$\sup_{K \cap S \neq \emptyset}\frac{|\langle f_1, {\varphi}^1_K \rangle|}{|K|^{\frac{1}{2}}} \lesssim C_12^{-n-n_2}\|f_1\|_p,$$ and by the fact that $f_2 \in L^{\infty}$, $$\sup_{K \cap S \neq \emptyset} \frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}} \lesssim \|f_2\|_{\infty}.$$ As a result, $$\label{est_x} \sup_{I \in \mathcal{I}'_{-n-n_2}}\frac{|\langle B_I^{\#_1,H}(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \lesssim C_12^{-n-n_2}\|f_1\|_p\|f_2\|_{\infty}.$$ By a similar reasoning, $$\label{est_y} \sup_{J \in \mathcal{J}'_{n_2}}\frac{|\langle \tilde{B}_J^{\#_2,H}(g_1,g_2),{\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}} \lesssim C_2 2^{n_2}\|g_1\|_p\|g_2\|_{\infty}.$$ Combining the estimates (\[est\_x\]) and (\[est\_y\]) with (\[form11\_inf\]), one concludes that $$\begin{aligned} |\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#2}}| \lesssim &C_1 C_2 C_3^2 \sum_{\substack{n > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{-n}\|f_1\|_p \|f_2\|_{\infty}\|g_1\|_p \|g_2\|_{\infty}\, 2^{k_1} \| h \|_{L^s} 2^{k_2} \cdot \sum_{n_2 \in \mathbb{Z}}\bigg|\big(\bigcup_{R\in \mathcal{R}_{k_1,k_2}} R\big) \cap \big(\bigcup_{I \in \mathcal{I}'_{-n-n_2}} I \times \bigcup_{J \in \mathcal{J}'_{n_2}}J\big)\bigg|\nonumber \\ \lesssim &C_1 C_2 C_3^2 \sum_{\substack{n > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{-n}\|f_1\|_p \|f_2\|_{\infty}\|g_1\|_p \|g_2\|_{\infty}\, 2^{k_1(1-\frac{s}{2})} \| h \|_{L^s} 2^{k_2(1-\frac{\gamma}{2})}, \nonumber \end{aligned}$$ where the last inequality follows from the sparsity condition. With a proper choice of $\gamma >1$, one obtains the desired estimate. Proof of Theorem \[thm\_weak\_inf\_mod\] for $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$ - Haar Model ===================================================================================================== One interesting fact is that when $$\label{easy_case} \frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2} \leq 1,$$ Theorem \[thm\_weak\_mod\] can be proved by a simpler argument, as remarked in Chapter 7. Theorem \[thm\_weak\_inf\_mod\] for the model $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$ can then be viewed as a sub-case and proved by the same argument. The key idea is that in the case specified in (\[easy\_case\]), one no longer needs the localization of the operator $B$ in the proof. Let $$\frac{1}{t} := \frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2};$$ then the condition on the exponents (\[easy\_case\]) translates to $$t \geq 1.$$ Localization.
------------- One first defines $$\Omega := \Omega^1 \cup \Omega^2,$$ where $$\begin{aligned} \displaystyle \Omega^1 := &\bigcup_{l_2 \in \mathbb{Z}} \{MB > C_1 2^{-l_2}\| B\|_t\} \times \{M\tilde{B} > C_2 2^{l_2}\|\tilde{B}\|_t\}, \nonumber \\ \Omega^2 := & \{SSh > C_3 \|h\|_{L^s}\}, \nonumber \\\end{aligned}$$ and $$\tilde{\Omega} := \{ M\chi_{\Omega} > \frac{1}{100}\}.$$ Let $$E' := E \setminus \tilde{\Omega}.$$ We note that $t \geq 1$ allows one to use the mapping properties of the Hardy-Littlewood maximal operator, which play an essential role in the estimate of $|\Omega|$. A straightforward computation shows $|E'| \sim |E|$ given that $C_1, C_2$ and $C_3$ are sufficiently large. It suffices to assume that $|E'| \sim |E| = 1$ and to prove that the multilinear form $$\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}, \chi_{E'}) := \langle \Pi_{\text{flag}^{0} \otimes \text{flag}^{0}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}), \chi_{E'} \rangle$$ satisfies the following restricted weak-type estimate $$|\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| \lesssim |F_1|^{\frac{1}{p_1}} |G_1|^{\frac{1}{p_2}} |F_2|^{\frac{1}{q_1}} |G_2|^{\frac{1}{q_2}} \|h\|_{L^{s}(\mathbb{R}^2)}.$$ Summary of stopping-time decompositions. ---------------------------------------- General two-dimensional level sets stopping-time decomposition on $ \mathcal{I} \times \mathcal{J}$ $\longrightarrow$ $I \times J \in \mathcal{R}_{k_1,k_2}$ $(k_1 <0,\ k_2 \leq K)$ One performs the *general two-dimensional level sets stopping-time decomposition* with respect to the hybrid maximal-square functions as specified in the definition of the exceptional set. It will be evident from the argument below that no stopping-time decomposition is necessary for the maximal functions involving $B$ and $\tilde{B}$. One brief explanation is that only “averages” of $B$ and $\tilde{B}$ are required, while the measure of the set where the averages are attained is not. As a consequence, macro-control of the averages is sufficient, and the stopping-time decomposition, which can be seen as a more delicate “slice-by-slice” or “level-by-level” partition, is not necessary.
More precisely, $$\begin{aligned} \label{form00_inf} |\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| = &\displaystyle \bigg|\sum_{\substack{ k_1 < 0 \\ k_2 \leq K}} \sum_{\substack{I \times J \in \mathcal{R}_{k_1,k_2}}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I(f_1,f_2),{\varphi}_I^{1,H} \rangle \langle \tilde{B}_J(g_1, g_2), {\varphi}_J^{1,H} \rangle \langle h, \psi_I^{2,H} \otimes \psi_J^{2,H} \rangle \langle \chi_{E'},\psi_I^{3,H} \otimes \psi_J^{3,H} \rangle \bigg|\nonumber \\ \lesssim & \sum_{\substack{k_1 < 0 \\ k_2 \leq K}}\displaystyle \sup_{I \times J \in \mathcal{I}\times \mathcal{J}} \bigg(\frac{|\langle B_I(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \frac{|\langle \tilde{B}_J(g_1, g_2), {\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}} \bigg) \cdot C_3^2 2^{k_1}\|h\|_s 2^{k_2} \bigg|\bigcup_{\substack{I \times J \in \mathcal{R}_{k_1,k_2}}}I \times J\bigg|.\end{aligned}$$ By the same reasoning applied in previous chapters, one has $$\bigg|\bigcup_{\substack{I \times J \in \mathcal{R}_{k_1,k_2}}}I \times J\bigg| \lesssim \min(C_3^{-1}2^{-k_1s}, C_3^{-\gamma}2^{-k_2 \gamma}),$$ for any $\gamma >1$. Meanwhile, an argument similar to the proof of Observation 2 in Section 7.2.2 implies that $$\frac{|\langle B_I(f_1,f_2),{\varphi}_I^{1,H} \rangle|}{|I|^{\frac{1}{2}}} \frac{|\langle \tilde{B}_J(g_1, g_2), {\varphi}_J^{1,H} \rangle|}{|J|^{\frac{1}{2}}} \lesssim C_1 C_2 \|B\|_t \|\tilde{B}\|_t,$$ for any $I \times J$ with $I \times J \cap \tilde{\Omega}^c \neq \emptyset$, which is the case for all contributing rectangles in the Haar model. As a consequence, $$|\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| \lesssim C_1 C_2 C_3^2 \sum_{\substack{k_1 < 0 \\ k_2 \leq K}}\|B\|_t \|\tilde{B}\|_t C_3 2^{k_1(1-\frac{s}{2})} \| h \|_{L^s(\mathbb{R}^2)} 2^{k_2(1-\frac{\gamma}{2})} \lesssim \|B\|_t \|\tilde{B}\|_t,$$ with an appropriate choice of $\gamma>1$. One can now invoke Lemma \[B\_global\_norm\] to complete the proof of Theorem \[thm\_weak\_mod\]. In particular, $$\begin{aligned} \label{B_easy} \|B\|_t \lesssim & \|f_1\|_{p_1} \|f_2\|_{q_1}, \nonumber \\ \|\tilde{B}\|_t \lesssim & \|g_1\|_{p_2} \|g_2\|_{q_2},\end{aligned}$$ while the case described in Theorem \[thm\_weak\_inf\_mod\] is the one with $q_1 = q_2 = \infty$, in which (\[B\_easy\]) can be rewritten as $$\|B\|_p \lesssim \|f_1\|_{p} \|f_2\|_{\infty},$$ $$\|\tilde{B}\|_p \lesssim \|g_1\|_{p} \|g_2\|_{\infty}.$$ 1. One notices that Theorem \[thm\_weak\_inf\_mod\] in the Haar model is proved directly with generic functions in $L^p$ and $L^s$ spaces for $1< p < \infty, 1 < s < 2$. 2. The above argument proves Theorem \[thm\_weak\_mod\] in the Haar model for $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$ with the range of exponents described by (\[easy\_case\]), which completes the proof of Theorem \[thm\_weak\_mod\] - Haar model.

Generalization to Fourier Case
==============================

We will first highlight where the assumption of the Haar model has been used in the proof, and then modify those partial arguments to prove the general case. We have used the following implications specific to the Haar model. 1. Let $\chi_{E'} := \chi_{E \setminus \tilde{\Omega}}$.
Then $$\label{Haar_loc_bipara} \langle \chi_{E'}, \phi^H_{I} \otimes \phi^H_J \rangle \neq 0 \iff I \times J \cap \tilde{\Omega}^c \neq \emptyset.$$ As a result, what contributes to the multilinear forms in the Haar model are the dyadic rectangles $I \times J \in \mathcal{R}$ satisfying $I \times J \cap \tilde{\Omega}^c \neq \emptyset$, a condition that we used heavily in the proofs of the theorems in the Haar model. 2. For any dyadic intervals $K$ and $I$ with $|K| \geq |I|$, $$\langle \phi^{3,H}_K, \phi^H_I \rangle \neq 0$$ if and only if $$K \supseteq I.$$ The non-degenerate case therefore imposes the geometric containment that we employed for the localizations of the operator $B$. 3. In the case where $(\phi^{3,H}_K)_K$ is a family of Haar wavelets, the observation highlighted as (\[haar\_biest\_cond\]) generates the biest trick (\[haar\_biest\]), which is essential in the energy estimates. We will focus on how to generalize the proofs of Theorem \[thm\_weak\_mod\] for $\Pi_{\text{flag}^{\#_1} \otimes \text{flag}^{\#_2}} $ and $\Pi_{\text{flag}^{0} \otimes \text{flag}^{0}}$ and discuss how to tackle the restrictions listed as $H(I), H(II)$ and $H(III)$. The generalizations of the arguments for the other model operators and for Theorem \[thm\_weak\_inf\_mod\] follow from the same ideas.

Generalized Proof of Theorem \[thm\_weak\_mod\] for $\Pi_{\text{flag}^{\#_1} \otimes \text{flag}^{\#_2}} $
----------------------------------------------------------------------------------------------------------

### Localization and generalization of $H(I)$

The argument for $\Pi_{\text{flag}^{\#_1} \otimes \text{flag}^{\#_2}} $ in Chapter 6 takes advantage of the localization of spatial variables, as stated in $H(I)$. The following lemma allows one to decompose the original bump function into bump functions with compact support so that a perfect localization in spatial variables can be achieved; it can be viewed as a generalized $H(I)$, and its proof is included in Chapter 3 of [@cw]. \[decomp\_compact\] Let $I \subseteq \mathbb{R}$ be an interval. Then any smooth bump function $\phi_I$ adapted to $I$ can be decomposed as $$\phi_I = \sum_{\tau \in \mathbb{N}} 2^{-100 \tau} \phi_I^{\tau}$$ where for each $\tau \in \mathbb{N}$, $\phi_I^{\tau}$ is a smooth bump function adapted to $I$ and $\text{supp}(\phi_I^{\tau}) \subseteq 2^{\tau} I$. If $\int \phi_I = 0$, then the functions $\phi_I^{\tau}$ can be chosen such that $\int \phi_I^{\tau} = 0$. The multilinear form associated to $\Pi_{\text{flag}^{\#_1} \otimes \text{flag}^{\#_2}} $ in the general case can now be rewritten as $$\begin{aligned} \label{compact} \Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#_2}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{xy}, \chi_{E'}) := \displaystyle \sum_{\tau_1,\tau_2 \in \mathbb{N}}2^{-100(\tau_1+\tau_2)}\sum_{I \times J \in \mathcal{R}}& \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B^{\#_1}_I(f_1,f_2),\phi_I^1 \rangle \langle \tilde{B}^{\#_2}_J(g_1,g_2), \phi_J^1 \rangle \nonumber \\ & \cdot \langle h, \phi_{I}^2 \otimes \phi_{J}^2 \rangle \langle \chi_{E'},\phi_{I}^{3,\tau_1} \otimes \phi^{3, \tau_2}_{J} \rangle.
\nonumber \\\end{aligned}$$ For $\tau_1, \tau_2 \in \mathbb{N}$ fixed, define [$$\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#_2}}^{\tau_1, \tau_2} (f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}, \chi_{E'}):= \sum_{I \times J \in \mathcal{R}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B^{\#_1}_I(f_1,f_2),\phi_I^1 \rangle \langle \tilde{B}^{\#_2}_J(g_1,g_2), \phi_J^1 \rangle \cdot \langle h, \phi_{I}^2 \otimes \phi_{J}^2 \rangle \langle \chi_{E'},\phi_{I}^{3,\tau_1} \otimes \phi^{3, \tau_2}_{J} \rangle.$$]{} It suffices to prove that for any fixed $\tau_1,\tau_2 \in \mathbb{N}$, $$\label{linear_fix_fourier} |\Lambda_{\text{flag}^{\#1} \otimes \text{flag}^{\#_2}}^{\tau_1, \tau_2}| \lesssim (2^{\tau_1+ \tau_2})^{\Theta}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}},$$ for some $0 < \Theta < 100$, thanks to the fast decay $2^{-100(\tau_1+\tau_2)}$ in the decomposition of the original multilinear form (\[compact\]). One first redefines the exceptional set with the replacement of $C_1$, $C_2$ and $C_3$ by $C_12^{10\tau_1}$, $C_2 2^{10\tau_2}$ and $C_3 2^{10\tau_1+10\tau_2}$ respectively. In particular, let $$\begin{aligned} & C_1^{\tau_1} := C_12^{10\tau_1}, \nonumber\\ & C_2^{\tau_2} := C_22^{10\tau_2}, \nonumber\\ & C_3^{\tau_1,\tau_2} := C_32^{10\tau_1+10\tau_2}. \nonumber\end{aligned}$$ Then define $$\begin{aligned} \displaystyle \Omega_1^{\tau_1, \tau_2} := &\bigcup_{\tilde{n} \in \mathbb{Z}}\{Mf_1 > C_1^{\tau_1} 2^{\tilde{n}}|F_1|\} \times \{Mg_1 > C_2^{\tau_2} 2^{-\tilde{n}}|G_1|\}\cup \nonumber \\ & \bigcup_{\tilde{\tilde{n}} \in \mathbb{Z}}\{Mf_2 > C_1^{\tau_1} 2^{\tilde{\tilde{n}}}|F_2|\} \times \{Mg_2 > C_2^{\tau_2} 2^{-\tilde{\tilde{n}}}|G_2|\}\cup \nonumber \\ &\bigcup_{\tilde{\tilde{\tilde{n}}} \in \mathbb{Z}}\{Mf_1 > C_1^{\tau_1} 2^{\tilde{\tilde{\tilde{n}}}}|F_1|\} \times \{Mg_2 > C_2^{\tau_2} 2^{-\tilde{\tilde{\tilde{n}}}}|G_2|\}\cup \nonumber \\ & \bigcup_{\tilde{\tilde{\tilde{\tilde{n}}}} \in \mathbb{Z}}\{Mf_2 > C_1^{\tau_1} 2^{\tilde{\tilde{\tilde{\tilde{n}}}} }|F_2|\} \times \{Mg_1 > C_2^{\tau_2} 2^{-\tilde{\tilde{\tilde{\tilde{n}}}} }|G_1|\},\nonumber \\ \Omega_2^{\tau_1,\tau_2} := & \{SSh > C_3^{\tau_1, \tau_2} \|h\|_{L^s(\mathbb{R}^2)}\}. \nonumber \\\end{aligned}$$ One also defines $$\begin{aligned} & \Omega^{\tau_1,\tau_2} := \Omega_1^{\tau_1,\tau_2} \cup \Omega_2^{\tau_1,\tau_2}, \nonumber \\ & \tilde{\Omega}^{\tau_1,\tau_2} := \{M(\chi_{\Omega^{\tau_1,\tau_2}})> \frac{1}{100} \}, \nonumber\\ & \tilde{\tilde{\Omega}}^{\tau_1,\tau_2} := \{M(\chi_{ \tilde{\Omega}^{\tau_1,\tau_2}})> \frac{1}{2^{2\tau_1+ 2\tau_2}} \},\end{aligned}$$ and finally $$\tilde{\Omega} := \bigcup_{\tau_1,\tau_2\in \mathbb{N}}\tilde{\tilde{\Omega}}^{\tau_1,\tau_2}.$$ It is not difficult to verify that $|\tilde{\Omega}| \ll 1$ given that $C_1, C_2$ and $C_3$ are sufficiently large. One can then define $E' := E \setminus \tilde{\Omega}$, where $|E'| \sim |E|$ as desired. For such $E'$, one has the following simple but essential observation. \[start\_point\] For any fixed $\tau_1, \tau_2 \in \mathbb{N}$ and any dyadic rectangle $I \times J$, $$\langle \chi_{E'},\phi_{I}^{3,\tau_1} \otimes \phi^{3, \tau_2}_{J} \rangle \neq 0$$ implies that $$I \times J \cap (\tilde{\Omega}^{\tau_1,\tau_2})^c \neq \emptyset.$$ We will prove the equivalent contrapositive statement.
Suppose that $I \times J \cap(\tilde{\Omega}^{\tau_1,\tau_2})^c = \emptyset$ or, equivalently, $I \times J \subseteq \tilde{\Omega}^{\tau_1,\tau_2}$. Then $$|2^{\tau_1}I \times 2^{\tau_2}J \cap \tilde{\Omega}^{\tau_1,\tau_2}| > \frac{1}{2^{2\tau_1+2\tau_2}}|2^{\tau_1}I \times 2^{\tau_2}J|,$$ which implies that $$2^{\tau_1}I \times 2^{\tau_2}J \subseteq \tilde{\tilde{\Omega}}^{\tau_1,\tau_2} \subseteq \tilde{\Omega}.$$ Since $E' \cap \tilde{\Omega} = \emptyset$, one can conclude that $$\langle \chi_{E'},\phi_{I}^{3,\tau_1} \otimes \phi^{3, \tau_2}_{J} \rangle = 0,$$ which completes the proof of the observation. \[st\_general\] Observation \[start\_point\] provides a starting point for the stopping-time decompositions with fixed parameters $\tau_1$ and $\tau_2$. More precisely, suppose that $\mathcal{R}$ is an arbitrary finite collection of dyadic rectangles. Then with fixed $\tau_1, \tau_2 \in \mathbb{N}$, let $\displaystyle \mathcal{R} := \bigcup_{n_1, n_2 \in \mathbb{Z}}\mathcal{I}^{\tau_1}_{-n_1} \times \mathcal{J}^{\tau_2}_{n_2}$ denote the *tensor-type stopping-time decomposition I - level sets* introduced in Chapter 6. Now $\mathcal{I}^{\tau_1}_{n_1}$ and $\mathcal{J}^{\tau_2}_{n_2}$ are defined in the same way as $\mathcal{I}_{n_1}$ and $\mathcal{J}_{n_2}$ with $C_1$ and $C_2$ replaced by $C_1^{\tau_1}$ and $C_2^{\tau_2}$. By the argument for Observation \[obs\_indice\] in Chapter 6, one can deduce the same conclusion: if $I \times J \in \mathcal{R}$ satisfies $I \times J \cap \tilde{\Omega}^c \neq \emptyset$, then $n_1 + n_2 < 0$. Due to Remark \[st\_general\], one can perform the stopping-time decompositions specified in Chapter 6 with $C_1, C_2$ and $C_3$ replaced by $C_1^{\tau_1}$, $C_2^{\tau_2}$ and $C_3^{\tau_1,\tau_2}$ respectively, and adopt the argument without issue. The only difference in the resulting estimate is the appearance of the factors $O(2^{50\tau_1})$, $O(2^{50\tau_2})$ and $O(2^{50\tau_1+50\tau_2})$, which is not a concern, as illustrated by (\[linear\_fix\_fourier\]). The only “black box” used in Chapter 6 is the local size estimates (Proposition \[size\_cor\]), which need a more careful treatment and will be explored in the next subsection.

### Local size estimates and generalization of H(II)

We will focus on the estimates of $\text{size}(\langle B^{\#_1}_I, {\varphi}_I\rangle)_I$, whose argument applies to $\text{size}(\langle \tilde{B}^{\#_2}_J, {\varphi}_J\rangle)_J$ as well. It suffices to prove Lemma \[B\_size\] in the Fourier case; the local size estimates described in Proposition \[size\_cor\] then follow immediately. One first attempts to apply Lemma \[decomp\_compact\] to create a setting of compactly supported bump functions so that the same localization described in Chapter 5 can be achieved. Suppose that $I \cap S \neq \emptyset$ for every $I \in \mathcal{I}'$. Then $$\begin{aligned} \text{size}_{\mathcal{I}'}((\langle B_I^{\#_1}, {\varphi}_I \rangle)_{I \in \mathcal{I}'}) =& \frac{|\langle B^{\#_1}_{I_0}(f_1,f_2),{\varphi}_{I_0}^1 \rangle|}{|I_0|^{\frac{1}{2}}}, \nonumber \\\end{aligned}$$ for some $I_0 \in \mathcal{I}'$ such that $I_0 \cap S \neq \emptyset$.
Consider $$\begin{aligned} \label{form_f} & \frac{|\langle B^{\#_1}_{I_0}(f_1,f_2),{\varphi}_{I_0}^1 \rangle|}{|I_0|^{\frac{1}{2}}} = \frac{1}{|I_0|^{\frac{1}{2}}}\bigg|\sum_{\tau_3,\tau_4 \in \mathbb{N}}2^{-100\tau_3}2^{-100\tau_4}\sum_{K: |K| \sim 2^{\#_1}|I_0|} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi^1_K \rangle \langle f_2, \phi^2_K \rangle \langle {\varphi}^{1,\tau_3}_{I_0}, \phi^{3,\tau_4}_K\rangle\bigg|, \end{aligned}$$ where ${\varphi}^1_{I}$ denotes an $L^{2}$-normalized smooth bump function adapted to $I$, ${\varphi}^{1,\tau_3}_{I}$ is an $L^{2}$-normalized bump function adapted to $I$ with $\text{supp}({\varphi}^{1,\tau_3}_{I}) \subseteq 2^{\tau_3}I$, and $\phi^{3,\tau_4}_K$ is an $L^2$-normalized bump function with $\text{supp}(\phi^{3,\tau_4}_K)\subseteq 2^{\tau_4}K$. By the compactness of the supports, if $$\langle {\varphi}^{1,\tau_3}_{I}, \phi^{3, \tau_4}_K\rangle \neq 0,$$ then $$2^{\tau_3} I \cap 2^{\tau_4}K \neq \emptyset.$$ Recalling also that $I \cap S \neq \emptyset$ and $|I| \leq |K|$, it follows that $$\label{geometry_fourier} \frac{\text{dist}(K,S)}{|K|} \lesssim 2^{\tau_3 + \tau_4}.$$ Therefore, one can apply (\[geometry\_fourier\]) and rewrite (\[form\_f\]) as $$\begin{aligned} \label{size_B_f} & \frac{|\langle B^{\#_1}_{I_0}(f_1,f_2),{\varphi}_{I_0}^1 \rangle|}{|I_0|^{\frac{1}{2}}} \nonumber \\ \leq &\sum_{\tau_3,\tau_4 \in \mathbb{N}} 2^{-100\tau_3}2^{-100\tau_4} \frac{1}{|I_0|} \sum_{K:|K|\sim 2^{\#_1}|I_0|}\frac{|\langle f_1, \phi_K^1 \rangle|}{|K|^{\frac{1}{2}}} \frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}} |\langle |I_0|^{\frac{1}{2}}{\varphi}^{1,\tau_3}_{I_0}, |K|^{\frac{1}{2}}\phi_K^{3,\tau_4} \rangle| \nonumber \\ \leq & \sum_{\tau_3,\tau_4 \in \mathbb{N}} 2^{-100\tau_3}2^{-100\tau_4}\frac{1}{|I_0|} \sup_{K:\frac{\text{dist}(K,S)}{|K|} \lesssim 2^{\tau_3 + \tau_4}}\frac{|\langle f_1, \phi_K^1 \rangle|}{|K|^{\frac{1}{2}}} \sup_{K:\frac{\text{dist}(K,S)}{|K|} \lesssim 2^{\tau_3 + \tau_4}}\frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}} \nonumber \\ & \quad \cdot \sum_{K:|K|\sim 2^{\#_1}|I_0|}|\langle |I_0|^{\frac{1}{2}}{\varphi}^{1,\tau_3}_{I_0}, |K|^{\frac{1}{2}}\phi_K^{3,\tau_4} \rangle|.
\end{aligned}$$ One notices that $$\begin{aligned} \label{size_f_1} & \sup_{K:\frac{\text{dist}(K,S)}{|K|} \lesssim 2^{\tau_3 + \tau_4}}\frac{|\langle f_1, \phi_K^1 \rangle|}{|K|^{\frac{1}{2}}} \nonumber \\ \lesssim& 2^{\tau_3+\tau_4}\sup_{K:\frac{\text{dist}(K,S)}{|K|} \lesssim 2^{\tau_3 + \tau_4}}\frac{|\langle f_1, \phi_{2^{\tau_3+\tau_4}K}^1 \rangle|}{|2^{\tau_3+\tau_4}K|^{\frac{1}{2}}} \nonumber\\ \leq & 2^{\tau_3+ \tau_4} \sup_{K' \cap S \neq \emptyset} \frac{|\langle f_1, \phi_{K'}^1 \rangle|}{|K'|^{\frac{1}{2}}},\end{aligned}$$ where $K' := 2^{\tau_3+\tau_4}K$, the interval with the same center as $K$ and length $2^{\tau_3 + \tau_4}|K|.$ Similarly, $$\label{size_f_2} \sup_{K:\frac{\text{dist}(K,S)}{|K|} \lesssim 2^{\tau_3 + \tau_4}}\frac{|\langle f_2, \phi_K^2 \rangle|}{|K|^{\frac{1}{2}}} \lesssim 2^{\tau_3+ \tau_4} \sup_{K' \cap S \neq \emptyset} \frac{|\langle f_2, \phi_{K'}^2 \rangle|}{|K'|^{\frac{1}{2}}}.$$ Moreover, $$\begin{aligned} \label{disjoint} & \sum_{K:|K|\sim 2^{\#_1}|I_0|}|\langle |I_0|^{\frac{1}{2}}{\varphi}^{1,\tau_3}_{I_0}, |K|^{\frac{1}{2}}\phi_K^{3,\tau_4} \rangle| \nonumber \\ \leq & \sum_{K:|K|\sim 2^{\#_1}|I_0|}\frac{1}{\left(1+\frac{\text{dist}(K,I_0)}{|K|}\right)^{100}} |I_0| \nonumber \\ \leq & |I_0| \sum_{k \in \mathbb{N}}k^{-100} \nonumber \\ \lesssim & |I_0|.\end{aligned}$$ By combining (\[size\_f\_1\]), (\[size\_f\_2\]) and (\[disjoint\]), one can estimate (\[size\_B\_f\]) as $$\begin{aligned} & \frac{|\langle B^{\#_1}_{I_0}(f_1,f_2),{\varphi}_{I_0}^1 \rangle|}{|I_0|^{\frac{1}{2}}} \nonumber \\ \lesssim & \frac{1}{|I_0|}\sum_{\tau_3,\tau_4 \in \mathbb{N}}2^{-100\tau_3}2^{-100\tau_4} 2^{2(\tau_3+ \tau_4)} \sup_{K' \cap S \neq \emptyset} \frac{|\langle f_1, \phi_{K'}^1 \rangle|}{|K'|^{\frac{1}{2}}} \sup_{K' \cap S \neq \emptyset}\frac{|\langle f_2, \phi_{K'}^2 \rangle|}{|K'|^{\frac{1}{2}}} |I_0|\nonumber \\ \lesssim & \sup_{K' \cap S \neq \emptyset} \frac{|\langle f_1, \phi_{K'}^1 \rangle|}{|K'|^{\frac{1}{2}}} \sup_{K' \cap S \neq \emptyset}\frac{|\langle f_2, \phi_{K'}^2 \rangle|}{|K'|^{\frac{1}{2}}},\end{aligned}$$ which is exactly the estimate for the corresponding term in Lemma \[B\_size\]. This completes the proof of Theorems \[thm\_weak\_mod\] and \[thm\_weak\_inf\_mod\] for $\Pi_{\text{flag}^{\#_1} \otimes \text{flag}^{\#_2}}$.

Generalized Proof of Theorem \[thm\_weak\_mod\] for $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$
-----------------------------------------------------------------------------------------------

### Local energy estimates and generalization of H(III)

The delicacy of the argument for $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$ with the lacunary family $(\phi_K^3)_K$ lies in the localization and the application of the biest trick for the energy estimates. It is worth noting that Lemma \[decomp\_compact\] fails to generate the local energy estimates.
In particular, one can decompose $$\langle B_I(f_1,f_2), {\varphi}_I^1\rangle = \sum_{\tau_3,\tau_4 \in \mathbb{N}}2^{-100\tau_3}2^{-100\tau_4}\sum_{K: |K| \geq |I|} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi^1_K \rangle \langle f_2, \phi^2_K \rangle \langle {\varphi}^{1,\tau_3}_{I}, \psi^{3,\tau_4}_K\rangle.$$ Then by the geometric observation (\[geometry\_fourier\]) implied by the non-degenerate condition $ \langle {\varphi}^{1,\tau_3}_{I}, \psi^{3,\tau_4}_K\rangle \neq 0$, $$\label{loc_attempt_fourier} \langle B_I(f_1,f_2), {\varphi}_I^1\rangle = \sum_{\tau_3,\tau_4 \in \mathbb{N}}2^{-100\tau_3}2^{-100\tau_4}\sum_{\substack{K:|K| \geq |I| \\ K: \frac{\text{dist}(K,I)}{|K|} \lesssim 2^{\tau_3 + \tau_4}}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi^1_K \rangle \langle f_2, \phi^2_K \rangle \langle {\varphi}^{1,\tau_3}_{I}, \psi^{3,\tau_4}_K\rangle.$$ The localization has been obtained in (\[loc\_attempt\_fourier\]). Nonetheless, for each fixed $\tau_3$ and $\tau_4$, one cannot drop the constraint $|K| \geq |I|$; in general, $$\sum_{\substack{K:|K| \geq |I| \\ K: \frac{\text{dist}(K,I)}{|K|} \lesssim 2^{\tau_3 + \tau_4}}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi^1_K \rangle \langle f_2, \phi^2_K \rangle \langle {\varphi}^{1,\tau_3}_{I}, \psi^{3,\tau_4}_K\rangle \neq \sum_{\substack{K: \frac{\text{dist}(K,I)}{|K|} \lesssim 2^{\tau_3 + \tau_4}}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi^1_K \rangle \langle f_2, \phi^2_K \rangle \langle {\varphi}^{1,\tau_3}_{I}, \psi^{3,\tau_4}_K\rangle.$$ The reason is that ${\varphi}_I^{1,\tau_3}$ and $\psi_K^{3,\tau_4}$ are general $L^2$-normalized bump functions instead of Haar wavelets and $L^2$-normalized indicator functions. The “if and only if” condition $$\label{haar_biest_tri} \langle {\varphi}^{1,\tau_3}_{I}, \psi^{3,\tau_4}_K\rangle \neq 0 \iff |K| \geq |I|$$ is no longer valid, and one cannot derive the biest trick from it. The biest trick is crucial for the local energy estimates, as is evident from the previous analysis. In order to use the biest trick in the Fourier case, one needs to exploit the compact Fourier supports instead of the compact supports in the spatial variables used in the Haar model. As a consequence, one cannot simply apply Lemma \[decomp\_compact\] to localize the energy term involving $B$ as in (\[loc\_attempt\_fourier\]), since the bump functions $ {\varphi}^{1,\tau_3}_{I}, \psi^{3,\tau_4}_K$ are compactly supported in space and cannot be compactly supported in frequency due to the uncertainty principle. To achieve the biest trick, one needs to apply a generalized localization.
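The mechanism that replaces the spatial-support considerations is Plancherel's theorem: the pairing can be computed on the frequency side, where the supports *are* compact. As a brief sketch (the precise frequency supports are recalled in the next paragraph), $$\langle {\varphi}^{1}_{I}, \psi^{3}_K \rangle = \langle \widehat{{\varphi}^{1}_{I}}, \widehat{\psi^{3}_{K}} \rangle,$$ which vanishes unless $\text{supp}(\widehat{\psi^{3}_{K}})$ intersects $\text{supp}(\widehat{{\varphi}^{1}_{I}})$; for the dyadic supports $\omega_K$ and $\omega_I$ described below, such an intersection occurs precisely when $\omega_K \subseteq \omega_I$, that is, when $|K| \geq |I|$.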
One first recalls that the Littlewood-Paley decomposition imposes that $\text{supp}({\varphi}_I^1) \subseteq \omega_I$ and $\text{supp}(\psi_K^3) \subseteq \omega_K$, where $\omega_I$ and $\omega_K$ behave as follows in the frequency space:

[Figure: the frequency supports $\omega_I$ (an interval centered at the origin) and $\omega_K$ (the two symmetric intervals $\pm[\frac{1}{4}|K|^{-1}, 4|K|^{-1}]$), drawn in the two cases $\omega_K \subseteq \omega_I$ (corresponding to $|K| \geq |I|$) and $\omega_K \not\subseteq \omega_I$ (corresponding to $|K| < |I|$).]

As one may notice, $$\label{biest_fourier} \langle {\varphi}^{1,\tau_3}_{I}, \psi^{3,\tau_4}_K \rangle \neq 0 \iff \omega_K \subseteq \omega_I \iff |K| \geq |I|,$$ which yields the biest trick as desired. Meanwhile, we would like to attain some localization for the energy. In particular, fix any $n_1, m_1$ and define the level set $$\Omega_{n_1,m_1}^x := \{Mf_1 > C_12^{n_1}|F_1|\} \cap \{Mf_2 > C_12^{m_1}|F_2|\};$$ then one would like to reduce $\text{energy}(\langle B_I, {\varphi}_I\rangle)_{I: I \cap \Omega_{n_1,m_1}^x \neq \emptyset}$ to $\text{energy}(\langle B^{n_1,m_1}_{0}, {\varphi}_I\rangle)_{I: I \cap \Omega_{n_1,m_1}^x \neq \emptyset}$, where $$B^{n_1,m_1}_0 := \sum_{\substack{K \in \mathcal{K}\\ K \cap \Omega_{n_1,m_1}^x \neq \emptyset}} \frac{1}{|K|^{\frac{1}{2}}} \langle f_1, \phi_K^1\rangle \langle f_2, \phi_K^2 \rangle \psi_K^3.$$ One observes that since $\psi_K^3$ and ${\varphi}_I^1$ are not compactly supported in $K$ and $I$ respectively, one cannot deduce that $K \cap \Omega_{n_1,m_1}^x \neq \emptyset$ from $|K| \geq |I|$ and $I \cap \Omega_{n_1,m_1}^x \neq \emptyset$. The localization in the Fourier case is attained in a more analytic fashion.
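For contrast, in the Haar model this reduction is immediate from the containment property in $H(II)$: there, $K \supseteq I$ together with $I \cap \Omega_{n_1,m_1}^x \neq \emptyset$ forces $$K \cap \Omega_{n_1,m_1}^x \supseteq I \cap \Omega_{n_1,m_1}^x \neq \emptyset,$$ so every $K$ contributing to $\langle B_I, {\varphi}_I^1 \rangle$ automatically belongs to the localized collection. It is precisely this step that is unavailable in the Fourier case and has to be replaced by the decomposition below.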
One decomposes the sum $$\begin{aligned} \label{d_0} \frac{|\langle B_I, {\varphi}^1_I \rangle|}{|I|^{\frac{1}{2}}} = & \frac{1}{|I|^{\frac{1}{2}}}\bigg|\sum_{K: |K| \geq |I|} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}^1_I, \psi_K^3 \rangle\bigg| \nonumber \\ = & \frac{1}{|I|^{\frac{1}{2}}}\bigg|\sum_{d >0} \sum_{\substack{|K| \geq |I| \\ K \in \mathcal{K}_{d}}} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}^1_I, \psi_K^3 \rangle + \sum_{\substack{|K| \geq |I| \\ K \in \mathcal{K}_0}}\frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}_I^1, \psi_K^3 \rangle \bigg|,\end{aligned}$$ where $$\mathcal{K}_d^{n_1,m_1} := \{K: 1 + \frac{\text{dist}(K,\Omega_{n_1,m_1}^x)}{|K|} \sim 2^{d} \},$$ and $$\mathcal{K}_0^{n_1,m_1} := \{K : K \cap \Omega_{n_1,m_1}^x \neq \emptyset\}.$$ Ideally, one would like to “omit” the former term, which is reasonable once $$\label{energy_needed} \sum_{d >0} \sum_{\substack{|K| \geq |I| \\ K \in \mathcal{K}_{d}}} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}^1_I, \psi_K^3 \rangle \ll \sum_{\substack{|K| \geq |I| \\ K \in \mathcal{K}_0}}\frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}^1_I, \psi_K^3 \rangle$$ so that one can apply the previous argument discussed in Chapter 6. In the other case, when $$\sum_{d >0} \sum_{\substack{|K| \geq |I| \\ K \in \mathcal{K}_{d}}} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}^1_I, \psi_K^3 \rangle \gtrsim \sum_{\substack{|K| \geq |I| \\ K \in \mathcal{K}_0}}\frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}^1_I, \psi_K^3 \rangle,$$ local energy estimates are not necessary to achieve the result. The following lemma generates estimates for the former term and provides a guideline for the separation of cases. The notation in the lemma is consistent with the previous discussion. \[en\_loc\] Suppose that $d >0$. Then $$\frac{1}{|I|^{\frac{1}{2}}}\bigg| \sum_{\substack{|K| \geq |I| \\ K \in \mathcal{K}_{d}^{n_1,m_1}}} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}^1_I, \psi_K^3 \rangle \bigg| \lesssim 2^{-Nd}(C_12^{n_1}|F_1|)^{\alpha_1} (C_12^{m_1}|F_2|)^{\beta_1},$$ for any $0 \leq \alpha_1,\beta_1 \leq 1$ and some $N \gg 1$. 1. One simple but important fact is that for any fixed $d>0$, $n_1$ and $m_1$, $\mathcal{K}_d^{n_1,m_1}$ is a disjoint collection of dyadic intervals. 2. With the first comment in mind, one can apply exactly the same argument as in Section 10.1 to prove the lemma. Based on the estimates described in Lemma \[en\_loc\], one has that $$\begin{aligned} \label{threshold} \frac{1}{|I|^{\frac{1}{2}}}\bigg|\sum_{d >0} \sum_{\substack{|K| \geq |I| \\ K \in \mathcal{K}_{d}^{n_1,m_1}}} \frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}^1_I, \psi_K^3 \rangle \bigg| & \lesssim \sum_{d>0} 2^{-Nd} (C_12^{n_1}|F_1|)^{\alpha_1} (C_12^{m_1}|F_2|)^{\beta_1} \nonumber \\ & \lesssim (C_12^{n_1}|F_1|)^{\alpha_1} (C_12^{m_1}|F_2|)^{\beta_1}, \end{aligned}$$ for any $0 \leq \alpha_1, \beta_1 \leq 1$. One can then use the upper bound in (\[threshold\]) to proceed with the discussion case by case.
**Case I: There exists $0 \leq \alpha_1, \beta_1 \leq 1$ such that $\frac{|\langle B_I, {\varphi}^1_I \rangle|}{|I|^{\frac{1}{2}}} \gg (C_12^{n_1}|F_1|)^{\alpha_1} (C_12^{m_1}|F_2|)^{\beta_1}.$**

In Case I, (\[energy\_needed\]) holds and the dominant term in expression (\[d\_0\]) has to be $$\frac{1}{|I|^{\frac{1}{2}}}\sum_{\substack{|K| \geq |I| \\ K \in \mathcal{K}_0^{n_1,m_1}}}\frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle {\varphi}^1_I, \psi_K^3 \rangle,$$ which provides a localization for energy estimates involving $B$. In particular, in the current case $$\begin{aligned} & \text{energy}((\langle B_I, {\varphi}_I^1\rangle)_{I}) \lesssim \text{energy}(\langle B^{n_1,m_1}_{0}, {\varphi}_I\rangle)_{I: I \cap \Omega_{n_1,m_1}^x \neq \emptyset}, \nonumber \\ & \text{energy}^t((\langle B_I, {\varphi}_I^1\rangle)_{I}) \lesssim \text{energy}^t(\langle B^{n_1,m_1}_{0}, {\varphi}_I\rangle)_{I: I \cap \Omega_{n_1,m_1}^x \neq \emptyset}, \nonumber\end{aligned}$$ for any $t >1$. Furthermore, $$\begin{aligned} & \text{energy}(\langle B^{n_1,m_1}_{0}, {\varphi}_I\rangle)_{I: I \cap \Omega_{n_1,m_1}^x \neq \emptyset} \lesssim \|B^{n_1,m_1}_{0}\|_1, \nonumber \\ & \text{energy}^t(\langle B^{n_1,m_1}_{0}, {\varphi}_I\rangle)_{I: I \cap \Omega_{n_1,m_1}^x \neq \emptyset} \lesssim \|B^{n_1,m_1}_{0}\|_t,\end{aligned}$$ for any $t > 1$, where $\|B^{n_1,m_1}_{0}\|_1$ and $\|B^{n_1,m_1}_{0}\|_t$ obey the same estimates as their Haar variants described in Chapter 5. We will explicitly state the local energy estimates in this case. \[localized\_energy\_fourier\_x\] Suppose that $n_1, m_1 \in \mathbb{Z}$ are fixed and suppose that $\mathcal{I}'$ is a finite collection of dyadic intervals such that every $I \in \mathcal{I}'$ satisfies 1. $I \in \mathcal{I}_{n_1,m_1}$; 2. $I \in T$ with $T \in \mathbb{T}_{l_1}$ for some $l_1$ satisfying the condition that there exists some $ 0 \leq \alpha_1, \beta_1 \leq 1$ such that $$\label{loc_condition_x} 2^{l_1}\|B\|_1 \gg (C_12^{n_1}|F_1|)^{\alpha_1} (C_12^{m_1}|F_2|)^{\beta_1}.$$ (i) Then for any $0 \leq \theta_1,\theta_2 <1$ with $\theta_1 + \theta_2 = 1$, one has $$\begin{aligned} &\text{energy}_{\mathcal{I}'}((\langle B_I, {\varphi}_I\rangle)_{I \in \mathcal{I}'}) \lesssim C_1^{\frac{1}{p_1}+ \frac{1}{q_1} - \theta_1 - \theta_2} 2^{n_1(\frac{1}{p_1} - \theta_1)} 2^{m_1(\frac{1}{q_1} - \theta_2)} |F_1|^{\frac{1}{p_1}} |F_2|^{\frac{1}{q_1}}. \nonumber\end{aligned}$$ (ii) Suppose that $t >1$. Then for any $0 \leq \theta_1, \theta_2 <1$ with $\theta_1 + \theta_2 = \frac{1}{t}$, one has $$\begin{aligned} & \text{energy}^{t} _{\mathcal{I}'}((\langle B_I, {\varphi}_I\rangle)_{I \in \mathcal{I}'}) \lesssim C_1^{\frac{1}{p_1}+ \frac{1}{q_1} - \theta_1 - \theta_2}2^{n_1(\frac{1}{p_1} - \theta_1)}2^{m_1(\frac{1}{q_1} - \theta_2)}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}. \nonumber\end{aligned}$$ A parallel statement holds for dyadic intervals in the $y$-direction, which we state for ease of reference later on. \[localized\_energy\_y\] Suppose that $ n_2, m_2 \in \mathbb{Z}$ are fixed and suppose that $\mathcal{J}'$ is a finite collection of dyadic intervals such that every $J \in \mathcal{J}'$ satisfies 1. $J \in \mathcal{J}_{n_2,m_2}$; 2.
$J \in S $ with $S \in \mathbb{S}_{l_2}$ for some $l_2$ satisfying the condition that there exists some $0 \leq \alpha_2, \beta_2 \leq 1$ such that $$\label{loc_condition_y} 2^{l_2}\|\tilde{B}\|_1 \gg (C_22^{n_2}|G_1|)^{\alpha_2} (C_22^{m_2}|G_2|)^{\beta_2}.$$ (i) Then for any $0 \leq \zeta_1,\zeta_2 <1$ with $\zeta_1 + \zeta_2= 1$, one has $$\begin{aligned} & \text{energy}_{\mathcal{J}'}((\langle \tilde{B}_J, {\varphi}_J \rangle)_{J \in \mathcal{J}'}) \lesssim C_2^{\frac{1}{p_2}+ \frac{1}{q_2} - \zeta_1 - \zeta_2} 2^{n_2(\frac{1}{p_2} - \zeta_1)} 2^{m_2(\frac{1}{q_2} - \zeta_2)} |G_1|^{\frac{1}{p_2}} |G_2|^{\frac{1}{q_2}}. \nonumber\end{aligned}$$ (ii) Suppose that $s >1$. Then for any $0 \leq \zeta_1, \zeta_2 <1$ with $\zeta_1 + \zeta_2= \frac{1}{s}$, one has $$\begin{aligned} & \text{energy}^{s} _{\mathcal{J}'}((\langle \tilde{B}_J, {\varphi}_J \rangle)_{J \in \mathcal{J}'}) \lesssim C_2^{\frac{1}{p_2}+ \frac{1}{q_2} - \zeta_1 - \zeta_2}2^{n_2(\frac{1}{p_2} - \zeta_1)}2^{m_2(\frac{1}{q_2} - \zeta_2)}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}. \nonumber\end{aligned}$$ We would like to highlight that the localization of energies is attained under the additional conditions (\[loc\_condition\_x\]) and (\[loc\_condition\_y\]), in which case one obtains the local energy estimates stated in Propositions \[localized\_energy\_fourier\_x\] and \[localized\_energy\_y\], which can be viewed as analogues of Proposition \[B\_en\].

**Case II: For any $0 \leq \alpha_1, \beta_1 \leq 1$, $ \frac{|\langle B_I, {\varphi}^1_I \rangle|}{|I|^{\frac{1}{2}}} \lesssim (C_12^{n_1}|F_1|)^{\alpha_1} (C_12^{m_1}|F_2|)^{\beta_1}.$**

In this alternative case, the size estimates are favorable and a simpler argument can be applied without invoking the local energy estimates.

### Proof Part 1 - Localization

In this last section, we will explore how to implement the case-by-case analysis and generalize the argument in the proof of Theorem \[thm\_weak\_mod\] for $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$ when $(\phi^3_K)_K$ and $(\phi^3_L)_L$ are **lacunary** families and $\frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2} >1 $, which is the trickiest part of the argument in Chapter 7 to generalize. The generalized argument can be viewed as a combination of the discussions in Chapters 6 and 7.
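For orientation, a representative (purely illustrative) choice of exponents in this range is $$p_1 = q_1 = p_2 = q_2 = \frac{3}{2}, \qquad \frac{1}{p_1} + \frac{1}{q_1} = \frac{2}{3} + \frac{2}{3} = \frac{4}{3} = \frac{1}{p_2} + \frac{1}{q_2} > 1,$$ which falls outside the easy case (\[easy\_case\]) and therefore genuinely requires the case-by-case analysis developed below.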
One first defines the exceptional set $\Omega$ as follows: For any $\tau_1,\tau_2 \in \mathbb{N}$, define $$\Omega^{\tau_1,\tau_2} := \Omega_1^{\tau_1,\tau_2}\cup \Omega_2^{\tau_1,\tau_2}$$ with $$\begin{aligned} \displaystyle \Omega_1^{\tau_1,\tau_2} := &\bigcup_{\tilde{n} \in \mathbb{Z}}\{Mf_1 > C_1^{\tau_1} 2^{\tilde{n}}|F_1|\} \times \{Mg_1 > C_2^{\tau_2} 2^{-\tilde{n}}|G_1|\}\cup \nonumber \\ & \bigcup_{\tilde{\tilde{n}} \in \mathbb{Z}}\{Mf_2 > C_1^{\tau_1} 2^{\tilde{\tilde{n}}}|F_2|\} \times \{Mg_2 > C_2^{\tau_2} 2^{-\tilde{\tilde{n}}}|G_2|\}\cup \nonumber \\ &\bigcup_{\tilde{\tilde{\tilde{n}}} \in \mathbb{Z}}\{Mf_1 > C_1^{\tau_1} 2^{\tilde{\tilde{\tilde{n}}}}|F_1|\} \times \{Mg_2 > C_2^{\tau_2} 2^{-\tilde{\tilde{\tilde{n}}}}|G_2|\}\cup \nonumber \\ & \bigcup_{\tilde{\tilde{\tilde{\tilde{n}}}} \in \mathbb{Z}}\{Mf_2 > C_1^{\tau_1} 2^{\tilde{\tilde{\tilde{\tilde{n}}}} }|F_2|\} \times \{Mg_1 > C_2^{\tau_2} 2^{-\tilde{\tilde{\tilde{\tilde{n}}}} }|G_1|\}\cup \nonumber \\ & \bigcup_{l_2 \in \mathbb{Z}}\{MB > C_1^{\tau_1}2^{-l_2}\|B\|_1\} \times \{M\tilde{B} > C_2^{\tau_2} 2^{l_2}\|\tilde{B}\|_1\}, \nonumber \\ \Omega_2^{\tau_1,\tau_2} := & \{SSh > C_3^{\tau_1,\tau_2} \|h\|_{L^s(\mathbb{R}^2)}\}, \nonumber \\\end{aligned}$$ and $$\begin{aligned} & \tilde{\Omega}^{\tau_1,\tau_2} := \{ M\chi_{\Omega^{\tau_1,\tau_2}} > \frac{1}{100}\}, \nonumber \\ & \tilde{\tilde{\Omega}}^{\tau_1,\tau_2} := \{ M\chi_{\tilde{\Omega}^{\tau_1,\tau_2}} >\frac{1}{2^{2\tau_1+2\tau_2}}\},\end{aligned}$$ and finally $$\tilde{\Omega} := \bigcup_{\tau_1,\tau_2 \in \mathbb{N}}\tilde{\tilde{\Omega}}^{\tau_1,\tau_2}.$$ Let $$E' := E \setminus \tilde{\Omega},$$ where $|E'| \sim |E| =1$ given that $C_1, C_2$ and $C_3$ are sufficiently large constants. Our goal is to prove that $$\begin{aligned} \label{compact} \Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}(f_1^x, f_2^x, g_1^y, g_2^y, h^{xy}, \chi_{E'}) := \displaystyle \sum_{\tau_1,\tau_2 \in \mathbb{N}}2^{-100(\tau_1+\tau_2)}\sum_{I \times J \in \mathcal{R}}& \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I(f_1,f_2),\phi_I^1 \rangle \langle \tilde{B}_J(g_1,g_2), \phi_J^1 \rangle \nonumber \\ & \cdot \langle h, \phi_{I}^2 \otimes \phi_{J}^2 \rangle \langle \chi_{E'},\phi_{I}^{3,\tau_1} \otimes \phi^{3, \tau_2}_{J} \rangle \nonumber\end{aligned}$$ satisfies the restricted weak-type estimate $$\label{final_linear} |\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}| \lesssim |F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}.$$ For $\tau_1, \tau_2 \in \mathbb{N}$ fixed, let $$\Lambda_{\text{flag}^{0} \otimes \text{flag}^{0}}^{\tau_1, \tau_2} (f_1^x, f_2^x, g_1^y, g_2^y, h^{x,y}, \chi_{E'}):= \sum_{I \times J \in \mathcal{R}} \frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I(f_1,f_2),\phi_I^1 \rangle \langle \tilde{B}_J(g_1,g_2), \phi_J^1 \rangle \cdot \langle h, \phi_{I}^2 \otimes \phi_{J}^2 \rangle \langle \chi_{E'},\phi_{I}^{3,\tau_1} \otimes \phi^{3, \tau_2}_{J} \rangle;$$ then (\[final\_linear\]) is reduced to proving that for any fixed $\tau_1, \tau_2 \in \mathbb{N}$, $$\label{linear_fix_fourier} |\Lambda_{\text{flag}^0 \otimes \text{flag}^{0}}^{\tau_1, \tau_2}| \lesssim (2^{\tau_1+ \tau_2})^{\Theta}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}$$ for some $0 < \Theta < 100$.

### Proof Part 2 - Summary of stopping-time decompositions.
For any fixed $\tau_1,\tau_2 \in \mathbb{N}$, one can carry out exactly the same stopping-time algorithms as in Chapter 7, with $C_1, C_2$ and $C_3$ replaced by $C_1^{\tau_1}$, $C_2^{\tau_2}$ and $C_3^{\tau_1,\tau_2}$ respectively. The resulting level sets, trees and collections of dyadic rectangles will follow the same notation as before, with the extra indices $\tau_1$ and $\tau_2$.

  ------------------------------------------------------------------------------------------------------------ ------------------- -----------------------------------------------------------------------------------------------------------------------------------------------
  I. Tensor-type stopping-time decomposition I on $\mathcal{I} \times \mathcal{J}$                              $\longrightarrow$   $I \times J \in \mathcal{I}^{\tau_1}_{-n-n_2,-m-m_2} \times \mathcal{J}^{\tau_2}_{n_2,m_2}$ $(n_2, m_2 \in \mathbb{Z}, n > 0)$
  II\. Tensor-type stopping-time decomposition II on $\mathcal{I} \times \mathcal{J}$                          $\longrightarrow$   $I \times J \in T \times S $ with $T \in \mathbb{T}_{-l-l_2}^{\tau_1}$, $S \in \mathbb{S}_{l_2}^{\tau_2}$ $(l_2 \in \mathbb{Z}, l > 0)$
  III\. General two-dimensional level sets stopping-time decomposition on $\mathcal{I} \times \mathcal{J}$     $\longrightarrow$   $I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2} $ $(k_1 <0, k_2 \leq K)$
  ------------------------------------------------------------------------------------------------------------ ------------------- -----------------------------------------------------------------------------------------------------------------------------------------------

### Proof Part 3 - Application of stopping-time decompositions.

As one may recall, the multilinear form is estimated based on the stopping-time decompositions, the sparsity condition and the Fubini-type argument. $$\begin{aligned} &|\Lambda_{\text{flag}^0 \otimes \text{flag}^{0}}^{\tau_1, \tau_2}| \nonumber \\ = & \bigg| \sum_{\substack{l >0 \\ n> 0 \\ m> 0 \\ k_1 < 0 \\ k_2 \leq K}}\sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}\\ l_2\in \mathbb{Z}}} \sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ S \in \mathbb{S}_{l_2}^{\tau_2}}}\sum_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \times \mathcal{J}_{n_2,m_2}^{\tau_2} \\ I \times J \in T \times S \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}\frac{1}{|I|^{\frac{1}{2}} |J|^{\frac{1}{2}}} \langle B_I(f_1,f_2),\phi_I^1 \rangle \langle \tilde{B}_J(g_1,g_2), \phi_J^1 \rangle \langle h, \phi_I^2 \otimes \phi_J^2 \rangle \langle \chi_{E'}, \phi_I^{3,\tau_1} \otimes \phi_J^{3,\tau_2} \rangle\bigg| \nonumber \\ \lesssim &C_1^{\tau_1}C_2^{\tau_2}(C_3^{\tau_1,\tau_2})^2\sum_{\substack{l >0 \\ n> 0 \\ m> 0 \\ k_1 < 0 \\ k_2 \leq K}}2^{k_1} \|h\|_s 2^{k_2}\sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}\\ l_2\in \mathbb{Z}}} 2^{-l-l_2} \|B\|_1 2^{l_2} \|\tilde{B}\|_1 \sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ S \in \mathbb{S}_{l_2}^{\tau_2}}} \bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \cap T \times \mathcal{J}_{n_2,m_2}^{\tau_2} \cap S \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J\bigg|.
\nonumber \\\end{aligned}$$ The nested sum $$\begin{aligned} \label{ns_fourier} & \sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}\\ l_2\in \mathbb{Z}}} 2^{-l-l_2} \|B\|_1 2^{l_2}\|\tilde{B}\|_1 \sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ S \in \mathbb{S}_{l_2}^{\tau_2}}} \bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \cap T \times \mathcal{J}_{n_2,m_2}^{\tau_2} \cap S \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J \bigg| \end{aligned}$$ can be estimated using the same sparsity condition as for (\[ns\_sp\]) and a modified Fubini argument, as discussed in the following two subsections.

### Proof Part 4 - Sparsity condition

One invokes the sparsity condition (Theorem \[sparsity\]) and the argument in Chapter 6 to obtain the following estimate for (\[ns\_fourier\]): $$\begin{aligned} \label{ns_fourier_sp} & \sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}\\ l_2\in \mathbb{Z}}} 2^{-l-l_2} \|B\|_1 2^{l_2}\|\tilde{B}\|_1 \sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ S \in \mathbb{S}_{l_2}^{\tau_2}}} \bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \cap T \times \mathcal{J}_{n_2,m_2}^{\tau_2} \cap S \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J \bigg| \nonumber \\ \lesssim & 2^{-\frac{k_2\gamma}{2}}2^{-l(1- \frac{(1+\delta)}{2})} |F_1|^{\frac{\mu_1(1+\delta)}{2}}|F_2|^{\frac{\mu_2(1+\delta)}{2}}|G_1|^{\frac{\nu_1(1+\delta)}{2}}|G_2|^{\frac{\nu_2(1+\delta)}{2}}\|B\|_1^{1-\frac{1+\delta}{2}}\|\tilde{B}\|_1^{1-\frac{1+\delta}{2}}.\end{aligned}$$ For any $0 < \delta \ll1$, Lemma \[B\_global\_norm\] implies that $$\begin{aligned} & \|B\|_1^{1-\frac{1+\delta}{2}} \lesssim |F_1|^{\rho(1-\frac{1+\delta}{2})}|F_2|^{(1-\rho)(1-\frac{1+\delta}{2})}, \nonumber \\ & \|\tilde{B}\|_1^{1-\frac{1+\delta}{2}} \lesssim |G_1|^{\rho'(1-\frac{1+\delta}{2})}|G_2|^{(1-\rho')(1-\frac{1+\delta}{2})}.\end{aligned}$$ Therefore, (\[ns\_fourier\_sp\]) can be majorized by $$\label{ns_fourier_sp_final} 2^{-\frac{k_2\gamma}{2}}2^{-l(1- \frac{(1+\delta)}{2})} |F_1|^{\frac{\mu_1(1+\delta)}{2}+\rho(1-\frac{1+\delta}{2})}|F_2|^{\frac{\mu_2(1+\delta)}{2}+(1-\rho)(1-\frac{1+\delta}{2})}|G_1|^{\frac{\nu_1(1+\delta)}{2}+\rho'(1-\frac{1+\delta}{2})}|G_2|^{\frac{\nu_2(1+\delta)}{2}+(1-\rho')(1-\frac{1+\delta}{2})}.$$

### Proof Part 5 - Fubini argument

The separation of cases based on the levels of the stopping-time decompositions for $ \big(\frac{|\langle B^H_{I}(f_1,f_2), {\varphi}^{1,H}_I \rangle|}{|I|^{\frac{1}{2}}}\big)_{I \in \mathcal{I}} $ and $ \big(\frac{|\langle \tilde {B}^H_{J}(g_1,g_2), {\varphi}^{1,H}_J \rangle|}{|J|^{\frac{1}{2}}}\big)_{J \in \mathcal{J}} $, in particular the ranges of $l_2$ in the *tensor-type stopping-time decomposition I*, plays an important role in the modified Fubini-type argument.
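Before entering the case analysis, we record the elementary observation by which the sparsity estimate of Part 4 and the Fubini-type estimate of Part 5 are combined; it is responsible for the interpolation parameter $\lambda$ appearing in (\[ns\_fourier\_fb\_final\]) below. If a positive quantity $X$ satisfies both $X \lesssim A$ and $X \lesssim B$, then $$X \lesssim \min(A,B) \leq A^{1-\lambda}B^{\lambda} \qquad \text{for any } 0 \leq \lambda \leq 1,$$ applied here with $A$ the bound produced by the Fubini argument and $B$ the bound produced by the sparsity condition.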
With $l \in \mathbb{N}$ fixed, the ranges of exponents $l_2$ are defined as follows: $$\begin{aligned} \mathcal{EXP}_1^{l,-n-n_2,n_2,-m-m_2,m_2} := & \{l_2 \in \mathbb{Z}: \text{for any}\ \ 0 \leq \alpha_1, \beta_1, \alpha_2, \beta_2 \leq 1, \nonumber \\ & \quad \quad \quad \quad 2^{-l-l_2} \| B\|_1\lesssim (C_1^{\tau_1}2^{-n-n_2}|F_1|)^{\alpha_1} (C_1^{\tau_1}2^{-m-m_2}|F_2|)^{\beta_1} \ \ \text{and} \nonumber \\ & \quad \quad \quad \quad 2^{l_2} \|\tilde{B}\|_1 \lesssim (C_2^{\tau_2}2^{n_2}|G_1|)^{\alpha_2} (C_2^{\tau_2}2^{m_2}|G_2|)^{\beta_2} \}, \nonumber \\ \mathcal{EXP}_2^{l,-n-n_2,n_2, -m-m_2,m_2} := & \{l_2 \in \mathbb{Z}: \text{there exists} \ \ 0 \leq \alpha_1, \beta_1 \leq 1 \ \ \text{such that} \nonumber \\ & \quad \quad \quad \quad 2^{-l-l_2} \| B\|_1 \gg (C_1^{\tau_1}2^{-n-n_2}|F_1|)^{\alpha_1} (C_1^{\tau_1}2^{-m-m_2}|F_2|)^{\beta_1} \ \ \text{and} \nonumber \\ & \quad \quad \quad \quad \text{for any} \ \ 0 \leq \alpha_2, \beta_2 \leq 1, \nonumber \\ & \quad \quad \quad \quad 2^{l_2} \|\tilde{B}\|_1 \lesssim (C_2^{\tau_2}2^{n_2}|G_1|)^{\alpha_2} (C_2^{\tau_2}2^{m_2}|G_2|)^{\beta_2}\}, \nonumber \\ \mathcal{EXP}_3^{l,-n-n_2,n_2,-m-m_2,m_2} := & \{l_2 \in \mathbb{Z}: \text{for any} \ \ 0 \leq \alpha_1, \beta_1 \leq 1, \nonumber \\ & \quad \quad \quad \quad 2^{-l-l_2} \| B\|_1 \lesssim (C_1^{\tau_1}2^{-n-n_2}|F_1|)^{\alpha_1} (C_1^{\tau_1}2^{-m-m_2}|F_2|)^{\beta_1} \ \ \text{and} \nonumber \\ & \quad \quad \quad \quad \text{there exists} \ \ 0 \leq \alpha_2, \beta_2 \leq 1 \ \ \text{such that} \nonumber \\ & \quad \quad \quad \quad 2^{l_2} \|\tilde{B}\|_1 \gg (C_2^{\tau_2}2^{n_2}|G_1|)^{\alpha_2} (C_2^{\tau_2}2^{m_2}|G_2|)^{\beta_2}\}, \nonumber \\ \mathcal{EXP}_4^{l,-n-n_2,n_2,-m-m_2,m_2} := & \{l_2 \in \mathbb{Z}: \text{there exists} \ \ 0 \leq \alpha_1, \beta_1, \alpha_2, \beta_2 \leq 1 \ \ \text{such that} \nonumber \\ & \quad \quad \quad \quad 2^{-l-l_2} \| B\|_1 \gg (C_1^{\tau_1}2^{-n-n_2}|F_1|)^{\alpha_1} (C_1^{\tau_1}2^{-m-m_2}|F_2|)^{\beta_1} \ \ \text{and} \nonumber \\ & \quad \quad \quad \quad 2^{l_2} \|\tilde{B}\|_1 \gg (C_2^{\tau_2}2^{n_2}|G_1|)^{\alpha_2} (C_2^{\tau_2}2^{m_2}|G_2|)^{\beta_2}\}. \nonumber \end{aligned}$$ One decomposes the sum into four parts based on the ranges specified above: $$\begin{aligned} & \sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}\\ l_2\in \mathbb{Z}}} 2^{-l-l_2} \|B\|_1 2^{l_2} \|\tilde{B}\|_1\sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ S \in \mathbb{S}_{l_2}^{\tau_2}}} \bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \cap T \times \mathcal{J}_{n_2,m_2}^{\tau_2} \cap S \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J\bigg| \nonumber \\ = &\underbrace{ \sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}}} \sum_{l_2\in \mathcal{EXP}_1^{l,-n-n_2,n_2,-m-m_2,m_2}}}_I+ \underbrace{\sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}}} \sum_{l_2\in \mathcal{EXP}_2^{l,-n-n_2,n_2,-m-m_2,m_2}}}_{II} + \nonumber \\ & \underbrace{\sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}}} \sum_{l_2\in \mathcal{EXP}_3^{l,-n-n_2,n_2,-m-m_2,m_2}} }_{III}+ \underbrace{\sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}}} \sum_{l_2\in \mathcal{EXP}_4^{l,-n-n_2,n_2,-m-m_2,m_2}}}_{IV}.\end{aligned}$$ One denotes the four parts by $I$, $II$, $III$ and $IV$ and will derive estimates for each part separately.
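By construction, the four ranges are pairwise disjoint and exhaust all of $\mathbb{Z}$: with the convention that “$\gg$” denotes the failure of “$\lesssim$”, each $l_2$ either satisfies the condition involving $\|B\|_1$ for every pair $(\alpha_1,\beta_1)$ or violates it for some pair, and likewise for the condition involving $\|\tilde{B}\|_1$, so that $$\mathbb{Z} = \mathcal{EXP}_1 \sqcup \mathcal{EXP}_2 \sqcup \mathcal{EXP}_3 \sqcup \mathcal{EXP}_4$$ and the decomposition of the sum loses nothing.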
The multilinear form can thus be decomposed correspondingly as follows: $$\begin{aligned} |\Lambda_{\text{flag}^0 \otimes \text{flag}^{0}}^{\tau_1, \tau_2}| \lesssim & \underbrace{C_1^{\tau_1}C_2^{\tau_2} (C_3^{\tau_1,\tau_2})^2\sum_{\substack{l> 0 \\ n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{k_1}\|h\|_s 2^{k_2} \cdot I}_{\Lambda_{I}^{\tau_1, \tau_2}} + \underbrace{C_1^{\tau_1}C_2^{\tau_2} (C_3^{\tau_1,\tau_2})^2 \sum_{\substack{l> 0 \\ n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{k_1}\|h\|_s 2^{k_2} \cdot II}_{\Lambda_{II}^{\tau_1, \tau_2}} + \nonumber \\ &\underbrace{ C_1^{\tau_1}C_2^{\tau_2} (C_3^{\tau_1,\tau_2})^2\sum_{\substack{l> 0 \\ n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{k_1}\|h\|_s 2^{k_2} \cdot III}_{\Lambda_{III}^{\tau_1, \tau_2}} + \underbrace{C_1^{\tau_1}C_2^{\tau_2} (C_3^{\tau_1,\tau_2})^2\sum_{\substack{l> 0 \\ n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{k_1}\|h\|_s 2^{k_2} \cdot IV}_{\Lambda_{IV}^{\tau_1, \tau_2}}.\end{aligned}$$ It would be sufficient to prove that each part satisfies the bound $$(C_1^{\tau_1}C_2^{\tau_2} C_3^{\tau_1,\tau_2})^{\Theta}|F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}}|G_2|^{\frac{1}{q_2}}$$ for some constant $0<\Theta<100$. With a slight abuse of notation, we will abbreviate $\mathcal{EXP}_i^{l,-n-n_2,n_2,-m-m_2,m_2}$ as $\mathcal{EXP}_i$, for $i = 1,2,3,4$.

**Estimate of $\Lambda_I^{\tau_1,\tau_2}$.** Although the localization of energies is not available for $I$, one observes that energy estimates are in fact not necessary. In particular, $$\begin{aligned} \label{I} I \lesssim & \sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}}} \sum_{l_2\in \mathcal{EXP}_1}2^{-l-l_2}\|B\|_1 2^{l_2}\|\tilde{B}\|_1 \sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ S \in \mathbb{S}_{l_2}^{\tau_2}}} \bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \cap T \times \mathcal{J}_{n_2,m_2}^{\tau_2} \cap S \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J\bigg| \nonumber \\ \leq & \sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}}} \bigg(\sup_{l_2\in \mathcal{EXP}_1} 2^{-l-l_2}\|B\|_1\bigg)\bigg(\sup_{l_2\in \mathcal{EXP}_1} \sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ S \in \mathbb{S}_{l_2}^{\tau_2}}} \bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \cap T \times \mathcal{J}_{n_2,m_2}^{\tau_2} \cap S \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J\bigg|\bigg) \bigg(\sum_{l_2\in \mathcal{EXP}_1}2^{l_2}\|\tilde{B}\|_1\bigg).\end{aligned}$$ We will estimate the expressions in the parentheses separately. (i) It is trivial from the definition of $\mathcal{EXP}_1$ that for any $0 \leq \alpha_1, \beta_1 \leq 1$, $$\sup_{l_2\in \mathcal{EXP}_1} 2^{-l-l_2}\|B\|_1 \lesssim (C_1^{\tau_1}2^{-n-n_2}|F_1|)^{\alpha_1} (C_1^{\tau_1}2^{-m-m_2}|F_2|)^{\beta_1}.$$ (ii) The last expression is a geometric series with the largest term bounded by $$\label{I_i} (C_2^{\tau_2}2^{n_2}|G_1|)^{\alpha_2} (C_2^{\tau_2}2^{m_2}|G_2|)^{\beta_2},$$ for any $0 \leq \alpha_2, \beta_2 \leq 1$ according to the definition of $\mathcal{EXP}_1$. As a result, $$\sum_{l_2 \in \mathcal{EXP}_1} 2^{l_2} \|\tilde{B}\|_1 \lesssim (C_2^{\tau_2}2^{n_2}|G_1|)^{\alpha_2} (C_2^{\tau_2}2^{m_2}|G_2|)^{\beta_2},$$ for any $0 \leq \alpha_2, \beta_2 \leq 1$.
(iii) For any fixed $-n-n_2, -m-m_2, n_2,m_2, l_2, \tau_1,\tau_2$, $$\begin{aligned} \label{I_ii} &\{I_T: I_T \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \ \ \text{and} \ \ T \in \mathbb{T}^{\tau_1}_{-l-l_2} \}, \nonumber \\ & \{J_S: J_S \in \mathcal{J}_{n_2,m_2}^{\tau_2} \ \ \text{and} \ \ S \in \mathbb{S}^{\tau_2}_{l_2} \}\end{aligned}$$ are disjoint collections of dyadic intervals. Therefore $$\begin{aligned} \label{I_iii} &\sup_{l_2\in \mathcal{EXP}_1} \sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ S \in \mathbb{S}_{l_2}^{\tau_2}}} \bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \cap T \times \mathcal{J}_{n_2,m_2}^{\tau_2} \cap S \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J\bigg| \nonumber \\ \leq & \sup_{l_2}\bigg|\bigcup_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ S \in \mathbb{S}_{l_2}^{\tau_2}}}\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \cap T \times \mathcal{J}_{n_2,m_2}^{\tau_2} \cap S \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J\bigg| \nonumber\\ \leq & \bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \times \mathcal{J}_{n_2,m_2}^{\tau_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J \bigg|.\end{aligned}$$ One can now plug in the estimates (\[I\_i\]), (\[I\_ii\]) and (\[I\_iii\]) into (\[I\]) and derive that for any $0 \leq \alpha_1, \beta_1, \alpha_2, \beta_2 \leq 1$, $$\begin{aligned} & I \nonumber \\ \lesssim & (C_1^{\tau_1})^2(C_2^{\tau_2})^2\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}(2^{-n-n_2}|F_1|)^{\alpha_1} (2^{-m-m_2}|F_2|)^{\beta_1}(2^{n_2}|G_1|)^{\alpha_2} (2^{m_2}|G_2|)^{\beta_2}\bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \times \mathcal{J}_{n_2,m_2}^{\tau_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J \bigg| \nonumber \\ \leq & (C_1^{\tau_1})^2(C_2^{\tau_2})^2 \sup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}(2^{-n-n_2}|F_1|)^{\alpha_1} (2^{-m-m_2}|F_2|)^{\beta_1}(2^{n_2}|G_1|)^{\alpha_2} (2^{m_2}|G_2|)^{\beta_2}\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \times \mathcal{J}_{n_2,m_2}^{\tau_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J \bigg|. \end{aligned}$$ By letting $\alpha_1 = \frac{1}{p_1}$, $\beta_1 = \frac{1}{q_1}$, $\alpha_2 = \frac{1}{p_2}$ and $\beta_2 = \frac{1}{q_2}$ and using the argument for the choice of indices in Chapter 6, one has $$I \lesssim (C_1^{\tau_1})^2(C_2^{\tau_2})^2 2^{-n\frac{1}{p_2}}2^{-m\frac{1}{q_1}} |F_1|^{\frac{1}{p_1}} |F_2|^{\frac{1}{q_1}} |G_1|^{\frac{1}{p_2}} |G_2|^{\frac{1}{q_2}}\sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \times \mathcal{J}_{n_2,m_2}^{\tau_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J \bigg|,$$ where $$\begin{aligned} \label{I_measure} & \sum_{\substack{n_2 \in \mathbb{Z}\\ m_2 \in \mathbb{Z}}}\bigg|\bigcup_{\substack{I \times J \in \mathcal{I}_{-n-n_2,-m-m_2}^{\tau_1} \times \mathcal{J}_{n_2,m_2}^{\tau_2} \\ I \times J \in \mathcal{R}_{k_1,k_2}^{\tau_1,\tau_2}}}I \times J \bigg| \lesssim \min(2^{-k_1s},2^{-k_2\gamma}),\end{aligned}$$ for any $\gamma >1$. The estimate is a direct application of the sparsity condition described in Proposition \[sp\_2d\], which has been used extensively before.
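To spell out the step that follows, the minimum in (\[I\_measure\]) is split evenly between the two factors: $$\min\big(2^{-k_1 s}, 2^{-k_2 \gamma}\big) \leq \big(2^{-k_1 s}\big)^{\frac{1}{2}}\big(2^{-k_2 \gamma}\big)^{\frac{1}{2}} = 2^{-\frac{k_1 s}{2}}\, 2^{-\frac{k_2 \gamma}{2}},$$ so that $2^{k_1} \cdot 2^{k_2} \cdot \min(2^{-k_1 s}, 2^{-k_2 \gamma}) \leq 2^{k_1(1-\frac{s}{2})} 2^{k_2(1-\frac{\gamma}{2})}$. Since $1 < s < 2$, the resulting series over $k_1 < 0$ converges, and choosing $1 < \gamma < 2$ makes the series over $k_2 \leq K$ converge as well.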
One can now apply (\[I\_measure\]) to conclude that $$\begin{aligned} |\Lambda_I^{\tau_1,\tau_2}| = & C_1^{\tau_1}C_2^{\tau_2}(C_3^{\tau_1,\tau_2})^2 \sum_{\substack{l> 0 \\ n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{k_1}\|h\|_s 2^{k_2} \cdot I \nonumber \\ \lesssim & (C_1^{\tau_1}C_2^{\tau_2}C_3^{\tau_1,\tau_2})^{6} \sum_{\substack{l> 0 \\ n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{k_1(1-\frac{s}{2})}\|h\|_s 2^{k_2(1-\frac{\gamma}{2})} 2^{-n\frac{1}{p_2}}2^{-m\frac{1}{q_1}} |F_1|^{\frac{1}{p_1}} |F_2|^{\frac{1}{q_1}} |G_1|^{\frac{1}{p_2}} |G_2|^{\frac{1}{q_2}},\end{aligned}$$ which achieves the desired bound with an appropriate choice of $\gamma>1$.

**Estimate of $\Lambda_{II}^{\tau_1,\tau_2}$.** One first observes that the estimates for $\Lambda_{II}^{\tau_1,\tau_2}$ apply to $\Lambda_{III}^{\tau_1,\tau_2}$ due to the symmetry. One notes that $$\begin{aligned} \label{II} II \leq & \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}\bigg( \sum_{l_2 \in \mathcal{EXP}_2} 2^{l_2} \|\tilde{B}\|_1\bigg) \bigg(\sup_{l_2 \in \mathcal{EXP}_2}2^{-l-l_2}\|B\|_1\sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ I_T \in \mathcal{I}_{-n-n_2,-m-m_2}}}|I_T|\bigg)\bigg(\sup_{l_2}\sum_{\substack{S \in \mathbb{S}_{l_2}^{\tau_2} \\ J_S \in \mathcal{J}_{n_2,m_2}^{\tau_2}}}|J_S|\bigg).\end{aligned}$$ (i) The first expression is a geometric series which can be bounded by $$\label{II_i} (C_2^{\tau_2}2^{n_2}|G_1|)^{\alpha_2} (C_2^{\tau_2}2^{m_2}|G_2|)^{\beta_2},$$ for any $0 \leq \alpha_2, \beta_2 \leq 1$ (up to some constant, as discussed in the estimate of $I$). (ii) The second term in (\[II\]) can be considered as a localized $L^{1,\infty}$-energy. In addition, given the restriction that $l_2 \in \mathcal{EXP}_2$, one can apply the localization and the corresponding energy estimates described in Proposition \[localized\_energy\_fourier\_x\]. In particular, for any $0 \leq \theta_1, \theta_2 < 1$ with $\theta_1 + \theta_2 = 1$, $$\begin{aligned} \label{II_ii} & \sup_{l_2 \in \mathcal{EXP}_2}2^{-l-l_2}\|B\|_1\sum_{\substack{T \in \mathbb{T}_{-l-l_2}^{\tau_1} \\ I_T \in \mathcal{I}_{-n-n_2,-m-m_2}}}|I_T| \nonumber \\ \lesssim & (C_1^{\tau_1}2^{-n-n_2})^{\frac{1}{p_1}-\theta_1}(C_1^{\tau_1} 2^{-m-m_2})^{\frac{1}{q_1}- \theta_2} |F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}.\end{aligned}$$ (iii) For any fixed $n_2,m_2,l_2$ and $\tau_2$, $\{J_S: J_S \in \mathcal{J}_{n_2,m_2}^{\tau_2} \ \ \text{and } \ \ S \in \mathbb{S}_{l_2}^{\tau_2}\}$ is a disjoint collection of dyadic intervals, which implies that $$\begin{aligned} \label{II_iii} \sup_{l_2}\sum_{\substack{S \in \mathbb{S}_{l_2}^{\tau_2} \\ J_S \in \mathcal{J}_{n_2,m_2}}}|J_S| & \leq \big| \bigcup_{\substack{J_S \in \mathcal{J}_{n_2,m_2}}}J_S\big| \nonumber \\ & \lesssim \big|\{ Mg_1 > C_2^{\tau_2} 2^{n_2-10}|G_1| \} \cap \{Mg_2 > C_2^{\tau_2} 2^{m_2-10}|G_2| \}\big|,\end{aligned}$$ where the last inequality follows from the point-wise estimates indicated in Claim \[ptwise\].
By combining (\[II\_i\]), (\[II\_ii\]) and (\[II\_iii\]), one can majorize (\[II\]) as $$\begin{aligned} \label{II_final} II \lesssim & \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}(C_2^{\tau_2} 2^{n_2}|G_1|)^{\alpha_2} (C_2^{\tau_2} 2^{m_2}|G_2|)^{\beta_2}(C_1^{\tau_1}2^{-n-n_2})^{\frac{1}{p_1}- \theta_1}(C_1^{\tau_1} 2^{-m-m_2})^{\frac{1}{q_1} - \theta_2} |F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}} \nonumber \\ & \quad \quad \cdot \big|\{ Mg_1 > C_2^{\tau_2} 2^{n_2-10}|G_1| \} \cap \{Mg_2 > C_2^{\tau_2} 2^{m_2-10}|G_2| \}\big| \nonumber \\ & \leq \sup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}} (C_1^{\tau_1}2^{-n-n_2})^{\frac{1}{p_1} - \theta_1}(C_1^{\tau_1} 2^{-m-m_2})^{\frac{1}{q_1} - \theta_2} |F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}}\nonumber \\ & \quad \quad \quad \cdot (C_2^{\tau_2} 2^{n_2})^{\alpha_2 - (1+\epsilon)(1-\mu)}(C_2^{\tau_2 }2^{m_2})^{\beta_2-(1+\epsilon)\mu}|G_1|^{\alpha_2- (1+\epsilon)(1-\mu)} |G_2|^{\beta_2-(1+\epsilon)\mu} \cdot \nonumber \\ &\quad \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}(C_2^{\tau_2}2^{n_2}|G_1|)^{(1+\epsilon)(1-\mu)}(C_2^{\tau_2}2^{m_2}|G_2|)^{(1+\epsilon)\mu}\big|\{ Mg_1 > C_2^{\tau_2} 2^{n_2-10}|G_1| \} \cap \{Mg_2 > C_2^{\tau_2} 2^{m_2-10}|G_2| \}\big|.\end{aligned}$$ By the Hölder-type argument introduced in Chapter 7, one can estimate the expression $$\begin{aligned} \label{II_fub} & \sum_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}}(C_2^{\tau_2}2^{n_2}|G_1|)^{(1+\epsilon)(1-\mu)}(C_2^{\tau_2}2^{m_2}|G_2|)^{(1+\epsilon)\mu}\big|\{ Mg_1 > C_2^{\tau_2} 2^{n_2-10}|G_1| \} \cap \{Mg_2 > C_2^{\tau_2} 2^{m_2-10}|G_2| \}\big| \nonumber \\ \lesssim & |G_1|^{1-\mu} |G_2|^{\mu}.\end{aligned}$$ Therefore, by plugging in (\[II\_fub\]) and some simplifications, (\[II\_final\]) can be majorized by $$\begin{aligned} & II \nonumber \\ \lesssim & (C_1^{\tau_1} C_2^{\tau_2})^2\sup_{\substack{n_2 \in \mathbb{Z} \\ m_2 \in \mathbb{Z}}} (2^{-n-n_2})^{\frac{1}{p_1} - \theta_1}( 2^{-m-m_2})^{\frac{1}{q_1} - \theta_2} |F_1|^{\frac{1}{p_1}}|F_2|^{\frac{1}{q_1}} (2^{n_2})^{\alpha_2 - (1+\epsilon)(1-\mu)}(2^{m_2})^{\beta_2-(1+\epsilon)\mu}|G_1|^{\alpha_2-\epsilon(1-\mu)} |G_2|^{\beta_2-\epsilon\mu}. $$ One would like to choose $0 \leq \alpha_2, \beta_2 \leq 1, 0 < \mu < 1$ and $\epsilon>0$ such that $$\begin{aligned} \label{exp_cond_fourier} & \alpha_2-\epsilon(1-\mu) = \frac{1}{p_2}, \nonumber \\ & \beta_2 - \epsilon\mu = \frac{1}{q_2}. 
\end{aligned}$$ Meanwhile, one can also achieve the equalities $$\begin{aligned} & \frac{1}{p_1} - \theta_1 = \alpha_2-(1+\epsilon)(1-\mu), \nonumber \\ & \frac{1}{q_1} - (1-\theta_1) = \beta_2-(1+\epsilon)\mu, \end{aligned}$$ which, combined with (\[exp\_cond\_fourier\]), yield $$\begin{aligned} & \frac{1}{p_1} - \theta_1 = \frac{1}{p_2} - (1-\mu), \nonumber \\ & \frac{1}{q_1} - (1-\theta_1) = \frac{1}{q_2} -\mu.\end{aligned}$$ Thanks to the condition that $$\frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2},$$ one only needs to choose $0 < \theta_1, \mu < 1$ such that $$\frac{1}{p_1} - \frac{1}{p_2} = \theta_1- (1-\mu).$$ To sum up, one has the following estimate for II: $$\label{II_ns} II \lesssim (C_1^{\tau_1} C_2^{\tau_2})^2 2^{-n(\frac{1}{p_1}- \theta_1)}2^{-m(\frac{1}{q_1}- (1-\theta_1))}|F_1|^{\frac{1}{p_1}} |F_2|^{\frac{1}{q_1}}|G_1|^{\frac{1}{p_2}} |G_2|^{\frac{1}{q_2}}.$$ Last but not least, one can interpolate between the estimate (\[II\_ns\]) and the estimate (\[ns\_fourier\_sp\_final\]) obtained from the sparsity condition to conclude that $$\begin{aligned} \label{ns_fourier_fb_final} |\Lambda_{II}^{\tau_1,\tau_2}| = & C_1^{\tau_1} C_2^{\tau_2}(C_3^{\tau_1,\tau_2})^2 \sum_{\substack{l> 0 \\ n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{k_1}\|h\|_s 2^{k_2} \cdot II \nonumber \\ \lesssim & (C_1^{\tau_1} C_2^{\tau_2}C_3^{\tau_1,\tau_2})^6 \sum_{\substack{l> 0 \\ n > 0 \\ m > 0 \\ k_1 < 0 \\ k_2 \leq K}} 2^{k_1}\|h\|_s 2^{k_2(1-\frac{\lambda\gamma}{2})} 2^{-l\lambda(1- \frac{(1+\delta)}{2})}2^{-n(1-\lambda)(\frac{1}{p_1}- \theta_1)}2^{-m(1-\lambda)(\frac{1}{q_1}- (1-\theta_1))} \nonumber \\ & \cdot |F_1|^{(1-\lambda)\frac{1}{p_1}+\lambda\frac{\mu_1(1+\delta)}{2}+\lambda\rho(1-\frac{1+\delta}{2})} |F_2|^{(1-\lambda)\frac{1}{q_1}+\lambda\frac{\mu_2(1+\delta)}{2}+\lambda(1-\rho)(1-\frac{1+\delta}{2})} \nonumber \\ &\cdot |G_1|^{(1-\lambda)\frac{1}{p_2}+\lambda\frac{\nu_1(1+\delta)}{2}+\lambda\rho'(1-\frac{1+\delta}{2})} |G_2|^{(1-\lambda)\frac{1}{q_2}+\lambda\frac{\nu_2(1+\delta)}{2}+\lambda(1-\rho')(1-\frac{1+\delta}{2})}. \end{aligned}$$ One has enough degrees of freedom to choose the indices and obtain the desired estimate: (i) for any $0 < \lambda,\delta < 1$, the series $\displaystyle \sum_{l>0}2^{-l\lambda(1- \frac{(1+\delta)}{2})}$ is convergent; (ii) one notices that for $0 < \theta_1 < 1$, $\displaystyle \sum_{n>0}2^{-n(1-\lambda)(\frac{1}{p_1}- \theta_1)}$ and $\displaystyle \sum_{m>0}2^{-m(1-\lambda)(\frac{1}{q_1}- (1-\theta_1))}$ converge if $$\begin{aligned} &\frac{1}{p_1} - \theta_1>0, \nonumber \\ &\frac{1}{q_1} - (1-\theta_1)>0, \end{aligned}$$ which implies that $$\frac{1}{p_1} + \frac{1}{q_1} > 1.$$ This is the condition we impose on the exponents $p_1$ and $q_1$; the proof for the range $\frac{1}{p_1} + \frac{1}{q_1} \leq 1$ follows a simpler argument. (iii) One can identify (\[ns\_fourier\_fb\_final\]) with (\[exp00\]) and choose the indices to match the desired exponents for $|F_1|,|F_2|, |G_1|$ and $|G_2|$ in exactly the same fashion.

**Estimate of $\Lambda_{IV}^{\tau_1,\tau_2}$.** When $l_2 \in \mathcal{EXP}_4$, one has the localization: the main contribution of $$\sum_{|K| \geq |I|}\frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle \chi_{E'}, \psi_K^3\rangle$$ comes from $$\sum_{K \supseteq I}\frac{1}{|K|^{\frac{1}{2}}}\langle f_1, {\varphi}_K^1 \rangle \langle f_2, \psi_K^2 \rangle \langle \chi_{E'}, \psi_K^3\rangle,$$ as in the Haar model.
As a consequence, it is not difficult to check that the argument in Section 7 applies to the estimate of $IV$, where one employs the localized energy estimates stated in Propositions \[localized\_energy\_fourier\_x\] and \[localized\_energy\_y\] instead of Proposition \[B\_en\] and derives that $$\begin{aligned} \label{ns_fourier_fb_iv} IV \lesssim (C_1^{\tau_1} C_2^{\tau_2})^2 2^{-n(\frac{1}{p_1} - \theta_1-\frac{1}{2}\mu(1+\epsilon))}2^{-m(\frac{1}{q_1}- \theta_2-\frac{1}{2}(1-\mu)(1+\epsilon))} |F_1|^{\frac{1}{p_1}-\frac{\mu}{2}\epsilon}|F_2|^{\frac{1}{q_1}-\frac{1-\mu}{2}\epsilon}|G_1|^{\frac{1}{p_2}-\frac{\mu}{2}\epsilon}|G_2|^{\frac{1}{q_2}-\frac{1-\mu}{2}\epsilon}. \end{aligned}$$ By interpolating between (\[ns\_fourier\_fb\_iv\]) and (\[ns\_fourier\_sp\]), which agree with the estimates for the nested sum obtained from the Fubini argument and the sparsity condition developed in Section 7, one achieves the desired bound. When only one of the families $(\phi_K)_{K \in \mathcal{K}}$ and $(\phi_L)_{L \in \mathcal{L}}$ is lacunary, a simplified argument is sufficient. Without loss of generality, we assume that $(\psi_K)_{K \in \mathcal{K}}$ is a lacunary family while $(\varphi_L)_{L \in \mathcal{L}}$ is a non-lacunary family. One can then split the argument into two cases depending on the range of the index $l_2$: (i) $l_2 \in \{l_2 \in \mathbb{Z}: 2^{l_2}\|\tilde{B}\|_1 \lesssim (C_2^{\tau_2}2^{n_2}|G_1|)^{\alpha_2}(C_2^{\tau_2}2^{m_2}|G_2|)^{\beta_2} \}$; (ii) $l_2 \in \{l_2 \in \mathbb{Z}: 2^{l_2}\|\tilde{B}\|_1 \gg (C_2^{\tau_2}2^{n_2}|G_1|)^{\alpha_2}(C_2^{\tau_2}2^{m_2}|G_2|)^{\beta_2} \}$. Case (i) can be treated by the same argument used for $II$ and Case (ii) by the reasoning for $IV$. This completes the proof of Theorem \[thm\_weak\_mod\] for $\Pi_{\text{flag}^0 \otimes \text{flag}^0}$ in the general case. As commented at the beginning of this section, the argument for Theorems \[thm\_weak\_mod\] and \[thm\_weak\_inf\_mod\] developed in the Haar model can be generalized to the Fourier setting, which ends the proof of the main theorems.

Appendix I - Multilinear Interpolations
=======================================

This chapter is devoted to various multilinear interpolations that allow one to reduce Theorem \[main\_theorem\] to Theorem \[thm\_weak\] (and, correspondingly, Theorem \[main\_thm\_inf\] to Theorem \[thm\_weak\_inf\]). We will start from the statement in Theorem \[thm\_weak\] and implement interpolations step by step to reach Theorem \[main\_theorem\]. Throughout this chapter, we will consider $T_{ab}$ as a trilinear operator with its first two function spaces restricted to tensor-product spaces.

Interpolation of Multilinear Forms
----------------------------------

One may recall that Theorem \[thm\_weak\] covers all the restricted weak-type estimates except for the case $2 \leq s \leq \infty$. We will apply the interpolation of multilinear forms to fill in the gap.
In particular, let $T^*_{ab}$ denote the adjoint operator of $T_{ab}$, in the sense that $$\langle T_{ab}(f_1 \otimes g_1, f_2 \otimes g_2, h),l\rangle = \langle T^*_{ab}(f_1 \otimes g_1, f_2 \otimes g_2, l), h \rangle.$$ Due to the symmetry between $T_{ab}$ and $T^*_{ab}$, one concludes that the multilinear form associated to $T^{*}_{ab}$ satisfies $$|\Lambda(f_1 \otimes g_1, f_2 \otimes g_2, h, l)| \lesssim |F_1|^{\frac{1}{p_1}} |G_1|^{\frac{1}{p_2}} |F_2|^{\frac{1}{q_1}} |G_2|^{\frac{1}{q_2}} |H|^{\frac{1}{r'}} |L|^{\frac{1}{s}}$$ for all measurable sets $F_1, F_2 \subseteq \mathbb{R}_x$, $G_1, G_2 \subseteq \mathbb{R}_y$, $H, L \subseteq \mathbb{R}^2$ of positive and finite measure and all measurable functions $|f_i| \leq \chi_{F_i}$, $|g_j| \leq \chi_{G_j}$, $|h| \leq \chi_{H}$ and $|l| \leq \chi_{L}$ for $i, j = 1, 2$. The notation and the range of exponents agree with the ones in Theorem \[thm\_weak\]. One can now apply the interpolation of multilinear forms described in Lemma 9.6 of [@cw] to attain the restricted weak-type estimate with $1 < s \leq \infty$: $$\label{s=inf} |\Lambda(f_1 \otimes g_1, f_2 \otimes g_2, h, l)| \lesssim |F_1|^{\frac{1}{p_1}} |G_1|^{\frac{1}{p_2}} |F_2|^{\frac{1}{q_1}} |G_2|^{\frac{1}{q_2}}|H|^{\frac{1}{s}} |L|^{\frac{1}{r'}},$$ where $\frac{1}{s} = 0$ if $s= \infty$. For $1 < s < \infty$, one can fix $f_1, g_1, f_2, g_2$ and apply the linear Marcinkiewicz interpolation theorem to prove the strong-type estimates for $h \in L^s(\mathbb{R}^2)$. The next step is to validate the same result for $h \in L^{\infty}$. One first rewrites the multilinear form associated to $T_{ab}(f_1 \otimes g_1, f_2 \otimes g_2, h)$ as $$\begin{aligned} \label{linear_form_interp} \Lambda(f_1 \otimes g_1, f_2 \otimes g_2, h, \chi_{E'}) := & \langle T_{ab}(f_1 \otimes g_1, f_2 \otimes g_2, h), \chi_{E'}\rangle \nonumber \\ = & \langle T^*_{ab}(f_1 \otimes g_1, f_2 \otimes g_2, \chi_{E'}), h\rangle.\end{aligned}$$ Let $Q_N := [ -N,N]^2$ denote the cube of side length $2N$ centered at the origin in $\mathbb{R}^2$; then (\[linear\_form\_interp\]) can be expressed as $$\begin{aligned} & \displaystyle \lim_{N \rightarrow \infty} \int_{Q_N} T^*_{ab}(f_1 \otimes g_1, f_2 \otimes g_2, \chi_{E'})(x) h(x) dx \nonumber \\ = & \lim_{N \rightarrow \infty}\int T^*_{ab}(f_1 \otimes g_1, f_2 \otimes g_2, \chi_{E'})(x) (h\cdot\chi_{Q_N})(x) dx \nonumber \\ = & \lim_{N \rightarrow \infty}\int T_{ab}(f_1 \otimes g_1, f_2 \otimes g_2, h\cdot\chi_{Q_N})(x) \chi_{E'}(x) dx \nonumber \\ = & \lim_{N \rightarrow \infty}\Lambda(f_1 \otimes g_1, f_2 \otimes g_2, h\cdot\chi_{Q_N}, \chi_{E'}).\end{aligned}$$ Let $\tilde{h}:= \frac{h \chi_{Q_N}}{\|h\|_{\infty}}$, so that $|\tilde{h}| \leq \chi_{Q_N}$ with $|Q_N| = 4N^2$.
One can thus invoke (\[s=inf\]) to conclude that $$\begin{aligned} |\Lambda(f_1 \otimes g_1, f_2 \otimes g_2, h\chi_{Q_N}, \chi_{E'})| =& \|h\|_{\infty} \cdot |\Lambda(f_1 \otimes g_1, f_2 \otimes g_2, \tilde{h}, \chi_{E'})| \nonumber \\ \lesssim & |F_1|^{\frac{1}{p_1}}|G_1|^{\frac{1}{p_2}}|F_2|^{\frac{1}{q_1}}|G_2|^{\frac{1}{q_2}}\|h\|_{\infty}|E|^{\frac{1}{r'}}.\end{aligned}$$ As the bound for the multilinear form is independent of $N$, passing to the limit as $N \rightarrow \infty$ yields $$|\Lambda(f_1 \otimes g_1, f_2 \otimes g_2, h, \chi_{E'})| \lesssim |F_1|^{\frac{1}{p_1}}|G_1|^{\frac{1}{p_2}}|F_2|^{\frac{1}{q_1}}|G_2|^{\frac{1}{q_2}}\|h\|_{\infty}|E|^{\frac{1}{r'}}.$$ Combined with the statement in Theorem \[thm\_weak\], one has that for any $1 < p_1,p_2, q_1,q_2 < \infty$, $1<s \leq \infty$, $0 < r < \infty$ with $\frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2} = \frac{1}{r} - \frac{1}{s}$, $$\label{restricted_weak} \|T_{ab}(f_1 \otimes g_1, f_2 \otimes g_2, h)\|_{r,\infty} \lesssim |F_1|^{\frac{1}{p_1}}|G_1|^{\frac{1}{p_2}}|F_2|^{\frac{1}{q_1}}|G_2|^{\frac{1}{q_2}}\|h\|_{s}$$ for all measurable sets $F_1, F_2 \subseteq \mathbb{R}_x$, $G_1, G_2 \subseteq \mathbb{R}_y$ of positive and finite measure and all measurable functions $|f_i| \leq \chi_{F_i}$, $|g_j| \leq \chi_{G_j}$ for $i, j = 1, 2$.

Tensor-type Marcinkiewicz Interpolation
----------------------------------------

The next and final step is to attain strong-type estimates for $T_{ab}$ from (\[restricted\_weak\]). We first fix $h \in L^{s}$ and define $$T^{h}(f_1 \otimes g_1, f_2 \otimes g_2) := T_{ab}(f_1 \otimes g_1, f_2 \otimes g_2,h).$$ One can then apply the following tensor-type Marcinkiewicz interpolation theorem to each $T^h$, so that Theorem \[main\_theorem\] follows.

\[tensor\_interpolation\] Let $1 < p_1,p_2, q_1, q_2< \infty$ and $0 < t < \infty$ be such that $\frac{1}{p_1} + \frac{1}{q_1} = \frac{1}{p_2} + \frac{1}{q_2} = \frac{1}{t}$. Suppose that a multilinear tensor-type operator $T(f_1 \otimes g_1, f_2 \otimes g_2)$ satisfies the restricted weak-type estimates for all $\tilde{p_1}, \tilde{p_2}, \tilde{q_1}, \tilde{q_2}$ in a neighborhood of $p_1, p_2, q_1, q_2$ respectively, with $\frac{1}{\tilde{p_1}} + \frac{1}{\tilde{q_1}} = \frac{1}{\tilde{p_2}} + \frac{1}{\tilde{q_2}} = \frac{1}{\tilde{t}}$; that is, $$\|T(f_1 \otimes g_1, f_2 \otimes g_2) \|_{\tilde{t},\infty} \lesssim |F_1|^{\frac{1}{\tilde{p_1}}} |G_1|^{\frac{1}{\tilde{p_2}}} |F_2|^{\frac{1}{\tilde{q_1}}} |G_2|^{\frac{1}{\tilde{q_2}}}$$ for any measurable sets $F_1 \subseteq \mathbb{R}_{x}$, $F_2 \subseteq \mathbb{R}_{x}$, $G_1\subseteq \mathbb{R}_y$, $G_2\subseteq \mathbb{R}_y$ of positive and finite measure and any measurable functions $|f_1(x)| \leq \chi_{F_1}(x)$, $|f_2(x)| \leq \chi_{F_2}(x)$, $|g_1(y)| \leq \chi_{G_1}(y)$, $|g_2(y)| \leq \chi_{G_2}(y)$. Then $T$ satisfies the strong-type estimate $$\|T(f_1 \otimes g_1, f_2 \otimes g_2) \|_{t} \lesssim \|f_1\|_{p_1} \|g_1\|_{p_2} \|f_2\|_{q_1} \|g_2\|_{q_2}$$ for any $f_1 \in L^{p_1}(\mathbb{R}_x)$, $f_2 \in L^{q_1}(\mathbb{R}_x)$, $g_1 \in L^{p_2}(\mathbb{R}_y)$ and $g_2 \in L^{q_2}(\mathbb{R}_y)$.

The proof of the theorem resembles the argument for the multilinear Marcinkiewicz interpolation (see [@bm]) with small modifications.
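Although we do not reproduce the proof here, the following schematic computation in a single entry indicates the mechanism (a sketch only, under the normalization $\|f_1\|_{p_1} = 1$; the actual argument treats all four functions and the tensor structure simultaneously). Decomposing $f_1$ into level sets, $$f_1 = \sum_{k \in \mathbb{Z}} f_1 \chi_{\{2^k \leq |f_1| < 2^{k+1}\}}, \qquad F_{1,k} := \{|f_1| \geq 2^k\}, \qquad |F_{1,k}| \leq 2^{-kp_1}$$ by Chebyshev's inequality, each piece is dominated by $2^{k+1}\chi_{F_{1,k}}$, and the restricted weak-type hypothesis applied at an exponent $\tilde{p_1}$ produces factors $$2^{k}|F_{1,k}|^{\frac{1}{\tilde{p_1}}} \leq 2^{k(1-\frac{p_1}{\tilde{p_1}})},$$ which form a convergent geometric series over $k > 0$ when $\tilde{p_1} < p_1$ and over $k \leq 0$ when $\tilde{p_1} > p_1$. This is precisely why the hypothesis is required for all exponents in a neighborhood of $p_1, p_2, q_1, q_2$.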
Appendix II - Reduction to Model Operators
==========================================

Littlewood-Paley Decomposition
------------------------------

### Set Up

Let $\varphi\in \mathcal{S}(\mathbb{R})$ be a Schwartz function with $\text{supp}\, \widehat{\varphi} \subseteq [-2,2]$ and $\widehat{\varphi}(\xi) = 1$ on $[-1,1]$. Let $$\widehat{\psi}(\xi) = \widehat{\varphi}(\xi) - \widehat{\varphi}(2\xi),$$ so that $\text{supp}\, \widehat{\psi} \subseteq [-2,-\frac{1}{2}] \cup [\frac{1}{2}, 2]$. Now for every $k \in \mathbb{Z}$, define $$\widehat{\psi}_{k}(\xi) := \widehat{\psi}(2^{-k}\xi), \qquad \widehat{\varphi}_{k}(\xi) := \widehat{\varphi}(2^{-k}\xi).$$ One important observation is that $$\sum_{k \in \mathbb{Z}} \widehat{\psi}_k(\xi) = 1$$ for every $\xi \neq 0$. We will adopt the notation *lacunary* for $(\psi_k)_k$ and *non-lacunary* for $(\varphi_k)_k$.

### Special Symbols

We will first focus on a special case of the symbols; the general case will be studied as an extension afterwards. Suppose that $$a(\xi_1,\eta_1,\xi_2,\eta_2) = a_1(\xi_1,\xi_2)a_2(\eta_1,\eta_2),$$ $$b(\xi_1,\eta_1,\xi_2,\eta_2,\xi_3,\eta_3) = b_1(\xi_1,\xi_2,\xi_3) b_2(\eta_1,\eta_2,\eta_3),$$ where $$a_1(\xi_1,\xi_2) = \sum_{k_1} \widehat{\phi}_{k_1}(\xi_1) \widehat{\phi}_{k_1}(\xi_2),$$ $$b_1(\xi_1,\xi_2,\xi_3) = \sum_{k_2} \widehat{\phi}_{k_2}(\xi_1) \widehat{\phi}_{k_2}(\xi_2) \widehat{\phi}_{k_2}(\xi_3).$$ At least one of the families $(\phi_{k_1}(\xi_1))_{k_1}$ and $(\phi_{k_1}(\xi_2))_{k_1}$ is lacunary, and at least one of the families $(\phi_{k_2}(\xi_1))_{k_2}$, $(\phi_{k_2}(\xi_2))_{k_2}$ and $(\phi_{k_2}(\xi_3))_{k_2}$ is lacunary. Moreover, $$a_2(\eta_1,\eta_2) = \sum_{j_1} \widehat{\phi}_{j_1}(\eta_1) \widehat{\phi}_{j_1}(\eta_2),$$ $$b_2(\eta_1,\eta_2,\eta_3) = \sum_{j_2} \widehat{\phi}_{j_2}(\eta_1) \widehat{\phi}_{j_2}(\eta_2) \widehat{\phi}_{j_2}(\eta_3),$$ where at least one of the families $(\phi_{j_1}(\eta_1))_{j_1}$ and $(\phi_{j_1}(\eta_2))_{j_1}$ is lacunary and at least one of the families $(\phi_{j_2}(\eta_1))_{j_2}$, $(\phi_{j_2}(\eta_2))_{j_2}$ and $(\phi_{j_2}(\eta_3))_{j_2}$ is lacunary. Then $$\begin{aligned} a_1(\xi_1,\xi_2) b_1(\xi_1,\xi_2,\xi_3) = & \sum_{k_1,k_2} \widehat{\phi}_{k_1}(\xi_1) \widehat{\phi}_{k_1}(\xi_2) \widehat{\phi}_{k_2}(\xi_1) \widehat{\phi}_{k_2}(\xi_2) \widehat{\phi}_{k_2}(\xi_3) \nonumber \\ = & \underbrace{\sum_{k_1 \approx k_2}}_{I^1} + \underbrace{\sum_{k_1 \ll k_2}}_{II^1} + \underbrace{\sum_{k_1 \gg k_2}}_{III^1}.\end{aligned}$$ Case $I^1$ gives rise to the symbol of a paraproduct. More precisely, $$I^1 = \sum_{k} \widehat{\tilde{\phi}}_{k}(\xi_1) \widehat{\tilde{\phi}}_{k}(\xi_2) \widehat{\phi}_{k}(\xi_3),$$ where $\widehat{\tilde{\phi}}_{k}(\xi_1) := \widehat{\phi}_{k_1}(\xi_1) \widehat{\phi}_{k_2}(\xi_1)$ and $\widehat{\tilde{\phi}}_{k}(\xi_2) := \widehat{\phi}_{k_1}(\xi_2) \widehat{\phi}_{k_2}(\xi_2)$ when $k := k_1 \approx k_2$. The above expression can be completed as $$I^1 = \sum_{k} \widehat{\tilde{\phi}}_{k}(\xi_1) \widehat{\tilde{\phi}}_{k}(\xi_2) \widehat{\phi}_{k}(\xi_3) \widehat{\phi}_{k}(\xi_1 + \xi_2 + \xi_3),$$ and at least two of the families $(\tilde{\phi}_{k}(\xi_1))_{k}$, $(\tilde{\phi}_{k}(\xi_2))_{k}$, $(\phi_{k}(\xi_3))_{k}$, $(\phi_{k}(\xi_1 + \xi_2 + \xi_3))_{k}$ are lacunary. Cases $II^1$ and $III^1$ can be treated similarly. In Case $II^1$, the sum is non-degenerate only when $(\phi_{k_2}(\xi_1))_{k_2}$ and $(\phi_{k_2}(\xi_2))_{k_2}$ are non-lacunary: if either family were lacunary, its support $\{|\xi_i| \sim 2^{k_2}\}$ would be disjoint from the support $\{|\xi_i| \lesssim 2^{k_1}\}$ of the corresponding factor of $a_1$ in the regime $k_1 \ll k_2$, and the summand would vanish.
In particular, one has $$II^1 = \sum_{k_1 \ll k_2} \widehat{\phi}_{k_1}(\xi_1) \widehat{\phi}_{k_1}(\xi_2) \widehat{\varphi}_{k_2}(\xi_1) \widehat{\varphi}_{k_2}(\xi_2) \widehat{\psi}_{k_2}(\xi_3).$$ In the case when the symbols are assumed to take the special form, the above expression can be rewritten as $$\sum_{k_1 \ll k_2} \widehat{\phi}_{k_1}(\xi_1) \widehat{\phi}_{k_1}(\xi_2) \widehat{\psi}_{k_2}(\xi_3),$$ which can be "completed" as $$\label{completion} \sum_{k_1 \ll k_2} \widehat{\phi}_{k_1}(\xi_1) \widehat{\phi}_{k_1}(\xi_2)\widehat{\phi}_{k_1}(\xi_1+\xi_2) \widehat{\varphi}_{k_2}(\xi_1+\xi_2) \widehat{\psi}_{k_2}(\xi_3)\widehat{\psi}_{k_2}(\xi_1+\xi_2+\xi_3).$$ The exact same argument can be applied to $a_2(\eta_1,\eta_2)b_2(\eta_1,\eta_2,\eta_3)$, so that the symbol can be decomposed as $$\underbrace{\sum_{j_1 \approx j_2}}_{I^2} + \underbrace{\sum_{j_1 \ll j_2}}_{II^2} + \underbrace{\sum_{j_1 \gg j_2}}_{III^2},$$ where $$I^2 = \sum_{j} \widehat{\tilde{\phi}}_{j}(\eta_1) \widehat{\tilde{\phi}}_{j}(\eta_2) \widehat{\phi}_{j}(\eta_3) \widehat{\phi}_{j}(\eta_1+\eta_2+\eta_3),$$ with at least two of the families $(\tilde{\phi}_{j}(\eta_1))_{j}$, $(\tilde{\phi}_{j}(\eta_2))_{j}$, $(\phi_{j}(\eta_3))_{j}$ and $(\phi_{j}(\eta_1+\eta_2+\eta_3))_j$ lacunary. Cases $II^2$ and $III^2$ have similar expressions, where $$II^2 = \sum_{j_1 \ll j_2} \widehat{\phi}_{j_1}(\eta_1) \widehat{\phi}_{j_1}(\eta_2)\widehat{\phi}_{j_1}(\eta_1+\eta_2) \widehat{\varphi}_{j_2}(\eta_1+\eta_2) \widehat{\psi}_{j_2}(\eta_3)\widehat{\psi}_{j_2}(\eta_1+\eta_2+\eta_3).$$ One can now combine the decompositions and analysis for $a_1,a_2,b_1$ and $b_2$ to study the original operator: $$\begin{aligned} T_{ab}(f_1 \otimes g_1,f_2 \otimes g_2, h) = T_{ab}^{I^1I^2} + T_{ab}^{I^1 II^2} + T_{ab}^{I^1 III^2} + T_{ab}^{II^1 I^2} + T_{ab}^{II^1 II^2} + T_{ab}^{II^1 III^2} + T_{ab}^{III^1 I^2} + T_{ab}^{III^1 II^2} + T_{ab}^{III^1 III^2}.\end{aligned}$$ Because of the symmetry between the frequency variables $(\xi_1,\xi_2,\xi_3)$ and $(\eta_1,\eta_2,\eta_3)$ and the symmetry between the cases of frequency scales $k_1 \ll k_2$ and $k_1 \gg k_2$, $j_1 \ll j_2$ and $j_1 \gg j_2$, it suffices to consider the following operators; the others can be treated by the same argument.

1. $T_{ab}^{I^1I^2}$ is a bi-parameter paraproduct;

2.
$$\begin{aligned} T_{ab}^{II^1 I^2} = & \displaystyle \sum_{\substack{k_1 \ll k_2 \\ j \in \mathbb{Z}}} \int \widehat{\phi}_{k_1}(\xi_1) \widehat{\phi}_{k_1}(\xi_2)\widehat{\phi}_{k_1}(\xi_1+\xi_2) \widehat{\varphi}_{k_2}(\xi_1+\xi_2) \widehat{\psi}_{k_2}(\xi_3)\widehat{\psi}_{k_2}(\xi_1+\xi_2+\xi_3) \nonumber \\ & \quad \quad \quad \widehat{\tilde{\phi}}_{j}(\eta_1) \widehat{\tilde{\phi}}_{j}(\eta_2) \widehat{\phi}_{j}(\eta_3) \widehat{\phi}_{j}(\eta_1+\eta_2+\eta_3) \widehat{f_1}(\xi_1) \widehat{f_2}(\xi_2) \widehat{g_1}(\eta_1) \widehat{g_2}(\eta_2) \widehat{h}(\xi_3,\eta_3)\nonumber \\ & \quad \quad \quad \cdot e^{2\pi i x(\xi_1+\xi_2+\xi_3)} e^{2\pi i y(\eta_1+\eta_2+\eta_3)}d\xi_1 d\xi_2 d\xi_3 d\eta_1 d\eta_2 d\eta_3 \nonumber \\ = & \sum_{\substack{k_1 \ll k_2 \\ j \in \mathbb{Z}}}\bigg(\big(( f_1 * \phi_{k_1}) (f_2 * \phi_{k_1}) * \phi_{k_1}\big) * \varphi_{k_2} \bigg) ( g_1 * \tilde{\phi}_{j}) (g_2 * \tilde{\phi}_{j}) (h * \psi_{k_2}\otimes \phi_{j}) * \psi_{k_2}\otimes \phi_{j}, \nonumber \\\end{aligned}$$ where at least two of the families $(\phi_{k_1})_{k_1}$ are lacunary and at least two of the families $(\phi_{j})_{j}$ are lacunary.

3. $$\begin{aligned} T_{ab}^{II^1 II^2} = & \displaystyle \sum_{\substack{k_1 \ll k_2 \\ j_1 \ll j_2}} \int \widehat{\phi}_{k_1}(\xi_1) \widehat{\phi}_{k_1}(\xi_2)\widehat{\phi}_{k_1}(\xi_1+\xi_2) \widehat{\varphi}_{k_2}(\xi_1+\xi_2) \widehat{\psi}_{k_2}(\xi_3)\widehat{\psi}_{k_2}(\xi_1+\xi_2+\xi_3) \nonumber \\ & \quad \quad \quad \widehat{\phi}_{j_1}(\eta_1) \widehat{\phi}_{j_1}(\eta_2)\widehat{\phi}_{j_1}(\eta_1+\eta_2) \widehat{\varphi}_{j_2}(\eta_1+\eta_2) \widehat{\psi}_{j_2}(\eta_3)\widehat{\psi}_{j_2}(\eta_1+\eta_2+\eta_3) \nonumber \\ & \quad \quad \quad \widehat{f_1}(\xi_1) \widehat{f_2}(\xi_2) \widehat{g_1}(\eta_1) \widehat{g_2}(\eta_2) \widehat{h}(\xi_3,\eta_3) \cdot e^{2\pi i x(\xi_1+\xi_2+\xi_3)} e^{2\pi i y(\eta_1+\eta_2+\eta_3)}d\xi_1 d\xi_2 d\xi_3 d\eta_1 d\eta_2 d\eta_3 \nonumber \\ = & \sum_{\substack{k_1 \ll k_2 \\ j_1 \ll j_2}}\bigg(\big(( f_1 * \phi_{k_1}) (f_2 * \phi_{k_1}) * \phi_{k_1}\big) * \varphi_{k_2} \bigg) \bigg(\big(( g_1 * \phi_{j_1}) (g_2 * \phi_{j_1}) * \phi_{j_1}\big) * \varphi_{j_2} \bigg) \nonumber \\ & \quad \ \ \ \ \cdot (h * \psi_{k_2}\otimes \psi_{j_2}) * \psi_{k_2}\otimes \psi_{j_2},\end{aligned}$$ where at least two of the families $(\phi_{k_1})_{k_1}$ are lacunary and at least two of the families $(\phi_{j_1})_{j_1}$ are lacunary.

### General Symbols

The extension from special symbols to general symbols can be treated as specified in Chapter 2.13 of [@cw]. With abuse of notation, we will proceed with the discussion as in the previous section, keeping in mind that the bump functions no longer necessarily equal $1$ on their supports, which prevents the simple manipulations used before. One notices that $I^1$ generates a bi-parameter paraproduct as before. In Case $II^1$, since $k_1 \ll k_2$, $\widehat{\varphi}_{k_2}(\xi_1)$ and $\widehat{\varphi}_{k_2}(\xi_2)$ behave like $\widehat{\varphi}_{k_2}(\xi_1 + \xi_2)$. One could obtain (\[completion\]) as a result.
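Before turning to the rigorous argument, note the quantitative heuristic behind this replacement (a sketch, using only that $\text{supp}\, \widehat{\phi}_{k_1} \subseteq \{|\xi| \lesssim 2^{k_1}\}$ and $\|(\widehat{\varphi}_{k_2})'\|_{\infty} \lesssim 2^{-k_2}$, both consequences of the dilation structure of the families): on the support of $\widehat{\phi}_{k_1}(\xi_1)\widehat{\phi}_{k_1}(\xi_2)$ one has $|\xi_2| \lesssim 2^{k_1}$, hence by the mean value theorem $$\big|\widehat{\varphi}_{k_2}(\xi_1) - \widehat{\varphi}_{k_2}(\xi_1+\xi_2)\big| \leq \|(\widehat{\varphi}_{k_2})'\|_{\infty}\, |\xi_2| \lesssim 2^{-(k_2-k_1)},$$ a gain that is geometric in $k_2 - k_1$ and that the Taylor expansions below upgrade to arbitrary order.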
To make the argument rigorous, one considers the Taylor expansions $$\widehat{\varphi}_{k_2}(\xi_1) = \widehat{\varphi}_{k_2}(\xi_1 + \xi_2) + \sum_{l_1> 0} \frac{\widehat{\varphi}^{(l_1)}_{k_2}(\xi_1+ \xi_2)}{{l_1}!}(-\xi_2)^{l_1},$$ $$\widehat{\varphi}_{k_2}(\xi_2) = \widehat{\varphi}_{k_2}(\xi_1 + \xi_2) + \sum_{l_2> 0} \frac{\widehat{\varphi}^{(l_2)}_{k_2}(\xi_1+ \xi_2)}{{l_2}!}(-\xi_1)^{l_2}.$$ There is some abuse of notation here: the terms $\widehat{\varphi}_{k_2}(\xi_1+ \xi_2)$ in the two equations do not represent the same function; they correspond to $\widehat{\varphi}_{k_2}(\xi_1)$ and $\widehat{\varphi}_{k_2}(\xi_2)$ respectively, and share the common feature that $(\varphi_{k_2}(\xi_1))_{k_2}$ and $(\varphi_{k_2}(\xi_2))_{k_2}$ are non-lacunary families of bump functions. Let $\widehat{\tilde{\varphi}}_{k_2}(\xi_1+\xi_2)$ denote the product of the two; one can then rewrite $II^1$ as $$\begin{aligned} &\underbrace{\sum_{k_1 \ll k_2} \widehat{\phi}_{k_1}(\xi_1) \widehat{\phi}_{k_1}(\xi_2) \widehat{\tilde{\varphi}}_{k_2}(\xi_1 + \xi_2)\widehat{\psi}_{k_2}(\xi_3)}_{II^1_0} + \nonumber \\ & \underbrace{\sum_{\substack{0 < l_1+l_2 \leq M}}\sum_{k_1 \ll k_2}\widehat{\phi}_{k_1}(\xi_1) \widehat{\phi}_{k_1}(\xi_2) \frac{\widehat{\varphi}_{k_2}^{(l_1)}(\xi_1 + \xi_2)}{{l_1}!} \frac{\widehat{\varphi}_{k_2}^{(l_2)}(\xi_1 + \xi_2)}{{l_2}!} (-\xi_1)^{l_2}(-\xi_2)^{l_1} \widehat{\psi}_{k_2}(\xi_3)}_{II^1_1} + \nonumber \\ &\underbrace{\sum_{\substack{l_1 + l_2 > M }}\sum_{k_1 \ll k_2}\widehat{\phi}_{k_1}(\xi_1) \widehat{\phi}_{k_1}(\xi_2) \frac{\widehat{\varphi}_{k_2}^{(l_1)}(\xi_1 + \xi_2)}{{l_1}!} \frac{\widehat{\varphi}_{k_2}^{(l_2)}(\xi_1 + \xi_2)}{{l_2}!} (-\xi_1)^{l_2}(-\xi_2)^{l_1} \widehat{\psi}_{k_2}(\xi_3) }_{II^1_{\text{rest}}}, \nonumber \\\end{aligned}$$ where $M \gg |\alpha_1|$. One observes that $II^1_0$ can be "completed" to obtain (\[completion\]) as desired.
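Before simplifying $II^1_1$, let us record why such completions are harmless (a heuristic sketch; the auxiliary bumps below are chosen for this purpose and, with the usual abuse of notation, are denoted by the same symbols as the Littlewood-Paley family). On the support of $\widehat{\phi}_{k_1}(\xi_1)\widehat{\phi}_{k_1}(\xi_2)$ one has $|\xi_1+\xi_2| \lesssim 2^{k_1}$, while on the support of $\widehat{\psi}_{k_2}(\xi_3)$ one additionally has $|\xi_1+\xi_2+\xi_3| \sim 2^{k_2}$, since $k_1 \ll k_2$. One may therefore multiply the symbol by a non-lacunary bump adapted to scale $2^{k_1}$ in $\xi_1+\xi_2$ and a lacunary bump adapted to scale $2^{k_2}$ in $\xi_1+\xi_2+\xi_3$, each chosen to be identically $1$ on the relevant region: $$\widehat{\phi}_{k_1}(\xi_1+\xi_2) \equiv 1 \ \text{ on } \{|\xi_1+\xi_2| \lesssim 2^{k_1}\}, \qquad \widehat{\psi}_{k_2}(\xi_1+\xi_2+\xi_3) \equiv 1 \ \text{ on } \{|\xi_1+\xi_2+\xi_3| \sim 2^{k_2}\}.$$ The symbol is unchanged, but the completed form records the frequency localization of the output variable, which is what the later discretization exploits.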
One can simplify $II^1_1$ as $$\begin{aligned} II^1_1 & = \sum_{\substack{0 < l_1 + l_2\leq M}} \sum_{\mu=100}^{\infty} \sum_{k_2 = k_1 + \mu} \widehat{\phi}_{k_1}(\xi_1) \widehat{\phi}_{k_1}(\xi_2) \frac{\widehat{\varphi}_{k_2}^{(l_1)}(\xi_1 + \xi_2)}{{l_1}!} \frac{\widehat{\varphi}_{k_2}^{(l_2)}(\xi_1 + \xi_2)}{{l_2}!} (-\xi_1)^{l_2}(-\xi_2)^{l_1} \widehat{\psi}_{k_2}(\xi_3) \nonumber \\ & = \sum_{\substack{0 < l_1 + l_2 \leq M }} \sum_{\mu=100}^{\infty} \sum_{k_2 = k_1 + \mu} \widehat{\phi}_{k_1}(\xi_1) \widehat{\phi}_{k_1}(\xi_2) 2^{-k_2 l_1}\widehat{\varphi}_{k_2,l_1}(\xi_1 + \xi_2) 2^{-k_2 l_2}\widehat{\varphi}_{k_2,l_2}(\xi_1 + \xi_2) (-\xi_1)^{l_2}(-\xi_2)^{l_1} \widehat{\psi}_{k_2}(\xi_3) \nonumber \\ & \sim \sum_{\substack{0 < l_1+l_2 \leq M }} \sum_{\mu=100}^{\infty} \sum_{k_2 = k_1 + \mu} \widehat{\phi}_{k_1}(\xi_1) \widehat{\phi}_{k_1}(\xi_2) 2^{-k_2 l_1}\widehat{\varphi}_{k_2,l_1}(\xi_1 + \xi_2) 2^{-k_2 l_2}\widehat{\varphi}_{k_2,l_2}(\xi_1 + \xi_2) 2^{k_1l_1}2^{k_1 l_2} \widehat{\psi}_{k_2}(\xi_3) \nonumber \\ & = \sum_{\substack{0 < l_1+l_2 \leq M }} \sum_{\mu=100}^{\infty} 2^{-\mu(l_1+l_2)}\sum_{k_2 = k_1 + \mu} \widehat{\phi}_{k_1}(\xi_1) \widehat{\phi}_{k_1}(\xi_2)\widehat{\varphi}_{k_2,l_1}(\xi_1 + \xi_2) \widehat{\varphi}_{k_2,l_2}(\xi_1 + \xi_2) \widehat{\psi}_{k_2}(\xi_3) \nonumber \\ &= \sum_{\substack{0 < l_1+l_2 \leq M }}\sum_{\mu=100}^{\infty} 2^{-\mu(l_1+l_2)} \underbrace{ \sum_{k_2 = k_1 + \mu} \widehat{\phi}_{k_1}(\xi_1) \widehat{\phi}_{k_1}(\xi_2)\widehat{\tilde{\varphi}}_{k_2,l_1,l_2}(\xi_1 + \xi_2) \widehat{\psi}_{k_2}(\xi_3)}_{II_{1,\mu}^{1}}, \nonumber \\\end{aligned}$$ where $\tilde{\varphi}_{k_2,l_1,l_2}(\xi_1 + \xi_2)$ denotes an $L^{\infty}$-normalized non-lacunary bump function with Fourier support at scale $2^{k_2}$. One notices that $II_{1,\mu}^{1}$ has a form similar to (\[completion\]) and can be rewritten as $$\sum_{k_2 = k_1 + \mu} \widehat{\phi}_{k_1}(\xi_1) \widehat{\phi}_{k_1}(\xi_2)\widehat{\phi}_{k_1}(\xi_1+\xi_2) \widehat{\tilde{\varphi}}_{k_2,l_1,l_2}(\xi_1+\xi_2) \widehat{\psi}_{k_2}(\xi_3)\widehat{\psi}_{k_2}(\xi_1+\xi_2+\xi_3).$$ Meanwhile, $$\begin{aligned} II^1_{\text{rest}} = & \sum_{\substack{l_1+l_2 > M }}\sum_{\mu=100}^{\infty} 2^{-\mu(l_1+l_2)} \sum_{k_2 = k_1 + \mu} \widehat{\phi}_{k_1}(\xi_1) \widehat{\phi}_{k_1}(\xi_2)\widehat{\tilde{\varphi}}_{k_2,l_1,l_2}(\xi_1 + \xi_2) \widehat{\psi}_{k_2}(\xi_3)\nonumber \\ \leq &\sum_{\mu=100}^{\infty} 2^{-\mu M} \underbrace{\sum_{k_2 = k_1 + \mu} \sum_{\substack{l_1 +l_2 > M }}\widehat{\phi}_{k_1}(\xi_1) \widehat{\phi}_{k_1}(\xi_2)\widehat{\tilde{\varphi}}_{k_2,l_1,l_2}(\xi_1 + \xi_2) \widehat{\psi}_{k_2}(\xi_3)}_{II^{1}_{\text{rest},\mu}},\nonumber \\\end{aligned}$$ where $m^1_{\mu} := II^{1}_{\text{rest},\mu}$ is a Coifman-Meyer symbol satisfying $$\left|\partial^{\alpha_1} m^1_{\mu}\right| \lesssim 2^{\mu |\alpha_1|}\frac{1}{|(\xi_1,\xi_2)|^{|\alpha_1|}}$$ for sufficiently many multi-indices $\alpha_1$. The same procedure can be applied to study $a_2(\eta_1,\eta_2)b_2(\eta_1,\eta_2,\eta_3)$.
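For orientation, the bookkeeping above can be summarized in one line (a schematic restatement of the three terms just constructed, nothing new): $$II^1 = II^1_0 + \sum_{0 < l_1+l_2 \leq M}\ \sum_{\mu \geq 100} 2^{-\mu(l_1+l_2)}\, II^{1}_{1,\mu} + II^1_{\text{rest}}, \qquad II^1_{\text{rest}} \leq \sum_{\mu \geq 100} 2^{-\mu M}\, II^{1}_{\text{rest},\mu},$$ where $II^1_0$ and each $II^{1}_{1,\mu}$ are flag-type symbols admitting the completion (\[completion\]), while each $II^{1}_{\text{rest},\mu}$ is a Coifman-Meyer symbol whose bound grows at most like $2^{|\alpha_1|\mu}$ and is absorbed by the decay factor $2^{-\mu M}$ since $M \gg |\alpha_1|$. These are exactly the pieces that reappear operator by operator in the list below.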
One can now combine all the arguments above to decompose and study $$T_{ab} = T_{ab}^{I^1I^2} + T_{ab}^{I^1 II^2} + T_{ab}^{I^1 III^2} + T_{ab}^{II^1 I^2} + T_{ab}^{II^1 II^2} + T_{ab}^{II^1 III^2} + T_{ab}^{III^1 I^2} + T_{ab}^{III^1 II^2} + T_{ab}^{III^1 III^2},$$ where each operator takes the form $$\displaystyle \int_{\mathbb{R}^6} \text{symbol} \cdot \widehat{f_1}(\xi_1) \widehat{f_2}(\xi_2) \widehat{g_1}(\eta_1) \widehat{g_2}(\eta_2) \widehat{h}(\xi_3,\eta_3)e^{2\pi i x(\xi_1+\xi_2+\xi_3)} e^{2\pi i y(\eta_1+\eta_2+\eta_3)}d\xi_1 d\xi_2 d\xi_3 d\eta_1 d\eta_2 d\eta_3,$$ with the symbol for each operator specified as follows.

1. $T_{ab}^{I^1I^2}$ is a bi-parameter paraproduct as in the special case.

2. $T_{ab}^{II^1 I^2}$: $(II^{1}_0 + II^{1}_{1} + II^{1}_{\text{rest}}) \otimes I^2$, where the operator associated with each symbol can be written as (i) $$T^{II_0^1 I^2} := \sum_{\substack{k_1 \ll k_2 \\ j \in \mathbb{Z}}}\bigg(\big(( f_1 * \phi_{k_1}) (f_2 * \phi_{k_1}) * \phi_{k_1}\big) * \varphi_{k_2} \bigg) ( g_1 * \tilde{\phi}_{j}) (g_2 * \tilde{\phi}_{j}) (h * \psi_{k_2}\otimes \phi_{j}) * \psi_{k_2}\otimes \phi_{j};$$ (ii) $$T^{II^1_1 I^2} := \sum_{\substack{0 < l_1+l_2 \leq M }}\sum_{\mu= 100}^{\infty} 2^{-\mu(l_1+l_2)} T^{II^1_{1,\mu}I^2}$$ with $$T^{II^1_{1,\mu}I^2}:= \sum_{\substack{k_2 = k_1 + \mu \\ j \in \mathbb{Z}}}\bigg(\big(( f_1 * \phi_{k_1}) (f_2 * \phi_{k_1}) * \phi_{k_1}\big) * \tilde{\varphi}_{k_2,l_1,l_2} \bigg)( g_1 * \tilde{\phi}_{j}) (g_2 * \tilde{\phi}_{j}) (h * \psi_{k_2}\otimes \phi_{j}) * \psi_{k_2}\otimes \phi_{j};$$ (iii) $$T^{II^1_{\text{rest}}I^2} := \sum_{\mu= 100}^{\infty}2^{-\mu M} T^{II^1_{\text{rest},\mu}I^2}.$$ One notices that $II^1_{\text{rest},\mu}$ and $I^2$ are Coifman-Meyer symbols. $T^{II^1_{\text{rest},\mu}I^2}$ is therefore a bi-parameter paraproduct and one can apply the Coifman-Meyer theorem on paraproducts to derive a bound of type $O(2^{|\alpha_1|\mu})$, which suffices due to the decay factor $2^{-\mu M}$.

3. $T^{II^1 II^2}$: $(II^1_0 + II^1_1 + II^1_{\text{rest}}) \otimes (II^2_0 + II^2_1 + II^2_{\text{rest}})$, where the operator associated with each symbol can be written as (i) $$T^{II_0^1 II_0^2} := \sum_{\substack{k_1 \ll k_2 \\ j_1 \ll j_2}}\bigg(\big(( f_1 * \phi_{k_1}) (f_2 * \phi_{k_1}) * \phi_{k_1}\big) * \varphi_{k_2} \bigg) \bigg(\big(( g_1 * \phi_{j_1}) (g_2 * \phi_{j_1}) * \phi_{j_1}\big) * \varphi_{j_2} \bigg) (h * \psi_{k_2}\otimes \psi_{j_2}) * \psi_{k_2}\otimes \psi_{j_2};$$ (ii) $$T^{II^1_1 II_0^2} := \sum_{\substack{0 < l_1+l_2 \leq M }}\sum_{\mu= 100}^{\infty} 2^{-\mu(l_1+l_2)} T^{II^1_{1,\mu}II^2_0}$$ with $$T^{II^1_{1,\mu}II^2_0}:= \sum_{\substack{k_2 = k_1 + \mu \\ j_1 \ll j_2}}\bigg(\big(( f_1 * \phi_{k_1}) (f_2 * \phi_{k_1}) * \phi_{k_1}\big) * \tilde{\varphi}_{k_2,l_1,l_2} \bigg)\bigg(\big(( g_1 * \phi_{j_1}) (g_2 * \phi_{j_1}) * \phi_{j_1}\big) * \varphi_{j_2} \bigg) (h * \psi_{k_2}\otimes \psi_{j_2}) * \psi_{k_2}\otimes \psi_{j_2};$$ (iii) $$T^{II^1_{\text{rest}}II^2_0}:= \sum_{\mu= 100}^{\infty}2^{-\mu M} T^{II^1_{\text{rest},\mu}II^2_0},$$ where $T^{II^1_{\text{rest},\mu}II^2_0}$ is a multiplier operator with the symbol $$m^1_{\mu}\otimes II^2_0,$$ which generates a model similar to $T^{I^1 II^2_0}$ or, by symmetry, $T^{II^1_0 I^2}$.
(iv) $$T^{II^1_1 II^2_1}:= \sum_{\substack{0 < l_1+l_2 \leq M \\ 0 < l_1' + l_2' \leq M'}}\sum_{\mu,\mu'= 100}^{\infty} 2^{-\mu(l_1+l_2)} 2^{-\mu'(l_1'+l_2')}T^{II^1_{1,\mu}II^2_{1,\mu'}}$$ with $$T^{II^1_{1,\mu}II^2_{1,\mu'}}:= \sum_{\substack{k_2 = k_1 + \mu \\ j_2 = j_1 + \mu'}}\bigg(\big(( f_1 * \phi_{k_1}) (f_2 * \phi_{k_1}) * \phi_{k_1}\big) * \tilde{\varphi}_{k_2,l_1,l_2} \bigg)\bigg(\big(( g_1 * \phi_{j_1}) (g_2 * \phi_{j_1}) * \phi_{j_1}\big) * \tilde{\varphi}_{j_2,l_1',l_2'} \bigg) (h * \psi_{k_2}\otimes \psi_{j_2}) * \psi_{k_2}\otimes \psi_{j_2};$$ (v) $$T^{II^1_{\text{rest}} II^2_1} := \sum_{\mu= 100}^{\infty}2^{-\mu M} T^{II^1_{\text{rest},\mu}II^2_1},$$ where $T^{II^1_{\text{rest},\mu}II^2_1}$ has the symbol $$m^1_{\mu}\otimes II^2_1,$$ which generates a model similar to $T^{I^1 II^2_1}$ or $T^{II^1_1 I^2}$; (vi) $$T^{II^1_{\text{rest}}II^2_{\text{rest}}} := \sum_{\mu,\mu'= 100}^{\infty}2^{-\mu M}2^{-\mu'M'} T^{II^1_{\text{rest},\mu}II^2_{\text{rest},\mu'}},$$ where $T^{II^1_{\text{rest},\mu}II^2_{\text{rest},\mu'}}$ is associated with the symbol $$m^1_{\mu}\otimes m^2_{\mu'},$$ which generates a model similar to $T^{II^1_{\text{rest},\mu}I^2}$, $T^{I^1 II^2_0}$ or $T^{II^1_0 I^2}$.

4. $T^{III^1 II^2}$, $T^{III^1 I^2}$ and $T^{III^1 III^2}$ can be studied by exactly the same reasoning as for $T^{II^1II^2}$, $T^{II^1 I^2}$ and $T^{II^1 II^2}$, by the symmetry between the symbols $II$ and $III$.

Discretization
--------------

With the discretization procedure specified in Chapter 2.2 of [@cw], one can reduce the above operators to the following discrete model operators listed in Theorem \[thm\_weak\]:

---------------------------------- -------------------- -------------------------------------------------------
$T^{II^1_0 I^2}$ $ \longrightarrow$ $\Pi_{\text{flag}^0 \otimes \text{paraproduct}}$
$T^{II^1_{1,\mu} I^2}$ $ \longrightarrow$ $ \Pi_{\text{flag}^{\mu} \otimes \text{paraproduct}}$
$T^{II^1_0 II^2_0}$ $ \longrightarrow$ $ \Pi_{\text{flag}^0 \otimes \text{flag}^0} $
$T^{II^1_0 II^2_{1,\mu'}}$ $ \longrightarrow$ $\Pi_{\text{flag}^0 \otimes \text{flag}^{\mu'}} $
$T^{II^1_{1,\mu} II^2_{1,\mu'}}$ $ \longrightarrow$ $\Pi_{\text{flag}^{\mu} \otimes \text{flag}^{\mu'}} $
---------------------------------- -------------------- -------------------------------------------------------

Benea, C. and Muscalu, C. *Quasi-Banach valued inequalities via the helicoidal method*, J. Funct. Anal. 273, no. 4, 1295-1353, \[2017\].
Benea, C. and Muscalu, C. *Mixed norm estimates via the helicoidal method*, Preprint, \[2020\].
Bennett, J., Bez, N., Buschenhenke, S. and Flock, T. C. *The nonlinear Brascamp-Lieb inequality for simple data*, Preprint, arXiv:1801.05214.
Bennett, J., Bez, N., Cowling, M. G. and Flock, T. C. *Behaviour of the Brascamp-Lieb constant*, Bull. Lond. Math. Soc. 49, no. 3, 512-518, \[2017\].
Bennett, J., Carbery, A., Christ, M. and Tao, T. *The Brascamp-Lieb inequalities: finiteness, structure and extremals*, Geom. Funct. Anal. 17, 1343-1415, \[2007\].
Brascamp, H. J. and Lieb, E. H. *Best constants in Young's inequality, its converse, and its generalization to more than three functions*, Adv. Math. 20, 151-173, \[1976\].
Carbery, A., Hänninen, T. S. and Valdimarsson, S. *Multilinear duality and factorisation for Brascamp-Lieb-type inequalities with applications*, arXiv:1809.02449.
Chang, S.-Y. A. and Fefferman, R. *Some recent developments in Fourier analysis and $H^p$ theory on product domains*, Bull. Amer. Math. Soc., vol. 12, 1-43, \[1985\].
Coifman, R. R. and Meyer, Y. *Opérateurs multilinéaires*, Hermann, Paris, \[1991\].
Durcik, P. and Thiele, C. *Singular Brascamp-Lieb inequalities with cubical structure*, arXiv:1809.08688.
Durcik, P. and Thiele, C. *Singular Brascamp-Lieb: a survey*, arXiv:1904.08844.
Fefferman, C. and Stein, E. *Some maximal inequalities*, Amer. J. Math., vol. 93, 107-115, \[1971\].
Germain, P., Masmoudi, N. and Shatah, J. *Global Solutions for the Gravity Water Waves Equation in Dimension 3*, \[2009\].
Kato, T. and Ponce, G. *Commutator estimates and the Euler and Navier-Stokes equations*, Comm. Pure Appl. Math., 41, 891-907, \[1988\].
Kenig, C. *On the local and global well-posedness theory for the KP-I equation*, Ann. Inst. H. Poincaré Anal. Non Linéaire 21, 827-838, \[2004\].
Lacey, M. and Thiele, C. *On Calderón's conjecture*, Ann. of Math. (2), 149(2), 475-496, \[1999\].
Lu, G., Pipher, J. and Zhang, L. *Bi-parameter trilinear Fourier multipliers and pseudo-differential operators with flag symbols*, arXiv:1901.00036.
Miyachi, A. and Tomita, N. *Estimates for trilinear flag paraproducts on $L^{\infty}$ and Hardy spaces*, Math. Z. 282, 577-613, \[2016\].
Muscalu, C. *Paraproducts with flag singularities I. A case study*, Rev. Mat. Iberoamericana, vol. 23, 705-742, \[2007\].
Muscalu, C., Pipher, J., Tao, T. and Thiele, C. *Bi-parameter paraproducts*, Acta Math., vol. 193, 269-296, \[2004\].
Muscalu, C., Pipher, J., Tao, T. and Thiele, C. *Multi-parameter paraproducts*, Rev. Mat. Iberoamericana, 963-976, \[2006\].
Muscalu, C. and Schlag, W. *Classical and Multilinear Harmonic Analysis*, \[2013\].
Muscalu, C., Tao, T. and Thiele, C. *Multi-linear operators given by singular multipliers*, J. Amer. Math. Soc. 15, 469-496, \[2002\].
Muscalu, C., Tao, T. and Thiele, C. *$L^p$ estimates for the biest I: The Walsh case*, Math. Ann. 329, 401-426, \[2004\].
Muscalu, C., Tao, T. and Thiele, C. *$L^p$ estimates for the biest II: The Fourier case*, Math. Ann. 329, 427-461, \[2004\].

[^1]: Basic inequalities refer to the inequalities obtained for Dirac kernels.

[^2]: Many cases of arbitrary complexity follow from the mixed-norm estimates for vector-valued inequalities in the paper by Benea and the first author [@bm2].

[^3]: Its boundedness is at present an open question, raised by the first author of this article on several occasions.

[^4]: The multilinear form $\Lambda$ associated to an $n$-linear operator $T(f_1, \ldots, f_n)$ is defined as $\Lambda(f_1, \ldots, f_n, f_{n+1}) := \langle T(f_1,\ldots, f_n), f_{n+1}\rangle$.
"This changes everything." "[Fishlegs] Well, by my calculations, Hiccup, for the Dragon Blade to ignite in those kind of wind conditions, it would require" "An additional half jar of Monstrous Nightmare gel." "Precisely!" "Are you thinking what I'm thinking, Fishlegs?" "[both] Build a new handle to hold twice as much gel." "Of course." "Now, what gauge cylinder will we use?" "My brain says 10, but my heart says 13..." "That's a language we'll never understand." "[growls in approval] [chuckles] Yeah." "You know, we too have a language that you will never understand." "This is news?" "No, seriously." "We created our own secret twin language." "Yeah, just in case we ever got captured and needed to communicate in code." "Okay, I know I'm gonna regret asking this, but what exactly is this secret language of yours?" "It's complex, so try and follow along with that pretty little head of yours." "Ello-hay, [snorts] Uffnut-Ray. [snorts]" "Ello-hay, [snorts] Uffnut-Tay. [snorts]" "We call it Boar Latin!" "[Ruffnut] Yeah." "Genius, right?" "[snorts]" "Oh, wait, uh..." "En-ius-jay, Ight-ray?" "[snorts]" "Can you believe it only took us 11 years to come up with that?" "I mean, 15 with the research and development." "What's with all the snorting?" "Uh, hello?" "It's called "Boar Latin." Uh, boar. [snorts]" "Yeah, heard of it?" "Can't have the Latin without the boar." "Then it'd just be Latin." "Duh." "Everyone speaks Latin these days." "[chuckles] Ummy-Day. [snorts]" "Erk-jay. [snorts]" "Hey, just for the record, I understand everything you guys are saying." "That's ingenious, Hiccup." "I wouldn't have thought of it if you hadn't suggested Changewing acid." "Uh, I come back from patrol for this?" "[imitating Fishlegs] "Oh, you're so smart, Hiccup."" "[imitating Hiccup] "Oh, no, actually you're the smartest, Fishlegs."" "[imitating Fishlegs] "Oh, you're so pretty."" "[imitating Hiccup] "Oh, actually you're so pretty." "We're both pretty." "Let's hug."" "Ah!" "Ook-Lay. [snorts]" "Uh, what was that?" "Boar Latin." "I'll explain later." "[Terrible Terror growls]" "Terror Mail." "We'll continue this discussion later, Fishlegs." "Huh?" "What is it?" "Urgent message from the Defenders of the Wing." "Mala needs help." "[whizzing]" "All right, gang." "Fan out and keep your eyes peeled." "We have no idea what we're flying into." "[woman 1] They're here!" "[man 1] Welcome!" "[crowd cheering]" "Look, there they are!" "[man 2] Welcome, Riders!" "[man 3] I see them now!" "[crowd continues cheering]" "Hiccup Haddock, thank the ancients, you received our message." "Mala." "Throk." "What happened?" "Is it Hunters?" "No." "Something much worse." "[laughs] This would appear to be an egg-mergency." "Or some might call it "emergency egg-may." [snorts]" "Has something happened to Tuffnut?" "Nope." "This is pretty much a daily thing." "Is that" "An Eruptodon egg." "Unlike other dragons, Eruptodons only produce a single egg in their lifetime." "Our tribe has been waiting generations for our Great Protector to have an heir." "And now, it has finally happened." "So, this should be a time for celebration, shouldn't it?" "[all sighing]" "If it were that simple." "An Eruptodon egg can only hatch under very special conditions." "The dragon is born of flame and its egg requires the life-giving lava of its ancestral nesting site, a cavern deep inside the Grand Volcano." "So, what's it doin' out here?" "[rumbling] [growls wearily]" "Easy girl." "It's all right." 
"The birth weakened our already aged Great Protector, so much so that she cannot fly to the sacred site." "We were able to spare the egg, but without proper nesting, it will not hatch." "Our only option is to transport the egg ourselves before the lava rises and floods the cavern." "Whoa!" "[screams] -[Mala screams] [laughs] [sniffs]" "The future of our entire civilization rests on this egg's survival." "If it fails to hatch, the Great Protector will not have an heir." "And if there is no Eruptodon to eat the lava from the volcano, the island and our tribe is doomed." "Mala, we will deliver that egg into the volcano." "[laughs] When you say "we will," you actually mean "you will," right?" "Okay, great." "Check you later." "By the looks of the lava, we have a small window, but it should be enough time to get in and get out." "Exactly." "You thinking what I'm thinking, Fishlegs?" "[retches] Here it comes." "Another Hicclegs lovefest." "I'll fly the egg down." "I'll fly the egg down." "[all gasp]" "Uh, Hiccup, I think you mean I should fly the egg down, because Gronckles are accustomed to lava." "Well, that's true, yeah, but a Night Fury has the distinct speed advantage, don't you think?" "Gentlemen." "Time is waning." "Hiccup and the Night Fury will fly the egg to the cavern." "Right." "Yes." "Okay." "What just happened?" "I have no idea, but Hicclegs just got very interesting." "Our armor is coated with a layer of heat-resistant Eruptodon saliva." "Ugh." "It should help protect against the effects of the volcano." "And, speaking of heat..." "[Toothless grunting]" "Gronckle Iron tail fin." "[clinking] [whistles]" "Uh, Fishlegs, look" "The sacred cavern is located on the south side of the volcano's interior." "May the spirits of our fallen warriors guide your wings, Hiccup Haddock." "[grunts]" "Okay, uh, I guess all I need now is the egg." "The egg is my responsibility." "I'm going with you." "My Queen, let me go instead." "No, Throk." "A Queen must always be willing to risk her life for her people." "[growls]" "[lava bubbling]" "[grunts] Come on." "Keep going, bud." "[Mala grunts] [growls]" "Down there." "I see it." "Toothless, wing right." "[Toothless grunting]" "[Hiccup] Whoa!" "What is happening, Hiccup?" "If that tail gives out, the three of us and the egg are done for." "[Hiccup grunting] -[Toothless growling]" "[Mala groaning]" "My queen." "Hiccup, what happened?" "Quickly." "Replace the Night Fury's tail." "You must go back." "I don't have another one." "I could try to make one out of" "No." "There's no time." "Oh, no." "I feared this would happen." "The egg has spent too much time outside the nesting site." "It requires the life-giving lava." "And if it isn't delivered soon, it will become hard as stone." "Then..." "Then, what?" "It will never hatch." "And it will die." "[crowd clamoring]" "There is no need for panic." "We must stay calm." "[grunting in pain]" "These herbs will help you regain your strength, Great Protector." "Looks like ol' Throk might be a little "cray-cray." [laughs, snorts]" "Oh!" "Wow!" "I had no idea" "you spoke our language." "Rother-bay. [snorts] [both laughing]" "What?" "[grunting in pain] [crowd continues clamoring]" "Fear not." "The egg will be delivered into the Grand Volcano as promised." "Hey, uh, Fishlegs, look, [stammers] about earlier" "Yeah, earlier." "Right." "Weird." "So weird." "[chuckles]" "Well, I thought we could put our heads together again and see if we can come up with a solution." "She really needs us." "Yes." "I agree." "Great." "Great." 
"Yeah, well, I've been giving it a lot of thought." "Me, too." "Perfect." "Then you must be thinking what I'm thinking, right?" "Scale down the cliff." "Submerge the egg in a lava bath." "[sighs in exasperation]" "Lava bath?" "Oh, come on." "We could never maintain its temperature." "Lava cools." "Scale down the cliff?" "Were you being serious about that?" "Well, what do you think I'm" "Guys, remember earlier when you both agreed Gronckles were good in lava conditions?" "Maybe Fishlegs and Meatlug should give it a try?" "You know, Eruptodons, Gronckles, both Boulder class." "Hey, we tried it your way." "Why not just" "Excellent idea." "We leave at once." "Uh..." "Come, Hiccup." "We don't know what we'll find and may need your help." "[growling]" "[growls excitedly]" "We are nearing the sacred nesting site." "Okay, girl." "Take us home." "Can't this Gronckle fly any faster?" "[grunts] -[exclaims]" "Fishlegs, why are you stopping?" "[lava explodes]" "Impressive." "Okay, girl, let's go." "[screeching]" "We must transport the egg to the end of these caverns before the lava floods in." "Quickly." "[screeching]" "Rother-bay [snorts] Notlout-say [snorts] e-way [snorts] elcome-way [snorts] ou-yay. [snorts]" "For the millionth time, you two," "I don't understand anything you've been saying for the last three hours!" "Shh." "It's okay, Boar Brother Snotlout." "Don't." "No need to hide your proud roots." "Please stop." "You're among your Boar Latin family now." "[screams]" "Or should we say, "amily-fay." [snorts]" "Okay, that lava is getting a little too close to the entrance." "Not to worry, Astrid Hofferson." "Queen Mala knows this volcano better than anyone on the island." "[grunts]" "They've been down there for a long time." "Yes." "The lava is rising quickly." "They should've returned by now." "That settles it." "We're going in." "[growls] [lava explodes]" "I agree." "But how?" "Those explosions are too dangerous and getting worse." "If we just had a way to get down safely without dragons." "Actually, we might have a "lan-pay." -[snorts]" "[screams] -[laughs]" "We'll make it." "Uh, hey, you guys, what are those?" "Whoa!" "Hmm." "There was a time when the tribal elders would climb down into these caverns and sacrifice themselves for the good of the tribe." "Right, right, right, right." "But, what are these figures?" "I have never seen those before." "Uh, the egg?" "We should keep moving." "[distant roaring]" "Um, what was that?" "Uh, I'm not sure." "[screeching]" "Please tell me those aren't bats." "Yeah, they're definitely not bats." "[screeching getting louder]" "[Fishlegs screams]" "Guard the egg!" "[Meatlug grunts]" "This must have been what the carving was trying to warn about." "But there's too many." "They're as relentless as Speed Stingers, so we should probably... [clangs]" "Not exactly what I had in mind." "I'll direct them away while you and Meatlug get Mala and the egg to safety." "Hiccup, these dragons eat fire." "Fishlegs, that is abundantly clear." "[Mala] No!" "Mala!" "[grunting]" "Stay away." "They outnumber her three to one." "Then we need to even the odds." "[screeching]" "Ah, now what?" "I don't know." "I thought you had the idea." "[Meatlug grunting] [screeching in panic] [growls]" "Hiccup Haddock." "No!" "[screeching]" "Fishlegs Ingerman." "We'll never make it to her in time." "[growling]" "Meatlug, roll!" "Meatlug, fly!" "[growls quizzically, groans] [screams] Stop!" "No!" "[growling]" "The Diving Bell was your big "lan-pay"?" 
"You flew all the way to Berk for a big hunk of metal to dangle over fire?" "Why not bring back a frying pan?" "How "umb-day" are they?" "[laughs, snorts]" "It's uncanny." "There's no trace of an accent." "To talk that eloquently, he must be at least a quarter boar." "Maybe two-fifths." "He is hairy in strange places." "Hey!" "Actually, I believe this could work." "If we were to invert it, and then coat it in Eruptodon saliva." "It won't last long but should be enough to reach the cavern, find them, and raise them to safety." "I'll get to work." "Oh, great, now what?" "Which direction?" "Left." "Right." "Oh, Gods." "What is going on with us?" "We're just not thinking." "We need to clear our heads." "Right, right." "Good idea." "Right." "Left?" "[gasps]" "Ugh!" "Oh, maybe we're cursed." "No." "There is a perfectly logical explanation for what's happening." "Of course, I can't think of it right now." "Mala, what direction do you think?" "Hiccup, where's Mala?" "Mala!" "She snuck off down those corridors." "Mala!" "Mala!" "[grunts annoyingly]" "Okay, Barf and Belch, take us in." "[Barf and Belch grunting]" "[lava bubbling]" "Grace of the ancients." "The Eruptodon saliva worked." "Yes." "But we need to move faster." "Guys, let's pick up the "ace-pay." [snorts]" "[laughs] They made it." "Great." "Time for the big swing." "[metal creaking]" "[Astrid screams] -[splashes]" "[Astrid grunting]" "[Toothless growling]" "[chomps]" "Toothless, what are you doing?" "[screams] Toothless!" "Pull up, you crazy Night Fury." "[gasps] Oh, that's not good." "[growls in pain]" "Okay, you're the "idea dragon." Now what?" "Mala!" "Mala!" "Mala!" "Mala!" "I must have offended the gods, Hiccup." "That's why we're being punished." "I should've never taken Odin's name in vain." "Never!" "Oh, come on, Fishlegs." "That has nothing to do with this." "There, you see, our fortunes are finally changing for the better." "Leave me." "This is my duty." "My people." "Mala, you don't know what they're capable of." "[screeching] [all scream]" "Unhand me." "I command you." "Sacrificing yourself won't do anyone any good." "Hiccup, look." "[screeching]" "Are they frustrated at not being able to crack the shell, or is it something else?" "Not sure, but it doesn't seem predatory." "[rumbling]" "Wait, Fishlegs, are you thinking what I'm thinking?" "You know, I think I actually am!" "There is no more time for indecision." "Mala..." "[screeching]" "Oh, for the love of Odi" "Seriously?" "Sorry." "I really gotta work on that." "[bubbling] [explosion]" "[groans] Th..." "Th..." "Throk." "So tired." "[groans]" "Don't close your eyes, Astrid." "We must stay awake for Hiccup Haddock and Queen Mala." "Hi..." "Hi..." "Hiccup." "[sizzling] [screeching]" "That is the Egg of the Great Protector." "I command you." "Return it." "[screeching] [grunting with effort] [screeching furiously] [rumbling]" "Mala, give us the egg." "Absolutely not." "We have a plan." "Do you?" "We do." "And I think you should hear us out on this one." "Those dragons won't let you pass, but we have a way to get the egg to the nesting place." "Trust us, Mala." "[rumbling] [screeching]" "Hiccup, what are you doing?" "They don't want to harm the egg." "They're not predators, okay?" "They're here to help Eruptodon eggs reach their sacred nesting place." "Yeah." "Those cave drawings weren't a warning from your ancestors." "They were historical records, instructions." "[screeching]" "Their attacks were to keep us humans from damaging the egg." "[rumbling loudly]" "We have to go." 
"Now." "[growling] [screeching]" "Ooh." "This is gonna be close." "Give us all you got!" "[Hiccup] Come on." "Yes!" "Hey!" "Down there!" "Look behind you!" "Hiccup, there in the lava!" "Let's go, Fishlegs." "Right there with you." "[grunts] [grunts]" "Come on, Throk." "[grunts] Go!" "That Gronckle clearly cannot hold us all." "My mission was to get my queen to safety." "By the ancients." "By the ancients." "[roars]" "[Mala] Yes!" "[Fishlegs] [laughs]" "[grunting affectionately]" "The egg is in good hands, Mala." "Exactly what I was gonna say." "[both laughing]" "It's nice to see things are back to normal." "Whatevs." "I sort of liked the new Hicclegs." "The other kind is "otally-tay" [snorts] "oring-bay." [snorts]" "A true master linguist." "[growls in approval] [chuckles] You said it "Ookfang-hay." [snorts]" "[screeches]" "[screeching excitedly]"
1. Introduction {#s0005} =============== The success of coral reefs in oligotrophic environments is owed to the symbiotic association of the habitat-forming scleractinian corals with photosymbionts from the genus *Symbiodinium* (zooxanthellae). These algal symbionts enable the coral host to access the pool of dissolved inorganic nitrogen and phosphorus in the water column in addition to the nutrient uptake by heterotrophic feeding ([@bb0040], [@bb0065], [@bb0210], [@bb0130], [@bb0275], [@bb0070], [@bb0115], [@bb0225]). Moreover, the zooxanthellae recycle ammonium excreted as metabolic waste product by the host, thereby efficiently retaining nitrogen within the holobiont ([@bb0210], [@bb0235], [@bb0295]). The nutrient limitation experienced by the zooxanthellae *in hospite* in oligotrophic conditions results in a skewed chemical balance of the cellular nitrogen and phosphorus content relative to the available carbon. As a result, photosynthetic carbon fixation can be uncoupled from cellular growth, facilitating the translocation of a large proportion of photosynthates to the coral host ([@bb0205], [@bb0215], [@bb0085], [@bb0075]). Reefs and the provision of their valuable ecosystem services are globally threatened by climate change and a range of anthropogenic pressures ([@bb0125], [@bb0190], [@bb0260], [@bb0150], [@bb0165], [@bb0010], [@bb0155], [@bb0055], [@bb0185]). In this context, it has become increasingly clear that the nutrient environment plays a defining role in determining coral reef resilience ([@bb0055], [@bb0080], [@bb0270], [@bb0025], [@bb0110], [@bb0015]). The ratio of dissolved inorganic nitrogen to phosphorus in the marine environment can be interpreted as an indicator of whether photosynthetic primary production is limited by the availability of nitrogen or phosphorus. In coral reef waters, N:P ratios were found in an approximate range from 4.3:1 to 7.2:1 ([@bb0265], [@bb0045], [@bb0105]) which is lower than the canonical Redfield ratio of 16:1, considered optimal to sustain phytoplankton growth ([@bb0240]). Consequently, many processes in coral reefs tend to be nitrogen limited ([@bb0110]). Natural nutrient levels in coral reef ecosystems are impacted by the rising anthropogenic nutrient input into the oceans, especially into coastal waters, via the atmospheric deposition of combustion products, agricultural activities, erosion and sewage discharge ([@bb0080], [@bb0025], [@bb0055]). Since a number of these sources of nutrient enrichment can be influenced at the local scale ([@bb0020], [@bb0175], [@bb0005]), the management of nutrification is a promising tool for coral reef protection which also holds potential to mitigate some of the negative effects of rising sea water temperatures on these ecosystems ([@bb0055]). It has been conceptualised that some direct negative effects of eutrophication on the *Symbiodinium* stress tolerance may be caused, paradoxically, by an associated deprivation of nutrients vital for the physiological functioning of the coral symbionts ([@bb0310], [@bb0055]). The resulting nutrient starvation can occur for example when the availability of one type of essential nutrient (e.g. phosphate) decreases relative to the cellular demand, resulting in imbalanced and unacclimated growth ([@bb0220]). High nitrate concentrations in combination with low phosphate availability have previously been shown to result in phosphate starvation of the algal symbiont and increased susceptibility of corals to heat- and light-stress-induced bleaching ([@bb0310]). 
In principle, this condition could result not only from an increased cellular demand due to nutrient (nitrogen)-accelerated cell proliferation rates but also from a selective decrease of one specific nutrient type ([@bb0220]). Relevant shifts of the nutrient balance in natural reef environments were reported, for example, for the reefs of Discovery Bay in Jamaica, where enrichment with groundwater-borne nitrate resulted in a dissolved inorganic nitrogen to phosphorus ratio of 72:1, coral decline and phase shifts to macroalgal dominance ([@bb0180]). However, the functioning of the coral-*Symbiodinium* association can be severely impaired not only by the imbalanced availability of nutrients, but also by a combined deprivation of both nitrogen and phosphorus ([@bb0250]). In this light, the expected nutrient impoverishment of oceanic waters that could result from global warming, or the rapid uptake of dissolved inorganic nutrients by ephemeral phytoplankton blooms, could possibly act in combination with increased heat stress levels to accelerate reef decline ([@bb0055], [@bb0245]). Due to the fast uptake of dissolved inorganic nutrients by benthic communities, it is often difficult to measure the level of nutrient exposure in coral reefs ([@bb0110]). Consequently, biomarkers are required that inform about the nature of the nutrient stress which corals and their symbionts experience under certain conditions ([@bb0035], [@bb0055]). Recently, we have demonstrated that bleaching and reduced growth of corals resulting from the deprivation of dissolved inorganic nitrogen and phosphorus are reflected in the ultrastructure of zooxanthellae ([@bb0250]). Nutrient undersupply manifests itself in a larger symbiont cell size, increased accumulation of lipid bodies, higher numbers of starch granules and a striking fragmentation of their accumulation bodies. We have exploited the potential of these biomarkers to detect nutrient stress imposed on the coral-*Symbiodinium* association and explored the response of the algal ultrastructure to skewed dissolved inorganic nitrogen to phosphorus ratios.

2. Materials and methods {#s0010}
========================

2.1. Coral culture {#s0015}
------------------

We used *Symbiodinium* clade C1 associated with *Euphyllia paradivisa* as a model to establish, in long-term experiments, the responses of the coral holobiont and zooxanthellae biomarkers to different nutrient environments. We exposed the corals to high nitrogen/low phosphorus (HN/LP) and low nitrogen/high phosphorus (LN/HP) conditions and compared them to corals experiencing nutrient replete (HN/HP) and low nutrient (LN/LP) conditions ([@bb0250]). We note that the attributes "high" and "low" are introduced to facilitate comparison of the nutrient conditions in the context of our experiment and do not necessarily represent all natural reef environments. Imbalanced nutrient conditions were established in individual aquarium systems within the experimental mesocosm of the Coral Reef Laboratory at the National Oceanography Centre Southampton ([@bb0050]): high nitrogen/low phosphorus (HN/LP = \~ 38 μM NO~3~^−^/\~0.18 μM PO~4~^−^; N:P ratio = 211:1) and low nitrogen/high phosphorus (LN/HP = \~ 0.06 μM NO~3~^−^/\~3.6 μM PO~4~^−^; N:P ratio = 1:60). The ammonium levels found in our mesocosm are very low (\< 0.7% of total dissolved inorganic nitrogen) compared to the combined nitrite (\~ 10%) and nitrate concentrations (\~ 90%) ([@bb0310]).
Therefore, the measured NO~3~^−^ concentrations (combined NO~2~^−^/NO~3~^−^) largely represent the total dissolved inorganic nitrogen pool that could be accessed by the zooxanthellae in the present experiment. All experimental systems were supplemented with iron and other trace elements by weekly dosage of commercially available solutions (Coral Colours, Red Sea) and partial water changes with freshly made artificial seawater using the Pro-Reef salt mixture (Tropic Marin). Both the holobiont and the zooxanthellae phenotypes were dominated by the response to the dissolved inorganic nutrient environment and were largely unaffected by heterotrophic feeding by the host in our previous study ([@bb0250]). However, to avoid any potential influence of nutrients in particulate form, the corals were not provided with food in the present experiments. Colonies of *Euphyllia paradivisa* ([@bb0050]) were cultured under the two imbalanced N:P ratios for \> 6 months at a constant temperature of 25 °C and a 10/14 h light/dark cycle. Corals in the HN/LP treatment were first maintained at a lower light intensity (∼ 80 μmol m^− 2^ s^− 1^) due to the mortality risk caused by prolonged exposure to this nutrient ratio at higher light levels ([@bb0310]). Light intensities were gradually ramped up to ∼ 150 μmol m^− 2^ s^− 1^ over 7 days and corals were kept under these conditions for 4 months prior to sampling. The corals from the LN/HP treatment experienced a photon flux of ∼ 150 μmol m^− 2^ s^− 1^ throughout the experiment. The results of the analyses were contrasted with those described in [@bb0250], where corals were cultured under comparable light and temperature conditions but at different nutrient levels: high nitrogen/high phosphorus (HN/HP = \~ 6.5 μM NO~3~^−^/\~0.3 μM PO~4~^−^) vs low nitrogen/low phosphorus (LN/LP = \~ 0.7 μM NO~3~^−^/\~0.006 μM PO~4~^−^).

2.2. Measurements of dissolved inorganic nutrients {#s0020}
--------------------------------------------------

Nitrate concentrations were measured by zinc reduction of nitrate to nitrite followed by a modified version of the Griess reaction as described in ([@bb0140]), using commercially available reagents (Red Sea Aquatics UK Ltd) according to the manufacturer's instructions. The resultant colour change was measured using a custom programmed colorimeter at 560 nm (DR900, HACH LANGE) calibrated with nitrate standard solution in the range 0 to 20 mg l^− 1^ NO~3~. Phosphate concentrations were measured using the PhosVer 3 (Ascorbic Acid) method (\#8048, HACH LANGE) using the same colorimeter (DR900, HACH LANGE) with the program specified by the manufacturer.

2.3. Determination of polyp size {#s0025}
--------------------------------

The size of the live polyp (i.e. the part of the corallite covered by tissue) was determined at the end of the treatments. First, the corals were removed from the water to ensure full retraction of the polyp tissue. After a drip-off period of \~ 2 min, the mean diameters of the individual polyps were measured by averaging the longest and the shortest diameter of oval corallites (Fig. S1). In the case of round corallites, two measurements were taken along two orthogonal lines through the centre. The mean extension of the live tissue cover of the outer parts of the corallites was determined by measuring and averaging its extension at 5 measuring points spaced out evenly around the corallite. The live polyp volume was calculated using these measurements assuming a cylindrical shape of the polyp.
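A worked example makes this calculation concrete (a sketch of our reading of the protocol above; the formula and numbers are illustrative assumptions, with $\bar{D}$ denoting the mean corallite diameter and $\bar{h}$ the mean live-tissue extension): $$V_{\text{polyp}} = \pi \left(\frac{\bar{D}}{2}\right)^{2} \bar{h},$$ so that, for instance, $\bar{D}$ = 20 mm and $\bar{h}$ = 15 mm would give $V_{\text{polyp}} = \pi \times 10^{2} \times 15 \approx 4.7 \times 10^{3}$ mm^3^.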
2.4. Photosynthetic efficiency (Fv/Fm) {#s0030}
-----------------------------------------------

A Diving PAM (Walz) was used to determine the Photosystem II (PSII) maximum quantum efficiency (Fv/Fm) as a measure of stress experienced by the zooxanthellae. Measurements were taken under dim light exposure after 12 h of dark acclimation ([@bb0300]). A reduction of Fv/Fm below 0.5 was considered an indicator of stress, as such lower values can indicate PSII damage when measured after dark recovery ([@bb0120]).

2.5. Transmission electron microscopy {#s0035}
----------------------------------------------

### 2.5.1. Sample preparation and imaging {#s0040}

For each experimental treatment, three tentacles of *E. paradivisa* (one per colony) were sampled from fully expanded polyps 1 h after the start of the light period. Tentacles were removed from the centre of each polyp to ensure that they were maximally exposed to light. Specimens were fixed and imaged as described in [@bb0250]. Briefly, tentacles were fixed (3% glutaraldehyde, 4% formaldehyde, 0.1 M PIPES buffer containing 14% sucrose at pH 7.2) and then cut to obtain only the central section of each tentacle, post-fixed using 1% osmium tetroxide, stained with 2% uranyl acetate and dehydrated with a graded ethanol series before being embedded in Spurr's resin. Semi-thin tentacle sections (\~ 240 nm) were cut and stained with 1% toluidine blue and 1% borax for light microscope observations. For each specimen, 3--5 thin sections (\< 100 nm thick) were obtained that were \> 20 μm apart from each other to eliminate the possibility of imaging the same algal cell twice. For each experimental treatment, at least nine sections originating from all three tentacles were produced. Sections were stained with lead citrate and imaged on a Hitachi H7000 transmission electron microscope. For each grid square (Cu200), the 3--4 largest zooxanthellae were imaged in order to analyse only cells that were cut close to their central plane and are thus representative of the maximal cell diameter. For each tentacle, a minimum of 30 zooxanthellae cells were imaged, using 3 or more sections. A total of 100 micrographs of individual zooxanthellae (× 6000 magnification) were acquired for each treatment.

### 2.5.2. Micrograph analysis {#s0045}

All micrographs were analysed using Fiji ([@bb0255]). The size of individual zooxanthellae cells was deduced from the cell section area (*n* = 100). Furthermore, the area occupied by lipid bodies, starch granules and uric acid crystals was determined for each cell and expressed as a percentage of the cell section area (*n* = 100). Accumulation body integrity was assessed via the degree of fragmentation, by counting the number of fissures in the periphery ([@bb0250]). The accumulation body was only analysed when it was clearly visible in the section. For this parameter, a mean was derived for each processed tentacle per treatment (*n* = 3). The zooxanthellae density was determined by measuring the area of the endoderm and counting the contained zooxanthellae, using semi-thin sections imaged under a light microscope at × 40 magnification (*n* = 3). While the relative differences between samples from the respective treatments are unaltered, the present method produces absolute numbers that are higher than previously published values ([@bb0250]).
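The per-cell quantities extracted from the micrographs reduce to simple area ratios and counts. A minimal sketch of these calculations (Python; the function names and the example measurements are invented placeholders, not data from this study):

```python
from statistics import mean

def organelle_area_fractions(cell_area_um2, organelle_areas_um2):
    """Express the area occupied by lipid bodies, starch granules and
    uric acid crystals as a percentage of the cell section area."""
    return {name: 100.0 * area / cell_area_um2
            for name, area in organelle_areas_um2.items()}

def zooxanthellae_density(cell_count, endoderm_area_um2):
    """Zooxanthellae per unit endoderm area, from a semi-thin section."""
    return cell_count / endoderm_area_um2

# Hypothetical measurements exported from Fiji (values invented)
print(organelle_area_fractions(95.0, {"lipid_bodies": 12.4,
                                      "starch_granules": 8.1,
                                      "uric_acid_crystals": 0.9}))
print(zooxanthellae_density(42, 1.8e4))   # cells per um^2 of endoderm
print(mean([3, 4, 2]))  # mean accumulation body fissure count, one tentacle
```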
2.6. Statistical analysis {#s0050}
----------------------------------

For the morphological parameters of zooxanthellae, statistical replication was achieved by analysing 100 distinct algal cells from three tentacles and from different areas within each tentacle (*n* = 100) ([@bb0250]; Table S1). Data from the nutrient replete (HN/HP) and low nutrient (LN/LP) treatments ([@bb0250]) were analysed for comparison. A mean value of zooxanthellae density was obtained for each processed tentacle (*n* = 3) (Table S2). Data were tested for normality using the Shapiro-Wilk test and log transformed if found to be non-normally distributed. Statistically significant effects resulting from differences in dissolved inorganic nutrient availability were determined by one-way analysis of variance (ANOVA) (Table S3), followed by Tukey's post hoc test for pairwise comparisons (Table S4). Effects in data that were not normally distributed after transformation were determined by the non-parametric Kruskal-Wallis one-way ANOVA on ranks. *P* \< 0.05 was considered significant in all instances.
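As an illustration of this decision logic, the workflow could be sketched as follows (Python with scipy and statsmodels; the grouping labels, the assumption of strictly positive measurements for the log transform, and the report format are ours):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_treatments(values, groups, alpha=0.05):
    """Shapiro-Wilk normality check, log transform if needed, then
    one-way ANOVA with Tukey's post hoc test; Kruskal-Wallis fallback."""
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    def by_group(v):
        return [v[groups == g] for g in np.unique(groups)]
    if any(stats.shapiro(s)[1] <= alpha for s in by_group(values)):
        values = np.log(values)          # try to restore normality
    if all(stats.shapiro(s)[1] > alpha for s in by_group(values)):
        print(stats.f_oneway(*by_group(values)))             # one-way ANOVA
        print(pairwise_tukeyhsd(values, groups, alpha=alpha))  # post hoc
    else:
        print(stats.kruskal(*by_group(values)))  # non-parametric fallback
```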
3. Results {#s0055}
===================

3.1. Effects on the coral holobiont {#s0060}
--------------------------------------------

Corals exposed to the imbalanced HN/LP conditions displayed a smaller polyp size and a bleached appearance that closely resembled the phenotype observed in low nutrient water (LN/LP) ([Fig. 1](#f0005){ref-type="fig"}, [Fig. 2](#f0010){ref-type="fig"}a). In contrast, the corals kept under LN/HP imbalanced nutrient levels showed a similar phenotype to the nutrient replete (HN/HP) treatment. The bleached appearance of the polyps from HN/LP conditions was associated with low numbers of zooxanthellae in the tentacle tissue ([Fig. 1](#f0005){ref-type="fig"}, [Fig. 2](#f0010){ref-type="fig"}, [Table 1](#t0005){ref-type="table"}), similar to the low nutrient LN/LP treatment. In contrast, the symbiont numbers in the tissue of LN/HP exposed corals were comparable to those of corals from nutrient replete (HN/HP) conditions ([Fig. 1](#f0005){ref-type="fig"}, [Fig. 2](#f0010){ref-type="fig"}).

Fig. 1. Effect of dissolved inorganic nutrient availability on polyp size, and on zooxanthellae density and ultrastructure. Panels on the left hand side show representative photographs of *Euphyllia paradivisa* polyps from each experimental treatment. Panels in the central column show light microscope images of tentacle endoderm cross sections (× 40 magnification). Panels on the right hand side show micrographs of individual zooxanthellae which represent a mean ultrastructure (*n* = 100) resulting from the respective treatments (× 6000 magnification). HN/HP = high nitrogen/high phosphorus, LN/LP = low nitrogen/low phosphorus, HN/LP = high nitrogen/low phosphorus, LN/HP = low nitrogen/high phosphorus. AB = accumulation body, ch = chloroplast, LB = lipid body, N = nucleus with condensed chromosomes, P = pyrenoid, S = starch granule, U = uric acid crystals.

Fig. 2. Effect of dissolved inorganic nutrient availability on polyp size and on zooxanthellae density. (a) Coral polyp volume, (b) zooxanthellae density. HN/HP = high nitrogen/high phosphorus, LN/LP = low nitrogen/low phosphorus, HN/LP = high nitrogen/low phosphorus, LN/HP = low nitrogen/high phosphorus. Mean ± s.d. Statistically significant differences are indicated by the use of different letters (one-way ANOVA, Tukey's test, *P* \< 0.05).

Table 1. *Symbiodinium* biomarker patterns characteristic for different nutrient environments. A programmatic reading of these patterns is sketched at the end of this section.

| | Nutrient replete HN/HP | Low nutrients LN/LP | Imbalanced HN/LP | Imbalanced LN/HP |
| --- | --- | --- | --- | --- |
| Zooxanthellae nutrient status | Nutrient replete growth | N/P co-limitation | P-starved | N-limited |
| Zooxanthellae density | Normal | Low | Low | Normal |
| Polyp size | Normal | Small | Small | Normal |
| Coral health | Normal | Bleached | Bleached | Normal |
| Zooxanthellae health (Fv/Fm) | Normal (\> 0.5) | Normal (\> 0.5) | Stressed (\< 0.5) | Normal (\> 0.5) |
| *Zooxanthellae ultrastructural biomarkers:* | | | | |
| Cell size | Small | Increased | Increased | Small |
| Lipid body content | Low | Increased | Increased | Increased |
| Starch granule content | Low | Increased | Increased | Increased |
| Uric acid crystal content | n.d. | n.d. | Increased | n.d. |
| Accumulation body fragmentation | n.d. | Increased | n.d. | n.d. |

3.2. Effects on the *Symbiodinium* ultrastructure {#s0070}
----------------------------------------------------------

The analysis of TEM micrographs revealed that the size of zooxanthellae from the imbalanced HN/LP treatment and the low nutrient condition was significantly increased compared to those from the nutrient replete and the imbalanced LN/HP treatments ([Fig. 1](#f0005){ref-type="fig"}, [Fig. 3](#f0015){ref-type="fig"}, [Table 1](#t0005){ref-type="table"}). The low nutrient (LN/LP) condition and both types of nutrient imbalance increased the content of lipid bodies and starch granules in the symbiont cells ([Fig. 3b,c](#f0015){ref-type="fig"}) in comparison to corals from the HN/HP treatment. A biochemical assay using the lipophilic dye Nile Red (see supplementary material for the method) confirmed that the increased cellular content of lipid bodies is due to an accumulation of neutral lipids (Fig. S2A). The lipid content remained stable over the day (Fig. S2B). Only the imbalanced HN/LP condition resulted in a marked increase in the content of uric acid crystals ([Fig. 3d](#f0015){ref-type="fig"}, [Table 1](#t0005){ref-type="table"}). Interestingly, none of the imbalanced nutrient treatments caused the fragmentation of the accumulation body characteristic of the low nutrient condition ([Fig. 3e](#f0015){ref-type="fig"}).

3.3. Effects on *Symbiodinium* photosynthetic efficiency (Fv/Fm) {#s0065}
-------------------------------------------------------------------------

Compared to nutrient replete conditions, zooxanthellae from specimens of the imbalanced HN/LP treatment showed a reduction in the maximum quantum efficiency (Fv/Fm), with values of 0.34 ± 0.05 after dark recovery being indicative of PSII damage or disturbance ([Fig. 3](#f0015){ref-type="fig"}f, [Table 1](#t0005){ref-type="table"}). In contrast, Fv/Fm values \> 0.5 were recorded for zooxanthellae of corals from the other treatments.

Fig. 3. Effect of dissolved inorganic nutrient availability on zooxanthellae ultrastructure and Fv/Fm. (a) Cell size (*n* = 100), (b) lipid body accumulation (*n* = 100), (c) starch granule accumulation (*n* = 100), (d) uric acid crystal accumulation (*n* = 100), (e) accumulation body fragmentation (*n* = 3), (f) Fv/Fm (*n* = 5). HN/HP = high nitrogen/high phosphorus, LN/LP = low nitrogen/low phosphorus, HN/LP = high nitrogen/low phosphorus, LN/HP = low nitrogen/high phosphorus. Box plots: the vertical line within each box represents the median. The box extends from the first to the third quartile and whiskers extend to the smallest and largest non-outliers. Outliers are not shown. Bar chart: mean ± s.d. Statistically significant differences are indicated by the use of different letters (one-way ANOVA, Tukey's test, *P* \< 0.05).
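The biomarker patterns summarized in [Table 1](#t0005){ref-type="table"} can be read as a simple decision rule. The following sketch (Python; the qualitative flag encoding and the rule ordering are illustrative assumptions, not a validated diagnostic tool) encodes the four columns of the table:

```python
def classify_nutrient_status(density, cell_size, lipid, starch,
                             uric_acid, ab_fragmented, fv_fm):
    """Map the qualitative biomarker pattern of Table 1 to a putative
    zooxanthellae nutrient status. Arguments take the qualitative
    values used in the table ("normal"/"low", "small"/"increased", ...).
    """
    if uric_acid == "increased" and fv_fm < 0.5:
        return "P-starved (imbalanced HN/LP)"
    if ab_fragmented and density == "low":
        return "N/P co-limitation (low nutrients LN/LP)"
    if cell_size == "small" and lipid == "increased" and density == "normal":
        return "N-limited (imbalanced LN/HP)"
    if cell_size == "small" and lipid == "low" and starch == "low":
        return "nutrient replete growth (HN/HP)"
    return "pattern not covered by Table 1"

# Example: bleached colony, stressed symbionts with uric acid deposits
print(classify_nutrient_status("low", "increased", "increased", "increased",
                               "increased", False, 0.34))
```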
4. Discussion {#s0075}
======================

We used ultrastructural biomarkers of zooxanthellae to gain novel insights into the response of the coral--*Symbiodinium* symbiosis to imbalanced nutrient environments, to analyse the roles of nitrogen and phosphorus in the functioning of this association, and to consider potential implications for coral reef management. We used the reef coral *Euphyllia paradivisa* harbouring *Symbiodinium* sp. (clade C1) as a model system, exposed the corals to HN/LP and to LN/HP conditions, and compared them to specimens from nutrient replete (HN/HP) and low nutrient (LN/LP) treatments ([@bb0250]).

4.1. Effect of high nitrate/low phosphate conditions {#s0080}
-------------------------------------------------------------

Recently, we demonstrated that corals exposed to HN/LP conditions were more susceptible to bleaching when exposed to heat stress and/or elevated light levels ([@bb0310]). The detrimental effects were linked to the relative undersupply with phosphorus that can result from the higher demand of the proliferating algal populations, rather than to the high nitrogen levels themselves. Phosphate starvation in *Symbiodinium* sp. resulted in a drop of photosynthetic efficiency associated with changes in the ratio of phospho- and sulfo-lipids ([@bb0310]). In other photosynthetic organisms, similar responses to phosphate stress could be attributed to critical changes in the properties of photosynthetic membranes ([@bb0095]). Hence, our findings provided a potential mechanistic link between nutrient stress, the malfunctioning of the photosynthetic machinery and the observed bleaching response. With their low zooxanthellae numbers, bleached appearance and small polyp size, the corals from the HN/LP treatment under elevated light levels resembled the low-nutrient phenotype (LN/LP) previously described ([@bb0250]). These two treatments also had similar effects on the ultrastructure of zooxanthellae, with cell size and the accumulation of carbon-rich storage bodies (lipid bodies and starch granules) being increased in comparison to zooxanthellae from nutrient replete conditions. Similar structural changes were found to be indicative of nutrient limitation in zooxanthellae and free-living microalgae ([@bb0145], [@bb0200], [@bb0160], [@bb0195], [@bb0305], [@bb0250]). These characteristics have been interpreted as indicators of an uncoupling of carbon fixation from cellular growth. In this state, the nutrient-limited cells sustain a high photosynthetic production while their energy demand is reduced due to slower proliferation rates ([@bb0250], [@bb0160], [@bb0285], [@bb0280]). Since corals from the HN/LP conditions were supplied with excess nitrogen, the nutrient limitation phenotype of corals and symbionts can be clearly attributed to the undersupply with phosphate. Importantly, under both nutrient replete and low nutrient conditions, the photosynthetic efficiency measured as Fv/Fm was in the healthy range (\> 0.5). In contrast, Fv/Fm was strongly reduced in the imbalanced HN/LP treatment, indicative of failing photosynthesis due to phosphate starvation ([@bb0310], [@bb0055]). At the ultrastructural level, the phosphate starvation phenotype resulting from nitrogen enrichment in combination with low phosphate supply can be clearly distinguished from the low-nutrient phenotype by the pronounced accumulation of uric acid crystals.
This finding is in line with previous studies that observed comparable deposits in zooxanthellae in response to nitrate enrichment, forming a transitory storage of assimilated nitrogen ([@bb0030], [@bb0170]). Finally, the phosphate-starved zooxanthellae lack the intriguing fragmentation pattern of the accumulation body that is characteristic of strongly nutrient-limited zooxanthellae ([@bb0250]).

4.2. Effect of low nitrate/high phosphate conditions {#s0085}
-------------------------------------------------------------

Despite the relative undersupply of nitrogen in the low nitrate/high phosphate treatment, the polyp size and zooxanthellae density of these corals were comparable to those from the replete nutrient treatment. However, the ultrastructural biomarkers revealed signs of nutrient limitation, such as elevated levels of lipid bodies and starch granules in symbiont cells from corals under LN/HP. In the light of previous findings, the effects of the low supply with nitrogen can be interpreted as causing an uncoupling of carbon fixation and cellular growth that manifests in the increased accumulation of carbon-rich storage products. However, the smaller cell size and the high number of symbiont cells within the coral tissue, comparable to those from corals experiencing high nutrient levels ([@bb0250], [@bb0160], [@bb0285], [@bb0280]), indicate that cell proliferation rates are still high enough to sustain these zooxanthellae densities. These results, together with the high Fv/Fm values of zooxanthellae from LN/HP corals, suggest that the N-limitation sustains a slower but chemically balanced growth while maintaining a functional photosynthesis.

4.3. Differential effects of N and P undersupply and critical thresholds {#s0090}
---------------------------------------------------------------------------------

Our results suggest that symbiotic corals can tolerate an undersupply with nitrogen much better than an undersupply with phosphorus. These findings likely reflect an adaptation of the algal symbionts to the nutrient environment of coral reefs, where processes are mostly nitrogen limited ([@bb0045], [@bb0105], [@bb0265], [@bb0110]). In agreement with this assumption, previous studies found a trend that nitrogen enrichment stimulates zooxanthellae growth and results in higher zooxanthellae densities, often without obvious negative effects on the corals ([@bb0080]). It cannot be ruled out, however, that nitrogen fixation by coral-associated microbes in the presence of high phosphate concentrations might relieve some of the nitrogen undersupply of the corals ([@bb0230]). The present study clearly shows that phosphate deficiency, alone or in combination with a low supply of nitrate, results in a severe disturbance of the symbiotic partnership, as indicated by the loss of coral tissue and zooxanthellae. Phosphate starvation of zooxanthellae induced by nitrogen enrichment and the resulting high N:P ratios has previously been shown to disturb the photosynthetic capacity of zooxanthellae and to increase the vulnerability of corals to light- and heat stress-mediated bleaching ([@bb0310]). The fact that normal photosynthetic efficiency is retained by zooxanthellae in corals from the LN/LP treatment suggests that an undersupply with phosphate has less severe consequences when the algae become limited by nitrogen. This can be explained by the reduced P-demand of the non-/slow-growing algal population ([@bb0055]).
The concentrations of dissolved inorganic nutrients in our LN/LP treatment (\~ 0.7 μM/\~0.006 μM) suggest that at measured nitrate concentrations \< 0.7 μM, the impact of a skewed N:P ratio becomes less pronounced. In our experiments, a phosphate concentration of \~ 0.3 μM at a N:P ratio of 22:1 yielded an overall healthy phenotype. Accordingly, it is likely that the absolute N:P ratio also becomes less critical for the proper functioning of the symbionts when phosphate concentrations exceed a vital supply threshold (\> 0.3 μM), even when the symbionts are rapidly proliferating. In contrast, a phosphate concentration of \~ 0.18 μM at a \~ 10-fold higher N:P ratio (211:1) yielded a bleached phenotype, with the remaining symbionts showing signs of stress (Fv/Fm \< 0.4). Therefore, the P-threshold at which corals can become stressed in the presence of high N concentrations can be as high as 0.18 μM. Effects of P deficiency can be expected to become worse if the supply from other sources, such as particulate food or internal reserves, is low.

4.4. Implications for environmental monitoring and coral reef management {#s0095}
---------------------------------------------------------------------------------

Our study suggests that phosphate can become critically limiting even at concentrations ≤ 0.18 μM if the N:P ratio well exceeds 22:1. This appears surprising, since phosphate concentrations in this range are commonly considered ambient or high in natural reef environments. However, [@bb0180] reports phosphate concentrations of 0.1--0.18 μM at N:P ratios in the range of 33--72 to be associated with phosphate limitation of macroalgae in the declining reefs of Discovery Bay (Jamaica). These data suggest that the critical threshold values determined by our laboratory study can indeed be found in reef environments impacted by eutrophication. However, it is important to note that nutrient values measured in the water column of natural or experimental mesocosm settings represent a steady-state equilibrium that depends on their production and uptake by organisms. Since these fluxes vary spatially and temporally among reef regions, the measured nutrient concentrations have to be considered in the context of the respective environment. Consequently, there is an urgent need to refine these thresholds and to quantify the absolute amounts of nutrients, and the associated fluxes, that are responsible for the observed biological effects. These values are required to provide reliable and effective target values for management purposes. Of particular interest in the context of the present work is also the role of phytoplankton blooms. Stimulated by nutrient enrichment in the first place, coastal blooms can limit primary production by depleting essential nutrients or shifting their ratio over time and space ([@bb0055]). Critically, the depletion of dissolved inorganic phosphorus has been reported in the aftermath of phytoplankton blooms that were initially set off by elevated nitrogen levels ([@bb0060], [@bb0100], [@bb0135]). Such a lack of phosphate may render benthic corals more susceptible to stress, bleaching and associated mortality ([@bb0310]). Indeed, previous studies have observed a correlation between elevated nitrogen concentrations, increased phytoplankton densities and coral bleaching ([@bb0290], [@bb0315], [@bb0055]). Preventing the enrichment of coral reef waters with excess nitrogen should consequently be a management priority.
However, it is important to note that other forms of nutrient enrichment can also have a plethora of direct and indirect negative effects on corals and their symbionts (reviewed by [@bb0055]). Therefore, the reduction of nutrient enrichment has to be generally high on the agenda of coral reef management ([@bb0245]). The extended set of cumulative, ultrastructural biomarkers provided here ([Table 1](#t0005){ref-type="table"}) can be used to identify different forms of nutrient stress in *Euphyllia* sp. associated with *Symbiodinium* (C1). These biomarkers hold promise for indicating nutrient stress in other symbiotic coral species and in various reef settings as well. Importantly, they have the potential to become part of the toolkit that is required for an in-depth understanding of the nutrient environment in coral reefs, bridging knowledge gaps left by traditional measurements of nutrient levels in the water column. Our findings highlight the key role of phosphorus in sustaining zooxanthellae numbers and coral biomass and in the proper functioning of symbiont photosynthesis, thereby contributing to the critical understanding of the importance of phosphorus for the functioning of symbiotic corals ([@bb0090]).

Author contributions {#s0100}
====================

JW and CD provided the research question and the experimental set-up. SR, JW and CD designed the experiments. SR conducted the experiments and analysed the data. SR, JW and CD interpreted the data. AR contributed to the maintenance of the experimental set-up.

Appendix A. Supplementary data {#s0105}
==============================

Supplementary material.

We acknowledge the Biomedical Imaging Unit, University of Southampton (A. Page) for access to the TEM and T. Stead (School of Biological Sciences, Royal Holloway) for discussing zooxanthellae micrographs. We thank Laura Muras (Kopernikus Gymnasium Wasseralfingen) for helping with the polyp size measurements and Luke Morris (University of Southampton) for conducting the Nile Red assays. The study was funded by NERC (NE/K00641X/1 to JW), the European Research Council under the European Union's Seventh Framework Programme (FP/2007--2013)/ERC Grant Agreement no. 311179 to JW, and a Vice Chancellor Award studentship to JW. We thank Tropical Marine Centre (London) and Tropic Marin (Wartenberg) for sponsoring the *Coral Reef Laboratory* at the University of Southampton. Supplementary data to this article can be found online at <http://dx.doi.org/10.1016/j.marpolbul.2017.02.044>.
1. INTRODUCTION {#gepi21989-sec-0010}
===============

Hundreds of studies have searched for gene--gene and gene--environment interaction effects in human data, with the underlying motivation of identifying, or at least accounting for, potential biological interaction. So far, this quest has been quite unsuccessful, and the large number of methods that have been developed to improve detection (Aschard et al., [2012b](#gepi21989-bib-0006){ref-type="ref"}; Cordell, [2009](#gepi21989-bib-0016){ref-type="ref"}; Gauderman, Zhang, Morrison, & Lewinger, [2013](#gepi21989-bib-0024){ref-type="ref"}; Hutter et al., [2013](#gepi21989-bib-0032){ref-type="ref"}; Thomas, [2010a](#gepi21989-bib-0048){ref-type="ref"}; Wei, Hemani, & Haley, [2014](#gepi21989-bib-0051){ref-type="ref"}) has not qualitatively changed this situation. This lack of discovery in the face of a substantial research investment has been discussed in several review papers that pointed out a number of issues specific to interaction tests, including exposure assessment, time‐dependent effects, confounding and multiple comparisons (Aschard et al., [2012b](#gepi21989-bib-0006){ref-type="ref"}; Bookman et al., [2011](#gepi21989-bib-0009){ref-type="ref"}; Thomas, [2010b](#gepi21989-bib-0049){ref-type="ref"}). While these factors are obvious barriers to the identification of interaction effects, it appears that some of the limitations of standard regression‐based interaction tests that pertain to the nature of interaction effects are underestimated. Previous work showed that detecting some interaction effects requires larger sample sizes than detecting marginal effects of similar effect size (Aiken, West, & Reno, [1991](#gepi21989-bib-0002){ref-type="ref"}; Greenland, [1983](#gepi21989-bib-0028){ref-type="ref"}); however, this is not an absolute rule. Understanding the theoretical basis of this lack of power can help us optimize study designs to improve the detection of interaction effects in human traits and diseases, and can open the path for the development of new methods. Moreover, the interpretation of effect estimates from interaction models often suffers from various imprecisions. Compared to marginal models, the coding scheme for interacting variables can impact effect estimates and association signals for the main effects (Aiken et al., [1991](#gepi21989-bib-0002){ref-type="ref"}; Andersen & Skovgaard, [2010](#gepi21989-bib-0003){ref-type="ref"}). Also, the current strategy to derive the contribution of interaction effects to the variance of an outcome greatly disadvantages interaction effects and is inappropriate when the goal of a study is not prediction but to assess the relative importance of an interaction term from a biological perspective. While alternative approaches exist, they have so far not been considered in genetic association studies. Finally, the development of new pairwise gene--gene and gene--environment interaction tests is reaching some limits, because the number of prior assumptions that can be leveraged to improve power (e.g. gene--environment independence (Piegorsch, Weinberg, & Taylor, [1994](#gepi21989-bib-0043){ref-type="ref"}) or the presence of a marginal genetic effect for interacting variants (Dai, Kooperberg, Leblanc, & Prentice, [2012](#gepi21989-bib-0017){ref-type="ref"})) is limited when only two predictors are considered.
With the exponential increase of available genetic and nongenetic data, the development and application of multivariate interaction tests offers new opportunities for building powerful approaches and moving the field forward.

2. METHODS AND RESULTS {#gepi21989-sec-0020}
======================

2.1. Coding scheme and effect estimates {#gepi21989-sec-0030}
---------------------------------------

Consider an interaction effect between a single nucleotide polymorphism (SNP) *G* and an exposure *E* (which can be an environmental exposure or another genetic variant) on a quantitative outcome *Y*. For simplicity, I assume in all further derivations that *E* is normally distributed with variance 1, and that *G* and *E* are independent. The simplest and most commonly assumed underlying model for *Y* when testing for an interaction effect between *G* and *E* is defined as follows: $$Y = \beta_{G} \times G + \beta_{E} \times E + \beta_{GE} \times G \times E + \varepsilon$$ where $\beta_{G}$ is the main effect of *G*, $\beta_{E}$ is the main effect of *E*, $\beta_{GE}$ is a linear interaction between *G* and *E*, and ε, the residual, is normally distributed with its mean and variance σ^2^ set so that *Y* has a mean of 0 and a variance of 1 (hence the absence of an intercept term in the above equation). One can then evaluate the impact of applying linear transformations of the genotype and/or the exposure when testing for main and interaction effects. For example, assuming *E* has a mean \> 0 and *G* is defined as the number of coded alleles in the generative model, *Y* can be rewritten as a function of $G_{std}$ and $E_{std}$, the standardized *G* and *E*: $$Y = \beta_{G}^{\prime} \times G_{std} + \beta_{E}^{\prime} \times E_{std} + \beta_{GE}^{\prime} \times G_{std} \times E_{std} + \varepsilon^{\prime}$$where $\beta_{G}^{\prime}$, $\beta_{E}^{\prime}$, and $\beta_{GE}^{\prime}$ are the main effects of $G_{std}$ and $E_{std}$ and their interaction. Relating the standardized and unstandardized equations, we obtain (supplementary Appendix A): $$\beta_{G}^{\prime} = \left( {\beta_{G} + \beta_{GE} \times \mu_{E}} \right) \times \sigma_{G}$$ $$\beta_{E}^{\prime} = \left( {\beta_{E} + \beta_{GE} \times \mu_{G}} \right) \times \sigma_{E}$$ $$\beta_{GE}^{\prime} = \beta_{GE} \times \sigma_{E} \times \sigma_{G}$$where $\mu_{G}$, $\sigma_{G}$, $\mu_{E}$, and $\sigma_{E}$ are the means and standard deviations of *G* and *E*, respectively. Hence, the estimated main effects of $G_{std}$ and $E_{std}$ not only scale with the standard deviations of *G* and *E* but can also change qualitatively if there is an interaction effect (i.e. the direction of the effect can change). In comparison, the interaction effect $\beta_{GE}^{\prime}$ remains qualitatively similar; however, because $\beta_{GE}^{\prime}$ does not scale with $\sigma_{GE}$, the standard deviation of the interaction term, but with those of *G* and *E*, the interpretation of the relative importance of the interaction effect can change (see Section [2.3](#gepi21989-sec-0050){ref-type="sec"}). Which coding scheme for *G* and *E* makes the most biological sense can only be discussed on a case-by-case basis (Aiken et al., [1991](#gepi21989-bib-0002){ref-type="ref"}). Indeed, defining the optimal coding for a biological question can be very challenging, and as noted in previous work, "*most mathematical models are convenient fictions and would certainly be rejected given sufficient sample size*" (Clayton, [2009](#gepi21989-bib-0014){ref-type="ref"}).
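These relationships are straightforward to verify numerically. The following minimal simulation sketch (Python; the effect sizes, allele frequency, and exposure distribution are arbitrary illustrative choices, not values from this paper) recovers the transformation formulas given above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
b_g, b_e, b_ge = 0.10, 0.20, 0.05           # illustrative effect sizes
G = rng.binomial(2, 0.3, n).astype(float)   # SNP coded as 0/1/2
E = rng.normal(2.0, 1.0, n)                 # exposure with mean > 0
Y = b_g * G + b_e * E + b_ge * G * E + rng.normal(0.0, 1.0, n)

def ols(cols):
    """Fit ordinary least squares; return coefficients without intercept."""
    X = np.column_stack([np.ones(n)] + cols)
    return np.linalg.lstsq(X, Y, rcond=None)[0][1:]

G_std = (G - G.mean()) / G.std()
E_std = (E - E.mean()) / E.std()
print(ols([G, E, G * E]))                  # recovers ~ [0.10, 0.20, 0.05]
print(ols([G_std, E_std, G_std * E_std]))  # main effects change under recoding
print((b_g + b_ge * E.mean()) * G.std(),   # analytic beta_G'
      (b_e + b_ge * G.mean()) * E.std(),   # analytic beta_E'
      b_ge * G.std() * E.std())            # analytic beta_GE'
```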
Yet, it is important to recognize that the coding scheme should be chosen carefully when testing an interaction, as different codings can correspond to qualitatively different relative contributions of each predictor (the main and interaction terms) to the outcome. This is illustrated in Figure [1](#gepi21989-fig-0001){ref-type="fig"}, which shows the contribution of a pure interaction effect ($\beta_{G} = \beta_{E} = 0$ and $\beta_{GE} \neq 0$) to *Y*. When *G* and *E* are centered, the interaction term has a positive contribution in the most extreme subgroups (low exposure and homozygote for the protective allele vs. high exposure and homozygote for the risk allele) and a negative contribution in the opposite, heterogeneous subgroups (low exposure and homozygote for the risk allele vs. high exposure and homozygote for the protective allele, Fig. [1](#gepi21989-fig-0001){ref-type="fig"}A). Conversely, when *G* and *E* are positive or null only, the interaction term corresponds to a monotonic increase (or decrease, if the interaction effect is opposite to the main effects) of the magnitude of the genetic and environmental effects (Fig. [1](#gepi21989-fig-0001){ref-type="fig"}B). Hence, allowing *G* and/or *E* to have negative values in the generative model implies an interaction effect that could be difficult to interpret from a biological perspective. Furthermore, one can easily show that when the mean of the exposure increases while its variance is fixed (e.g. if an increase in an environmental exposure affects the entire population), an interaction effect will appear more and more as a sole genetic effect (supplementary Fig. S1). Overall, coding schemes that can be related through linear transformations are mathematically equivalent (they produce the same outcome values as long as the predictor effects are rederived to account for the transformation). However, the coding scheme should not be overlooked because of this equivalence; as shown in the example of Figure [1](#gepi21989-fig-0001){ref-type="fig"}, variable coding should be justified whenever the interpretation of effect estimates matters. ![Example of a gene by exposure interaction effect on height. Pattern of contribution of a hypothetical genetic‐by‐exposure interaction term to human height when shifting the location of the genetic and the exposure burdens in the generative model. Genetic burden can correspond to the number of coded alleles for a given SNP and exposure burden can correspond to the measure of an environmental exposure. Examples of simple coding are defined in parentheses on both axes, and the resulting contributions to each specific combination of genetic and environmental values are defined on each panel for a given interaction effect parameter $\beta_{GE}$. In (A) the interaction is defined as the product of a centered genetic variant and a centered exposure. Such an encoding induces a positive contribution to the outcome for the two extreme groups: (i) maximum exposure burden and maximum genetic burden, and (ii) lowest exposure burden and lowest genetic burden; and a negative contribution to the outcome for the two opposite groups: (iii) maximum exposure burden and lowest genetic burden, and (iv) lowest exposure burden and maximum genetic burden. In (B) genetic and exposure burdens are encoded on their natural scale and are therefore positive or null.
Such coding induces a contribution of the interaction effect to the outcome that is monotonic with increasing genetic and exposure burden.](GEPI-40-678-g001){#gepi21989-fig-0001} Fortunately, the choice of a specific coding scheme, the interpretation of effect estimates when modeling an interaction, and the motivation for adding nonlinear terms in general have already been debated, and several general guidelines have been proposed (see, for example, the review by Robert J. Friedrich (Friedrich, [1982](#gepi21989-bib-0022){ref-type="ref"})). The consensus is that, if the range of the independent variables naturally includes zero (e.g. smoking status, genetic variants), there is no problem in interpreting the estimated main and interaction effects. For an interaction effect between A and B, the main effect of A corresponds to the effect of A when B is absent, and conversely. On the contrary, if the range of the variables does not naturally encompass zero, then the observed estimates "*will be extrapolations beyond the observed range of experience*" (Friedrich, [1982](#gepi21989-bib-0022){ref-type="ref"}). Centering the variables can be an option to address this concern. In that case, the main effects of A and B would represent the effect of A among individuals having the mean value of B, and conversely. However, as mentioned previously, using centered variables induces a less interpretable interaction term. I suggest that a reasonable alternative would consist of shifting the exposure values so that the minimum value is close to 0, or alternatively of using ordinal categories of the exposure (e.g. high vs. low BMI, as done to define obesity), so that the main effect of A corresponds to the effect among individuals with the lowest observed value of B in the population, and conversely.

2.2. Power considerations {#gepi21989-sec-0040}
-------------------------

The power of the tests from the interaction model and from a marginal genetic model, defined as $Y = \beta_{mG} \times G + \varepsilon_{m}$, can be compared by deriving the noncentrality parameters (*ncp*) of the predictors of interest. Assuming all effects are small, so that the residual variance σ^2^ is close to 1, these *ncps* can be approximated by (see supplementary Appendix B): $$ncp_{G} \approx N \times \sigma_{G}^{2} \times \beta_{G}^{2} \times \frac{\sigma_{E}^{2}}{\mu_{E}^{2} + \sigma_{E}^{2}}$$ $$ncp_{E} \approx N \times \sigma_{E}^{2} \times \beta_{E}^{2} \times \frac{\sigma_{G}^{2}}{\mu_{G}^{2} + \sigma_{G}^{2}}$$ $$ncp_{GE} \approx N \times \sigma_{E}^{2} \times \sigma_{G}^{2} \times \beta_{GE}^{2} = N \times \beta_{GE}^{{}^{\prime}2}$$ $$ncp_{mG} \approx N \times \sigma_{G}^{2} \times \left( {\beta_{G} + \beta_{GE} \times \mu_{E}} \right)^{2} = N \times \beta_{G}^{{}^{\prime}2}$$ Note that in such a scenario, adjusting for the effect of *E* in the marginal genetic model has a minor impact on $ncp_{mG}$. The above equations indicate, first, that the significance of the marginal test of *G* ($ncp_{mG}$) and of the interaction test ($ncp_{GE}$) is invariant to the coding used in the model tested, while the significance of the tests of the main genetic and exposure effects can change dramatically when shifting the means of *G* and *E*. Second, as illustrated in Figure [2](#gepi21989-fig-0002){ref-type="fig"}, depending on the parameters of the distribution of the exposure and the genetic variants in the generative model, the relative power of each test can be dramatically different.
For example, if the genetic variant has only a main linear effect and does not interact with the exposure, we obtain $ncp_{G} = ncp_{mG} \times \sigma_{E}^{2}/\left( {\mu_{E}^{2} + \sigma_{E}^{2}} \right)$, so that testing for $\beta_{mG}$ will be much more powerful than testing for $\beta_{G}$ if the mean of *E* is large, even though there is no interaction effect here. When the generative model includes an interaction effect only ($\beta_{G} = \beta_{E} = 0$ and $\beta_{GE} \neq 0$), we obtain $ncp_{mG} = ncp_{GE} \times \mu_{E}^{2}/\sigma_{E}^{2}$. Again, the marginal test of the genetic effect can be dramatically more powerful than the interaction test, even though the generative model includes only an interaction term and no main effect. ![Relative power of the joint test of main genetic and interaction effects. Power comparison for the tests of the main genetic effect (*main.G*), the interaction effect (*int.GxE*), and the joint effect (*Joint G.GxE*) from the interaction model, and the test of the marginal genetic effect (*mar.G*). The outcome *Y* is defined as a function of a genetic variant *G* coded as \[0,1,2\] with a minor allele frequency of 0.3, and the interaction of *G* with an exposure *E* normally distributed with variance 1 and mean $\overline{E}$. The genetic and interaction effects vary so that they explain 0% and 0.04% (A), 0.1% and 0.1% (B), 0.6% and 0.1% (C) with effects in opposite directions, and 0.4% and 0% (D) of the variance of *Y*, respectively. Power and $\rho_{G,G \times E}$, the correlation between *G* and the $G \times E$ interaction term (E), were plotted for a sample size of 10,000 individuals and increasing $\overline{E}$ from 0 to 5.](GEPI-40-678-g002){#gepi21989-fig-0002} It follows that the power to detect an interaction effect explaining, for example, 1% of the variance of *Y* (where the variance explained by a predictor *X* is defined as $\beta_{X}^{2} \times \sigma_{X}^{2}$) but inducing no marginal genetic effect (i.e. when *E* is centered as in Fig. [1](#gepi21989-fig-0001){ref-type="fig"}A) is much higher than for an interaction explaining the same amount of variance but whose effect can be captured by a marginal term (i.e. when *E* is not centered, as in Fig. [1](#gepi21989-fig-0001){ref-type="fig"}B--D). This result is a direct consequence of the covariance between *G* and the $G \times E$ term that arises when the exposure is not centered in the generative model (Fig. [2](#gepi21989-fig-0002){ref-type="fig"}E). This covariance equals $\mu_{E} \times \sigma_{G}^{2}$ (supplementary Appendix C). It induces uncertainty in the estimation of the predictor effects, which decreases the significance of the estimates in the interaction model. With increasing intercorrelations between predictors it becomes impossible to disentangle the effects of one predictor from another: the standard errors of the effect estimates become infinitely large and the power decreases toward the null (Farrar & Glauber, [1967](#gepi21989-bib-0021){ref-type="ref"}). As shown in the simulation studies of supplementary Figures S2 and S3, these results appear consistent for both linear and logistic regression and when assuming a non-normal distribution of the exposure.
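The relative magnitudes of these *ncps* can also be checked by simulation. A small sketch (Python; the sample size, allele frequency, and effect size are illustrative choices) contrasts the marginal and interaction chi-squares under a pure interaction model with a noncentered exposure:

```python
import numpy as np

rng = np.random.default_rng(0)
n, b_ge, mu_e = 10_000, 0.05, 2.0   # illustrative values only
chi2_marginal, chi2_interaction = [], []
for _ in range(200):
    G = rng.binomial(2, 0.3, n).astype(float)
    E = rng.normal(mu_e, 1.0, n)
    Y = b_ge * G * E + rng.normal(0.0, 1.0, n)   # pure interaction model

    def wald_chi2(cols):
        """Squared t-statistic (Wald chi-square) of the last predictor."""
        X = np.column_stack([np.ones(n)] + cols)
        beta, rss, *_ = np.linalg.lstsq(X, Y, rcond=None)
        cov = rss[0] / (n - X.shape[1]) * np.linalg.inv(X.T @ X)
        return beta[-1] ** 2 / cov[-1, -1]

    chi2_marginal.append(wald_chi2([G]))               # marginal test of G
    chi2_interaction.append(wald_chi2([G, E, G * E]))  # interaction test

# Theory: ncp_mG = ncp_GE * mu_e^2 / sigma_e^2 = 4 * ncp_GE here; the
# mean chi-squares (roughly 1 + ncp) should differ accordingly.
print(np.mean(chi2_marginal), np.mean(chi2_interaction))
```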
These results lead to the nonintuitive situation where the power to detect an interaction effect that is relatively simple and parsimonious from a biological perspective (defined as the product of a genetic variant and an exposure both coded to be positive or null) is very small; and in most scenarios where the main genetic and interaction effects do not cancel each other (see e.g. Weiss, [2008](#gepi21989-bib-0052){ref-type="ref"}), the marginal association test of *G* would be more powerful. In comparison, a more exotic interaction effect, as defined in Figure [1](#gepi21989-fig-0001){ref-type="fig"}A and supplementary Figure S1E, would be both much easier to detect and not captured in a screening of marginal genetic effects.

2.3. Proportion of variance explained {#gepi21989-sec-0050}
-------------------------------------

In genetic association studies, the proportion of variance explained by an interaction term is commonly evaluated as the amount of variance of the outcome it can explain on top of the marginal linear effects of the interacting factors (Hill, Goddard, & Visscher, [2008](#gepi21989-bib-0031){ref-type="ref"}). Following the aforementioned principle, one can derive the contributions of *G* ($r_{G}^{2}$), *E* ($r_{E}^{2}$), and $G \times E$ ($r_{GE}^{2}$) to the variance of the outcome using the estimates from the standardized model, in which the interaction term is independent of *G* and *E* (supplementary Appendix D): $$r_{G}^{2} = \beta_{G}^{{}^{\prime}2} = \left( {\beta_{G} + \beta_{GE} \times \mu_{E}} \right)^{2} \times \sigma_{G}^{2}$$ $$r_{E}^{2} = \beta_{E}^{{}^{\prime}2} = \left( {\beta_{E} + \beta_{GE} \times \mu_{G}} \right)^{2} \times \sigma_{E}^{2}$$ $$r_{GE}^{2} = \beta_{GE}^{{}^{\prime}2} = \left( {\beta_{GE} \times \sigma_{E} \times \sigma_{G}} \right)^{2}.$$ The total variance explained by the predictors in the interaction model equals $r_{model}^{2} = r_{G}^{2} + r_{E}^{2} + r_{GE}^{2}$. It follows that one can draw various scenarios where the estimated main effects of *E* and *G* equal zero but nevertheless have a nonzero contribution to the variance of *Y* because of the interaction effect. Consider the simple example where *G* and *E* are binary variables with a pure synergistic effect, that is, the effect of *G* and *E* is observed only in the exposed subjects carrying the risk allele. Following the above equations, if *G* and *E* have frequencies of, e.g. 0.3 and 0.7, and $\beta_{GE} = 0.5$, the contributions of *G*, *E*, and $G \times E$ to the variance of the outcome equal 2.56%, 0.47% and 1.10%, respectively. More generally, Figure [3](#gepi21989-fig-0003){ref-type="fig"} shows that, depending on the frequency of the causal allele and the distribution of the exposure in the generative model, the vast majority of the contribution of the interaction term to the variance of *Y* will be attributed to either the genetic variant or the exposure. This is in agreement with previous work showing that even if a large proportion of the genetic effect on a given trait is induced by interaction effects, the observed contribution of interaction terms to the heritability can still be very small (Hill et al., [2008](#gepi21989-bib-0031){ref-type="ref"}).
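Spelling out the arithmetic behind these numbers (a check of the formulas above; for a binary variable with frequency *p*, the variance is $p(1 - p)$, so $\sigma_{G}^{2} = \sigma_{E}^{2} = 0.21$ here):

$$r_{G}^{2} = \left( {\beta_{GE} \times \mu_{E}} \right)^{2} \times \sigma_{G}^{2} = \left( {0.5 \times 0.7} \right)^{2} \times 0.21 \approx 0.0257$$

$$r_{E}^{2} = \left( {\beta_{GE} \times \mu_{G}} \right)^{2} \times \sigma_{E}^{2} = \left( {0.5 \times 0.3} \right)^{2} \times 0.21 \approx 0.0047$$

$$r_{GE}^{2} = \left( {\beta_{GE} \times \sigma_{E} \times \sigma_{G}} \right)^{2} = \left( {0.5 \times 0.21} \right)^{2} \approx 0.0110$$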
Because such interaction effects make only a small contribution to $r_{model}^{2}$ on top of the marginal effects of *E* and *G*, they have very limited utility for prediction purposes in the general population (Aschard et al., [2012a](#gepi21989-bib-0004){ref-type="ref"}; Aschard, Zaitlen, Lindstrom, & Kraft, [2015](#gepi21989-bib-0007){ref-type="ref"}). ![Examples of attribution of phenotypic variance explained by an interaction effect. Proportion of variance of an outcome *Y* explained by a genetic variant *G*, an exposure *E* and their interaction *G* × *E* in a model harboring a pure interaction effect only ($Y = \beta_{GE} \times G \times E + \varepsilon$). The exposure *E* follows a normal distribution with a standard deviation of 1 and a mean of 0 (A), 2 (B), and 4 (C). The genetic variant is biallelic with a risk allele frequency increasing from 0.01 to 0.99. The interaction effect is set so that the maximum of the variance explained by the model equals 1%.](GEPI-40-678-g003){#gepi21989-fig-0003} Still, this is a strong limitation when the goal is not prediction but to understand the underlying architecture of the trait under study and to evaluate the relative importance of main and interaction effects from a public health perspective. Lewontin (Lewontin, [1974](#gepi21989-bib-0037){ref-type="ref"}) highlighted similar issues, showing that the analysis of causes and the analysis of variance are not necessarily overlapping concepts. His work presents various scenarios where "*the analysis of variance will give a completely erroneous picture of the causative relations between genotype, environment, and phenotype because of the particular distribution of genotypes and environments in a given population*." Since then, a number of theoretical studies have explored the issue of assigning importance to correlated predictors (Budescu, [1993](#gepi21989-bib-0011){ref-type="ref"}; Chao, Zhao, Kupper, & Nylander‐French, [2008](#gepi21989-bib-0012){ref-type="ref"}; Darlington, [1968](#gepi21989-bib-0018){ref-type="ref"}; Green, Carroll, & DeSarbo, [1978](#gepi21989-bib-0026){ref-type="ref"}) and several alternative measures have been proposed. To my knowledge, none of these measures has so far been considered in human genetic association studies. The advantages and limitations of these alternative methods have been debated for years and no clear consensus has emerged; however, Pratt's axiomatic justification (Pratt, [1987](#gepi21989-bib-0045){ref-type="ref"}) for one of these methods (presented in the literature as the Product Measure (Bring, [1996](#gepi21989-bib-0010){ref-type="ref"}), Pratt index or Pratt's measure (Thomas, Hughes, & Zumbo, [1998](#gepi21989-bib-0050){ref-type="ref"})) makes it a relevant substitute. For a predictor $X_{i}$, the Pratt index, which I refer to hereafter as $r^{2*}$, is defined as the product of $\beta_{X_{i}}$, the standardized coefficient from the multivariate model (where all predictors are scaled to have mean 0 and variance 1, including the interaction term), times its marginal (or zero-order) correlation with the outcome, $cor\left( {Y,X_{i}} \right)$, i.e. $r_{X_{i}}^{2*} = \beta_{X_{i}} \times cor\left( {Y,X_{i}} \right)$. By definition, $r_{X_{i}}^{2*}$ attributes a predictor's importance as a direct function of its estimated effect and therefore addresses the previously raised concern.
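A minimal sketch of this computation (Python; the simulated pure-synergy model reuses the binary example above, and the function name is our own):

```python
import numpy as np

def pratt_indices(Y, predictors):
    """Pratt index r2* for each predictor: the standardized multivariate
    coefficient times the marginal correlation with the outcome."""
    Z = np.column_stack([(x - x.mean()) / x.std() for x in predictors])
    Ys = (Y - Y.mean()) / Y.std()
    beta = np.linalg.lstsq(np.column_stack([np.ones(len(Ys)), Z]), Ys,
                           rcond=None)[0][1:]           # standardized betas
    marg = np.array([np.corrcoef(Ys, z)[0, 1] for z in Z.T])
    return beta * marg                                  # sums to model R^2

rng = np.random.default_rng(0)
n = 100_000
G = rng.binomial(1, 0.3, n).astype(float)
E = rng.binomial(1, 0.7, n).astype(float)
Y = 0.5 * G * E + rng.normal(0.0, 1.0, n)               # pure synergy
r2 = pratt_indices(Y, [G, E, G * E])
print(r2, r2.sum())   # per-predictor contributions and their total
```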
Among its other relevant properties, the Pratt index depends only on the regression coefficients, the multiple correlation, and the residual variance, but not on higher moments, and it does not change with (nonconstant) linear transformations of predictors other than $X_{i}$. It also has convenient additivity properties, as it satisfies the condition $r_{G}^{2*} + r_{E}^{2*} + r_{GE}^{2*} = r_{model}^{2}$ (supplementary Appendix D), so that the overall contribution of the predictors is the sum of their individual contributions; for example, the cumulated contribution of multiple interaction effects can easily be evaluated by summing the $r_{X_{i}}^{2*}$. The Pratt index has also received criticism (Bring, [1996](#gepi21989-bib-0010){ref-type="ref"}; Chao et al., [2008](#gepi21989-bib-0012){ref-type="ref"}), in particular for allowing $r_{X}^{2*}$ to be negative (Thomas et al., [1998](#gepi21989-bib-0050){ref-type="ref"}). Pratt's answer to this concern is that $r_{X_{i}}^{2*}$ only describes the average contribution of a predictor to the outcome variance in one dimension and is therefore, as is any one-dimensional measure, a suboptimal representation of the complexity of the underlying model. For example, a negative $r_{X_{i}}^{2*}$ means that if we were able to remove the effect of $X_{i}$, the variance of the outcome would increase because of the correlation of $X_{i}$ with other predictors (see the example in supplementary Appendix D). From a practical perspective, $r_{X_{i}}^{2*}$ can be expressed as a function of the estimated effects and the means and variances of *E* and *G* (supplementary Appendix D), and can be derived using estimates from a standard regression model: $$r_{G}^{2*} = \left( {\beta_{G}^{2} + \beta_{G} \times \beta_{GE} \times \mu_{E}} \right) \times \sigma_{G}^{2}$$ $$r_{E}^{2*} = \left( {\beta_{E}^{2} + \beta_{E} \times \beta_{GE} \times \mu_{G}} \right) \times \sigma_{E}^{2}$$ $$r_{GE}^{2*} = \beta_{GE}^{2} \times \sigma_{GE}^{2} + \beta_{GE} \times \left( {\beta_{G} \times \mu_{E} \times \sigma_{G}^{2} + \beta_{E} \times \mu_{G} \times \sigma_{E}^{2}} \right)$$ As shown in Figure [4](#gepi21989-fig-0004){ref-type="fig"} and supplementary Figure S4, the Pratt index can recover the pattern of the causal model in situations where the standard approach would underestimate the importance of the interaction effects. It can therefore be of great use in future studies to evaluate the importance of potentially modifiable exposures that influence the genetic component of multifactorial traits. ![Relative importance of an interaction term as defined by the Pratt index. Contribution of a genetic variant *G* with a minor allele frequency of 0.5, a normally distributed exposure *E* with a mean of 4 and a variance of 1, and their interaction *G* × *E*, to the variance of a normally distributed outcome *Y*, based on the standard approach (the marginal contribution of *E* and *G* and the increase in *r*^2^ when adding the interaction term; gray boxes), and based on the Pratt index (blue boxes), across 10,000 replicates of 5,000 subjects. For illustration purposes, the predictors jointly explain 10% of the variance of *Y*.
In scenario (A) all *G*, *E*, and *G* × *E* have equal contributions, while in scenarios (B), (C), and (D) there is no interaction effect, no exposure effect, and no genetic effect, respectively.](GEPI-40-678-g004){#gepi21989-fig-0004} Table [1](#gepi21989-tbl-0001){ref-type="table-wrap"} illustrates the differences between the two approaches for two confirmed interaction effects on body mass index (BMI). In case (1), the authors identified and replicated an interaction between soda consumption and a genetic risk score (GRS) of 32 BMI SNPs. Case (2) is a replication of a previously identified interaction between a GRS of 12 BMI SNPs and physical activity (Ahmad et al., [2013](#gepi21989-bib-0001){ref-type="ref"}). Following the formulas above and using approximations of the means and variances of the genetic and exposure variables (supplementary Tables S1 and S2), I estimated the contribution of each term using the standard approach and the Pratt index, after rederiving effect estimates for a model where the predictor values (for the GRS and the exposure) are shifted so that the minimum observed values equal 0, as suggested earlier. This resulted in major differences in the relative importance of the three predictors, the contribution of the interaction effect as derived with the Pratt index being substantially higher in both cases (increasing from 4.4% to 10.8% for case 1, and from 0.4% to 15.7% for case 2). Case 1 highlights in particular that reducing soda consumption might have a greater impact in reducing the average BMI in the population than one would expect when focusing on the amount of variance explained as defined in the standard approach. An important caveat here is that the Pratt index is sensitive to location shifts of the predictors (as performed in this analysis), and the results from Table [1](#gepi21989-tbl-0001){ref-type="table-wrap"} would change if a different transformation were applied to the predictors (i.e. if the minimum possible value were defined differently). In comparison, the standard approach is robust to linear transformations of the predictors.

###### Relative importance of GRS by exposure interaction effect from real data example

| Reference | Contribution to BMI | Standard[a](#gepi21989-tbl1-note-0002){ref-type="fn"} | Pratt index |
| --- | --- | --- | --- |
| 32 BMI SNPs × soda consumption | Total | 0.011 | 0.011 |
| | % of genetic | 91.0% | 70.5% |
| | % of environment | 4.6% | 18.7% |
| | % of interaction | 4.4% | 10.8% |
| 12 BMI SNPs × physical activity | Total | 0.016 | 0.016 |
| | % of genetic | 43.8% | 49.1% |
| | % of environment | 55.8% | 35.2% |
| | % of interaction | 0.4% | 15.7% |

BMI, body mass index; SNP, single nucleotide polymorphism. [a] Variance explained by the interaction effect is derived as the variance explained on top of the marginal contributions from the genes and the environment.

2.4. Improving detection through multivariate interaction tests {#gepi21989-sec-0060}
---------------------------------------------------------------

Using statistical techniques such as the Pratt index can provide clues about the importance of interaction effects; however, it does not help in mapping interactions. Increasing power mostly relies on two principles: increasing sample size, and leveraging assumptions about the underlying model.
The case-only test, which assumes independence between the genetic variant and the exposure, and two-step strategies that select candidate variants for interaction testing based on their marginal linear effects, are good examples of the latter principle (Dai et al., [2012](#gepi21989-bib-0017){ref-type="ref"}; Gauderman et al., [2013](#gepi21989-bib-0024){ref-type="ref"}; Mukherjee, Ahn, Gruber, & Chatterjee, [2012](#gepi21989-bib-0041){ref-type="ref"}). However, only a limited number of assumptions can be made for a single variant by single exposure interaction test. With the overwhelming wave of genomic and environmental data, I suggest that a major path to move the field forward is to extend this principle while considering more parameters jointly. This has actually already been applied over the past few years with the joint test of main genetic and interaction effects (Kraft, Yen, Stram, Morrison, & Gauderman, [2007](#gepi21989-bib-0034){ref-type="ref"}). The *ncp* of such a joint test can be expressed as a function of the main and interaction estimates ($\beta_{G}$ and $\beta_{GE}$), their variances ($\sigma_{\beta_{G}}^{2}$ and $\sigma_{\beta_{GE}}^{2}$) and their covariance γ (supplementary Appendix E). By accounting for γ, the joint test recovers most of the power lost by the univariate tests of the main genetic and interaction effects (i.e. the situation where neither the interaction effect nor the main genetic effect is significant while the joint test is; see e.g. SNP rs11654749 in (Hancock et al., [2012](#gepi21989-bib-0030){ref-type="ref"})). More importantly, in the presence of both main and interaction effects, it can outperform the marginal test of *G*. This comes, however, at the cost of decreased precision: if the test is significant, one cannot conclude whether the association signal is driven by the main effect or by the interaction. Moreover, this holds only if the contribution of the interaction effect on top of the marginal effect is large enough to balance the increase in the number of degrees of freedom (Aschard, Hancock, London, & Kraft, [2010](#gepi21989-bib-0005){ref-type="ref"}; Clayton & McKeigue, [2001](#gepi21989-bib-0013){ref-type="ref"}) (Fig. [2](#gepi21989-fig-0002){ref-type="fig"}). Application of the joint test of the main genetic effect and a single gene by exposure interaction term is now relatively common in GWAS settings (Hamza et al., [2011](#gepi21989-bib-0029){ref-type="ref"}; Hancock et al., [2012](#gepi21989-bib-0030){ref-type="ref"}; Manning et al., [2012](#gepi21989-bib-0039){ref-type="ref"}). However, exploring further multivariate interactions with multiple exposures is limited by practical considerations. Existing software to perform the joint test in a meta-analysis context (Aschard et al., [2010](#gepi21989-bib-0005){ref-type="ref"}; Manning et al., [2011](#gepi21989-bib-0040){ref-type="ref"}) only allows the analysis of a single interaction term, mostly because the test requires the variance-covariance matrix between estimates, which is not provided by popular GWAS software.
Leveraging the results from the previous sections, one can show that the *ncp* of the joint test of the main genetic effect and interactions with *l* independent exposures can be expressed as the sum of the *ncps* from the tests of *G* and of the $G \times E_{cent.i}$, where $E_{cent.i}$ is the centered exposure *i* (supplementary Appendix E): $$ncp_{G + GE} = N \times \sigma_{G}^{2} \times \beta_{G}^{{}^{\prime\prime}2} + \sum\limits_{i = 1...l}\left\lbrack {N \times \sigma_{G}^{2} \times \sigma_{E}^{2} \times \beta_{GE_{cent.i}}^{{}^{\prime\prime}2}} \right\rbrack$$where $\beta_{G}^{{}^{\prime\prime}}$ and $\beta_{GE_{cent.i}}^{{}^{\prime\prime}}$ are the effects of *G* and $G \times E_{cent.i}$. Such a test is robust to non-normal distributions of the exposures and to modest correlation (\<0.1) between the genetic variant and the exposures, but is sensitive to moderate correlation (\>0.1) between exposures (supplementary Figs. S5 and S6). Hence, one can perform a meta-analysis of a joint test including multiple interaction effects using existing software simply by centering the exposures. In brief, one would first perform a standard inverse-variance meta-analysis to derive chi-squares for the $l + 1$ terms of the model considered, and then sum all chi-squares to form a chi-square with $l + 1$ *df* (a minimal sketch of this recipe is given below). Importantly, centering the exposures is of interest only when testing multiple interactions jointly with the main genetic effect. In comparison, the combined test of multiple interaction effects can simply be performed by summing the chi-squares from each independent interaction test or from interaction tests derived in a joint model. As before, the validity of this approach relies on independence between the genetic variant and the exposures, and between the exposures. Finally, a more general solution that should be explored in future studies would consist, as proposed for the analysis of multiple phenotypes (e.g. Zhu et al., [2015](#gepi21989-bib-0053){ref-type="ref"}), in estimating the correlation between all tests considered (main genetic effect and/or multiple interaction effects) using genome-wide summary statistics in order to form a multivariate test. A second major direction for the development of multivariate tests is to assume that the effects of multiple genetic variants depend on a single "scaling" variable *E*. A rising approach consists of testing for interaction between the scaling variable and a genetic risk score (GRS), derived as the weighted sum of the risk alleles. Several interaction effects have been identified using this strategy (Ahmad et al., [2013](#gepi21989-bib-0001){ref-type="ref"}; Fu et al., [2013](#gepi21989-bib-0023){ref-type="ref"}; Langenberg et al., [2014](#gepi21989-bib-0035){ref-type="ref"}; Pollin et al., [2012](#gepi21989-bib-0044){ref-type="ref"}; Qi, Cornelis, Zhang, van Dam, & Hu, [2009](#gepi21989-bib-0046){ref-type="ref"}; Qi et al., [2012](#gepi21989-bib-0047){ref-type="ref"}), some being replicated in independent studies (Ahmad et al., [2013](#gepi21989-bib-0001){ref-type="ref"}; Qi et al., [2012](#gepi21989-bib-0047){ref-type="ref"}).
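As a concrete reading of the recipe above, a sketch of its summary-statistics version (Python with scipy; the estimates below are invented placeholders, and independence of the terms after centering is assumed, as stated in the text):

```python
from scipy import stats

def joint_g_gxe_pvalue(betas, ses):
    """Joint test of the main genetic effect and l interaction terms with
    centered exposures: sum the per-term Wald chi-squares from the
    inverse-variance meta-analysis (assumed independent after centering)
    and refer the total to a chi-square with l + 1 df."""
    chi2 = sum((b / s) ** 2 for b, s in zip(betas, ses))
    return stats.chi2.sf(chi2, df=len(betas))

# Hypothetical meta-analysed estimates for [G, G x E1_cent, G x E2_cent]
print(joint_g_gxe_pvalue([0.05, 0.03, -0.02], [0.010, 0.015, 0.015]))
```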
This relative success, as compared to univariate analysis, has generated discussion regarding potential underlying mechanisms (Aschard et al., [2015](#gepi21989-bib-0007){ref-type="ref"}; Ebbeling & Ludwig, [2013](#gepi21989-bib-0020){ref-type="ref"}; Goran, [2013](#gepi21989-bib-0025){ref-type="ref"}; Greenfield, Samaras, & Campbell, [2013](#gepi21989-bib-0027){ref-type="ref"}; Malavazos, Briganti, & Morricone, [2013](#gepi21989-bib-0038){ref-type="ref"}). Overall, testing for an interaction effect between a GRS and a single exposure amounts to expanding the principle of a joint test of multiple interactions while leveraging the assumption that, for a given choice of coded alleles, most interaction effects go in the same direction. It is similar in essence to the burden test that has been widely used for rare variant analysis (Lee, Abecasis, Boehnke, & Lin, [2014](#gepi21989-bib-0036){ref-type="ref"}). In its simplest form it can be expressed as the sum of all interaction effects, and it therefore captures deviation of the mean interaction effect from 0. When interaction effects are null on average, a joint test of all interaction terms (as previously described) will likely be the most powerful approach, as it allows interaction effects to be heterogeneous. Conversely, if interactions tend to go in the same direction, the GRS‐based test can outperform other approaches (Fig. [5](#gepi21989-fig-0005){ref-type="fig"}). Of course, in a realistic scenario, a number of non‐interacting SNPs would be included in the GRS, diluting the overall interaction signal and therefore decreasing power. However, the gain in power for the multivariate approaches can remain substantial even when a large proportion of the SNPs tested (e.g. 95% in the example from Fig. [5](#gepi21989-fig-0005){ref-type="fig"}) do not interact with the exposure. Table [2](#gepi21989-tbl-0002){ref-type="table-wrap"} illustrates the power achieved by these tests in the examples used for Table [1](#gepi21989-tbl-0001){ref-type="table-wrap"}. ![Advantages and limitations of testing interaction effects with a genetic risk score. Examples of power comparison for the combined analysis of interaction effects between 20 SNPs and a single exposure. Power was derived for three scenarios: the interaction effects are normally distributed (upper panels) and (A) centered, (B) slightly positive so that 25% of the interactions are negative, and (C) positive only. Three tests are compared while increasing sample size from 0 to 10,000: the joint test of all interaction terms, the genetic risk score by exposure interaction test, and the test of the strongest interaction effect (pairwise test) after correction for the 20 tests performed (middle panels).
The lower panels show power of the three tests for a sample size of 5,000, when including 1--400 non‐interacting SNPs on top of the 20 causal SNPs in the analysis and after accounting for multiple testing in the pairwise test.](GEPI-40-678-g005){#gepi21989-fig-0005}

###### Genetic risk score by exposure interaction in real data

|  |  | 32 BMI SNP × soda consumption | 12 BMI SNP × physical activity |
| --- | --- | --- | --- |
| Reported *P*‐value | *Best SNP*[a](#gepi21989-tbl2-note-0002){ref-type="fn"} | 0.0030 | 0.0030 |
|  | *GRS from paper*[c](#gepi21989-tbl2-note-0004){ref-type="fn"} | \<0.001 | 0.016 |
| Derived *P*‐value[a](#gepi21989-tbl2-note-0002){ref-type="fn"} | *wGRS* | 0.000028 | 0.0027 |
|  | *uGRS* | 0.00019 | 0.015 |
|  | *chi2* | 0.014 | 0.050 |
| Power[b](#gepi21989-tbl2-note-0003){ref-type="fn"} | *Best SNP* | 0.43 | 0.68 |
|  | *wGRS* | 0.99 | 0.85 |
|  | *uGRS* | 0.96 | 0.54 |

SNP, single nucleotide polymorphism; wGRS, weighted GRS; uGRS, unweighted GRS; chi2, sum of individual interaction chi‐squares. (a) *P*‐values derived from individual SNP by exposure interaction estimates, not corrected for the number of SNPs tested. (b) Power is approximated based on the effect estimate, for an alpha level of 5% and sample sizes similar to those used in the corresponding study. (c) For soda consumption, the authors used a weighted GRS; for physical activity, the authors used an unweighted GRS. John Wiley & Sons, Ltd.

Finally, as shown in supplementary Appendix F, assuming the SNPs in the GRS are independent from each other, the GRS by *E* interaction test can be derived from individual interaction effect estimates. More precisely, consider testing the effect of a weighted GRS on *Y*: $$Y \sim \gamma_{GRS} \times GRS + \gamma_{E} \times E + \gamma_{INT} \times GRS \times E$$where $\gamma_{GRS}$, $\gamma_{E}$, and $\gamma_{INT}$ are the main effect of the weighted GRS, the main effect of *E* and the interaction effect between *E* and the GRS, respectively. The test of $\gamma_{INT}$ is asymptotically equivalent to the meta‐analysis of $\gamma_{G_{i} \times E}$, the interaction effects between $G_{i}$ and *E*, using an inverse‐variance weighted sum to derive a 1 *df* chi‐square, i.e. (see supplementary Appendix F, supplementary Figs. S7 and S8, and Dastani et al., [2012](#gepi21989-bib-0019){ref-type="ref"}): $$\left( \frac{{\hat{\gamma}}_{INT}}{{\hat{\sigma}}_{\gamma_{INT}}} \right)^{2} = \frac{\left( {\sum_{m}\frac{w_{i} \times {\hat{\gamma}}_{G_{i} \times E}}{{\hat{\sigma}}_{\gamma_{G_{i} \times E}}^{2}}} \right)^{2}}{\sum_{m}\frac{w_{i}^{2}}{{\hat{\sigma}}_{\gamma_{G_{i} \times E}}^{2}}}\mspace{6mu}$$where $w_{i}$ is the weight given to SNP *i*. A number of strategies can be used for the weighting scheme. Assuming equal size of all interaction effects, one should weight each SNP by the inverse of its genotype standard deviation ($w_{i} = 1/\sigma_{G_{i}}$). Alternatively, others have used weights proportional to the marginal genetic effects of the SNPs, assuming the magnitudes of the marginal and interaction effects are correlated. Obviously, the relative power of each of these weighting schemes depends on how well it reflects the true underlying model (a numerical sketch of this summary‐statistics construction follows this paragraph). Finally, applying GRS‐based interaction tests implicitly supposes that a set of candidate genetic variants has been identified.
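As a concrete illustration of the equivalence above, the following sketch (with hypothetical estimates; weights are user‐supplied, defaulting to an unweighted GRS) reconstructs the 1‐*df* GRS by exposure chi‐square from single‐SNP interaction summary statistics:

```python
# GRS-by-E interaction test assembled from per-SNP GxE estimates and standard
# errors, using the inverse-variance weighted sum given above.
import numpy as np
from scipy.stats import chi2

def grs_by_e_test(gamma_hat, se, weights=None):
    """gamma_hat, se: arrays of per-SNP interaction estimates and their SEs."""
    gamma_hat, se = np.asarray(gamma_hat), np.asarray(se)
    w = np.ones_like(gamma_hat) if weights is None else np.asarray(weights)
    num = np.sum(w * gamma_hat / se**2) ** 2
    den = np.sum(w**2 / se**2)
    stat = num / den                      # 1-df chi-square
    return stat, chi2.sf(stat, df=1)

# 20 hypothetical SNPs whose interaction effects share a direction:
rng = np.random.default_rng(0)
gamma_hat = rng.normal(0.03, 0.01, size=20)
se = np.full(20, 0.012)
print(grs_by_e_test(gamma_hat, se))
```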
The current rationale assumes that most interacting variants also display a marginal linear effect; studies have therefore focused on GWAS hits, although other screening methods can be used (Aschard, Zaitlen, Tamimi, Lindstrom, & Kraft, [2013](#gepi21989-bib-0008){ref-type="ref"}; Pare, Cook, Ridker, & Chasman, [2010](#gepi21989-bib-0042){ref-type="ref"}). Moreover, existing knowledge, such as functional annotation (Consortium, [2004](#gepi21989-bib-0015){ref-type="ref"}) or existing pathway databases (Kanehisa et al., [2014](#gepi21989-bib-0033){ref-type="ref"}), can be leveraged to refine the set of SNPs to be aggregated into a GRS. 3. DISCUSSION {#gepi21989-sec-0070} ============= Advancing knowledge of how genetic and environmental factors combine to influence human traits and diseases remains a key objective of research in human genetics. Ironically, the simplest and most parsimonious biological interaction models---those in which the effect of a genetic variant is either enhanced or decreased depending on a common exposure---are probably the most difficult to identify. Furthermore, the contribution of such interaction effects can be dramatically underestimated when measured as the drop in *r* ^2^ if the interaction term were removed from the model. Here, I argue for the use of new approaches and analytical strategies to address these concerns. This includes using methods such as the Pratt index to evaluate the relative importance of interaction effects in genetic association studies. These methods can highlight important modifiable exposures influencing genetic mechanisms that might be neglected with existing approaches. Regarding detection, besides increasing sample size, gains in power to detect interaction effects in future studies will likely rely mostly on leveraging additional assumptions about the underlying model. In the big data era, where millions of genetic variants are measured alongside multiple environmental exposures and endophenotypes, this means using multivariate models. A variety of powerful statistical tests can be devised assuming multiple environmental exposures interact with multiple genetic variants. As shown in this study, the application of such approaches can dramatically improve power to detect interactions that would be missed by standard univariate tests. While these methods come at the cost of decreased precision---i.e. a significant signal points to multiple potential culprits---they can identify interaction effects of potentially greater clinical relevance than univariate pairwise interactions (Aschard et al., [2012a](#gepi21989-bib-0004){ref-type="ref"}, [2015](#gepi21989-bib-0007){ref-type="ref"}; Qi et al., [2012](#gepi21989-bib-0047){ref-type="ref"}). Understanding the strengths and limitations of standard statistical methods is key to overcoming today\'s challenges in the identification of interaction effects in human traits and diseases. By deciphering the basic principles of interaction tests, this perspective aims to provide a comprehensive guideline for performing interaction analyses in genetic association studies and to open the path for future method development. Supporting information ====================== Additional Supporting Information may be found online in the supporting information tab for this article.
###### Appendix A: Effect estimates from standardized and unstandardized predictors Appendix B: Non‐centrality parameters for marginal and interaction models Appendix C: Variance‐covariance for the GxE term and its estimated effect Appendix D: Derivation of the Pratt index Appendix E: Joint test of main and interaction effects Appendix F: GRS‐based test, joint test and univariate test of multiple interaction effects Figure S1. Linear interaction effect across different coding schemes Figure S2. Power comparison for linear regression Figure S3. Power comparison for logistic regression Figure S4. The Pratt index across multiple interactions Figure S5. Joint test of main genetic effect and multiple interaction effects in a linear regression Figure S6. Joint test of main genetic effect and multiple interaction effects in a logistic regression Figure S7. GRS‐based statistic and meta‐analysis of single SNP estimates in linear regression Figure S8. GRS‐based statistic and meta‐analysis of single SNP estimates in logistic regression I am grateful to Peter Kraft, Noah Zaitlen, Ami Joshi, John Pratt, and Donald Halstead for helpful discussions and comments. I also thank Shafqat Ahmad, Paul Franks, and Qibin Qi for sharing details on their analyses of SNP by physical activity interaction and SNP by soda consumption interaction, respectively. This research was funded by NIH grant R21HG007687. The author has no conflict of interest to declare.
INFECTIOUS diseases are pervasive. So pervasive, in fact, that without effective mechanisms of resistance, host populations can be quickly reduced in size or even driven to extinction. For instance, chestnut blight effectively wiped out the American chestnut, which had little if any resistance to this novel pathogen, after its introduction to North America in the early 1900s ([@bib1]; [@bib2]). Similarly, when Myxoma virus was introduced to Australia in the 1950s, local rabbit populations were almost entirely susceptible, resulting in millions of deaths and the decimation of local populations ([@bib24]). Human populations, too, have been heavily affected by infectious disease in the past, perhaps most notably during the 1918 influenza pandemic that killed \>50 million people before fading away in 1920 ([@bib14]; [@bib27]). Although these examples are striking and demonstrate the impact of unchecked infectious disease, they are far from the norm. More commonly, host populations have effective mechanisms of resistance against pathogens they encounter regularly ([@bib25]), with significant variability between populations depending on their history of exposure ([@bib5]; [@bib30]). The existence of substantial variation in resistance to infectious disease within host populations has generated hope that it may be possible to identify the genes conferring resistance. Identifying such resistance genes would pave the way for genetic engineering of resistant crops and livestock, focus drug development efforts on likely targets, and open the door to gene therapeutic approaches within human populations. As the genomic revolution has progressed, it has become increasingly common to search for these "resistance genes" using genome-wide association studies (GWAS) ([@bib22]; [@bib26]). Loosely speaking, these studies compare the marker genotypes of individuals infected with disease and those uninfected and ask which loci predict an individual's infection status. The GWAS approach has now been used to successfully identify a range of candidate genes thought to be important in resistance to infectious disease in plants and animals ([@bib9]; [@bib15]; [@bib29]; [@bib32]; [@bib12]). Despite the successes of the GWAS approach in some cases, it is becoming increasingly recognized that the approach has significant limitations. For instance, GWAS are most powerful when resistance depends on common genetic variants with relatively large phenotypic effects ([@bib18]). In addition, which candidate genes are identified by this method may depend on the environment in which the study is conducted ([@bib28]). These limitations apply to GWAS in general, not just those studies focused on infectious disease, and are widely recognized. When GWAS are used to understand the genetic basis of resistance to infectious disease, however, a potentially more important problem arises. Specifically, the resistance genes identified within the host population may depend on the genetic composition of the infectious disease itself ([@bib22]). This sensitivity of the GWAS approach to the genetic composition of the infectious disease becomes acute any time genotype-by-genotype (G × G) interactions exist; in other words, when particular combinations of host and pathogen genes yield resistance whereas other combinations lead to susceptibility. 
These G × G interactions may have drastic effects on the results of genetic association studies and our understanding of disease resistance ([@bib17]), similar to the effects of gene-by-environment interactions. One particularly disconcerting possibility is that rapid pathogen evolution or host--pathogen coevolution will cause the host resistance genes that can be identified by GWAS to fluctuate rapidly over time. Here we quantitatively explore the performance of GWAS when resistance to infectious disease involves G × G interactions between host and pathogen. We begin by presenting a general mathematical model of an association study to investigate disease resistance and evaluate the role of G × G interactions for several forms of host--parasite interactions. We then simulate host--pathogen coevolution to illustrate the extent to which G × G interactions may vary across time and/or space. We conclude by reanalyzing published genome-wide association data ([@bib8]) of *Daphnia magna* resistance to its *Pasteuria ramosa* pathogen, distinguishing regions of the genome associated with overall health from those involved in resistance specific to a particular *P. ramosa* strain. Model {#s1} ===== We consider a scenario, common in practice, where host resistance is measured as a continuous quantitative trait. This would be the case, for instance, if host resistance is assessed by measuring viral load, duration of infection, or damage to host tissues. Our model assumes that host resistance depends on the value of a quantitative trait in the host, $z_{H},$ relative to the value of a quantitative trait in the pathogen, $z_{P}.$ Specifically, we assume host susceptibility, *S*, is given by the following function:$$S = f\left( {z_{H} - z_{P}} \right).$$The function *f* is sufficiently general to accommodate many commonly observed resistance mechanisms. For instance, in the interaction between the snail *Biomphalaria glabrata* and its trematode parasites, resistance depends on the relative quantities of reactive oxygen molecules in the snail ($z_{H}$) and reactive oxygen scavenging molecules produced by the parasite ($z_{P}$) ([@bib6]; [@bib20]). In cases like these, the function *f* may take a sigmoid form, which we call the phenotypic-difference model ([Figure 1A](#fig1){ref-type="fig"}) ([@bib23]; [@bib3]):$$f\left( {z_{H} - z_{P}} \right) = \frac{1}{1 + e^{\alpha{({z_{H} - z_{P}})}}}.$$In contrast, in the interaction between the schistosome parasite, *Schistosoma mansoni*, and its snail host, *B. glabrata*, resistance depends on the degree to which the conformation of defensive FREP molecules produced by the snail ($z_{H}$) matches the conformation of parasite mucin molecules ($z_{P}$) and successfully binds to them ([@bib19]). In such cases, the function *f* may take a Gaussian form, which we call a phenotypic-matching model ([Figure 1B](#fig1){ref-type="fig"}) ([@bib16]):$$f\left( {z_{H} - z_{P}} \right) = e^{- \alpha{({z_{H} - z_{P}})}^{2}}.$$Figure 1. Host--parasite interaction models. Susceptibility to infection as a function of the distance between host and pathogen phenotypes, $z_{H} - z_{P},$ for the (A) phenotypic-difference and (B) phenotypic-matching model. Red curves show exact functions whereas black curves are the quadratic approximations. To study the effects of genetic interactions on susceptibility to infection, *S*, we must integrate genetics into our phenotypic model; both functional forms are sketched in code below.
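As a concrete reference point, the two interaction functions just defined can be written in a few lines (a sketch only; the value of α and the phenotype grid are arbitrary illustration choices):

```python
# Susceptibility S as a function of the host-pathogen phenotypic distance
# d = z_H - z_P, for the two interaction forms of Equations 2 and 3.
import numpy as np

def phenotypic_difference(d, alpha=1.0):
    """Sigmoid model: S = 1 / (1 + exp(alpha * d)); monotone in d."""
    return 1.0 / (1.0 + np.exp(alpha * d))

def phenotypic_matching(d, alpha=1.0):
    """Gaussian model: S = exp(-alpha * d**2); peaks when phenotypes match."""
    return np.exp(-alpha * d**2)

d = np.linspace(-3, 3, 7)
print(phenotypic_difference(d))   # decreases as the host outpaces the pathogen
print(phenotypic_matching(d))     # maximal susceptibility at d = 0
```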
For a haploid host and pathogen where $z_{H}$ and $z_{P}$ depend on $n_{H}$ and $n_{P}$ biallelic loci, respectively, we can write general expressions for these phenotypes as functions of alleles present in each species:$$\begin{matrix} {z_{H} = b_{H0} + {\sum\limits_{i = 1}^{n_{H}}{b_{Hi}X_{Hi}}} + {\sum\limits_{\substack{i,j \\ i \neq j}}^{n_{H}}{b_{Hi,Hj}X_{Hi}X_{Hj}}} + {\sum\limits_{\substack{i,j,k \\ i \neq j \neq k}}^{n_{H}}{b_{Hi,Hj,Hk}X_{Hi}X_{Hj}X_{Hk}}} + \ldots + \epsilon_{H}} \\ {z_{P} = b_{P0} + {\sum\limits_{i = 1}^{n_{P}}{b_{Pi}X_{Pi}}} + {\sum\limits_{\substack{i,j \\ i \neq j}}^{n_{P}}{b_{Pi,Pj}X_{Pi}X_{Pj}}} + {\sum\limits_{\substack{i,j,k \\ i \neq j \neq k}}^{n_{P}}{b_{Pi,Pj,Pk}X_{Pi}X_{Pj}X_{Pk}}} + \ldots + \mathit{\epsilon}_{P}} \\ \end{matrix}$$In these expressions, $X_{Mi}$ is an indicator variable describing the allelic state (0 or 1) of an individual of species *M* at locus *i*, $b_{M0}$ is the phenotype of an individual of species *M* with all "0" alleles, and $b_{Mi}$ is the additive effect of carrying a "1" allele at locus *i* in species *M*. The remaining coefficients ($b_{Mi,Mj},$ $b_{Mi,Mj,Mk},$ *etc*.) describe epistatic interactions among loci. Finally, $\epsilon_{M}$ captures an environmental contribution to the phenotype of species *M*, which is assumed to have mean 0, a constant variance, and be uncorrelated with an individual's genotype. Substituting Equation 4 into Equation 1 yields a model of host susceptibility as a function of host and pathogen genotypes. Our goal now is to use this genetic model to predict the sensitivity of GWAS to the genetic composition of the pathogen population. We will explore both traditional, single-species GWAS approaches and a novel approach that takes genetic information from both host and pathogen into account (co-GWAS). Our investigation will rely on a pair of complementary approaches. First, we will develop and analyze analytical approximations that quantify the sensitivity of GWAS and co-GWAS approaches to changes in pathogen genotype frequencies. These analytical approximations will rely on simplified genotype--phenotype maps and will not explicitly integrate evolution and coevolution. Second, we will develop and analyze simulations that allow us to explore the consequences of rapid pathogen evolution and coevolution between the species on the performance of both GWAS and co-GWAS approaches. Analytical Approximation {#s2} ======================== To simplify the genetic model of resistance developed in the previous section sufficiently for mathematical analysis, we begin by considering the case where $n_{H} = n_{P} = 2.$ In addition, we assume that the phenotypes of host and pathogen are not too far from one another, such that the quantity $z_{H} - z_{P}$ is small relative to the extent of phenotypic specificity (*α* in Equations 2 and 3). Under this assumption, Equation 1 can be approximated by its second order Taylor series expansion.
This allows the genetic model of susceptibility to be simplified to the following approximate expression:$$\begin{matrix} {S \approx f\left( 0 \right) + f^{\prime}\left( 0 \right)\left\lbrack {\left( {b_{H0} + b_{H1}X_{H1} + b_{H2}X_{H2} + \epsilon_{H}} \right) - \left( {b_{P0} + b_{P1}X_{P1} + b_{P2}X_{P2} + \epsilon_{P}} \right)} \right\rbrack} \\ {+ \frac{1}{2}f^{''}\left( 0 \right)\left\lbrack {\left( {b_{H0} + b_{H1}X_{H1} + b_{H2}X_{H2} + \epsilon_{H}} \right) - \left( {b_{P0} + b_{P1}X_{P1} + b_{P2}X_{P2} + \epsilon_{P}} \right)} \right\rbrack^{2} + \mathcal{O}\left\lbrack \left( {z_{H} - z_{P}} \right)^{3} \right\rbrack,} \\ \end{matrix}$$where primes indicate derivatives with respect to the distance between host and pathogen phenotypes. With (5) in hand, we have a model that predicts host resistance as a function of host and pathogen genotypes. In the following two sections, we will use (5) to investigate how the genetic composition of the pathogen population influences the results of GWAS and co-GWAS. Extending these models to complete G × G association studies requires a large number of pathogen loci ($n_{P} \gg 2$) and thus may be computationally prohibitive. For many pathogens, however, strain type or subtype may be known and capture much of the relevant genetic variation in the pathogen population. In these cases, tracking pathogen types can greatly reduce the effective number of loci, even to $n_{P} = 2$ as in Equation 5. Such simplifications should allow us to expand beyond two host loci to a whole host genome ($n_{H} \gg 2$), while avoiding the computational complexity of tracking all possible genetic interactions between the full host genome and the full parasite genome. Single-species GWAS {#s3} ------------------- We envision a standard GWAS where susceptibility to infection has been measured for some number of host individuals, each of which has also been genotyped at a large number of marker loci. To focus our model on the effects of species interactions, we will assume these data accurately provide us with the genotype of individuals at the two host resistance loci. Using these data, the goal of the genetic association study is to attribute variation in host susceptibility to these loci in proportion to their effects. This can be done by fitting susceptibility with a linear combination of the genetic indicator variables:$$S \approx \beta_{H0} + \beta_{H1}X_{H1} + \beta_{H2}X_{H2} + \beta_{H1,H2}X_{H1}X_{H2},$$where the *β* coefficients can be found using least squares regression. The biological interpretation of this linear model is straightforward. The intercept coefficient, $\beta_{H0},$ is the expected host resistance when only 0 alleles are present at both host loci. The coefficients $\beta_{H1}$ and $\beta_{H2}$ are the inferred additive effects of the 1 alleles at the first and second loci, respectively, and $\beta_{H1,H2}$ captures the epistatic interaction between the two host 1 alleles. Solving for the coefficients in (6) we have (see Supplemental Material, *Mathematica* notebook in [File S1](http://www.genetics.org/lookup/suppl/doi:10.1534/genetics.117.300481/-/DC1/FileS1.zip)):$$\begin{matrix} {\beta_{H0} = f\left( 0 \right) + f^{\prime}\left( 0 \right)\left\lbrack {{\overset{\sim}{b}}_{H0}-\left( {{\overset{\sim}{b}}_{P0} + b_{P1}q_{P1} + b_{P2}q_{P2}} \right)} \right\rbrack + \frac{1}{2}f^{''}\left( 0 \right)\left\{ {\left( {{\overset{\sim}{b}}_{H0} - {\overset{\sim}{b}}_{P0}} \right)^{2} +} \right.} \\ \left.
{b_{P1}^{2}q_{P1} + b_{P2}^{2}q_{P2} + 2\left\lbrack {\left( {b_{P1}q_{P1} + b_{P2}q_{P2}} \right)\left( {{\overset{\sim}{b}}_{H0} - {\overset{\sim}{b}}_{P0}} \right) - b_{P1}b_{P2}\left( {q_{P1}q_{P2} + D_{P}} \right)} \right\rbrack} \right\} \\ {\beta_{Hi} = f^{\prime}\left( 0 \right)b_{Hi} + \frac{1}{2}f^{''}\left( 0 \right)\left\lbrack {b_{Hi}^{2} + 2b_{Hi}\left( {b_{H0} - b_{P0} - q_{P1}b_{P1} - q_{P2}b_{P2}} \right)} \right\rbrack} \\ {\beta_{H1,H2} = f^{''}\left( 0 \right)b_{H1}b_{H2},} \\ \end{matrix}$$for *i* = {1,2}, where $f\left( 0 \right),$ $f^{\prime}\left( 0 \right),$ and $f^{''}\left( 0 \right)$ are the resistance function and its first and second derivative evaluated at 0 as in Equation 5, and where ${\overset{\sim}{b}}_{H0} = b_{H0} + \epsilon_{H}$ and ${\overset{\sim}{b}}_{P0} = b_{P0} + \epsilon_{P}.$ Importantly, these expressions for the coefficients depend on the allele frequency at the pathogen loci, $q_{P1}$ and $q_{P2},$ as well as the linkage disequilibrium between them, $D_{P}.$ Note that the relevant allele frequencies and linkage disequilibrium are those among the pathogens to which the hosts are exposed, which may not be representative of the pathogen population as a whole. As a result of the dependence of the coefficients in (7) on the pathogen allele frequencies and linkage disequilibrium, the allelic effects (*β*'s) inferred by a host-only GWAS can be quite sensitive to the genetic composition of the pathogen population ([Figure 2](#fig2){ref-type="fig"}). Changes in pathogen allele frequency can alter the magnitude and sign of the inferred effects. From a practical standpoint, if susceptibility is assayed in two host populations that are exposed to pathogen populations that differ greatly in their allele frequencies, one may find a host allele has a protective effect in one population but increases risk in the other. Similar to hidden host population structure, uncontrolled differences in the pathogen population can greatly alter the inferences of single-species GWAS. ![Host-only model with resistance dependent on phenotypic differences (A,C) or phenotypic matching (B,D) between hosts and parasites. (A and B) Allelic effects inferred using the host-only design from Equation 6: $\beta_{0}$ (black), $\beta_{H1}$ and $\beta_{H2}$ (solid red lines), $\beta_{H1,H2}$ (dashed red). (C and D) Variation explained by host additive effects only (solid line), and host additive and epistatic effects (dashed line) as given by the host-only model in (6).](779fig2){#fig2} A second result that can be drawn from Equation 7 is that when the resistance function is approximately linear, $f^{''}\left( 0 \right) = 0,$ the inferred additive and epistatic effects, $\beta_{H1},\ \beta_{H2},$ and $\beta_{H1,H2}$ are independent of the pathogen allele frequencies. For example, in contrast to the nonlinear phenotypic-matching model where the inferred effects vary with pathogen allele frequency, the inferred effects remain constant in the approximately linear phenotypic-difference model ([Figure 2](#fig2){ref-type="fig"}). A third conclusion from Equation 7 is that, at least under the assumption that $z_{H} - z_{P}$ is small, the epistatic interaction between the host loci, $\beta_{H1,H2},$ is independent of pathogen genetics. We will explore the consequences of the dependence of the remaining coefficients on the pathogen allele frequencies for the stability of GWAS-inferred effects across evolutionary time (see the *Host--Parasite Coevolution* section below).
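This sensitivity is easy to reproduce by simulation. The sketch below (with assumed effect sizes and a Gaussian phenotypic‐matching interaction; none of these values come from the original analysis) generates host and pathogen genotypes, computes susceptibility, and fits the host‐only regression of Equation 6 at several pathogen allele frequencies:

```python
# Host-only GWAS (Equation 6) refit while varying the pathogen allele
# frequency q_p; the inferred host effects shift with the pathogen population.
import numpy as np

rng = np.random.default_rng(1)
N = 50_000
b_H = np.array([0.5, 0.5])                 # host additive effects (assumed)
b_P = np.array([0.5, 0.5])                 # pathogen additive effects (assumed)

def fit_host_only(q_p):
    XH = rng.binomial(1, 0.5, size=(N, 2))  # host genotypes
    XP = rng.binomial(1, q_p, size=(N, 2))  # pathogens each host encounters
    zH, zP = XH @ b_H, XP @ b_P
    S = np.exp(-(zH - zP) ** 2)             # phenotypic-matching model
    # Design matrix: intercept, X_H1, X_H2, X_H1 * X_H2
    X = np.column_stack([np.ones(N), XH, XH[:, 0] * XH[:, 1]])
    beta, *_ = np.linalg.lstsq(X, S, rcond=None)
    return beta

for q in (0.0, 0.5, 1.0):
    print(q, np.round(fit_host_only(q), 3))
# The inferred additive host effects flip sign as the pathogen population
# shifts from mostly "0" alleles to mostly "1" alleles.
```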
In addition to identifying the allelic effects on host resistance, an important metric of GWAS performance is the proportion of phenotypic variation explained by the identified causative loci. Given the dependence of the estimated allelic effects on pathogen allele frequencies, we calculated the total phenotypic variation explained by the host loci across the range of pathogen allele frequencies ([Figure 2, C and D](#fig2){ref-type="fig"}). When the pathogen population is monomorphic ($q_{P1} = q_{P2} = 0\,\text{or}\, 1$), the host loci can explain 100% of the genetic variation in the phenotype. If the pathogen population is polymorphic, however, the host-only approach may explain as little as 10% of the variation. Partitioning the total variation explained into the additive and epistatic contributions demonstrates that, due to changes in the inferred additive effect sizes $\beta_{Hi},$ the relative contribution of additive and epistatic effects also varies with pathogen allele frequency and depends on the form of the host--parasite interaction. Two-species co-GWAS {#s4} ------------------- The results derived in the previous section demonstrate that traditional single-species GWAS may be sensitive to the genetic composition of the pathogen population at loci involved in host--pathogen specificity. In this section, we attempt to overcome this problem by developing an alternative GWAS design in which both host and pathogen genetics are incorporated. In contrast to the traditional method where only host genotypes are recorded, this design requires that both host and pathogen genotypes are known. As with Equation 6, we now attempt to fit host resistance as a linear function of the allelic indicator variables, but we include pathogen indicators as well as interaction terms between host and pathogen loci:$$\begin{matrix} {S \approx \beta_{0} + \beta_{H1}X_{H1} + \beta_{H2}X_{H2} + \beta_{H1,H2}X_{H1}X_{H2} + \beta_{P1}X_{P1} + \beta_{P2}X_{P2} + \beta_{P1,P2}X_{P1}X_{P2}} \\ {+ \beta_{H1,P1}X_{H1}X_{P1} + \beta_{H1,P2}X_{H1}X_{P2} + \beta_{H2,P1}X_{H2}X_{P1} + \beta_{H2,P2}X_{H2}X_{P2}.} \\ \end{matrix}$$As with Equation 6, the coefficients of this equation have straightforward biological interpretations. The intercept, $\beta_{0},$ describes the expected host resistance when all host and pathogen loci have 0 alleles. Terms 2, 3, 5, and 6 describe the additive effects of each individual host and pathogen 1 allele; and terms 4 and 7 describe the epistatic interactions between loci within the host and pathogen, respectively. The remaining four terms describe the G × G interactions between pairs of host and pathogen loci. Despite the complexity of Equation 8, and hence the logistical and computational challenges of applying it, the expressions for each of these coefficients in terms of the host and pathogen phenotypic effects are simple (see *Mathematica* notebook in [File S1](http://www.genetics.org/lookup/suppl/doi:10.1534/genetics.117.300481/-/DC1/FileS1.zip)):$$\begin{matrix} {\beta_{0} = f\left( 0 \right) + f^{\prime}\left( 0 \right)\left( {{\overset{\sim}{b}}_{H0} - {\overset{\sim}{b}}_{P0}} \right) + \frac{1}{2}f^{''}\left( 0 \right)\left( {{\overset{\sim}{b}}_{H0} - {\overset{\sim}{b}}_{P0}} \right)^{2}} \\ {\beta_{Hi} = f^{\prime}\left( 0 \right)b_{Hi} + \frac{1}{2}f^{''}\left( 0 \right)b_{Hi}\left\lbrack {b_{Hi} + 2\left( {{\overset{\sim}{b}}_{H0} - {\overset{\sim}{b}}_{P0}} \right)} \right\rbrack\quad\text{for}\; i = \left\{ 1,2 \right\}} \\ {\beta_{H1,H2} = f^{''}\left( 0 \right)b_{H1}b_{H2}} \\ {\beta_{Pi} = - f^{\prime}\left( 0 \right)b_{Pi} - \frac{1}{2}f^{''}\left( 0 \right)b_{Pi}\left\lbrack {b_{Pi} + 2\left( {{\overset{\sim}{b}}_{H0} - {\overset{\sim}{b}}_{P0}} \right)} \right\rbrack\quad\text{for}\; i = \left\{ 1,2 \right\}} \\ {\beta_{P1,P2} = f^{''}\left( 0 \right)b_{P1}b_{P2}} \\ {\beta_{Hi,Pj} = - f^{''}\left( 0 \right)b_{Hi}b_{Pj}\quad\text{for}\; i = \left\{ 1,2 \right\},\; j = \left\{ 1,2 \right\}.} \\ \end{matrix}$$
Comparing the equations in (9) with the coefficients in (7) reveals an important conclusion: the effect sizes depend on neither the pathogen allele frequencies nor the linkage disequilibrium ([Figure 3, A and B](#fig3){ref-type="fig"}). This result suggests that the two-species, co-GWAS approach is more robust to changes in the genetic composition of the pathogen population and thus may be much less sensitive to rapid evolution and spatial genetic structuring within the pathogen population. ![Host--pathogen model with phenotypic-difference (A,C) or phenotypic-matching (B,D) based resistance. (A and B) Allelic effects inferred using the host-parasite design from Equation 8: $\beta_{0}$ (black), $\beta_{Hi}$ (solid red), $\beta_{H1,H2}$ (dashed red), $\beta_{Pi}$ (solid blue), $\beta_{P1,P2}$ (dashed blue), and $\beta_{Hi,Pj}$ (dashed purple). (C and D) Variation explained by host additive effects only (solid red), host additive and epistatic effects (dashed red), host and pathogen additive and epistatic effects (dashed blue), and a full host--pathogen model as given in Equation 8.](779fig3){#fig3} In addition to stabilizing the estimated allelic effects across pathogen allele frequencies, the total phenotypic variation explained by the co-GWAS greatly exceeds that of the host-only GWAS. For the two-locus case explored here, the co-GWAS approach can explain 100% of the variation regardless of pathogen allele frequency ([Figure 3, C and D](#fig3){ref-type="fig"}). The contributions of additive, epistatic, and G × G interactions do, however, vary with pathogen allele frequency. As with the host-only approach, when the pathogen population is monomorphic the host effects explain all of the observed phenotypic variation. In summary, unlike in the host-only model, the effect size coefficients (Equation 9) and the total variation explained no longer vary with pathogen allele frequency. This contrast between the host-only and co-GWAS approaches is particularly relevant any time the composition of the pathogen population is likely to differ between the sample used for the association study and the population in which the resulting inferences are applied. In the following section we explore how temporal changes in the host and pathogen populations driven by coevolution affect the reproducibility of GWAS over time and, by extension, space. Host--Parasite Coevolution {#s5} ========================== To simulate host--parasite coevolution, we envision a system where each host comes into contact with a single parasite each generation. The probability that this contact results in infection is determined by host susceptibility, *S*, which is a function of the host and parasite genotype. Infected hosts experience a fitness cost $\xi_{H},$ whereas their infecting parasites receive a fitness benefit $\xi_{P}.$ In the absence of infection, both hosts and parasites have a fitness of 1.
Together, these assumptions lead to the following fitness of a host with genotype $\left\{ {X_{H1},X_{H2}} \right\}$ that comes into contact with a pathogen with genotype $\left\{ {X_{P1},X_{P2}} \right\}:$$$W_{H} = 1 - \xi_{H}S\left( {X_{H1},X_{H2},X_{P1},X_{P2}} \right);$$whereas the pathogen has a fitness of$$W_{P} = 1 + \xi_{P}S\left( {X_{H1},X_{H2},X_{P1},X_{P2}} \right).$$Given these fitnesses, we simulate allele frequencies and linkage disequilibrium over time assuming random mating, a per-locus mutation rate of μ, and a recombination rate *r* (see *Mathematica* notebook in [File S1](http://www.genetics.org/lookup/suppl/doi:10.1534/genetics.117.300481/-/DC1/FileS1.zip)); a stripped-down version of this recursion is sketched in code below. We then use Equations 7 and 9 to calculate the allelic effect sizes that would be inferred by a host-only GWAS or a co-GWAS in each generation over the course of coevolution for both the phenotypic-difference and phenotypic-matching models ([Figure 4](#fig4){ref-type="fig"}). ![Allelic effects over coevolutionary time. Top row: Phenotypes $z_{H}$ (red) and $z_{P}$ (blue) simulated over coevolutionary time in the phenotypic-difference (left) and phenotypic-matching models (right). Middle row: Coefficients estimated under the host-only model (7) (black is $\beta_{0},$ solid red is $\beta_{Hi},$ dashed red is $\beta_{H1,H2}$). Bottom row: Coefficients estimated under the host--pathogen model (9) (black is $\beta_{0},$ solid red is $\beta_{Hi},$ dashed red is $\beta_{H1,H2},$ blue is $\beta_{Pi},$ dashed blue is $\beta_{P1,P2},$ purple dashed is $\beta_{Hi,Pj}$). Because epistatic and G × G interactions are absent in the phenotypic-difference model, their allelic effects all overlap at 0 and hence are not all visible.](779fig4){#fig4} As expected, using the host-only GWAS approach, the inferred allelic effects can vary over time but only under the quadratic-shaped, phenotypic-matching model. As noted above, the estimated effects can even change sign, having large positive values when sampled in one generation and large negative values when sampled only a few generations later. In contrast, the inferred effects remain constant in the co-GWAS approach regardless of the coevolutionary model. In terms of the phenotypic variation explained, the host-only approach explains only a portion of genetically determined phenotypic variation, whereas the co-GWAS approach can explain up to 100%. The contribution of different genetic components to the total variation explained remains approximately constant under the phenotypic-difference model but varies rapidly as allele frequencies change in the phenotypic-matching model. Data availability {#s6} ----------------- The analysis, numerical simulations, and scripts to generate the original figures were coded in Wolfram *Mathematica* 11 ([File S1](http://www.genetics.org/lookup/suppl/doi:10.1534/genetics.117.300481/-/DC1/FileS1.zip)) and are available for download from the Dryad Digital Repository (DOI: <https://doi.org/10.5061/dryad.tb25q>). Daphnia--Pasteuria GWAS {#s7} ======================= Taken together, our analytical model and simulations illustrate that incorporating pathogen genetic information into the search for disease genes can greatly increase the explanatory power and repeatability of genome scans. Testing these theoretical predictions with biological data is a critical step in evaluating the power of the co-GWAS approach relative to a traditional single-species GWAS.
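Returning to the coevolutionary recursion above (Equations 10 and 11), its qualitative behavior can be illustrated with a deterministic toy version, reduced here to one biallelic locus per species and omitting the mutation and recombination included in the full simulations:

```python
# Toy coevolution under a phenotypic-matching interaction: hosts are selected
# away from the common pathogen type while pathogens chase the common host type.
import numpy as np

def S(xh, xp, alpha=2.0):
    return np.exp(-alpha * (xh - xp) ** 2)   # matched genotypes -> susceptible

xi_H, xi_P = 0.5, 0.5                        # fitness cost / benefit (assumed)
p, q = 0.6, 0.4                              # host / pathogen "1"-allele freqs
for gen in range(201):
    # Marginal fitness of each allele against the other species' population
    WH = [1 - xi_H * (q * S(x, 1) + (1 - q) * S(x, 0)) for x in (0, 1)]
    WP = [1 + xi_P * (p * S(1, y) + (1 - p) * S(0, y)) for y in (0, 1)]
    p = p * WH[1] / (p * WH[1] + (1 - p) * WH[0])
    q = q * WP[1] / (q * WP[1] + (1 - q) * WP[0])
    if gen % 50 == 0:
        print(gen, round(p, 3), round(q, 3))
# Both frequencies keep moving (and can cycle); refitting the host-only
# regression along such a trajectory reproduces the sign flips of Figure 4.
```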
Analysis of biological data will include several complications that we ignored above, including finite sample sizes, arbitrary forms of coevolutionary interactions, and complex genomic architectures. Unfortunately, we know of no studies that include full host and parasite genomic data as well as the outcome of infection experiments. Further, the computational tools to perform a co-GWAS in the form of Equation 8 do not yet exist. We can, however, use recently published data by [@bib8] on the susceptibility of *D. magna* to two *P. ramosa* strains, C1 and C19, as a preliminary test of our analytical predictions. In particular, we compare the results of genome scans for C1 and C19 susceptibility analyzed separately to a single genome scan for susceptibility using all the data but ignoring pathogen strain type. Our analytical model predicts that, despite having half the sample size, the separate genome scans for C1 and C19 resistance should reveal loci that determine host--parasite specificity, whereas the full data scan will have lower power to do so. Note that strain type captures almost all of the relevant genetic information in this case, given that the parasite is clonal. The original data set, provided on Dryad by the authors ([@bib8]), sampled 97 *D. magna* clones from three distinct geographic regions---1 site in Germany, 1 in Switzerland, and 11 sites in Finland---and provided the sequence at 6403 SNPs. Host susceptibility (S: susceptible; R: resistant) to infection by each *P. ramosa* strain, C1 and C19, was determined by assessing whether fluorescently labeled spores attached to the host's esophagus ([@bib11]). All four possible combinations of susceptibility and resistance to the two strains (SS, SR, RS, and RR) were present. By performing two separate association studies, one for each strain, [@bib8] used this experimental design to identify genomic regions associated with susceptibility to a specific parasite strain. Following the methods in the original work, we compare their results to a third genome scan including all the data, a total of 194 samples, ignoring the *Pasteuria* strain type tested. All genome scans were performed using the R package GenAbel, adjusting for population structure and repeated measures of the same host genotype using the Eigenstat method ([@bib4]). To accurately assess which genomic regions are associated with susceptibility to C1, C19, and/or "overall" susceptibility from the complete data set, we used the *Daphnia* genetic map constructed by [@bib10] to array the scaffolds into 10 linkage groups. To limit the detection of false positives, we followed an approach analogous to that used in [@bib8] where SNPs were only considered significantly associated with a given susceptibility phenotype if there existed four SNPs in a 10-cM region with a log-likelihood score \>2 ([Figure 5](#fig5){ref-type="fig"}); a sketch of this clustering filter follows this paragraph. Multiple genomic regions are significantly associated with susceptibility to C1, C19, and to disease susceptibility in the complete data set without strain information. Four linkage groups (4, 5, 7, and 9), with a total of 28 significant SNPs, are associated with C1 susceptibility. Three linkage groups (1, 4, and 7) with 38 SNPs are associated with C19 susceptibility, and two linkage groups (4 and 5) with 35 SNPs are associated with susceptibility in the complete data set.
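The clustering rule just described is simple to state in code; the sketch below is a hedged re-implementation (not the original analysis scripts), and the choice of a window centered on each focal SNP is our assumption:

```python
# Keep a SNP only if at least `min_snps` SNPs with log-likelihood score > 2
# (including itself) fall within a 10-cM window around it.
import numpy as np

def cluster_filter(pos_cM, lod, lod_min=2.0, window=10.0, min_snps=4):
    pos_cM, lod = np.asarray(pos_cM), np.asarray(lod)
    hits = lod > lod_min
    keep = np.zeros_like(hits)               # boolean, all False
    for i in np.flatnonzero(hits):
        in_window = hits & (np.abs(pos_cM - pos_cM[i]) <= window / 2)
        if in_window.sum() >= min_snps:
            keep[i] = True
    return keep

pos = np.array([0.0, 1.0, 2.5, 4.0, 30.0])   # map positions in cM
lod = np.array([2.5, 2.2, 3.1, 2.8, 4.0])
print(cluster_filter(pos, lod))              # the isolated hit at 30 cM drops
```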
Thus, while the complete data set has twice as many measures of disease susceptibility, it has less power to detect genetic regions underlying disease susceptibility because of the lack of parasite information. ![GWAS of *D. magna* susceptibility. Genetic associations of each SNP with C1 (red ●), C19 (blue ▪), and overall susceptibility in the complete data set without parasite-type information (green ▴). Hence, each SNP is represented three times, once for each genome-wide scan. Note that closely linked SNPs often overlap with one another and are not all individually visible. Significant SNPs are shown in color while those below the log-likelihood threshold of two or that are not clustered within a 10-cM region with three other significant SNPs are shown in gray. The 10 linkage groups are delineated by vertical dashed lines.](779fig5){#fig5} The contrast between the associations for C1 and C19 susceptibility and the associations for overall susceptibility in the complete data set provides additional information about the nature of the genetic basis of resistance. Genomic regions associated with overall resistance, particularly when these regions are also associated with both C1 and C19 resistance, confer protection regardless of the parasite strain tested and are consistent with general host health and a nonspecific immune response. By contrast, sites that are not associated with overall resistance---despite the data set having twice the size---but are associated with either C1 or C19 are good candidates for loci that act in a parasite-specific manner. Examining [Figure 5](#fig5){ref-type="fig"}, we therefore conclude that linkage group 4 and possibly 5 are involved in general health and resistance. In contrast, the regions on the far left and right of linkage group 7 as well as the regions on linkage groups 1 and 9, which are associated only with C1 or C19 resistance, are indicative of parasite-specific resistance loci. These conclusions are in agreement with the hypothesized model and previous molecular work on *Daphnia* resistance to *Pasteuria*. In particular, resistance to *Pasteuria* is hypothesized to be controlled by a three-locus, matching-alleles system. One of these loci (the C locus) determines overall host susceptibility regardless of pathogen strain and is thought to reside on linkage group 4 ([@bib7]). In the absence of protection from the C locus, a second "A locus" is thought to confer resistance to C1 when the dominant allele is present. The regions detected on linkage groups 7 and 9 in the hosts exposed to C1 may thus be candidates for such C1-specific resistance. Finally, if the C locus and A locus are both homozygous recessive, a third "B locus" determines susceptibility to the C19 strain. Such a locus would likely be hard to detect in a GWAS due to epistasis between the A, B, and C loci; nevertheless, the regions associated with only C19 resistance (on linkage groups 1 and 7) would be candidates for such a B locus. Overall we conclude that significant SNPs obtained without accounting for parasite type may signal general health status. Against this background, a co-GWAS can help identify genomic regions likely critical to host--parasite specificity and variation in host susceptibility. Discussion {#s8} ========== Identifying genes that determine a host's susceptibility to infection is a promising frontier with a wide range of applications, including agriculture and human health.
Yet, as our mathematical models demonstrate, association studies that focus on a single species without accounting for the genetics of the interacting species can drastically limit our ability to detect disease genes involved in host--pathogen specificity and to account for the genetic variation in disease susceptibility. When the genetic composition of the pathogen population varies over time and/or space, this can further lead to inconsistencies in the results of genetic association studies. Finally, using previously published data on *D. magna* resistance to its *Pasteuria* parasite, we illustrate that performing association studies with and without information about pathogen type can be used to distinguish genomic regions affecting general *vs.* specific resistance to pathogens. Consistent with current models for *Daphnia*--*Pasteuria* interactions, we identify one region associated with general health as well as candidate regions more directly involved in mediating host--pathogen specificity. The mathematical analysis presented above focuses on host--pathogen interactions of a specific form, given by Equation 1. Although we have relied on an approximation that assumes weak phenotype differences, *i.e.*, $z_{H} - z_{P}$ is small, we postulate that the power to detect strain-specific resistance genes will be increased whenever parasite information is incorporated, even when genes have major effects and phenotypic differences become large. Similarly, the methods used above can be extended to include alternative interaction types such as a "matching-alleles" interaction (see *Mathematica* notebook in [File S1](http://www.genetics.org/lookup/suppl/doi:10.1534/genetics.117.300481/-/DC1/FileS1.zip)). The expressions for the *β* coefficients under this interaction model are unruly and difficult to interpret. Using a numerical approach, we observe that once again G × G interactions can explain a significant proportion of the variation in susceptibility ([Figure S1](http://www.genetics.org/lookup/suppl/doi:10.1534/genetics.117.3001481/-/DC1/FigureS1.pdf) available on Dryad), particularly in highly variable pathogen populations. Unlike the phenotypic-difference and phenotypic-matching models, however, the co-GWAS approach (Equation 8) no longer explains all of the variation in susceptibility and the coefficients vary with pathogen allele frequency. This is a result of higher order interactions not included in our model. Hence, although the co-GWAS approach performs significantly better than a single-species approach, it will not always capture the full genetic basis of infection because of the second order approximation used in Equation 8. Regardless of the form of the interaction, our analytical models and simulations illustrate that incorporating pathogen genetics into the search for disease genes can greatly increase the explanatory power and repeatability of genome scans. Unfortunately, several logistical and computational challenges preclude applying a full two-species GWAS. Most notably, such a design requires additional genetic data that are not currently available. More specifically, this design requires genotyping all hosts and the pathogens to which they are exposed, not just the host--parasite combinations observed in infected individuals.
Future exploration is warranted to determine whether uninfected individuals can simply be treated as unknown with respect to pathogen exposure, and what the consequences of doing so would be for the statistical power of our approach. The complexity of the two-species design (Equation 8) relative to that of a single-species design (Equation 6) also introduces computational challenges. In addition to requiring larger sample sizes, estimating the effects of the large number of potential G × G interactions in a full host--genome by parasite--genome study is computationally unrealistic. In addition to the large number of pairwise interactions between hosts and pathogens, depending on the form of the interaction, higher order genetic interactions may be necessary to fully explain the variation in susceptibility. These higher order interactions can be particularly important as the number of loci underlying susceptibility, $n_{H}$ and $n_{P},$ increases. Although incorporating complete pathogen genetic data may be unfeasible, there often exists some form of pathogen typing, which is largely indicative of the pathogen's genotype and may be sufficient for the purposes of a host genome-wide scan. For example, despite its vast diversity, Hepatitis C virus has been subdivided into seven genotypes ([@bib13]; [@bib21]), which may capture much of the relevant variation in host susceptibility. The *Daphnia*--*Pasteuria* data set we analyzed provides a valuable test case for a two-species co-GWAS. In this study, we know exactly to which pathogen type individuals have been exposed, which is generally not known in natural populations. This information may have increased the power of the study to detect loci underlying C1 and C19 susceptibility. Despite this increased power, we chose to use the arguably lenient significance threshold of a log-likelihood score \>2 plus clustering of four or more SNPs, as in the original article. Requiring more stringent corrections for multiple testing, such as a Bonferroni correction, does not yield any significant SNPs. Given the correspondence between the GWAS results and those of functional studies ([@bib7]), however, many of the observed SNPs are arguably not false positives. Using the log-likelihood of two and clustering threshold, we observe fewer genomic regions associated with overall susceptibility when parasite information is not incorporated than when conducting GWAS with exposure to either C1 or C19, despite the complete data set containing twice the number of data points. As an alternative to analyzing the complete data set, we could hold the sample size constant in a combined analysis by randomly choosing whether the host was exposed to C1 or to C19 for each host genotype ([Figure S2](http://www.genetics.org/lookup/suppl/doi:10.1534/genetics.117.300481/-/DC1/FigureS2.pdf) available on Dryad). Interestingly, this "mixed" GWAS not only identifies the same regions on linkage groups 4 and 5 but also identifies regions on linkage groups 1, 9, and 10, as found in the single pathogen-type GWAS. The fact that this mixed analysis picks up some of the potentially parasite-specific loci is likely due to randomly sampling an excess of C1- or C19-tested clones. Consistent with this interpretation, exactly which parasite-specific regions are identified varies with the random sample chosen.
Nevertheless, as with the complete data set, a comparison between C1, C19, and mixed susceptibility provides additional information about which genes are involved in general health *vs.* parasite-specific susceptibility. The results presented here highlight several important avenues for future research. First and foremost, designing genome-wide association methods that allow for G × G interactions is critically important, as is the collection of genotypic data from hosts and pathogens. This could be approached, for example, by adapting GWAS designs and analyses used to detect gene-by-environment interactions ([@bib31]). Recognizing the importance of host--pathogen genetic interactions is essential for understanding the applicability and limitations of single-species association scans. Developing metrics that capture relevant variability in host and pathogen populations may facilitate the application of these results. Finally, incorporating G × G interactions into our association studies will also enable us to understand what mathematical models of host--parasite interactions best predict the genetic interactions observed in natural systems, allowing for further refinements of the models. Supplementary Material {#s9} ====================== Supplemental material is available online at [www.genetics.org/lookup/suppl/doi:10.1534/genetics.117.300481/-/DC1](http://www.genetics.org/lookup/suppl/doi:10.1534/genetics.117.300481/-/DC1). We thank Matt Osmond and two anonymous reviewers for their many helpful suggestions that improved this manuscript. This project was supported by a fellowship from the University of British Columbia to A.M., a National Science Foundation grant to S.L.N. (DEB 1450653), and a Natural Sciences and Engineering Research Council of Canada grant to S.P.O. (RGPIN-2016-03711). Communicating editor: W. Stephan
Data can be found at: <http://hdl.handle.net/2445/151737>. Introduction {#sec001} ============ Malaria caused by *Plasmodium vivax* (*Pv*) is a neglected tropical disease of worldwide distribution, and it is especially neglected during pregnancy \[[@pntd.0008155.ref001]\]. The negative effects of malaria in pregnant women and their offspring have been better described for malaria caused by *P*. *falciparum* (*Pf*), whereas fewer reports have investigated the outcomes of *Pv* infection in pregnancy \[[@pntd.0008155.ref002]\]. To address this gap, we performed a multicenter cohort study (the PregVax project) to characterize the burden and health consequences of malaria caused by *Pv* in pregnant women from five malaria endemic areas \[[@pntd.0008155.ref003]\]. Within that cohort, we set out to characterize in more depth and breadth the immune responses induced in pregnant women when infected or exposed to *Plasmodium* parasites \[[@pntd.0008155.ref004]--[@pntd.0008155.ref007]\], and how these responses might correlate with negative clinical outcomes. As part of this investigation, here we aim to better understand the cellular immune mediators circulating in the blood of pregnant women, whose immune system is altered by pregnancy \[[@pntd.0008155.ref008]\], when facing a parasite infection like *Pv*, which has been associated with inflammation, particularly in the case of severe disease \[[@pntd.0008155.ref009]\]. A recent study by Singh et al \[[@pntd.0008155.ref010]\] has shown increased levels of inflammatory markers in vivax malaria during pregnancy, but it only included three cytokines (IL-6, IL-1β and TNF). For a more comprehensive evaluation of the multiple effects that *Pv* infection may elicit in the immune system of pregnant women, a wider set of cellular biomarkers of different functions, including chemokines and growth factors as well as T helper (T~H~)-related and regulatory cytokines, needs to be studied. This is particularly relevant to further understand the role of CCL11 during pregnancy and its association with *Pv* infection, as we previously showed decreased blood concentrations of this chemokine in pregnant women compared to non-pregnant individuals and in malaria-exposed compared to malaria-naïve individuals \[[@pntd.0008155.ref007]\]. In a recent initial analysis, we evaluated the effect of pregnancy and of residing in tropical countries, where exposure to infectious diseases is more common, on the concentration of cytokines in plasma samples from women at different times during gestation and after puerperium, as well as in the three blood compartments (periphery, cord, placenta) \[[@pntd.0008155.ref008]\]. We found that the concentrations of circulating cytokines were highest postpartum (at least 10 weeks after delivery), with higher values at delivery compared to the first antenatal clinic visit. Furthermore, anti-plasmodial antibodies (markers of malaria exposure) correlated with cytokine concentrations postpartum, but not during pregnancy, suggesting that pregnancy had a greater effect than malaria exposure on cytokine levels. Additionally, no strong associations between cytokines and gestational age were detected. In the present study, we assess the relationships between cytokine, chemokine and growth factor plasma concentrations, delivery outcomes and the presence of *Pv* in the PregVax cohort. Our multi-biomarker multicenter study is the first to characterize the immunological signature of *Pv* infection in pregnancy to this extent.
Materials and methods {#sec002} ===================== Study design and population {#sec003} --------------------------- This analysis was done in the context of the PregVax project, a cohort study of 9,388 pregnant women from five countries where malaria is endemic: Brazil (BR), Colombia (CO), Guatemala (GT), India (IN) and Papua New Guinea (PNG), enrolled between 2008 and 2012 at the first antenatal visit and followed up until delivery. A venous blood sample was collected to perform immunological assays in the following participants: a) any woman with *Pv* infection (with or without *Plasmodium* coinfections) at any visit from any country, and b) a random subcohort (approximately 10% of the total cohort) assigned as the immunology cohort, at enrolment and delivery. Bleedings at delivery included peripheral, cord and placental (only CO and PNG) blood. Parasitaemias for *Pv* and *Pf* (the latter studied as a possible confounder in co-infected women) were assessed at every visit on Giemsa-stained blood slides that were read onsite. An external validation of parasitaemia results was performed by expert microscopists on a subsample of slides (100 per country) at the Hospital Clinic and at the Hospital Sant Joan de Deu in Barcelona, Spain. Submicroscopic *Pv* and *Pf* infections were also determined at enrolment and delivery by real time-PCR in a group of participants, which included the immunological subcohort. Malaria symptoms and hemoglobin (Hb, g/dL) levels were also recorded at enrolment and delivery, as well as neonatal birth weight (g). The protocol was approved by the national and/or local ethics committees of each site, the CDC IRB (USA) and the Hospital Clinic Ethics Review Committee (Barcelona, Spain). Written informed consent was obtained from all study participants. All human subjects were adults. Isolation of plasma {#sec004} ------------------- Five to 10 mL of blood were collected aseptically in heparinized tubes. Plasma was separated by centrifuging at 600 g for 10 min at room temperature, aliquoted and stored at -80ºC. Samples from BR, CO, GT and PNG were shipped to the Barcelona Institute for Global Health on dry ice. The measurement of cytokines, chemokines and growth factors (hereinafter together referred to as biomarkers) was performed at ISGlobal, Barcelona (Spain) to minimize inter-site variability. Samples from India were analyzed at ICGEB, Delhi. Multiplex bead array assay {#sec005} -------------------------- The biomarkers were analyzed in thawed plasmas with a multiplex suspension detection system, the *Cytokine Magnetic 30-Plex Panel* (Invitrogen, Madrid, Spain), which allows the detection of the following biomarkers: epidermal growth factor (EGF), Eotaxin/CCL11, fibroblast growth factor (FGF), granulocyte colony-stimulating factor (G-CSF), granulocyte-macrophage colony-stimulating factor (GM-CSF), hepatocyte growth factor (HGF), interferon (IFN)-α, IFN-γ, interleukin (IL)-1RA, IL-1β, IL-2, IL-2R, IL-4, IL-5, IL-6, IL-7, IL-8/CXCL8, IL-10, IL-12(p40/p70), IL-13, IL-15, IL-17, IFN-γ induced protein (IP-10/CXCL10), monocyte chemoattractant protein (MCP-1/CCL2), monokine induced by IFN-γ (MIG/CXCL9), macrophage inflammatory protein (MIP)-1α/CCL3, MIP-1β/CCL4, regulated on activation, normal T cell expressed and secreted (RANTES/CCL5), tumor necrosis factor (TNF), and vascular endothelial growth factor (VEGF). Fifty μL of the plasmas were tested in single replicates (dilution 1:2, as recommended by the vendor).
Each plate contained serial dilutions (1:3) of a standard sample of known concentration for each analyte provided by the manufacturer, as well as a blank control and a reference sample control for quality control purposes, all of them in duplicate. Upper and lower values of the standard curves for each analyte are displayed in S1 Table. The assays were carried out according to the manufacturer's instructions. Beads were acquired on the BioPlex100 system (Bio-Rad, Hercules, CA) and concentrations were calculated using the Bioplex software. When values were out of range (OOR) according to the software, a value three times lower than the lowest standard concentration was assigned for OOR values below the curve, and a value three times higher than the highest standard concentration was assigned for OOR values above the curve (the factor of three reflecting the 1:3 standard dilutions). Moreover, the software extrapolated values below and above the lower and higher concentrations of the standard curves, respectively, when they fitted the curves and were not OOR. These extrapolated values were kept, with the exception of those more than three times below the lowest standard concentration or more than three times above the highest standard concentration, to which those respective bounds were assigned (this substitution rule is sketched in code at the end of this subsection). In addition, the cytokine TGF-β1 was analyzed in all plasmas except those from India, with a DuoSet ELISA kit (R&D Systems). Following the vendor's recommendations, latent TGF-β1 was activated to its immunoreactive form with HCl and neutralized with NaOH/HEPES. A 40-fold plasma dilution was used.
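To make the substitution rule above concrete, here is a minimal Python sketch of the same logic. The function and argument names are ours (hypothetical); in the actual study this handling was applied to the Bioplex software output rather than reimplemented:

```python
def harmonize_concentration(value, lowest_std, highest_std, oor_flag=None):
    """Apply the out-of-range (OOR) substitution rule described above.

    value       -- concentration reported by the multiplex software (pg/mL),
                   possibly extrapolated beyond the standard curve
    lowest_std  -- lowest standard concentration on the plate
    highest_std -- highest standard concentration on the plate
    oor_flag    -- "low"/"high" when the software flags the value as OOR, else None
    """
    floor = lowest_std / 3.0    # standards were serially diluted 1:3, hence the factor 3
    ceiling = highest_std * 3.0

    if oor_flag == "low":       # OOR under the curve -> one dilution step below it
        return floor
    if oor_flag == "high":      # OOR above the curve -> one dilution step above it
        return ceiling

    # Extrapolated (non-OOR) values are kept, but clamped to the same bounds.
    return min(max(value, floor), ceiling)
```

With a lowest standard of, say, 9 pg/mL, an OOR-low reading would thus be recorded as 3 pg/mL.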
*Plasmodium* spp. detection by real-time PCR
--------------------------------------------

From the whole cohort (9,388 women), 1500 recruitment samples and 1500 different delivery samples were randomly selected for PCR. Samples from BR, CO, GT, and half of the samples from PNG, were analyzed at the *Istituto Superiore di Sanità* (Rome, Italy), as described [3]. The threshold for positivity for each species was established as a cycle threshold <45, according to negative controls. *Pv* diagnosis for IN samples was performed in Delhi following the Rome protocol, adapted to the instrument sensitivity (third-step amplification at 72 °C for 25 sec instead of 5 sec). Approximately half of the PNG samples were analyzed for submicroscopic infections in Madang, following a protocol [11] similar to that from Rome, except that the threshold for positivity for each species was established as a cycle threshold <40, according to negative controls. DNA was extracted from whole-blood-spot filter paper.

Sample selection and statistical methods
----------------------------------------

It was not possible to measure the biomarkers in all the available plasma samples due to budget constraints. Therefore, an initial 50 enrolment samples per country (35 in IN, 235 in total) and their paired delivery samples were randomly selected. However, follow-up rates were low, and when fewer than 50 paired recruitment/delivery samples were available, additional randomly selected delivery samples were included to reach N = 50 (35 in IN). Thus, 129/235 delivery samples were paired to recruitment samples. In addition, because malaria prevalence was generally low in our random subset, we performed a case-control selection including all the available samples from women with a *Pv* infection (diagnosed by microscopy and/or PCR) at recruitment (N = 49) or delivery (N = 18), and similar numbers of randomly selected samples with a negative *Pv* PCR result, matched by country (N = 62 and N = 7, respectively). Finally, 144 placental plasmas, 125 peripheral plasmas collected at delivery and paired to the placental samples, and 112 cord plasmas were analysed. A flow chart of all samples analysed is provided in S1 Fig. Data from the 5 countries were combined in the analysis. Except when otherwise specified, *Pv* infection was defined as either a positive smear or a positive PCR result (or both). Overall, our aim was to search for associations between cytokine concentrations and health outcomes (malaria infection, Hb levels and birth weight). In this regard, we performed two separate investigations. First, a cross-sectional analysis in which health outcomes and cytokine concentrations were examined at the same timepoint, either recruitment or delivery. Second, a longitudinal analysis of the effect of cytokine concentrations at recruitment on health outcomes at delivery. To study the association of biomarkers with *Pv* infection, a principal component analysis (PCA) was performed first. In a PCA, a large set of possibly correlated variables (e.g. cytokines) is transformed into a small set of linearly uncorrelated variables called principal components (PCs), which may be interpreted as clusters of cytokines. For each PC, we show the contribution (loading score) of each cytokine, using the generally accepted cut-off of loading score = 0.3. For further analyses, we considered only the seven PCs that accounted for the most variance in the data set (eigenvalue ≥ 1, Kaiser-Guttman criterion). To use PCs as variables (each representing several cytokines at a time), PC scores were predicted for each subject, and logistic regression models were estimated with the PCs as independent variables and *Pv* infection as the dependent variable (this workflow is sketched in the code below). We excluded TGF-β from the PCA because this cytokine was analyzed by a different technique (ELISA) and was not measured in IN. Finally, to assess whether our data set was suitable for PCA, we ran the Kaiser-Meyer-Olkin (KMO) test for sampling adequacy. To assess the association between individual biomarker concentrations and *Pv* infection, we used the Mann-Whitney test in the crude analysis (corrected for multiple comparisons with the Benjamini-Hochberg method) and estimated multivariable logistic regression models adjusting for the following variables: site, age at recruitment, gestational age, parity, delivery mode (vaginal vs caesarean birth; delivery analyses only) and *Pf* infection. At delivery, three different blood compartments were investigated: periphery, placenta and cord. For this objective, pairwise statistical significance was interpreted based on 95% confidence intervals (CI) and considered significant when the interval did not include 1. The same adjusted regression model was estimated to analyze the association between peripheral plasma biomarker concentrations at recruitment and future (at delivery) *Pv* infection.
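The PCA-plus-regression workflow just described can be sketched in Python as follows. The original analyses were run in Stata; the column names below are hypothetical, and the varimax rotation and KMO test used in the paper are omitted for brevity:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_logistic(cytokines: pd.DataFrame, pv_infected: pd.Series):
    """Reduce biomarkers to principal components and regress infection on PC scores."""
    X = StandardScaler().fit_transform(cytokines)   # standardize so eigenvalues refer
    pca = PCA().fit(X)                              # to the correlation structure
    keep = pca.explained_variance_ >= 1.0           # Kaiser-Guttman criterion
    scores = pca.transform(X)[:, keep]              # per-subject PC scores
    loadings = pca.components_[keep].T              # inspect |loading| >= 0.3 per PC

    # Logistic regression: Pv infection (0/1) on the retained PC scores.
    model = sm.Logit(pv_infected.astype(int), sm.add_constant(scores)).fit(disp=0)
    return loadings, model

# Hypothetical usage, assuming a data frame `df` with one column per biomarker:
# loadings, model = pca_logistic(df[biomarker_cols], df["pv_infection"])
# print(np.exp(model.params))   # odds ratios per unit PC score
```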
Furthermore, the association of biomarkers with submicroscopic and microscopic *Pv* infections was assessed with linear regression models adjusted by site. Finally, the association between plasma biomarker concentrations and maternal Hb levels and birth weight was assessed using multivariable linear regression models, adjusted for site, age (at recruitment), Hb levels (at recruitment), parity, delivery mode (delivery analyses only), and *Pv* and *Pf* infection during pregnancy. For this objective, pairwise statistical significance was interpreted based on 95% confidence intervals and considered significant when the interval did not include 0. Overall, significance was defined at p < 0.05. Analyses and graphs were performed using Stata/SE 10.1 (College Station, TX, USA) and GraphPad Prism (La Jolla, CA, USA).

Results
=======

Characteristics of the study population
---------------------------------------

A total of 987 plasma samples belonging to 572 pregnant women were analyzed for biomarker concentrations, comprising 346 peripheral plasma samples collected at recruitment, 385 peripheral plasmas at delivery, 112 cord plasmas and 144 placental plasmas. Unfortunately, only 129 samples were paired between recruitment and delivery due to low follow-up rates. The study population characteristics at baseline are provided in Table 1. The number of infection cases by country, method and timepoint is provided in S2 Table.

Table 1. Baseline characteristics of the study population. This refers to all the women included in the study at both timepoints.

| Site | BR | CO | GT | IN | PNG |
|---|---|---|---|---|---|
| **Age (years)**^a,b^ | 23.3 (6.0) [90] | 22.3 (5.8) [115] | 25.0 (7.7) [91] | 23.5 (3.5) [58] | 25.5 (5.7) [210] |
| **Gravidity (number of previous pregnancies)**^d^: 0 | 26 (29%) | 34 (30%) | 30 (33%) | 27 (47%) | 69 (38%) |
| 1-3 | 42 (47%) | 56 (49%) | 31 (34%) | 29 (50%) | 76 (42%) |
| 4+ | 22 (24%) | 25 (21%) | 30 (33%) | 2 (3%) | 37 (20%) |
| **GA at recruitment (weeks)**^d^: 0-12 | 15 (17%) | 28 (24%) | 5 (6%) | 1 (2%) | 11 (6%) |
| 13-24 | 30 (34%) | 42 (37%) | 32 (36%) | 30 (52%) | 87 (48%) |
| 25+ | 43 (49%) | 45 (39%) | 52 (58%) | 27 (47%) | 84 (46%) |
| **GA at delivery (weeks, by Ballard method)**^d^: 0-37 | 5 (8%) | 29 (39%) | 6 (10%) | 29 (69%) | 49 (33%) |
| 38-41 | 54 (92%) | 42 (57%) | 35 (58%) | 13 (31%) | 83 (56%) |
| 42+ | 0 (0%) | 3 (4%) | 19 (32%) | 0 (0%) | 16 (11%) |
| **BMI (kg/m^2^)**^a,b^ | 25.7 (4.4) [89] | 23.5 (3.5) [114] | 25.8 (3.9) [91] | 23.1 (4.5) [58] | 23.7 (3.3) [173] |
| **Hemoglobin (g/dL)**^a,b^ | 11.3 (1.3) [90] | 10.9 (1.6) [114] | 11.1 (1.5) [82] | 9.5 (1.6) [58] | 9.52 (1.5) [214] |
| **Birth weight (g)**^a,g^ | 3166.1 (526.4) [61] | 3224.11 (408.7) [82] | 3151.9 (537.3) [60] | 3031.7 (436.5) [42] | 2923.1 (493.3) [159] |
| **Delivery mode**^d^: V | 52 (79) | 66 (82) | 44 (71) | 34 (79) | 135 (100) |
| C | 14 (21) | 15 (18) | 18 (29) | 9 (21) | 0 (0) |
| **Syphilis screening**^d^: POS | 0 (0) | 7 (10) | N/A | 0 (0) | 5 (4) |
| NEG | 56 (100) | 64 (90) | N/A | 13 (100) | 123 (96) |

^a^ Arithmetic mean (standard deviation) [number]. ^b^ At recruitment. ^c^ One-way ANOVA. ^d^ n (percentage). ^e^ Chi-squared test. ^f^ Fisher's exact test. ^g^ Birth weight excluding twins. PNG: Papua New Guinea. GA: gestational age (weeks). BMI: body mass index. V: vaginal. C: cesarean section. POS: positive. NEG: negative. N/A: not available.

Association of *Pv* infection with plasma biomarker concentration at recruitment
--------------------------------------------------------------------------------

Considering the large number of cytokine variables, we first performed a PCA to reduce the dimensionality of the data. The KMO test for sampling adequacy resulted in KMO = 0.85, which according to the literature may be considered meritorious [12]. In the PCA, seven PCs contributed most to the variance of the data (S3 Table) and were further considered for regression analyses. Of those seven PCs, three had a positive association with *Pv* infection: PC3, PC5 and PC7 (S4 Table). We then analyzed which cytokines contributed most to the PCs associated with *Pv* infection (Table 2). The PC3 (proinflammatory-chemokine) group showed the highest contributions from CXCL8, CCL4 and CCL3. The PC5 (antiinflammatory-inflammatory) group had the highest contributions from IL-10, CXCL10, IL-6 and CCL2, whereas the PC7 (CCL5-T~H~) group had the highest contributions from CCL5, IL-4, IL-2R and IL-12 (Table 2).

Table 2. Loading scores for principal component analysis at recruitment.
| Variable | PC1 | PC2 | PC3 | PC4 | PC5 | PC6 | PC7 | Unexplained |
|---|---|---|---|---|---|---|---|---|
| TNF | | 0.429 | | | | | | 0.271 |
| IL-1β | | 0.344 | | | | | | 0.261 |
| IL-6 | | | | | **0.356** | | | 0.211 |
| IL-10 | | | | | **0.506** | | | 0.334 |
| IL-1RA | | | | | | | | 0.179 |
| IFN-α | | | | | | | | 0.316 |
| CXCL8 | | | **0.565** | | | | | 0.271 |
| CCL3 | | | **0.406** | | | | | 0.262 |
| CCL4 | | | **0.426** | | | | | 0.267 |
| CCL2 | | | | | **0.312** | | | 0.290 |
| CXCL10 | | | | | **0.489** | | | 0.337 |
| CXCL9 | | | | | | 0.474 | | 0.439 |
| CCL11 | | | | | | 0.634 | | 0.376 |
| CCL5 | | | | | | | **0.695** | 0.247 |
| IFN-γ | | | | 0.487 | | | | 0.426 |
| IL-12 | | | | | | | **0.341** | 0.317 |
| IL-2 | 0.494 | | | | | | | 0.218 |
| IL-15 | 0.351 | | | | | | | 0.319 |
| IL-2R | | | | | | | **0.352** | 0.361 |
| IL-4 | | | | | | | **0.392** | 0.552 |
| IL-5 | | 0.456 | | | | | | 0.229 |
| IL-13 | | | | 0.447 | | | | 0.377 |
| IL-17 | | 0.468 | | | | | | 0.244 |
| EGF | 0.373 | | | | | | | 0.371 |
| FGF | 0.497 | | | | | | | 0.224 |
| HGF | | | | | | | | 0.403 |
| VEGF | | | | | | | | 0.495 |
| G-CSF | | | | 0.456 | | | | 0.487 |
| GM-CSF | | | | | | | | 0.347 |
| IL-7 | | 0.405 | | | | | | 0.250 |

Loading scores for each principal component (PC) with eigenvalue > 1, and the proportion of unexplained variance, after varimax rotation. Values in the PCs positively associated with *P. vivax* infection (PC3, PC5 and PC7) are shown in bold. Loading scores are shown only if > 0.3.

As the PCA is rather exploratory, we also analyzed the biomarkers individually. We found that *Pv*-infected women had higher plasma concentrations of the proinflammatory biomarkers IL-6, CXCL8, CCL3, CCL4 and CCL2, of the T~H~1-related cytokines IL-12, IL-15 and IL-2R, and of the growth factor VEGF than uninfected women (Fig 1), consistent with the PCA. After adjusting for other confounders (see Materials and methods), we found a positive association of *Pv* infection with the proinflammatory biomarkers IL-6, IL-1β, CCL4, CCL2, CXCL10 and TNF (borderline non-significant for the latter); the antiinflammatory IL-10; the chemokine CCL5; the T~H~1-related cytokines IL-12 and IL-2R; the T~H~2-related cytokine IL-5; and the growth factors FGF, HGF, VEGF and IL-7 (Table 3). In contrast, a negative association was observed with CCL11 plasma concentration (Table 3).

Fig 1. Effect of *Plasmodium vivax* infection on peripheral plasma biomarker concentrations at recruitment. Box plots represent the median (white line) and the 25^th^ and 75^th^ percentiles (lower and upper hinges, respectively) of biomarker concentrations in peripheral plasma at recruitment, in *P. vivax*-infected (I, N = 54) and uninfected (U, N = 247) pregnant women. Concentrations for all biomarkers are expressed in pg/mL. P-values correspond to the Mann-Whitney test corrected for multiple comparisons with the Benjamini-Hochberg method. \*p < 0.05, \*\*p < 0.01, \*\*\*p < 0.001.

Table 3. Association of plasma biomarker concentration with *P. vivax* infection.
| Biomarker | Recruitment OR | 95% CI | Delivery (periphery) OR | 95% CI | Delivery (placenta) OR | 95% CI | Delivery (cord) OR | 95% CI |
|---|---|---|---|---|---|---|---|---|
| TNF | 1.03 | 1.00; 1.06 | 1.00 | 0.88; 1.12 | 1.02 | 0.89; 1.18 | 1.02 | 0.97; 1.08 |
| IL-1β | **1.06** | **1.02; 1.09** | 1.00 | 0.89; 1.13 | 1.06 | 0.93; 1.21 | 1.00 | 0.95; 1.05 |
| IL-6 | **1.09** | **1.03; 1.15** | 0.96 | 0.87; 1.05 | 0.91 | 0.81; 1.04 | 1.04 | 0.98; 1.10 |
| IL-10 | **1.09** | **1.02; 1.17** | 0.97 | 0.86; 1.09 | **1.17** | **1.02; 1.34** | 1.06 | 0.54; 2.07 |
| IL-1RA | 1.06 | 0.98; 1.15 | 1.06 | 0.97; 1.16 | 1.04 | 0.94; 1.16 | 1.11 | 0.94; 1.30 |
| TGF-β | 1.01 | 0.88; 1.17 | 1.20 | 0.97; 1.47 | 1.01 | 0.87; 1.16 | 0.91 | 0.79; 1.06 |
| IFN-α | 1.05 | 0.93; 1.19 | **1.33** | **1.11; 1.58** | 1.09 | 0.94; 1.26 | 1.15 | 0.78; 1.70 |
| CXCL8 | 1.02 | 0.98; 1.07 | 1.03 | 0.98; 1.08 | 0.99 | 0.92; 1.08 | 1.05 | 1.00; 1.11 |
| CCL3 | 1.04 | 0.95; 1.13 | 1.08 | 0.94; 1.24 | 0.89 | 0.74; 1.07 | 1.06 | 0.87; 1.28 |
| CCL4 | **1.09** | **1.02; 1.17** | 0.90 | 0.77; 1.04 | 1.01 | 0.91; 1.12 | 1.06 | 0.98; 1.15 |
| CCL2 | **1.24** | **1.10; 1.41** | 1.01 | 0.93; 1.09 | 1.06 | 0.97; 1.16 | 1.07 | 0.99; 1.16 |
| CXCL10 | **1.17** | **1.06; 1.29** | 1.04 | 0.91; 1.18 | 0.93 | 0.79; 1.09 | 0.97 | 0.82; 1.13 |
| CXCL9 | 1.06 | 0.97; 1.15 | 1.03 | 0.94; 1.14 | 1.05 | 0.92; 1.19 | 1.01 | 0.92; 1.12 |
| CCL11 | **0.88** | **0.77; 0.99** | 1.01 | 0.87; 1.18 | 0.93 | 0.80; 1.09 | 0.91 | 0.76; 1.08 |
| CCL5 | **1.13** | **1.01; 1.27** | 1.08 | 0.93; 1.26 | 0.93 | 0.84; 1.02 | 1.01 | 0.84; 1.22 |
| IFN-γ | 1.16 | 0.84; 1.59 | 1.38 | 0.89; 2.14 | 1.00 | N/A | 0.56 | 0.01; 36.96 |
| IL-12 | **1.37** | **1.11; 1.68** | **1.57** | **1.15; 2.15** | 0.97 | 0.77; 1.23 | 0.84 | 0.60; 1.18 |
| IL-2 | 1.04 | 0.97; 1.11 | 0.98 | 0.89; 1.09 | 1.06 | 0.96; 1.17 | 1.03 | 0.95; 1.11 |
| IL-15 | 1.03 | 0.98; 1.08 | 1.03 | 0.91; 1.16 | 1.05 | 0.93; 1.20 | 0.97 | 0.83; 1.13 |
| IL-2R | **1.12** | **1.02; 1.24** | 0.99 | 0.87; 1.14 | 0.91 | 0.75; 1.09 | 0.94 | 0.76; 1.15 |
| IL-4 | 0.94 | 0.73; 1.22 | 1.04 | 0.83; 1.31 | 0.92 | 0.52; 1.64 | 0.69 | 0.06; 8.61 |
| IL-5 | **1.04** | **1.01; 1.08** | 1.00 | 0.91; 1.11 | 1.15 | 0.87; 1.51 | 1.00 | 0.95; 1.06 |
| IL-13 | 1.10 | 0.99; 1.22 | 0.98 | 0.86; 1.12 | 0.91 | 0.70; 1.19 | 0.84 | 0.57; 1.22 |
| IL-17 | 1.03 | 0.99; 1.07 | 1.10 | 0.91; 1.33 | 0.75 | 0.22; 2.53 | 1.01 | 0.93; 1.10 |
| EGF | 0.98 | 0.89; 1.08 | 0.96 | 0.85; 1.09 | 1.04 | 0.90; 1.20 | 0.97 | 0.81; 1.16 |
| FGF | **1.08** | **1.02; 1.14** | 0.93 | 0.85; 1.03 | 1.08 | 0.97; 1.19 | 1.03 | 0.93; 1.13 |
| HGF | **1.06** | **1.01; 1.10** | 0.96 | 0.90; 1.03 | 1.00 | 0.94; 1.06 | 1.05 | 0.93; 1.20 |
| VEGF | **1.09** | **1.02; 1.17** | 0.97 | 0.84; 1.12 | 1.05 | 0.89; 1.23 | 1.05 | 0.92; 1.19 |
| G-CSF | 1.10 | 0.88; 1.36 | 1.03 | 0.74; 1.43 | 0.96 | 0.75; 1.24 | 1.06 | 0.88; 1.29 |
| GM-CSF | 1.02 | 0.99; 1.06 | 1.04 | 0.97; 1.11 | 1.09 | 0.98; 1.21 | 1.00 | 0.95; 1.05 |
| IL-7 | **1.05** | **1.01; 1.08** | 0.97 | 0.88; 1.07 | 0.99 | 0.84; 1.16 | 1.03 | 0.91; 1.17 |

Multivariable logistic regression models were estimated adjusting for the following variables: site, age at recruitment, gestational age, parity, delivery mode (delivery samples only) and *P. falciparum* infection. *P. vivax* infection cases included those diagnosed by either PCR or microscopy. Odds ratio (OR) per 25% increase in biomarker concentration. Recruitment, N = 275; delivery periphery, N = 199 (infection rates by *Plasmodium spp.* and timepoint in S2 Table). Placenta, N = 75 (61 *Pv*-, 14 *Pv*+). Cord, N = 82 (57 *Pv*-, 25 *Pv*+). In bold if the 95% confidence interval (CI) does not include 1. N/A: the regression model could not be estimated because all samples considered have the same value for IFN-γ concentration (9.6 pg/mL).
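The scaling in the note to Table 3 (odds ratio per 25% increase in concentration) can be obtained by entering each biomarker on a log scale such that one unit of the covariate corresponds to a 1.25-fold increase. A minimal Python sketch under that assumption (hypothetical column names and a simplified covariate set; the published models also adjusted for gestational age and, for delivery samples, delivery mode):

```python
import numpy as np
import statsmodels.formula.api as smf

def odds_ratio_per_25pct(df, biomarker):
    """OR (and 95% CI) for Pv infection per 25% increase in a biomarker."""
    data = df.copy()
    # One unit of x corresponds to multiplying the concentration by 1.25.
    data["x"] = np.log(data[biomarker]) / np.log(1.25)
    fit = smf.logit("pv ~ x + C(site) + age + parity + pf", data=data).fit(disp=0)
    lo, hi = fit.conf_int().loc["x"]
    return np.exp(fit.params["x"]), (np.exp(lo), np.exp(hi))
```

Under this parameterization, exp(beta) is the odds ratio associated with a 1.25-fold (25%) increase in concentration, matching the scaling stated in the table note.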
Finally, we investigated the potentially different effects of submicroscopic and microscopic *Pv* infections (i.e. infection density) on biomarker plasma concentrations at recruitment. Results were interpreted based on 95% CIs. On the one hand, microscopic but not submicroscopic *Pv* infections were associated with elevated plasma concentrations of the proinflammatory biomarkers TNF, IL-1β, IL-6, CXCL8, CCL2, CXCL10 and CXCL9; the antiinflammatory IL-10 and IL-1RA; the chemokine CCL5; the T~H~1-related cytokine IL-2R; the T~H~2-related cytokine IL-5; the T~H~17-related cytokine IL-17; and the growth factors VEGF and GM-CSF (Table 4). On the other hand, submicroscopic but not microscopic *Pv* infections were associated with elevated plasma concentrations of IL-2, FGF and IL-7 (Table 4). Of note, with this stratification the negative association with CCL11 levels was lost (Table 4).

Table 4. Association of plasma biomarker concentration with microscopic and submicroscopic *P. vivax* infection.

| Biomarker | Recr. Neg | Recr. PCR+ | 95% CI | Recr. Micr.+ | 95% CI | Del. Neg | Del. PCR+ | 95% CI | Del. Micr.+ | 95% CI |
|---|---|---|---|---|---|---|---|---|---|---|
| TNF | 0 | 0.94 | -0.32; 2.20 | **1.53** | **0.17; 2.88** | 0 | 0.58 | -0.14; 1.31 | **1.89** | **0.46; 3.32** |
| IL-1β | 0 | 1.31 | -0.09; 2.71 | **2.98** | **1.48; 4.48** | 0 | 0.20 | -0.50; 0.90 | 0.21 | -1.18; 1.60 |
| IL-6 | 0 | 0.30 | -0.77; 1.36 | **1.89** | **0.75; 3.03** | 0 | **-0.68** | **-1.23; -0.13** | 0.29 | -0.80; 1.38 |
| IL-10 | 0 | 0.38 | -0.37; 1.14 | **1.75** | **0.94; 2.56** | 0 | -0.22 | -0.71; 0.27 | **1.17** | **0.19; 2.14** |
| IL-1RA | 0 | 0.46 | -0.19; 1.10 | **0.74** | **0.05; 1.43** | 0 | 0.17 | -0.34; 0.68 | 0.07 | -0.94; 1.08 |
| TGF-β | 0 | -0.01 | -0.44; 0.42 | -0.05 | -0.49; 0.39 | 0 | 0.04 | -0.25; 0.32 | 0.26 | -0.29; 0.81 |
| IFN-α | 0 | 0.26 | -0.15; 0.68 | 0.21 | -0.23; 0.65 | 0 | **0.34** | **0.03; 0.65** | 0.02 | -0.60; 0.64 |
| CXCL8 | 0 | -0.14 | -1.16; 0.88 | **1.18** | **0.09; 2.28** | 0 | 0.39 | -0.67; 1.45 | 1.13 | -0.98; 3.24 |
| CCL3 | 0 | **0.65** | **0.04; 1.26** | **1.15** | **0.50; 1.81** | 0 | -0.10 | -0.48; 0.28 | -0.16 | -0.92; 0.59 |
| CCL4 | 0 | -0.01 | -0.60; 0.57 | 0.31 | -0.32; 0.93 | 0 | -0.26 | -0.68; 0.15 | -0.66 | -1.47; 0.16 |
| CCL2 | 0 | 0.62 | -0.25; 1.49 | **1.76** | **0.82; 2.69** | 0 | -0.07 | -0.63; 0.49 | -0.20 | -1.31; 0.92 |
| CXCL10 | 0 | 0.36 | -0.32; 1.04 | **1.07** | **0.34; 1.80** | 0 | -0.1 | -0.51; 0.32 | 0.03 | -0.80; 0.85 |
| CXCL9 | 0 | 0.19 | -0.39; 0.78 | **0.67** | **0.04; 1.30** | 0 | -0.21 | -0.75; 0.33 | 0.36 | -0.72; 1.43 |
| CCL11 | 0 | 0.03 | -0.43; 0.49 | -0.43 | -0.93; 0.06 | 0 | -0.14 | -0.53; 0.24 | -0.14 | -0.90; 0.63 |
| CCL5 | 0 | 0.26 | -0.19; 0.70 | **0.62** | **0.14; 1.10** | 0 | 0.04 | -0.34; 0.42 | 0.46 | -0.30; 1.22 |
| IFN-γ | 0 | 0.00 | -0.17; 0.16 | 0.00 | -0.18; 0.18 | 0 | 0.02 | -0.17; 0.20 | -0.10 | -0.47; 0.27 |
| IL-12 | 0 | **0.30** | **0.03; 0.57** | **0.38** | **0.09; 0.67** | 0 | 0.17 | -0.04; 0.38 | 0.20 | -0.21; 0.62 |
| IL-2 | 0 | **0.75** | **0.08; 1.43** | 0.28 | -0.45; 1.01 | 0 | -0.14 | -0.68; 0.39 | -0.80 | -1.85; 0.26 |
| IL-15 | 0 | **0.76** | **0.12; 1.40** | **1.01** | **0.32; 1.70** | 0 | 0.14 | -0.34; 0.63 | -0.36 | -1.33; 0.60 |
| IL-2R | 0 | 0.44 | -0.10; 0.98 | **1.07** | **0.49; 1.65** | 0 | 0.03 | -0.30; 0.35 | 0.13 | -0.51; 0.77 |
| IL-4 | 0 | 0.02 | -0.17; 0.20 | -0.05 | -0.25; 0.15 | 0 | -0.06 | -0.32; 0.21 | -0.16 | -0.69; 0.37 |
| IL-5 | 0 | 0.86 | -0.42; 2.13 | **1.89** | **0.52; 3.26** | 0 | 0.02 | -0.72; 0.76 | **1.68** | **0.21; 3.15** |
| IL-13 | 0 | -0.05 | -0.62; 0.53 | 0.23 | -0.38; 0.85 | 0 | -0.05 | -0.42; 0.32 | -0.32 | -1.05; 0.41 |
| IL-17 | 0 | 0.55 | -0.50; 1.60 | **1.28** | **0.16; 2.41** | 0 | 0.57 | 0.03; 1.12 | -0.12 | -1.20; 0.96 |
| EGF | 0 | -0.26 | -0.81; 0.28 | -0.21 | -0.79; 0.38 | 0 | -0.28 | -0.63; 0.08 | -0.15 | -0.85; 0.56 |
| FGF | 0 | **0.99** | **0.34; 1.64** | 0.59 | -0.11; 1.28 | 0 | -0.40 | -0.93; 0.12 | -0.32 | -1.36; 0.72 |
| HGF | 0 | 0.90 | -0.25; 2.04 | 0.17 | -1.05; 1.39 | 0 | -0.65 | -1.38; 0.08 | 0.13 | -1.26; 1.52 |
| VEGF | 0 | 0.07 | -0.40; 0.55 | **0.98** | **0.48; 1.48** | 0 | -0.36 | -0.73; 0.01 | -0.13 | -0.87; 0.60 |
| G-CSF | 0 | 0.09 | -0.18; 0.35 | 0.06 | -0.22; 0.34 | 0 | -0.09 | -0.26; 0.08 | 0.13 | -0.20; 0.47 |
| GM-CSF | 0 | 0.78 | -0.46; 2.03 | **1.52** | **0.18; 2.86** | 0 | 0.39 | -0.40; 1.18 | -0.27 | -1.81; 1.27 |
| IL-7 | 0 | **1.58** | **0.32; 2.83** | 1.18 | -0.17; 2.53 | 0 | 0.31 | -0.52; 1.14 | 1.55 | -0.06; 3.17 |

Multivariable linear regression models adjusted for site. CI: confidence interval. Neg: no infection detected by either PCR or microscopy. PCR+: PCR positive and microscopy negative. Microscopy+ (Micr.+): smear positive regardless of PCR result. Effect (expected change): change in mean concentration, in pg/mL. Recruitment, N = 76: Neg N = 26, PCR+ N = 36, Microscopy+ N = 14. Delivery, N = 89: Neg N = 61, PCR+ N = 24, Microscopy+ N = 4. In bold if the 95% CI does not include 0.

Association of *Pv* infection with plasma biomarker concentration at delivery
-----------------------------------------------------------------------------

In the PCA at delivery, seven PCs also had an eigenvalue > 1 (KMO = 0.86, S5 Table). However, regression models showed no association of any PC with *Pv* infection at delivery (S6 Table). We did not observe differences between *Pv*-infected and uninfected women in plasma biomarker levels in the crude analysis (not shown) in any compartment. In the adjusted analysis, we observed a positive association of *Pv* infection with IFN-α and IL-12 peripheral plasma concentrations and with IL-10 placental plasma concentration (Table 3). After stratifying by *Plasmodium* infection density, submicroscopic infection was associated with increased peripheral concentrations of IFN-α and decreased concentrations of IL-6, whereas microscopic infections were associated with elevated levels of TNF, IL-10 and IL-5 (Table 4).

Plasma biomarker concentration and delivery outcomes
----------------------------------------------------

Hb levels at delivery were positively associated with CCL11 and FGF peripheral plasma concentrations at recruitment (Table 5) and with CXCL9 placental plasma concentration (Table 6), and negatively associated with IL-1RA and G-CSF cord plasma concentrations (Table 6). Birth weight showed no association with any biomarker at recruitment (Table 5), and was negatively associated with peripheral IL-4 concentration at delivery (Table 6).

Table 5. Association of biomarkers at recruitment with hemoglobin levels at delivery and birth weight.
| Biomarker | Hemoglobin (g/dL) effect | 95% CI | Birth weight (g) effect | 95% CI |
|---|---|---|---|---|
| TNF | 0.01 | -0.02; 0.05 | -3.83 | -11.40; 3.75 |
| IL-1β | 0.01 | -0.03; 0.04 | 0.38 | -7.23; 8.00 |
| IL-6 | -0.02 | -0.07; 0.03 | -8.75 | -21.92; 4.41 |
| IL-10 | 0.00 | -0.06; 0.07 | -16.00 | -32.73; 0.73 |
| IL-1RA | 0.01 | -0.06; 0.08 | -3.38 | -21.29; 14.53 |
| TGF-β | -0.08 | -0.20; 0.04 | -1.15 | -32.95; 30.65 |
| IFN-α | -0.01 | -0.12; 0.11 | -24.09 | -52.01; 3.82 |
| CXCL8 | 0.01 | -0.03; 0.06 | -6.34 | -16.67; 3.98 |
| CCL3 | 0.00 | -0.08; 0.09 | -3.60 | -24.23; 17.03 |
| CCL4 | 0.02 | -0.04; 0.08 | -3.48 | -18.15; 11.19 |
| CCL2 | 0.03 | -0.06; 0.13 | 1.47 | -22.06; 25.00 |
| CXCL10 | -0.03 | -0.12; 0.06 | -15.18 | -37.20; 6.85 |
| CXCL9 | -0.01 | -0.09; 0.08 | -6.56 | -27.38; 14.25 |
| CCL11 | **0.15** | **0.03; 0.26** | -9.84 | -38.48; 18.80 |
| CCL5 | 0.00 | -0.12; 0.11 | 1.41 | -26.42; 29.23 |
| IFN-γ | -0.15 | -0.49; 0.19 | -50.53 | -135.09; 34.03 |
| IL-12 | 0.04 | -0.17; 0.24 | 4.84 | -44.48; 54.16 |
| IL-2 | 0.05 | -0.01; 0.11 | 12.87 | -2.32; 28.06 |
| IL-15 | 0.01 | -0.06; 0.07 | 3.55 | -12.91; 20.00 |
| IL-2R | -0.02 | -0.11; 0.07 | -7.58 | -30.77; 15.60 |
| IL-4 | 0.00 | -0.18; 0.17 | -18.35 | -62.30; 25.59 |
| IL-5 | 0.02 | -0.02; 0.06 | -4.17 | -12.20; 3.87 |
| IL-13 | 0.01 | -0.08; 0.11 | -3.97 | -26.94; 19.00 |
| IL-17 | 0.02 | -0.02; 0.06 | 1.56 | -7.41; 10.53 |
| EGF | 0.05 | -0.02; 0.12 | 10.98 | -6.24; 28.20 |
| FGF | **0.06** | **0.00; 0.12** | 3.26 | -11.52; 18.04 |
| HGF | 0.01 | -0.03; 0.05 | -2.61 | -12.52; 7.30 |
| VEGF | 0.00 | -0.10; 0.11 | -17.65 | -35.44; 0.13 |
| G-CSF | 0.02 | -0.02; 0.06 | -6.24 | -14.68; 2.20 |
| GM-CSF | -0.01 | -0.19; 0.16 | -13.46 | -54.75; 27.84 |
| IL-7 | 0.03 | -0.01; 0.07 | 0.10 | -8.52; 8.72 |

Multivariable linear regression models adjusting for the following variables: site, age at recruitment, hemoglobin (Hb) at recruitment (for the analysis of Hb at delivery), gravidity, gestational age, delivery mode, and *P. falciparum* and *P. vivax* infection. Effect: change in Hb level (g/dL) or birth weight (g) per 25% increase in biomarker concentration. N = 145. In bold if the 95% confidence interval (CI) does not include 0.

Table 6. Association of biomarkers at delivery with hemoglobin levels at delivery and birth weight.
| Biomarker | Periph. Hb | 95% CI | Periph. BW | 95% CI | Plac. Hb | 95% CI | Plac. BW | 95% CI | Cord Hb | 95% CI | Cord BW | 95% CI |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| TNF | 0.03 | -0.04; 0.10 | 0.19 | -18.88; 19.27 | 0.01 | -0.15; 0.17 | 26.49 | -6.86; 59.85 | 0.00 | -0.05; 0.05 | -4.84 | -16.71; 7.03 |
| IL-1β | -0.03 | -0.09; 0.03 | -6.18 | -23.11; 10.75 | 0.00 | -0.15; 0.14 | 22.00 | -1.36; 45.37 | 0.00 | -0.04; 0.05 | -0.26 | -10.49; 9.97 |
| IL-6 | 0.01 | -0.04; 0.06 | 5.35 | -7.69; 18.38 | 0.03 | -0.03; 0.09 | 10.48 | -1.89; 22.86 | -0.03 | -0.08; 0.02 | -9.03 | -20.49; 2.44 |
| IL-10 | -0.03 | -0.10; 0.04 | -14.67 | -33.31; 3.97 | 0.11 | -0.14; 0.36 | 26.79 | -27.44; 81.03 | -0.03 | -0.71; 0.64 | -9.31 | -169.30; 150.69 |
| IL-1RA | -0.03 | -0.09; 0.03 | -7.08 | -23.32; 9.17 | 0.10 | -0.01; 0.21 | 10.36 | -13.19; 33.90 | **-0.15** | **-0.29; -0.01** | -25.49 | -58.66; 7.68 |
| TGF-β | -0.02 | -0.18; 0.13 | -16.36 | -58.59; 25.88 | 0.00 | -0.10; 0.10 | 3.58 | -17.84; 25.00 | -0.06 | -0.19; 0.07 | 11.11 | -20.40; 42.62 |
| IFN-α | -0.02 | -0.13; 0.08 | -22.77 | -50.47; 4.93 | 0.15 | -0.05; 0.35 | 29.95 | -11.01; 70.92 | -0.05 | -0.40; 0.31 | 4.66 | -77.58; 86.89 |
| CXCL8 | 0.00 | -0.03; 0.03 | 0.69 | -7.62; 8.99 | 0.03 | -0.03; 0.08 | 8.70 | -3.21; 20.60 | -0.03 | -0.08; 0.02 | -9.25 | -20.08; 1.57 |
| CCL3 | 0.07 | -0.01; 0.14 | -6.41 | -26.16; 13.34 | 0.07 | -0.07; 0.22 | 22.50 | -7.74; 52.74 | -0.15 | -0.32; 0.03 | -4.51 | -45.79; 36.77 |
| CCL4 | 0.01 | -0.06; 0.08 | -0.73 | -19.43; 17.97 | 0.00 | -0.10; 0.10 | 10.83 | -9.85; 31.50 | -0.01 | -0.08; 0.06 | -13.29 | -29.61; 3.03 |
| CCL2 | 0.01 | -0.04; 0.06 | -1.50 | -15.10; 12.09 | 0.04 | -0.06; 0.14 | 6.60 | -15.10; 28.30 | -0.02 | -0.09; 0.05 | -11.58 | -27.52; 4.35 |
| CXCL10 | 0.05 | -0.03; 0.13 | 14.89 | -7.40; 37.18 | **0.08** | **0.00; 0.17** | 3.02 | -15.73; 21.78 | 0.02 | -0.11; 0.15 | -17.21 | -47.33; 12.91 |
| CXCL9 | 0.02 | -0.05; 0.08 | 2.32 | -14.58; 19.21 | **0.09** | **0.01; 0.17** | -0.86 | -19.02; 17.30 | -0.02 | -0.11; 0.07 | -7.02 | -27.43; 13.39 |
| CCL11 | 0.01 | -0.08; 0.10 | 3.75 | -20.49; 27.99 | 0.02 | -0.13; 0.17 | 15.21 | -17.60; 48.03 | 0.10 | -0.06; 0.27 | 33.55 | -4.87; 71.98 |
| CCL5 | -0.05 | -0.15; 0.05 | 19.81 | -5.78; 45.41 | 0.03 | -0.06; 0.12 | 6.22 | -14.02; 26.46 | -0.04 | -0.22; 0.14 | 16.95 | -23.90; 57.81 |
| IFN-γ | 0.05 | -0.15; 0.24 | -8.66 | -60.06; 42.75 | -11.83 | -62.25; 38.60 | -1625.89 | -1.3e+04; 9369.71 | 4.10 | -0.39; 8.59 | 258.64 | -857.77; 1375.05 |
| IL-12 | -0.02 | -0.17; 0.13 | -5.63 | -44.85; 33.59 | 0.01 | -0.17; 0.18 | 36.44 | -1.27; 74.16 | -0.01 | -0.30; 0.29 | -9.16 | -78.97; 60.64 |
| IL-2 | **-0.07** | **-0.13; -0.01** | 1.96 | -13.98; 17.89 | 0.09 | -0.05; 0.23 | -7.16 | -38.20; 23.88 | 0.00 | -0.08; 0.08 | 12.17 | -5.97; 30.31 |
| IL-15 | -0.06 | -0.14; 0.02 | 6.61 | -13.80; 27.02 | 0.09 | -0.02; 0.20 | 6.93 | -17.89; 31.75 | 0.02 | -0.12; 0.15 | -21.09 | -52.08; 9.90 |
| IL-2R | -0.03 | -0.13; 0.06 | -10.66 | -35.41; 14.09 | 0.15 | -0.04; 0.34 | 13.51 | -28.76; 55.79 | -0.04 | -0.23; 0.15 | -11.77 | -57.33; 33.79 |
| IL-4 | -0.02 | -0.15; 0.11 | **-43.01** | **-76.74; -9.27** | N/A | N/A | N/A | N/A | 2.56 | -0.20; 5.33 | 2.56 | -0.25; 5.36 |
| IL-5 | 0.01 | -0.05; 0.06 | -9.12 | -23.96; 5.72 | -0.10 | -0.31; 0.11 | 5.65 | -40.90; 52.19 | -0.02 | -0.07; 0.03 | 5.30 | -6.61; 17.21 |
| IL-13 | 0.03 | -0.04; 0.11 | -17.75 | -37.69; 2.19 | 0.01 | -0.21; 0.24 | -2.60 | -51.12; 45.93 | 0.01 | -0.33; 0.36 | 3.93 | -76.19; 84.05 |
| IL-17 | -0.01 | -0.12; 0.10 | -22.26 | -51.12; 6.59 | 0.18 | -0.27; 0.63 | 60.09 | -33.19; 153.37 | 0.04 | -0.06; 0.14 | -8.10 | -31.69; 15.49 |
| EGF | -0.07 | -0.15; 0.02 | -5.31 | -27.93; 17.32 | 0.08 | -0.04; 0.21 | 1.85 | -26.51; 30.20 | -0.01 | -0.18; 0.15 | -18.33 | -56.15; 19.48 |
| FGF | -0.04 | -0.10; 0.02 | 5.93 | -9.57; 21.43 | 0.05 | -0.05; 0.15 | -5.45 | -27.93; 17.04 | 0.03 | -0.08; 0.14 | -6.93 | -33.37; 19.51 |
| HGF | 0.00 | -0.05; 0.04 | -7.04 | -19.74; 5.66 | 0.00 | -0.06; 0.07 | -1.62 | -15.52; 12.27 | -0.08 | -0.20; 0.04 | -20.29 | -48.62; 8.04 |
| VEGF | -0.01 | -0.09; 0.08 | 6.27 | -16.05; 28.60 | 0.11 | -0.01; 0.23 | 16.61 | -9.45; 42.67 | -0.08 | -0.20; 0.03 | -26.18 | -52.87; 0.50 |
| G-CSF | -0.06 | -0.21; 0.09 | -21.01 | -60.86; 18.84 | 0.06 | -0.08; 0.20 | 20.24 | -9.51; 49.98 | **-0.21** | **-0.37; -0.04** | 20.83 | -19.02; 60.69 |
| GM-CSF | -0.01 | -0.05; 0.04 | -5.97 | -18.69; 6.75 | -0.03 | -0.26; 0.20 | 31.96 | -18.02; 81.94 | 0.01 | -0.04; 0.06 | 3.61 | -7.06; 14.28 |
| IL-7 | 0.00 | -0.05; 0.06 | -3.19 | -17.42; 11.04 | -0.01 | -0.15; 0.12 | 13.18 | -15.61; 41.97 | -0.04 | -0.16; 0.08 | -15.21 | -42.00; 11.58 |

Multivariable linear regression models adjusting for the following variables: site, age at recruitment, hemoglobin (Hb) at recruitment (for the analysis of Hb at delivery), Hb at delivery (for the analysis of birth weight, BW), gravidity, delivery mode, and *P. falciparum* and *P. vivax* infection. Effect: change in Hb level (g/dL) or BW (g) per 25% increase in biomarker concentration. Periphery, N = 188; placenta, N = 75; cord, N = 81. In bold if the 95% confidence interval (CI) does not include 0. N/A: the regression model could not be estimated because all samples considered have the same value for IL-4 concentration in placental plasma (38.96 pg/mL).

Discussion
==========

We report an exhaustive profiling of plasma biomarkers, including cytokines, chemokines and growth factors, in malaria in pregnancy caused by *Pv*, and their association with poor delivery outcomes. We analyzed samples obtained at the first antenatal visit separately from those obtained at delivery, as previous analyses in this cohort showed differences in most biomarker concentrations between recruitment and delivery [8]. However, recruitment samples, which were collected in the first, second and third trimesters of pregnancy, were not further categorized, because the correlation between gestational age and biomarker concentration in plasma was low in all cases in this cohort [8]. It is well known that *Plasmodium spp.* infection is accompanied by an inflammatory response that seems to correlate with the severity of malaria disease [9,13-18]. Also, placental inflammation has been shown in *Pf* malaria in pregnancy and linked to poor delivery outcomes [19-22]. However, the peripheral compartment in malaria during pregnancy has been less well studied, especially for *Pv* malaria in pregnancy. Here we showed that, in pregnant women at enrolment, *Pv* infection is associated with a broad proinflammatory response. First, the exploratory PCA showed that two of the three clusters of cytokines associated with *Pv* infection were mainly proinflammatory: PC3, composed of CXCL8, CCL4 and CCL3, and PC5, composed of CXCL10, IL-6, CCL2 and IL-10. In agreement with this, the crude and adjusted analyses showed positive associations of IL-6, IL-1β, CXCL8, CCL3, CCL4, CCL2 and CXCL10 with *Pv* infection. Microscopic but not submicroscopic *Pv* infections accounted for this inflammatory response. Another study, in India, recently showed that women with *Pv* malaria in pregnancy have more IL-6, TNF and IL-1β in peripheral plasma than uninfected pregnant women [10]. However, all the above associations of infection with inflammation were lost at delivery: no PC and no individual proinflammatory biomarker showed any association with *Pv* infection at delivery, and only microscopic infections at delivery showed a positive association with TNF levels.
We [8] and others [23,24] have shown that labor is accompanied by a peripheral proinflammatory response, which may have masked any differences in inflammatory biomarker concentrations between infected and uninfected women. Moreover, although there is controversy about whether *Pv* cytoadheres to the placenta, we have reported placental *Pv* monoinfections with no signs of placental inflammation [25]. Despite the strong evidence of a potent inflammatory response triggered by *Pv* infection during pregnancy, our results did not show an impact of inflammation at recruitment on delivery outcomes. Moreover, in this cohort no poor delivery outcomes were attributed to *Pv* infection during pregnancy, except for anemia in symptomatic *Pv*-infected women [3]. Based on our data, we propose that an antiinflammatory response could be compensating for the excessive inflammation. Thus, IL-10 (an antiinflammatory cytokine) clustered with CXCL10, IL-6 and CCL2 (all proinflammatory biomarkers) in PC5, which was associated with *Pv* infection at recruitment. Moreover, we showed a positive association of *Pv* infection with peripheral IL-10 concentration at recruitment, with peripheral IL-10 at delivery (only for microscopic infections) and with placental IL-10 concentration at delivery. In addition, cord IL-1RA levels showed a negative association with Hb levels at delivery. Others have shown that, in non-pregnant individuals, *Pv* infection induces a proinflammatory response associated with an immunomodulatory profile mediated by IL-10 and TGF-β [17], and with production of IL-10 and expansion of regulatory T cells [26]. We also studied T~H~-related biomarkers and found that, while *Pv* infection was only weakly associated with the T~H~2-related cytokine IL-5, it was consistently and positively associated with T~H~1 cytokines (though not IFN-γ) in the PCA (where IL-12 clustered with IL-2R in PC7) as well as in the crude and adjusted analyses. Among them, the strongest association was observed with IL-12 plasma concentration, the key cytokine in T~H~1 differentiation. Moreover, CCL11, which has been associated with T~H~2 responses in allergic reactions and is able to recruit T~H~2 lymphocytes [27], showed a negative association with *Pv* infection, supporting the hypothesis that *Pv* malaria in pregnancy triggers a T~H~1 response. Elevated levels of IFN-γ have been reported as positively, negatively, or not associated with placental *Pf* malaria (reviewed in [28]), and the role of the T~H~1 arm in malaria-related poor delivery outcomes is also controversial [21,29]. Based on our present and previous data, we propose that failing to mount a T~H~1 response might worsen delivery outcomes: we showed here that the peripheral plasma concentration of the T~H~2-type cytokine IL-4 at delivery was negatively associated with birth weight, while a previous flow cytometry analysis of PNG women in this cohort showed a protective association of circulating T~H~1 cells (CD3^+^CD4^+^IFN-γ^+^IL-10^-^) with birth weight [6]. CCL11, a chemokine poorly studied in the context of malaria, was the only biomarker among the 31 studied to show a negative association (OR < 1) with *Pv* infection, although the association was lost when stratifying by infection density.
We had previously shown that non-pregnant women heavily exposed to malaria had lower levels of circulating CCL11 than malaria-naïve controls [7]. Thus, a low CCL11 concentration might be used as a marker of malaria infection or exposure. Our analysis supports this idea, as a higher CCL11 plasma concentration at recruitment was associated with higher Hb levels at delivery, and we had previously established that clinical *Pv* infection is associated with maternal anemia [3]. However, from this study we cannot determine whether this association is causal, nor what role (if any) CCL11 plays in *Pv* infection. Our study has some limitations. Blood samples were collected in heparin vacutainers; therefore, we cannot rule out contamination of plasma by platelet-derived factors. Also, the restricted number of samples with quantified parasitemia, the low concentrations of some cytokines, and the number of women lost to follow-up prevented us from performing more detailed prospective analyses of the relationship between certain cytokines and malaria. Moreover, we did not collect information on important and prevalent infectious diseases at some of the study sites, such as helminth infections. However, this is unlikely to bias the CCL11 analysis, as helminth infections would be expected to increase CCL11, whereas we observed a malaria-associated decrease in this chemokine. In conclusion, the data show that while T~H~1 and proinflammatory responses are dominant during *Pv* infection in pregnancy, antiinflammatory cytokines may compensate for excessive inflammation, averting poor delivery outcomes, whereas a skew towards a T~H~2 response may worsen them. CCL11, a chemokine largely neglected in the field of malaria, emerges as an important marker of exposure, or possibly a mediator, in this condition.

Supporting information
======================

S1 Fig. Flow chart of sample selection for the study. (DOCX)

S1 Table. Upper and lower values of the biomarker standard curves. (DOCX)

S2 Table. *Plasmodium* infection case number by country. 1: n (percentage). (DOCX)

S3 Table. Principal component analysis of biomarkers at recruitment. PC: principal component. N = 301. (DOCX)

S4 Table. Association of principal components with *P. vivax* infection at recruitment. After varimax rotation, principal component scores were predicted and used as independent variables in logistic regression models. OR: odds ratio. CI: confidence interval. In bold if p < 0.05. (DOCX)

S5 Table. Principal component analysis of biomarkers at delivery. PC: principal component. N = 281. (DOCX)

S6 Table. Association of principal components with *P. vivax* infection at delivery. After varimax rotation, principal component scores were predicted and used as independent variables in logistic regression models. OR: odds ratio. CI: confidence interval. (DOCX)
The authors thank all the volunteers who consented to participate in this study and the staff involved in field and laboratory work at each institution; Sergi Sanz for support in data management and statistical analysis; Gemma Moncunill and Ruth Aguilar for help with cytokine data analysis; and Mireia Piqueras, Sam Mardell and Laura Puyol for management and administrative support.

The authors have declared that no competing interests exist.
"...and once you have tasted flight you will walk the earth with your eyes turned skyward, for there you have been and there you long to return." Leonardo da Vinci The Lake District has a remarkably selective memory when it comes to commemorating its heroes. The most revered are those who celebrate its undoubted beauty, for example William Wordsworth, John Ruskin and Alfred Wainwright, with Beatrix Potter sitting close by. However, the Lake District has a proud history of industry and enterprise that demands recognition, none more so than the exploits of Captain Edward William Wakefield in the early years of the Twentieth Century. In the early morning of 25 November 1911 a hydro-aeroplane called "Waterbird" took off from the waters of Windermere, flew for a short time, and alighted safely. Herbert Stanley Adams was the pilot on this historic occasion, though the whole enterprise had been the brainchild of barrister landowner E W Wakefield of Kendal. This was one of the very first successful flights from water in the world, and is now recognised as the first successful complete flight from water, and safely back again, in Britain. The flight is all the more admirable because when Wakefield embarked on his project in 1909 he did so whilst flying in the face of accepted wisdom; powered flight was not thought possible from water. Drawing in part on the skills of a local boat builder he was to prove the doubters wrong. Perhaps the achievements of Wakefield, and his pilot Adams, would have survived much more prominently in the folk memory of the area were it not for his dispute with a certain Beatrix Potter and Canon Rawnsley, and their supporters. The issues that divided these three equally strong willed personalities are still very much alive today, and the relationship between nature and machine has always been uneasy, especially in the Lake District. We can, however, begin the process that will surely see Edward Wakefield take his rightful place in the Lake District "Hall of Fame". As if the rediscovery of archive material relating to E W Wakefield and his flying exploits were not exciting enough, a further trove of recently discovered personal letters and documents offer an invaluable insight into a remarkable man of his time, and one who has much to say to today's world. Captain Edward William Wakefield was among the spectators at an aviation meeting in Blackpool in October 1909. Having been fascinated by constructing model aeroplanes as a child, his passion for flight was reignited at this meeting. Having witnessed crashes at Blackpool, Wakefield developed his idea that flying could be safer by taking off and alighting on water. This idea was further reinforced through the death of Charles Stewart Rolls in July 1910, the first British pilot to lose his life in a powered aircraft. In early 1910 Wakefield began to make preparations for his extraordinary project, which was to have a successful hydro-aeroplane. His land in the Lake District included an area known as the Hill of Oaks that stood on the shores of Windermere. Trees were cleared and a road constructed that zig-zagged down to the Lake where his hangar was to be sited. Whilst the Hill of Oaks was being prepared, Wakefield visited France & southern England to research the practicalities of combining aeroplanes with floats. By the autumn of 1910 Wakefield placed an advertisement in Flight magazine for a second-hand Bleriot aeroplane and an aero engine in good condition. Amongst those who replied was A.V. 
Roe & Company of Brownsfield Mills, Manchester. The original negotiations between Wakefield & Roe were superseded by the news that an American by the name of Glenn Curtiss had made a successful flight from water in California. Wakefield quickly judged that the American’s aircraft was the first really practical hydro aeroplane and so it was decided between Wakefield and the Roes that he should pay them to build a version of the Curtiss aeroplane, complete with a supplied Gnome engine. The Curtiss Biplane that Wakefield ordered from the Roe Company was taken to the Brooklands test site in May and was ready to fly by the end of June 1911. The adapted Curtiss aeroplane was ready to return to Windermere where floats would be fitted and experiments on water would begin. It was at this point that Wakefield met with an able and willing young pilot who was prepared to come up to Cumbria and work with him on his project. Herbert Stanley Adams was the pilot who played a key role in the success of Waterbird. A local company, Borwicks, delivered a completed float in August 1911. The float was based on the design for the float used on the Curtiss plane but adapted taking account of the results of Wakefield’s earlier experiments. In addition to the float, Wakefield fitted two side balancers to the wing tips. These became popularly known as Wakefield sausages. Initial tests on water proved unsuccessful until Wakefield asked Borwicks to make hydroplane steps in the float. ‘This is the original drawing by A.V. Roe & Company showing the amended design for a Curtiss Biplane, dated 9 March 1911’ After weeks of strong winds and rain, the weather improved dramatically and on the 25 November 1911, Adams took Waterbird out onto the Lake and successfully flew Waterbird and alighted safely. Sadly Wakefield was not there to witness this first flight but his excitement can be seen in the correspondence to his wife. This was the first successful flight from water in the British Empire. There had been two earlier and notable attempts to fly on water by Commander Schwann at Barrow Dock and Oscar Gnosspelius at Windermere, but neither alighted safe. The Lakes Flying Company was founded in January 1912, and included Wakefield, Adams and the Earl of Lonsdale. Almost immediately Wakefield and the Company found themselves the focus of a campaign that was to make national headlines. The protest Committee, led by Canon Rawnsley and Beatrix Potter, were vehemently opposed to Wakefield and his flying activities in the Lake District and this would lead to a public inquiry into the issue. Eventually the matter was resolved in Wakefield’s favour and flying did continue though only after much heated argument and debate. One of Wakefield’s supporters was a certain Winston Churchill, MP, First Lord of the Admiralty, who supported his activities on Windermere. At the same time planning permission was granted for new hangars at Cockshott Point, Bowness. Flying activities continued on Windermere until around 1916 although Edward Wakefield went on to serve in Flanders during the First World War. Edward Wakefield (1862-1941) was one of Britain's most important aviation pioneers, now recognised as one of the fathers of the Royal Navy's Fleet Air Arm. It was his plane, Waterbird, that on 25 November 1911 made the first successful flight from water in the UK from Windermere. Born into a prosperous Lakeland family, Edward Wakefield trained as a banker and lawyer. 
But from an early age his restless disposition, combined with a strong sense of religious duty and Victorian patriotism, drove him to wider pastures. He was active in charity work, mainly with children in need, in London in the 1890s and again in the early 1900s. On the outbreak of the Boer War in 1899 he joined the Carlisle-based Border Regiment and saw two years of active service in South Africa. Attending a flying demonstration in 1909, he was told that casualties were inevitable when flying from land. He decided that flying from water would be much safer and, helped by considerable wealth and self-confidence, he set out to prove it. He built hangars on Windermere. He bought and tested one of the earliest Avro planes, which he named Waterbird, for experimentation and adaptation. National publicity followed. A strong protest campaign led by Beatrix Potter and Canon Rawnsley was foiled with government help. Soon his Hill of Oaks base became a centre for Admiralty testing and, by the First World War, for the large-scale training of naval pilots; its graduates fought, and all too often died, over the Western and Mediterranean fronts. In 1914, despite his advancing age (he was then 52), Wakefield re-joined the army, spent three years training troops, commanded a Labour Battalion on the Western Front, served in Italy and ended the war as Chief Church Army Commissioner for France and Belgium. His health badly damaged, he spent the rest of his life in Kendal, active as Mayor, Chair of Magistrates, local landowner and supporter of good causes. He died in 1941. His wife Mary predeceased him in 1921. He had one child, Marion, who many years later fondly reminisced about helping to sew fabric for Waterbird's wings and about foiling pre-WW1 German spies. His grandson, James Gordon (1913-98), was also a distinguished figure in aviation history, pioneering air-sea rescue dinghies and revolutionary wood-epoxy construction techniques for Mosquito aircraft and Horsa gliders in World War 2.
// water_ring.c.inc

f32 water_ring_calc_mario_dist(void) {
    f32 marioDistX = o->oPosX - gMarioObject->header.gfx.pos[0];
    f32 marioDistY = o->oPosY - (gMarioObject->header.gfx.pos[1] + 80.0f);
    f32 marioDistZ = o->oPosZ - gMarioObject->header.gfx.pos[2];
    // Signed distance from Mario to the ring's plane (projection onto the ring normal).
    f32 marioDistInFront = marioDistX * o->oWaterRingNormalX + marioDistY * o->oWaterRingNormalY
                           + marioDistZ * o->oWaterRingNormalZ;

    return marioDistInFront;
}

void water_ring_init(void) {
    cur_obj_init_animation(0);
    o->oWaterRingScalePhaseX = (s32)(random_float() * 4096.0f) + 0x1000;
    o->oWaterRingScalePhaseY = (s32)(random_float() * 4096.0f) + 0x1000;
    o->oWaterRingScalePhaseZ = (s32)(random_float() * 4096.0f) + 0x1000;

    //! This normal calculation assumes a facing yaw of 0, which is not the case
    //  for the manta ray rings. It also errs by multiplying the normal X by -1.
    //  This causes the ring's orientation for the purposes of collision to be
    //  different than the graphical orientation, which means that Mario won't
    //  necessarily collect a ring even if he appears to swim through it.
    o->oWaterRingNormalX = coss(o->oFaceAnglePitch) * sins(o->oFaceAngleRoll) * -1.0f;
    o->oWaterRingNormalY = coss(o->oFaceAnglePitch) * coss(o->oFaceAngleRoll);
    o->oWaterRingNormalZ = sins(o->oFaceAnglePitch);

    o->oWaterRingMarioDistInFront = water_ring_calc_mario_dist();

    // Adding this code will alter the ring's graphical orientation to align with the faulty
    // collision orientation:
    //
    // o->oFaceAngleYaw = 0;
    // o->oFaceAngleRoll *= -1;
}

void bhv_jet_stream_water_ring_init(void) {
    water_ring_init();
    o->oOpacity = 70;
    cur_obj_init_animation(0);
    o->oFaceAnglePitch = 0x8000;
}

// sp28 = arg0
// sp2c = ringManager
void water_ring_check_collection(f32 avgScale, struct Object *ringManager) {
    f32 marioDistInFront = water_ring_calc_mario_dist();
    struct Object *ringSpawner;

    if (!is_point_close_to_object(o, gMarioObject->header.gfx.pos[0],
                                  gMarioObject->header.gfx.pos[1] + 80.0f,
                                  gMarioObject->header.gfx.pos[2], (avgScale + 0.2) * 120.0)) {
        o->oWaterRingMarioDistInFront = marioDistInFront;
        return;
    }

    // A sign change in the plane distance between frames means Mario crossed the
    // ring's plane while close enough to it, i.e. he swam through the ring.
    if (o->oWaterRingMarioDistInFront * marioDistInFront < 0) {
        ringSpawner = o->parentObj;
        if (ringSpawner) {
            // Rings must be collected in order; collecting one out of order resets the count.
            if ((o->oWaterRingIndex == ringManager->oWaterRingMgrLastRingCollected + 1)
                || (ringSpawner->oWaterRingSpawnerRingsCollected == 0)) {
                ringSpawner->oWaterRingSpawnerRingsCollected++;
                if (ringSpawner->oWaterRingSpawnerRingsCollected < 6) {
                    spawn_orange_number(ringSpawner->oWaterRingSpawnerRingsCollected, 0, -40, 0);
#ifdef VERSION_JP
                    play_sound(SOUND_MENU_STAR_SOUND, gDefaultSoundArgs);
#else
                    play_sound(SOUND_MENU_COLLECT_SECRET
                                   + (((u8) ringSpawner->oWaterRingSpawnerRingsCollected - 1) << 16),
                               gDefaultSoundArgs);
#endif
                }

                ringManager->oWaterRingMgrLastRingCollected = o->oWaterRingIndex;
            } else
                ringSpawner->oWaterRingSpawnerRingsCollected = 0;
        }

        o->oAction = WATER_RING_ACT_COLLECTED;
    }

    o->oWaterRingMarioDistInFront = marioDistInFront;
}

void water_ring_set_scale(f32 avgScale) {
    o->header.gfx.scale[0] = sins(o->oWaterRingScalePhaseX) * 0.1 + avgScale;
    o->header.gfx.scale[1] = sins(o->oWaterRingScalePhaseY) * 0.5 + avgScale;
    o->header.gfx.scale[2] = sins(o->oWaterRingScalePhaseZ) * 0.1 + avgScale;
    o->oWaterRingScalePhaseX += 0x1700;
    o->oWaterRingScalePhaseY += 0x1700;
    o->oWaterRingScalePhaseZ += 0x1700;
}

void water_ring_act_collected(void) {
    f32 avgScale = (f32) o->oTimer * 0.2 + o->oWaterRingAvgScale;

    if (o->oTimer >= 21)
        o->activeFlags = ACTIVE_FLAG_DEACTIVATED;

    o->oOpacity -= 10;
    if (o->oOpacity < 0)
        o->oOpacity = 0;

    water_ring_set_scale(avgScale);
}

void water_ring_act_not_collected(void) {
    f32 avgScale = (f32) o->oTimer / 225.0 * 3.0 + 0.5;

    //! In this case ringSpawner and ringManager are the same object,
    //  because the Jet Stream Ring Spawner is its own parent object.
    struct Object *ringSpawner = o->parentObj;
    struct Object *ringManager = ringSpawner->parentObj;

    if (o->oTimer >= 226) {
        o->oOpacity -= 2;
        if (o->oOpacity < 3)
            o->activeFlags = ACTIVE_FLAG_DEACTIVATED;
    }

    water_ring_check_collection(avgScale, ringManager);
    water_ring_set_scale(avgScale);
    o->oPosY += 10.0f;
    o->oFaceAngleYaw += 0x100;
    set_object_visibility(o, 5000);

    // The next ring in the sequence flashes once four rings have been collected.
    if (ringSpawner->oWaterRingSpawnerRingsCollected == 4
        && o->oWaterRingIndex == ringManager->oWaterRingMgrLastRingCollected + 1)
        o->oOpacity = sins(o->oTimer * 0x1000) * 200.0f + 50.0f;

    o->oWaterRingAvgScale = avgScale;
}

void bhv_jet_stream_water_ring_loop(void) {
    switch (o->oAction) {
        case WATER_RING_ACT_NOT_COLLECTED:
            water_ring_act_not_collected();
            break;
        case WATER_RING_ACT_COLLECTED:
            water_ring_act_collected();
            break;
    }
}

void spawn_manta_ray_ring_manager(void) {
    struct Object *ringManager = spawn_object(o, MODEL_NONE, bhvMantaRayRingManager);
    o->parentObj = ringManager;
}

void water_ring_spawner_act_inactive(void) {
    //! The Jet Stream Ring Spawner is its own parent object. The code may have been copied
    //  from the Manta Ray, which spawns rings but also has a Ring Manager object as its
    //  parent. The Jet Stream Ring Spawner functions as both a spawner and a Ring Manager.
    struct Object *currentObj = o->parentObj;
    struct Object *waterRing;

    //! Because the index counter overflows at 10000, it's possible to wait
    //  for about 4 hours and 38 minutes if you miss a ring, and the index will
    //  come around again.
    if (o->oTimer == 300)
        o->oTimer = 0;

    // Spawn a ring at each of these timer values (note that 100 is skipped).
    if ((o->oTimer == 0) || (o->oTimer == 50) || (o->oTimer == 150) || (o->oTimer == 200)
        || (o->oTimer == 250)) {
        waterRing = spawn_object(o, MODEL_WATER_RING, bhvJetStreamWaterRing);
        waterRing->oWaterRingIndex = currentObj->oWaterRingMgrNextRingIndex;
        currentObj->oWaterRingMgrNextRingIndex++;
        if (currentObj->oWaterRingMgrNextRingIndex >= 10001)
            currentObj->oWaterRingMgrNextRingIndex = 0;
    }
}

void bhv_jet_stream_ring_spawner_loop(void) {
    switch (o->oAction) {
        case JS_RING_SPAWNER_ACT_ACTIVE:
            water_ring_spawner_act_inactive();

            if (o->oWaterRingSpawnerRingsCollected == 5) {
                spawn_mist_particles();
                spawn_default_star(3400.0f, -3200.0f, -500.0f);
                o->oAction = JS_RING_SPAWNER_ACT_INACTIVE;
            }
            break;
        case JS_RING_SPAWNER_ACT_INACTIVE:
            break;
    }
}

void bhv_manta_ray_water_ring_init(void) {
    water_ring_init();
    o->oOpacity = 150;
}

void manta_water_ring_act_not_collected(void) {
    f32 avgScale = (f32) o->oTimer / 50.0f * 1.3 + 0.1;
    struct Object *ringSpawner = o->parentObj;
    struct Object *ringManager = ringSpawner->parentObj;

    if (avgScale > 1.3)
        avgScale = 1.3;

    if (o->oTimer >= 151) {
        o->oOpacity -= 2;
        if (o->oOpacity < 3)
            o->activeFlags = ACTIVE_FLAG_DEACTIVATED;
    }

    water_ring_check_collection(avgScale, ringManager);
    water_ring_set_scale(avgScale);
    set_object_visibility(o, 5000);

    if (ringSpawner->oWaterRingSpawnerRingsCollected == 4
        && o->oWaterRingIndex == ringManager->oWaterRingMgrLastRingCollected + 1)
        o->oOpacity = sins(o->oTimer * 0x1000) * 200.0f + 50.0f;

    o->oWaterRingAvgScale = avgScale;
}

void bhv_manta_ray_water_ring_loop(void) {
    switch (o->oAction) {
        case WATER_RING_ACT_NOT_COLLECTED:
            manta_water_ring_act_not_collected();
            break;
        case WATER_RING_ACT_COLLECTED:
            water_ring_act_collected();
            break;
    }
}
Lithium ion (Li-ion) batteries are currently the best performing batteries and have already become the standard for portable electronic devices. In addition, these batteries have already penetrated and are rapidly gaining ground in other industries such as automotive and electrical storage. Enabling advantages of such batteries are a high energy density combined with good power performance. A Li-ion battery typically contains a number of so-called Li-ion cells, which in turn contain a positive (cathode) electrode, a negative (anode) electrode and a separator, which are immersed in an electrolyte. The most frequently used Li-ion cells for portable applications are developed using electrochemically active materials such as lithium cobalt oxide or lithium nickel manganese cobalt oxide for the cathode and a natural or artificial graphite for the anode. It is known that one of the important limiting factors influencing a battery's performance, and in particular its energy density, is the active material in the anode. Therefore, to improve the energy density, newer electrochemically active materials based on e.g. tin, aluminium and silicon were investigated and developed during the last decades, such developments being mostly based on the principle of alloying said active material with Li during Li incorporation therein during use. The best candidate appears to be silicon, as theoretical capacities of 3579 mAh/g or 2200 mAh/cm3 can be obtained; these capacities are far larger than that of graphite (372 mAh/g) and also those of the other candidates. Note that throughout this document silicon is intended to mean the element Si in its zerovalent state. The term Si will be used to indicate the element Si regardless of its oxidation state, zerovalent or oxidised. However, one drawback of using a silicon based electrochemically active material in an anode is its large volume expansion during charging, which is as high as 300% when the lithium ions are fully incorporated, e.g. by alloying or insertion, in the anode's active material, a process often called lithiation. The large volume expansion of the silicon based materials during Li incorporation may induce stresses in the silicon, which in turn could lead to mechanical degradation of the silicon material. Repeated periodically during the charging and discharging of the Li-ion battery, this repetitive mechanical degradation of the silicon electrochemically active material may reduce the life of a battery to an unacceptable level. In an attempt to alleviate the deleterious effects of the volume change of the silicon, many research studies have shown that reducing the size of the silicon material into submicron or nanosized silicon particles, typically with an average size smaller than 500 nm and preferably smaller than 150 nm, and using these as the electrochemically active material, may prove a viable solution. In order to accommodate the volume change, composite particles are usually used, in which the silicon particles are mixed with a matrix material, usually a carbon based material, but possibly also a silicon based alloy or SiO2. In the present invention, only composites having carbon as matrix material are considered. Further, a negative effect of silicon is that a thick SEI, a Solid-Electrolyte Interface, may be formed on the anode.
An SEI is a complex reaction product of the electrolyte and lithium, and its formation therefore leads to a loss of lithium availability for electrochemical reactions and thus to a poor cycle performance, the capacity loss per charging-discharging cycle. A thick SEI may further increase the electrical resistance of a battery and thereby limit the achievable charging and discharging rates. In principle, SEI formation is a self-terminating process that stops as soon as a 'passivation layer' has formed on the silicon surface. However, because of the volume expansion of silicon, both the silicon and the SEI may be damaged during charging (lithiation) and discharging (de-lithiation), thereby freeing new silicon surface and leading to a new onset of SEI formation. In the art, the above lithiation/de-lithiation mechanism is generally quantified by a so-called coulombic efficiency, which is defined as the ratio (in %, per charge-discharge cycle) of the energy removed from a battery during discharge to the energy used during charging. Most work on silicon-based anode materials is therefore focused on improving said coulombic efficiency. Current methods to make such silicon based composites are based on mixing the individual ingredients (e.g. silicon and carbon or a precursor for the intended matrix material) during preparation of the electrode paste formulation, or on a separate composite manufacturing step that is carried out either via dry milling/mixing of silicon and host material (possibly followed by a firing step), or via wet milling/mixing of silicon and host material (followed by removal of the liquid medium and a possible firing step). Despite the advances in the art of negative electrodes and the electrochemically active materials contained therein, there is still a need for better electrodes with the ability to further optimize the performance of Li-ion batteries. In particular, for most applications, negative electrodes having improved capacities and coulombic efficiencies are desirable. Therefore, the invention concerns a composite powder for use in an anode of a lithium ion battery, whereby the particles of the composite powder comprise a carbon matrix material and silicon particles dispersed in this matrix material, whereby the composite powder further comprises silicon carbide, and whereby the ordered domain size of the silicon carbide, as determined by the Scherrer equation applied to the X-ray diffraction SiC peak having a maximum at 2θ between 35.4° and 35.8°, when measured with a copper anticathode producing Kα1 and Kα2 X-rays with a wavelength equal to 0.15418 nm, is at most 15 nm, preferably at most 9 nm and more preferably at most 7 nm. The Scherrer equation (P. Scherrer; Göttinger Nachrichten 2, 98 (1918)) is a well known equation for calculating the size of ordered domains from X-ray diffraction data. In order to avoid machine-to-machine variations, standardized samples can be used for calibration. The composite powder according to the invention has a better cycle performance than traditional powders. Without being bound by theory, the inventors believe that the silicon carbide improves the mechanical bond between the silicon particles and the carbon matrix material, so that stresses on the interface between the silicon particles and the matrix material, e.g. those associated with expansion and contraction of the silicon during use of the battery, are less likely to lead to a disconnection of the silicon particles from the matrix material.
This, in turn, allows for a better transfer of lithium ions from the matrix to the silicon and vice versa. Additionally, less silicon surface is then available for the formation of an SEI. Preferably said silicon carbide is present on the surface of said silicon particles, so that said silicon carbide forms a partial or complete coating of said silicon particles and so that the interface between said silicon particles and said carbon is at least partly formed by said silicon carbide. It is noted that silicon carbide formation may also occur with the traditional materials, if silicon embedded in carbon or a carbon precursor is overheated, typically to well over 1000° C. However, in practice this will not lead to a limited, superficial formation of chemical Si-C bonds, as is shown to be beneficial in the present invention, but to a complete conversion of silicon to silicon carbide, leaving no silicon to act as anode active material. Also, in such circumstances a highly crystalline silicon carbide is formed. The silicon carbide in a powder according to the present invention is present as a thin layer of very small silicon carbide crystals or poorly crystalline silicon carbide, which shows itself, on an X-ray diffractogram of the composite powder, as a peak having a maximum at 2θ between 35.4° and 35.8° with a width at half the maximum height of more than 1.0°, which is equivalent to an ordered domain size of 9 nm as determined by the Scherrer equation applied to the SiC peak on the X-ray diffractogram at 2θ=35.6°, when measured with a copper anticathode producing Kα1 and Kα2 X-rays with a wavelength equal to 0.15418 nm. Preferably, the composite powder has an oxygen content of 3 wt % or less, and preferably 2 wt % or less. A low oxygen content is important to avoid excessive lithium consumption during the first battery cycles. Preferably the composite powder has a particle size distribution with d10, d50 and d90 values whereby (d90−d10)/d50 is 3 or lower. The d50 value is defined as the diameter of a particle of the composite powder corresponding to 50 weight % cumulative undersize particle size distribution. In other words, if for example d50 is 12 μm, 50% of the total weight of particles in the tested sample is smaller than 12 μm. Analogously, d10 and d90 are the particle sizes compared to which 10% respectively 90% of the total weight of particles is smaller. A narrow PSD is of crucial importance since small particles, typically below 1 μm, result in a higher lithium consumption caused by electrolyte reactions, while excessively large particles are detrimental to the final electrode swelling. Preferably less than 25% by weight, and more preferably less than 20% by weight, of all Si present in the composite powder is present in the form of silicon carbide, as Si present in the form of silicon carbide is not available as anode active material capable of being lithiated and delithiated. In order to have an appreciable effect, more than 0.5% by weight of all Si present in the composite powder should be present in the form of silicon carbide.
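To make the Scherrer criterion above concrete, here is a minimal sketch (illustrative only, not part of the patent text) of how the ordered domain size could be computed from the measured width of the SiC diffraction peak. The shape factor K = 0.9 is a conventional assumption, and the function name is ours:

import math

def scherrer_domain_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15418, K=0.9):
    # D = K * lambda / (beta * cos(theta)), with beta the peak width at half
    # maximum in radians and theta half the diffraction angle 2-theta.
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

# A SiC peak at 2-theta = 35.6 deg with a FWHM of 1.0 deg gives roughly
# 8-9 nm, consistent with the equivalence stated above.
print(scherrer_domain_size_nm(fwhm_deg=1.0, two_theta_deg=35.6))  # ~8.35 nm

The stated equivalence (a width at half maximum of more than 1.0° corresponding to an ordered domain size below about 9 nm) follows directly, since the domain size is inversely proportional to the peak width: broader peaks mean smaller, less ordered SiC domains.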
The invention further concerns a method of manufacturing a composite powder, preferably a composite powder as described above according to the invention, comprising the following steps:

A: Providing a first product comprising one or more of products I, II and III;

B: Providing a second product being carbon or being a precursor for carbon, and preferably being pitch, whereby said precursor can be thermally decomposed to carbon at a temperature less than a first temperature;

C: Mixing said first and second products to obtain a mixture;

D: Thermally treating said mixture at a temperature less than said first temperature;

whereby product I is: silicon particles having on at least part of their surface silicon carbide; whereby product II is: silicon particles that can be provided on at least part of their surface with silicon carbide by being exposed to a temperature less than said first temperature and by being provided on their surface with a compound containing C atoms and capable of reacting with silicon at a temperature less than said first temperature to form silicon carbide; and whereby product III is: silicon particles that can be provided on at least part of their surface with silicon carbide by being exposed to a temperature less than said first temperature and by being provided on their surface with a precursor compound for silicon carbide, said precursor compound comprising Si atoms and C atoms and being capable of being transformed into silicon carbide at a temperature less than said first temperature; whereby said first temperature is 1075° C. and preferably 1020° C.
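As an illustration only, the preferred compositional criteria set out above (oxygen content, PSD span, and the fraction of all Si present as silicon carbide) can be gathered into a simple screening check. The numerical thresholds are taken from the text; the function and parameter names are our own:

def powder_meets_preferred_spec(oxygen_wt_pct, d10, d50, d90, si_as_sic_wt_frac):
    # Screen a candidate composite powder against the preferred criteria above.
    span_ok = (d90 - d10) / d50 <= 3.0          # narrow PSD: (d90 - d10)/d50 <= 3
    oxygen_ok = oxygen_wt_pct <= 3.0            # oxygen content at most 3 wt%
    sic_ok = 0.005 < si_as_sic_wt_frac < 0.25   # >0.5% and <25% of all Si as SiC
    return span_ok and oxygen_ok and sic_ok

# Hypothetical powder: d50 = 12 um with a moderately narrow distribution,
# 2.1 wt% oxygen, and 8% of all Si present as SiC.
print(powder_meets_preferred_spec(2.1, d10=5.0, d50=12.0, d90=25.0,
                                  si_as_sic_wt_frac=0.08))  # True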
The Dark Knight Returns. Arguably one of the best comic book stories ever written, especially when it comes to Batman or Superman. This seemed to be the template leading up to Batman v Superman's release. Batman's suit(s) and build were ripped from the pages of the aforementioned book, and the title itself told fans what to expect. I'm coming from the perspective of an avid comic book reader who happens to also enjoy comic book films. Seeing as BvS is also a comic book film, I'm going to review it with the mind-set that it is based on a solid foundation of rules and lore that should respectfully be followed. I'm also not going to go into too much detail so I don't ruin anything for future viewers. So let's begin. This was the second showing at my local cinema, I was sitting next to a guy who offered me a Twix bar, and my first thought was, I hope that this Twix isn't better than this film. It didn't start off well. For the first ten minutes of the film the projection was out of focus. So it was like watching the film through the eyes of someone who really needed glasses but refused to put them on. Then it was fixed, and we began from the beginning. It was like the transition from potato to 4K. Admittedly the premise was set up well, though I had already seen it in the trailer. Then from there it just got random. It's like the writers had an idea, thought, "Now how can we link every major character to this idea?", went from there and then abandoned that same idea in the middle of the story. This is probably because there is a hell of a lot going on. As the second film in this new DC Cinematic Universe behind Man of Steel it shouldn't have had this much weight on it to set up the entirety of the following movies, but I understand that Warner Brothers are trying to play catch up to Disney and Marvel. Small(er) steps would have been better. The problem with the plot isn't that it's convoluted. There are a number of random plot points that are obviously thrown in to set up things for the future, but it's done in a way that breaks up the main narrative and adds literally nothing to the story. Nothing at all. Literally. Forgive me for comparing this to Deadpool but bear with me. Deadpool was a film that I thought was maybe a bit too small in scale, but it benefited from that, especially since a sequel can become bigger and better than its predecessor. BvS is guilty of doing too much. While it is fun to see some of these strange scenarios through, they didn't offer anything. What they did get right were the visuals however. Zack Snyder is most notably known for 300, Watchmen and Man of Steel, which are visually striking and, with the exception of Man of Steel, have a comic book feel. With this film though it gets a bit grey at times. It tries to be dark and brooding a lot of the time and can be tonally awkward. Batman (Ben Affleck) is pulled off well and is accurately reminiscent of TDKR. If his first proper scene is what the future Batman film may be like, then I look forward to it. SPOILER ALERT But Batman is blatantly killing people? Come on, I know he's older and angry but he doesn't compromise his morals. SPOILER END. Superman (Henry Cavill) is Superman and we all know how that is. Wonder Woman (Gal Gadot) feels tacked on and doesn't really have an identity except to complete the Justice League trinity, but Gadot does her as much justice as she can.
The one thing that they kind of managed to implement was the political and moral side to the destruction Superman leaves in his wake, though it doesn't affect Batman in the slightest. Lex Luthor comes off more like The Riddler than the calm, composed genius he is (ding, ding, ding). But that's neither here nor there. Doomsday happens to be the biggest farce of the whole movie. Once you watch it (and you know who Doomsday is) you'll see why. He's The Incredible Hulk's Abomination except not as good. He doesn't even talk. When I say that this movie is a set-up, I mean in terms of narrative. It's an overlong introduction to the dawn of the Justice League (pun intended) and it deserved more attention to detail than it got. Snyder has said that there is supposed to be an Ultimate Cut with over half an hour of extra footage, and maybe that will make it better than the theatrical version. I can only hope. By the end I knew the answer to whether the Twix was better, and it wasn't. But at least it was worth it. P.S. There is no after credit scene. Go home. 4.9/10.0 Before today I'd never listened to a Toro Y Moi project; even off the recommendation of a friend I put it off. And I've finally taken the plunge. Another surprise release (not from Toro, I mean in general) comes in the form of Samantha, which I went into not knowing what to expect at all. That was a slight lie it seems, as the sound is kind of what I expected from him. It's either his name or the fact that artists hidden from the limelight often incorporate a sound that's somewhat original but unoriginal in that it's expectedly old school, expectedly strange, expectedly… expected. Not to detract from the music though, the sounds are refreshing in comparison with the same music heard day to day on the radio. It's reminiscent of Soulection production; it feels Sango infused. Identity wise, by the time I got to track four I wasn't sure what Toro's artistry actually was, whether it was rapping, singing, production or all three [Editor's Note: After research I've come to find he is indeed a producer]. The songs are concise for the most part, which, as there are twenty tracks, saves the album from being exhausting. An early highlight is Pitch Black, which has a funky drumline and a plethora of effects that keep it different and infectious throughout. The consistency of the project is welcome as the sounds are pleasant on the ear without getting overdone. This is helped by the mixing of the album, which as you may know I adore when done well. The features perform well and fit their representative tracks perfectly, singing with an Auto-Tune less Young Thug and more Kanye (no T-Pain I'm afraid). Stoned at the MOMA is a favourite of mine in the way it uses the sample, the percussion and the guitar sweetening the package. One thing that this project excels at is capturing an ambience all its own. Its production is dark, but not so dark that you feel like you're wallowing in the sins of somebody else, and it provides ample groove to make you feel good all at once. This is a project worth listening to, especially considering that it's free, and what do you have to lose with things that are free? Time, I'll give you that, but you won't be wasting it, trust me. 8.0/10 Since the initial announcement of the Attack on Titan live action film, many were excited. Then the first trailer was released and everybody became sceptical. I wasn't particularly swayed either way, but the problems at hand were big problems. Let me begin by saying that it could be worse.
It stays largely faithful to the source material, and because the source material is obviously very good, the story remains decent at all times. However, because of the medium that film is, everything moves very fast in comparison with both the series and [I assume] the manga. This kind of helps the film, but it doesn't aid the story, and those that have seen the series will be wondering where everything went. The short run time [about 1 hour 30 minutes] means that character development is at a loss. Eren is just an angry teenager, Mikasa is a mysterious, mostly absent teenage girl, and Armin is still the nerd of the group, who is surprisingly less irritating and useless than his animated counterpart. The Survey Corps members, who had a relatively fleshed out development cycle of their own, are all introduced in literally one scene when they're all about to undergo their first mission. The world is more post-apocalyptic compared to the manga/show, which I think makes sense considering the whole world went to pot. Apparently 100 years wasn't long enough for the world to get back into tip top shape. It helps maintain the aesthetic throughout but for purists it won't bode well. One thing this film retains however is the violence, and because now it is live action, it's much more visceral, when the effects hold up that is. Ever since the release of the first trailer there were many comments on the CGI, especially surrounding the Colossus Titan (shown below). In its final form I just accepted the fact that a limited budget would leave the titans looking a bit lacklustre in comparison with bigger budget films. The smaller titans do look better though, and look clapped [see: ugly] enough to be scary, and because they're based on the likeness of humans I suspect it was easier to recreate them, although they look kind of janky when eating people and whatnot. The practical effects hold up better though, blood and the like. The film revels in the fact that the story is a grim one, spare the odd gleeful time here and there, and turns the sadness all the way up. There are the odd times of goodness, which are ended almost instantly by someone being eaten or pulled apart or something. It's also a more mature film than the series in that there's more adult orientated content. There is one particular scene where a new female character, Hiana, gets Eren to feel up on her and well, that gets ruined as well. THAT scene involving Armin and Eren is alright I guess, I was taken aback because you know, it's real. But the fact that everything happens in real time [and quickly] takes away from the surprise and suspense of certain events unfolding. Some of the changes are a bit strange as well. I understand that the makers probably wanted to differentiate the film from the other mediums, but some of the changes were just strange. For example, the love triangle between Mikasa, Eren and Shikishima I felt was misplaced and a bit creepy. If you're interested in getting into the Attack on Titan lore I suggest that you please do not start with the films (the second part is supposed to come out later on this year), as it just isn't representative of the quality of the show [this could change with the second part though]. But if you're just looking for entertainment, then yeah, it's definitely entertaining without too much to think about. Just don't expect the Attack on Titan series. 5.0/10 I don't know a lot about Jack Garratt other than he sings very well.
I heard his single The Love You're Given on a Beats 1 show a while back and decided to look for some of his music, because I liked it (obviously), and I found that he had released an EP this year entitled Synesthesiac. Synaesthesia, by the way, is a "condition" for want of a better word (apparently Pharrell says that he has it) which allows one stimulus to trigger an unrelated sensory response. For example, a common one had by many including myself is that listening to music can make one see colours, each represented by a different sound. So I'm going to use this particular thing to review this EP and let you all in on what colours are popping until the music stops. To begin we have Synesthesia Pt. 1. We start off with some light blue tones as soft piano and guitar swim blissfully through the ears, then out of nowhere horns of orange blaze like fires in the night through a forest. The amalgamation of sounds, varying from vinyl scratches to faint background violin, and the digital sounds at play make for a work of art. During the second outro the blue hues come back in along with the greens as they pulse and rise in preparation for a drop that never comes. Instead it bleeds fluidly into the next song, The Love You're Given, and Garratt's voice is heard above the repeated high pitched sample. At the moment the sounds match the cover art. The bass covers our canvas in thick greens that mix with the light blue. Digital highs bring pink streaks over the paint. Garratt's falsetto is well practiced and confident in its usage as it repeats throughout the second drop. I'm surprised that he fit this amount of sounds into a song that I thought was supposed to be a slow jam kind of song. I appreciate the live drums at the end. The mixing of this project makes for an arresting listen as each song blends so well into the next. I'm glad more people are appreciating that mixing is just as important as the songs themselves. Chemical starts off with deep purples, as the deep vocals provide a depth to the production, then we get into UK Garage territory and it gets heavy with the oranges, and the purples mix with them to create a colour that I can't think of a name for right now. Purange. That will do. The song jitters and stutters for a while, my head bobs and the paints jump like tiny polystyrene balls on top of a speaker. Deep blues for Lonesome Valley. The bass usage increases as the song goes on, then stagnates, then hits harder than ever before. It's quite maddening really. Trying to figure out what will happen next, that is. A random saxophone may pop up and work so well you don't want to see it leave. Then other things happen and you forget about everything else. Let's just bring in all of the colours, shall we, because I can't keep track. I like that most of the time I only come across good music, and while I want to be fair and review some things badly, that's just not fair when something is too good not to be given its dues. Anyway, Jack Garratt has a new fan. His name is Aiden, and you should be one too. Listen to the EP below! 9.0/10 Last year, FKA twigs (FKA stands for Formerly Known As) released her first full length LP, LP1, and gained critical success along with more of a following, which was boosted further by her relationship with the guy from Twilight who ironically hates Twilight (EDIT: Robert Pattinson is his name).
It was actually T-Pain who sparked my interest in her, as he tweeted about one of her EPs, and because he's a musical genius I thought that he'd have good taste in music, and I was right. Well, he was right. Now, her latest project, entitled M3LL155X, comes in the form of her third EP, though I wish that it was called EP3, akin to both EP1/2, but oh well. I'm giving myself 19 minutes to write this, which is also the length of the EP. Here goes! First we begin with Figure 8, which was premiered on Zane Lowe's segment of Beats 1 Radio. As usual, twigs' voice is airy and fierce at the same time, and she utilises her high pitched voice as a contrast to the dark, spacey sounds. There is so much going on, and the way that everything works together is impressive; I imagine this is what being on drugs would be like for 3 minutes and 3 seconds, with how strange and unpredictable the song is. I'm not going to lie, it takes a lot for me to understand FKA twigs sometimes. The production sometimes overpowers her voice, or the effects on her voice, while making it an arresting listen, make it more effort than it should be to listen to the lyrics. If she took Cookie Lyon's advice and put the vocals on top of the track then I think that it would be less of a problem. I'm Your Doll sounds like it's supposed to be sexy but it's not. It's a harsh sounding song, which makes sense considering the lyrics "Rough me up/I'm your doll", which is mostly what I could catch during my listen. Just in case you were wondering, when listening to projects I prefer to hear the lyrics first hand as opposed to reading them, but it can't work all of the time I suppose. In Time is fierce. It's actually quite a traditional love song, but from the first bass drop I was thinking, "Yeah, this song yeah. Yeah. Wheel up the tune." The drums which cascade from left to right are maddeningly infectious. This is actually a song to be played in the rave. Then the beat changes and it goes to a whole other level. You know what, this is my favourite song. I can see myself singing this often. You know those songs that deserve to be long? This is one of those songs. Ironically, Glass & Patron feels fragile at the beginning, then the beat kicks in, and after the initial drop where all the sounds clash it comes in sounding like a classic noughties UK party song reminiscent of Babycakes. Twigs' resident producer Boots is a mad genius. How can one person throw so many flavours into one song? He's like a UK Timbaland/Kanye hybrid, and the way that both of these people bounce off of each other is strong. The last song, Mothercreep, reminds us that FKA will be with us soon. Hopefully. I'm ready for a new album now, and this EP only just released. When I next go to a party I want to hear this song after the first drop, fading in from Drake's Hotline Bling. Any DJs who read this, make it happen. It ended too soon. All in all I think that I actually like this more than I did the album, which is always a plus, because it means that as an artist she's getting better. With the climate of music at the moment as well, FKA twigs manages to be different enough from the crowd to still be a compelling artist to listen to. My body is ready for what's next. Listen to the EP below! 8.7/10 Produced entirely by Adrian Younge, who has produced Twelve Reasons to Die / II with Ghostface Killah, the [amazing] score for the movie Black Dynamite and more, comes Bilal's fifth album, In Another Life.
At 39 minutes this is Bilal's shortest project so far. It begins with a reggae-like bass line and deep vocals, mirroring the lyrical content on Sirens II [likely the second iteration of the same song]. The song's content seems like a precursor of what to expect from the rest of the album, especially considering the societal climate we are in today, where sirens are picking off black people left, right and centre. But Bilal is still Bilal, and where there is pain, there is also love. If you have heard Younge's previous work then you'd know what to expect. For the uninitiated, what is on offer is a very rugged sound. Live instrumentation, reminiscent of the Blaxploitation 70s/80s film period. The sounds however don't overpower Bilal's voice; if anything they complement it more than his previous sounds did. He's always been a raw acoustic artist in terms of musical artistry, and like D'Angelo with Black Messiah, Adrian Younge's production turns it up some more notches. Open Up The Door takes the album into upbeat territory, as the song moves at a fast pace that makes you want to step. I Really Don't Care sounds as if you were listening to it in a café, sipping on a cold brew in summertime with a pastry in front of you, letting the breeze flow through your locks, or your scalp, whichever is your flavour. Just relaxed. Relaxed, in love and on point. Pleasure Toy is perhaps my favourite song, and as Big K.R.I.T. spits, "It's hard to be subtle when you want what you want". The song isn't crass, but is another groove that is both an ode to the body and the power of music. The production is airy and light, and the backing instruments, I believe, do more for the song's atmosphere than what's upfront, the piano in particular, while the synths add more to the groove. Bilal is known for using his voice as an integral part of the track as opposed to just laying down verses, and the harmonies lay thick on the entire album; his voice complements the sound of each track so well you'd think there were different performers for each song. He goes from smooth soul man on Satellites to screaming on the chorus of Lunatic, intertwined with wispy voiced verses. After working with Kendrick on both To Pimp A Butterfly and the excellent Colbert Report performance of Untitled, it made sense for them to work together again on the song Money Over Love. This song is full of energy, and Kendrick spits rapid fire lyrics complete with a choir to back up his words. "The best things in life ain't free." I'd never heard of Kimbra prior to her feature on Holding It Back, but her performance was a strong one, complementing the feel of the song perfectly, as well as Bilal's voice; it provided a softer, almost vulnerable contrast. Spiralling's lyrics really feel like he is losing control of himself, but the way he sings them is filled with a contentedness in his weaknesses. Strong. As Bilal continues to explore his sound and grow as an artist, he always makes for an interesting listening experience. With this project he stripped down the elements of music (with the help of Younge) and made them his own. It's timeless music, and deserves at least one listen, if only to appreciate the sounds at hand. Listen to the album below! 8.4/10 Having recently been featured on Flying Lotus' You're Dead! and Kendrick Lamar's To Pimp A Butterfly, it is the perfect time for Thundercat to release his first project since 2013's Apocalypse.
The Beyond / Where the Giants Roam is his third release on the Brainfeeder label, shared with the likes of Kamasi Washington, who also released his debut album The Epic earlier this year. Before listening to this I didn't know that Thundercat actually sang, which left me in a state of ignorance, but he has a very good voice. It's high in tone and works for the atmosphere of each song. The first track, Hard Times, is a sparse one, filled (or left) with airy guitar strings throughout. As the mini album plays on, it flows effortlessly. Midway through Song for the Dead is a musical tempest which had me in a trance; it was a literal musical storm which broke into the second half of the song's guitar instrumental, backed up by more natural, wind-like sounds. Though the sound of the project is consistent, each song stands out on its own, with Them Changes introducing itself with a funky bass line and a rolling guitar chord. This differentiation between sounds makes for an interesting listen, but you have to listen. I found myself having to listen multiple times just to grasp the themes and different sounds of the project. The fact that it's so short (clocking in at just 17 minutes), and so mellow, means that it can just pass you by with each listen. Them Changes approaches the theme of love with lyricism that is closer to poetry than traditional R&B. Metaphors abound effortlessly and they aren't the cheap variety either. The very first line is something that I could hear Jack Sparrow saying, and whether that's a good thing or a bad thing, you can't refute its strength: "Nobody move/There's blood on the floor/And I can't find my heart/Where did it go/Did I leave it in the cold". Now, Lone Wolf and Cub tackles the theme of loneliness in perhaps the most funky and drawn out method possible. Even though it's drawn out and lacks a lot of substance lyrically, the musicianship here backs up those things that are lacking. [Editor's Note: Lone Wolf and Cub is a manga created by the writer Kazuo Koike and artist Goseki Kojima, which is about Ogami Ittō, who was disgraced and forced to become an assassin. He then decides to take revenge on the clan who planned his disgrace, and brings along his three year old son, and together they become… Omnimon! (Lone Wolf and Cub). So maybe Thundercat plans to take revenge on somebody; I guess we'll never know unless it hits the news.] The last two songs, That Moment and Where the Giants Roam / Field of the Nephilim, are a smooth exit to the 6-set of songs. The former has a fitting title due to its length, and is as sparse if not more so than the introduction, while the latter, another short song, has shimmering production; the lyrics are almost nonsensical, but I don't mind because they sound beautiful: "Where the dragons from?/Not in your mind/Somewhere between space and end/Watching, waiting for their time." Maybe Thundercat is also part of the LoveDragon producer group and is awaiting his time to shine? We'll find out in the next episode, hopefully. This small project is most likely a teaser of what is to come, and while it is a snippet, it's a good insight. A foray into the strange and wonderful mind of Thundercat, and I'd like another trip. Listen to the project below!
---
abstract: 'We develop a generalized loss network framework for capacity planning of a perinatal network in the UK. Decomposing the network by hospitals, each unit is analyzed with a GI/G/c/0 overflow loss network model. A two-moment approximation is performed to obtain the steady state solution of the GI/G/c/0 loss systems, and expressions for the rejection probability and overflow probability are derived. Using the model framework, the number of required cots can be estimated based on the rejection probability at each level of care of the neonatal units in a network. The generalization ensures that the model can be applied to any perinatal network with renewal arrival and discharge processes.'
---

**A Generalized Loss Network Model with Overflow for Capacity Planning of a Perinatal Network**

[Md Asaduzzaman]{}\
Institute of Statistical Research and Training (ISRT), University of Dhaka\
Dhaka 1000, Bangladesh, E-mail: asad@isrt.ac.bd

[Thierry J Chaussalet]{}\
Department of Business Information Systems, School of Electronics and Computer Science\
University of Westminster, 115 New Cavendish Street, London W1W 6UW, UK\
E-mail: chausst@wmin.ac.uk

Introduction {#section1}
============

In most of the developed world, neonatal care has been organized into networks of cooperating hospitals (units) in order to provide better and more efficient care for the local population. A neonatal or perinatal network in the UK offers the full range of neonatal care, referred to as intensive, high dependency and special care, through level $1$ to level $3$ units. Recent studies show that perinatal networks in the UK have been struggling with a severe capacity crisis [@Bliss07; @NAO]. Expanding capacity by increasing the number of cots in a unit is, in general, not an option since neonatal care is an unusually expensive therapy. Reducing capacity is not an option either, as this would risk sick neonates being denied admission to the unit or released prematurely. Consequently, determining cot capacity has become a major concern for perinatal network managers in the UK. Queueing models having zero buffer, also referred to as 'loss models' $(./././0)$, have been widely applied to hospital systems and intensive care in particular [e.g., @Dijk09; @Litvak08; @Asadaor10; @Asadadc11; @AsadrssA11]. [@Dijk09] proposed an M/M/c/0 loss model for capacity management in an Operating Theatre-Intensive Care Unit. [@Litvak08] developed an overflow model with a loss framework for capacity planning in intensive care units, while [@Asadaor10; @Asadadc11] developed a loss network model for a neonatal unit and extended the model framework to a perinatal network in [@AsadrssA11]. These models assume that inter-arrival times and length of stay follow exponential distributions. Queueing models with exponential inter-arrival and service times are the easiest to study, since such processes are Markov chains. However, the length of stay distribution in intensive care may be highly skewed [@Griffiths06]. Performance measures of a loss system are insensitive to the service time distribution provided that the arrival process is Poisson [@Kelly79]; this insensitivity property is, in general, no longer valid when the arrival process is not Poisson, as in GI/G/c/0 loss systems [@Klimenok05]. Many approaches towards generalizing such processes have been proposed since Erlang introduced the M/M/c/0 model for a simple telephone network and, in 1917, derived the well-known loss formula that carries his name [@Kelly91; @Whitt04].
[@Takacs56; @Takacs62] considered the loss system with a general arrival pattern (GI/M/c/0) through Laplace transforms. Nowadays there is growing interest in loss systems where both the arrival and service patterns are general (GI/G/c/0). The theoretical investigation of the GI/G/c/0 loss model through the theory of random point processes has attracted many researchers. [@Brandt80] gave a method for approximating the GI/GI/c/0 queue by means of the GI/GI/$\infty$ queue, while [@Whitt84] applied a similar approximation under heavy traffic. [@Franken82] examined the continuity property of the model, and established an equivalence between arrival and departure probabilities. [@Miyazawa93] gave an approximation method for the batch-arrival GI$^{[x]}$/G/c/N queue which is applicable when the traffic intensity is less than one. The M/G/c/N and the GI/G/c/N queue have also been studied widely; for a comparison of methods, see [@Kimura00]. Although many studies can be found in the literature, no simple expression for the steady state distribution is available for a GI/G/c/0 system. [@Hsin96] provided the exact solution for the GI/GI/c/0 system by expressing the inter-arrival and service times through matrix exponential distributions; the method is computationally intensive and often includes imaginary components in the expression (which are unrealistic). Diffusion approximations, which require complicated Laplace transforms, have also been used for analyzing GI/G/c/N queues [e.g., @Kimura03; @Whitt04]. [@Kim03] derived a transform-free expression for the analysis of the GI/G/1/N queue through the decomposed Little's formula, and proposed a two-moment approximation to estimate the steady state queue length distribution. Using the same approximation, [@Choi05] extended the approach to the multi-server finite buffer queue based on the system equations derived by [@Franken82]. [@Atkinson09] developed a heuristic approach for the numerical analysis of GI/G/c/0 queueing systems with examples using the two-phase Coxian distribution. In this paper we derive a generalized loss network model with overflow for a network of neonatal hospitals, extending the results obtained by [@Franken82]. Since some model parameters cannot be computed practically, a two-moment approximation method is applied for the steady state analysis, as proposed by [@Kim03]. The model is then applied to the north central London perinatal network, one of the busiest networks in the UK. Data obtained from each hospital (neonatal unit) of the network have been used to check the performance of the model. The rest of the paper is organized as follows: in the next section we first discuss a typical perinatal network and then develop a generalized loss model with overflow for the network. The steady state distribution and expressions for the rejection and overflow probabilities are derived for each level of care of the neonatal units. Application of the model and numerical results are presented in Section \[section4\].

Structure of a perinatal network {#section2}
================================

A perinatal network in the UK is organized through level $1$, level $2$ and level $3$ units. Figure \[fig1\] shows a typical perinatal network in the UK. Level $1$ units consist of a special care baby unit (SCBU), which provides only special care, the least intensive and most common type of care. In these units, neonates may be fed through a tube, supplied with extra oxygen or treated with ultraviolet light for jaundice.
Figure \[fig2\] shows the typical patient flow in a level $1$ unit. A level $1$ unit may also have an intensive therapy unit (ITU) which provides short-term intensive care to neonates, and the unit may then be referred to as a 'level $1$ unit with ITU'. Figure \[fig3\] shows the structure of a level $1$ unit with ITU. Level $2$ units consist of an SCBU and an HDU where neonates can receive high dependency care such as breathing via continuous positive airway pressure or intravenous feeding. These units may also provide short-term intensive care. A level $3$ unit provides all ranges of neonatal care and consists of an SCBU, an HDU and an NICU, where neonates will often be on a ventilator and need constant care to be kept alive. Level $2$ and level $3$ units may also have some transitional care (TC) cots, which may be used to tackle overflow and rejection from the SCBU. Although level $2$ and level $3$ units have similar structures, level $2$ units might not have sufficient clinician support for the NICU. The NICU and HDU are often merged in level $2$ and level $3$ units for higher utilization of cots. In level $2$ or level $3$ units, NICU-HDU neonates are sometimes initially cared for at the SCBU when all NICU cots are occupied. Similarly, SCBU neonates are cared for at the NICU-HDU or TC, depending upon the availability of cots, staff and circumstances. This temporary care is provided by staffing a cot with appropriate nurse and equipment resources, and will be referred to as 'overflow'. Rejection occurs only when all cots are occupied; in such cases neonates are transferred to another neonatal unit. Patient flows in a typical level $3$ or level $2$ unit are depicted in Figure \[fig4\]. Unlike in level $3$/level $2$ units, overflow does not occur in level $1$ units with ITU. The underlying admission, discharge and transfer policies of a perinatal network are described below.

1. All mothers expecting birth at $<27$ weeks of gestational age, or all neonates with $<27$ weeks of gestational age, are transferred to a level $3$ unit.

2. All mothers expecting birth at $\ge 27$ but $<34$ weeks of gestational age, or all neonates of the same gestational age, are transferred to a level $2$ unit depending upon the booked place of delivery.

3. All neonatal units accept neonates for special care booked at the same unit.

4. Neonates admitted into units other than their booked place of delivery are transferred back to their respective neonatal unit after receiving the required level of care.

Now we shall develop a generalized loss network framework for a perinatal network with level $1$, level $2$ and level $3$ units. To obtain the steady state behavior of the network, we first decompose the whole network into a set of subnetworks (i.e., the individual neonatal units) due to the high dimensionality of the full network; we then derive the steady state solution and the expression for the rejection probability for each of the units. When analyzing a particular subnetwork in isolation, back transfers are combined with new arrivals to specifically take into account the dependencies between units. Cot capacity for the neonatal units may then be determined based on the rejection probabilities at each level of care and the overflow to temporary care of the units.

Mathematical model formulation {#section3}
==============================

Model for a level 1 unit
------------------------

A level $1$ unit typically consists of an SCBU. Therefore, assuming no waiting space and a first come first served (FCFS) discipline, a level $1$ unit can be modelled as a GI/G/c/0 system.
Let the inter-arrival times and lengths of stay of neonates be i.i.d. random variables denoted by $A$ and $L$, respectively, with the length of stay independent of the arrival process. Define $$\begin{aligned} m_{A}& =\mathbb{E}(A)=\frac{1}{\lambda}, & m_{L}& =\mathbb{E}(L)=\frac{1}{\mu}.\end{aligned}$$ Let $N$ denote the number of neonates in the system at an arbitrary time, $N^{a}$ the number of neonates found in the system by an arriving neonate when the system is in steady state, and $N^{d}$ the number of neonates left behind in the system by a discharged neonate in steady state. Let $c$ be the number of cots at the SCBU. For $0\le n \le c$, let $$\pi(n)=\mathbb{P}(N=n),$$ $$\begin{aligned} \pi^a(n)& =\mathbb{P}\big(N^a=n\big), & \pi^d(n)& =\mathbb{P}\big(N^d=n\big),\end{aligned}$$ and $$\begin{aligned} m_{A,n}^d & =\mathbb{E}\big(A_n^d\big), & m_{L,n}^a & =\mathbb{E}\big(L_n^a\big), & m_{L,n}^d & =\mathbb{E}\big(L_n^d\big),\end{aligned}$$ where $A_n^d$ is the remaining inter-arrival time at the discharge instant of a neonate who leaves behind $n$ neonates in the system, and $L_n^a$ ($L_n^d$) is the remaining length of stay of a randomly chosen occupied cot at the arrival (discharge) instant of a neonate who finds (leaves behind) $n$ neonates in the system. Let $m_{A,n}^a$ and $m_{L,n}^{*a}$ be, respectively, the mean inter-arrival time and the mean length of stay under the condition that the system is observed at the arrival instant of a neonate who finds $n$ neonates in the system. Clearly, $$\begin{aligned} m_{A,n}^a &= m_A, & m_{L,n}^{*a} &= m_L\,. \end{aligned}$$ From the definitions, we obtain $$\begin{aligned} m_{A,c}^d&=m_A, & m_{L,c}^a &= m_{L,c}^d\,.\end{aligned}$$ We set $$\begin{aligned} m_{A,-1}^a &= 0, & m_{A,-1}^d &= 0, & m_{L,0}^a &= 0, & m_{L,0}^d &= 0,\end{aligned}$$ for convenience. Then the first set of system equations obtained by [@Franken82] for a GI/G/c/0 loss system can be written as $$\pi(n)-\lambda m_{A,n-1}^a \pi^a(n-1) = - \lambda m_{A,n-1}^d \pi^d(n-1) + \lambda m_{A,n}^d \pi^d(n),\;\; 0\leq n\leq c.$$ The second set of system equations is given by $$\begin{gathered} n\pi(n) + (n-1)\lambda m_{L,n-1}^d\pi^d(n-1) - n\lambda m_{L,n}^d\pi^d(n) \\= \lambda m_{L,n-1}^{*a} \pi^a(n-1)+ (n-1)\lambda m_{L,n-1}^a\pi^a(n-1) - n\lambda m_{L,n}^a \pi^a(n),\;\; 1\leq n \leq c-1,\end{gathered}$$ and $$c\pi(c) + (c-1)\lambda m_{L,c-1}^d \pi^d(c-1) = \lambda m_{L,c-1}^{*a} \pi^a(c-1)+(c-1)\lambda m_{L,c-1}^a\pi^a(c-1).$$ From the first set of system equations for the GI/G/c/0 queue, the following equations can be derived: $$\pi(0) = \lambda m_{A,0}^d \pi^d(0),\label{eq7.1}$$ and $$\pi(n) = \lambda m_{A,n}^d \pi^d(n)+\lambda m_A\pi^a(n-1)-\lambda m_{A,n-1}^d\pi^d(n-1),\;\; 1\leq n\leq c. \label{eq7.2}$$ From the second set of system equations, the following equations can be derived: $$\begin{gathered} \pi(n)= \frac{1}{n}\Big[\lambda\pi^a(n-1)\big(m_L+(n-1)m_{L,n-1}^a\big)+\lambda n m_{L,n}^d\pi^d(n) \\-(n-1)\lambda m_{L,n-1}^d\pi^d(n-1)-\lambda n m_{L,n}^a\pi^a(n)\Big],\;\; 1\leq n\leq c-1, \label{eq7.3}\end{gathered}$$ and $$\pi(c)= \frac{1}{c}\Big[\lambda\pi^a(c-1)\big(m_L+(c-1)m_{L,c-1}^a\big)-(c-1)\lambda m_{L,c-1}^d\pi^d(c-1)\Big].
\label{eq7.4}$$

\[th01\] The steady state distribution for a GI/G/c/0 system is given by $$\pi^a(n)=\pi^d(n)=K^{-1}\prod_{i=0}^{n-1}\frac{\lambda_{i}}{\mu_{i+1}}, \;\; 0\leq n\leq c, \label{eq7.5}$$ and $$\pi(n)=\pi^a(n)\varphi_n=\pi^d(n)\varphi_n,\;\; 0\leq n\leq c,$$ where $$K=\sum_{n=0}^c\prod_{i=0}^{n-1}\frac{\lambda_{i}}{\mu_{i+1}},$$ and $$\label{eq7.6} \left. \begin{array}{ll} \displaystyle\frac{1}{\mu_i} =& m_L-i\big(m_A-m_{A,i-1}^d\big)+(i-1)\big(m_{L,i-1}^a-m_{L,i-1}^d\big),\;\; 1\leq i\leq c,\vspace{.3cm}\\ \displaystyle\frac{1}{\lambda_i} =& \left\{ \begin{array}{l} (i+1)\big(m_{A,i+1}^d+m_{L,i+1}^a-m_{L,i+1}^d\big),\;\; 0\leq i\leq c-2,\smallskip\\ cm_A,\;\; i=c-1, \end{array}\right.\vspace{.3cm}\\ \varphi_i =& \left\{ \begin{array}{l} \lambda m_{A,0}^d,\;\; i=0,\smallskip\\ \lambda\Big[m_{A,i}^d+\big(m_A-m_{A,i-1}^d\big)\mu_i/\lambda_{i-1}\Big],\;\; 1\leq i\leq c. \end{array} \right. \end{array} \right\}$$ The steady state distribution can be obtained by solving the above two sets of system equations. First, we equate equations (\[eq7.1\]) and (\[eq7.2\]) with equations (\[eq7.3\]) and (\[eq7.4\]) for each $n$, $0\le n\le c$. Then, using the well-known rate conservation principle $$\pi^a(n)=\pi^d(n),$$ we solve them simultaneously and obtain equation (\[eq7.5\]).

In the steady state analysis of a GI/G/c/0 system, the equations in (\[eq7.6\]) involve the quantities $m_{A,n}^d$, $m_{L,n}^a$ and $m_{L,n}^d$, which are not easy to compute in general, except in some special cases such as Poisson arrivals or exponential length of stay. Therefore, a two-moment approximation, as proposed by [@Kim03] and [@Choi05], is used for the steady state distribution of the GI/G/c/0 system, based on the exact results derived in equations (\[eq7.5\]) and (\[eq7.6\]). To obtain the approximation, we replace the inter-arrival and length of stay average quantities $m_{A,n}^d$, $m_{L,n}^a$ and $m_{L,n}^d$ by their corresponding time-average quantities: $$m_{A,n}^d\approx q_A=\frac{\mathbb{E}\big(A^2\big)}{2\mathbb{E}(A)}=\frac{\big(1+c_A^2\big)m_A}{2}, \;\; 0\leq n\leq c-1, \label{eq7.7}$$ $$m_{L,n}^a=m_{L,n}^d\approx q_L=\frac{\mathbb{E}\big(L^2\big)}{2\mathbb{E}(L)}=\frac{\big(1+c_L^2\big)m_L}{2},\;\; 0\leq n\leq c-1, \label{eq7.8}$$ where $c_A^2$ $\big(c_L^2\big)$ is the squared coefficient of variation of the inter-arrival times (length of stay). Using equations (\[eq7.7\]) and (\[eq7.8\]) in equation (\[eq7.5\]), we obtain the two-moment approximation for the steady state distribution $$\tilde{\pi}^a(n)=\tilde{\pi}^d(n)=\tilde{K}^{-1}\prod_{i=0}^{n-1}\frac{\tilde{\lambda}_i}{\tilde{\mu}_{i+1}}, \;\; 0\leq n\leq c, \label{eq7.9}$$ and $$\tilde{\pi}(n)=\tilde{\pi}^a(n)\tilde{\varphi}_n=\tilde{\pi}^d(n)\tilde{\varphi}_n,\;\; 0\leq n\leq c,$$ where $$\tilde{K}=\sum_{n=0}^{c}\prod_{i=0}^{n-1}\frac{\tilde{\lambda}_i}{\tilde{\mu}_{i+1}},$$ and $$\label{eq7.10} \left. \begin{array}{ll} \displaystyle\frac{1}{\tilde{\mu}_i} &= m_L-i\big(m_A-q_A\big),\;\; 1\leq i\leq c,\vspace{.3cm}\smallskip\\ \displaystyle\frac{1}{\tilde{\lambda}_i} &= \left\{ \begin{array}{l} (i+1)q_A,\;\; 0\leq i\leq c-2,\smallskip\\ cm_A,\;\; i=c-1, \end{array} \right. \vspace{.3cm}\\ \tilde{\varphi}_i &= \left\{ \begin{array}{l} \lambda q_A,\;\; i=0,\smallskip\\ \lambda\Big[q_A+\big(m_A-q_A\big)\tilde{\mu}_i/\tilde{\lambda}_{i-1}\Big],\;\; 1\leq i\leq c-1,\smallskip\\ \lambda\Big[m_A+\big(m_A-q_A\big)\tilde{\mu}_i/\tilde{\lambda}_{i-1}\Big],\;\; i=c. \end{array} \right.
\end{array} \right\}$$ Therefore, the rejection probability for a level $1$ unit is computed as $$R = \tilde{\pi}(c)\Big{/}\sum_{n=0}^{c}\tilde{\pi}(n).$$

Model for a level 1 neonatal unit with ITU
------------------------------------------

In a level $1$ unit with ITU (Figure \[fig3\]), overflow from the ITU to the SCBU does not occur. The unit can therefore be modelled as two joint GI/G/c/0 systems. Extending Theorem \[th01\], the steady state distribution for a level $1$ neonatal unit with ITU is given by $$\pi^a(\mathbf{n})=\pi^d(\mathbf{n})=K^{-1}\prod_{i=0}^{(n_{1}-1)}\prod_{j=0}^{(n_{2}-1)}\frac{\lambda_{1i}}{\mu_{1(i+1)}}\cdot\frac{\lambda_{2j}}{\mu_{2(j+1)}},$$ and $$\pi(\mathbf{n})=\pi^a(\mathbf{n})\varphi_\mathbf{n},$$ where $$K=\sum_{n_{1}, n_{2}}\prod_{i=0}^{(n_{1}-1)} \prod_{j=0}^{(n_{2}-1)}\frac{\lambda_{1i}}{\mu_{1(i+1)}}\cdot\frac{\lambda_{2j}}{\mu_{2(j+1)}}.$$ The approximate steady state distribution for a level $1$ neonatal unit with ITU is given by $$\tilde{\pi}^a(\mathbf{n})=\tilde{\pi}^d(\mathbf{n})=\tilde{K}^{-1}\prod_{i=0}^{(n_{1}-1)}\prod_{j=0}^{(n_{2}-1)}\frac{\tilde{\lambda}_{1i}}{\tilde{\mu}_{1(i+1)}}\cdot\frac{\tilde{\lambda}_{2j}}{\tilde{\mu}_{2(j+1)}},$$ and $$\tilde{\pi}(\mathbf{n})=\tilde{\pi}^a(\mathbf{n})\tilde{\varphi}_\mathbf{n},$$ where $$\tilde{K}=\sum_{n_{1}, n_{2}}\prod_{i=0}^{(n_{1}-1)} \prod_{j=0}^{(n_{2}-1)}\frac{\tilde{\lambda}_{1i}}{\tilde{\mu}_{1(i+1)}}\cdot\frac{\tilde{\lambda}_{2j}}{\tilde{\mu}_{2(j+1)}},$$ and $\tilde{\lambda}_{1i}$, $\tilde{\mu}_{1i}$, $\tilde{\lambda}_{2i}$, $\tilde{\mu}_{2i}$ and $\tilde{\varphi}_i$ are defined by the equations in (\[eq7.10\]) for the ITU and the SCBU, respectively. The rejection probability at the $i$th level of care is calculated as $$R_{i} = \sum_{\mathbf{n}\in T_{i}}\tilde{\pi}(\mathbf{n})\Big{/}\sum_{\mathbf{n}\in S}\tilde{\pi}(\mathbf{n}),\;\; i=1, 2,$$ where $T_{1}=\big\{{\mathbf n}\in S\mid n_{1}= c_{1}\big\}$ and $T_{2}=\big\{{\mathbf n}\in S\mid n_{2}= c_{2}\big\}$.

Model for a level 3/level 2 neonatal unit
-----------------------------------------

We now derive the mathematical model for a level $3$/level $2$ neonatal unit as described in Section \[section2\] and shown in Figure \[fig4\]. Let $c_{1}$, $c_{2}$ and $c_{3}$ be the numbers of cots at the NICU-HDU, SCBU and TC, respectively. Let $X_{i}(t)$ be the number of neonates at unit $i$, and $X_{ij}(t)$ the number of neonates overflowing from unit $i$ to unit $j$, $i, j \in \{1, 2, 3\}$, at time $t$. Then the vector process $$\mathbf{X}=\big(X_{1}(t), X_{12}(t), X_{2}(t), X_{21}(t), X_{23}(t), t\ge 0 \big)$$ is a continuous-time discrete-valued stochastic process. We assume the process is time homogeneous, aperiodic and irreducible on its finite state space; it need not, however, possess the Markov property. The state space is given by $$S=\big\{{\mathbf n}=(n_{1}, o_{12}, n_{2}, o_{21}, o_{23}) : n_{1}+o_{21}\le c_{1}, o_{12}+n_{2}\le c_{2}, o_{23}\le c_{3}\big\},$$ where $n_{i}, i=1, 2$, is the number of neonates at the $i$th main unit, and $o_{ij}, i, j\in \{1, 2, 3\}$, is the number of neonates overflowing from the $i$th unit to the $j$th unit. The system can then be modelled as two joint loss queueing processes with overflow. Assume that the joint GI/G/c/0 systems are in steady state. We shall now derive the expression for the steady state distribution for a level $3$/level $2$ neonatal unit.
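Since $S$ is finite, it can be enumerated directly. The following sketch is ours (the function name is not from the paper); the Whittington cot numbers are used purely for illustration.

```python
from itertools import product

def state_space(c1, c2, c3):
    """States n = (n1, o12, n2, o21, o23) with n1 + o21 <= c1 (NICU-HDU),
    o12 + n2 <= c2 (SCBU) and o23 <= c3 (TC)."""
    return [s for s in product(range(c1 + 1), range(c2 + 1), range(c2 + 1),
                               range(c1 + 1), range(c3 + 1))
            if s[0] + s[3] <= c1 and s[1] + s[2] <= c2]

# 12 NICU-HDU, 16 SCBU and 5 TC cots -> 83,538 states
print(len(state_space(12, 16, 5)))
```

The size of this state space is what motivates the decomposition of the network into subnetworks rather than analyzing the whole network at once.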
Extending Theorem \[th01\] to two joint GI/G/c/0 systems, the steady state distribution for a level $3$ or level $2$ neonatal unit with overflows can be derived. \[th1\] The steady state distribution for a level $3$ or level $2$ unit is given by $$\pi^a(\mathbf{n})=\pi^d(\mathbf{n})=K^{-1}\prod_{i=0}^{(n_{1}+o_{21}-1)}\prod_{j=0}^{(n_{2}+o_{12}+o_{23}-1)}\frac{\lambda_{1i}}{\mu_{1(i+1)}}\cdot\frac{\lambda_{2j}}{\mu_{2(j+1)}},$$ and $$\pi(\mathbf{n})=\pi^a(\mathbf{n})\varphi_\mathbf{n},$$ where $\lambda_{1i}$, $\lambda_{2j}$, $\mu_{1i}$, $\mu_{2j}$ and $\varphi_i$ are the arrival and departure related quantities for the NICU-HDU and SCBU-TC, respectively, defined by the equations in (\[eq7.6\]), and $$K=\sum_{\mathbf{n}\in S}\prod_{i=0}^{(n_{1}+o_{21}-1)} \prod_{j=0}^{(n_{2}+o_{12}+o_{23}-1)}\frac{\lambda_{1i}}{\mu_{1(i+1)}}\cdot\frac{\lambda_{2j}}{\mu_{2(j+1)}}$$ is the normalizing constant. The approximate steady state distribution for a level $3$/level $2$ neonatal unit is given by $$\tilde{\pi}^a(\mathbf{n})=\tilde{\pi}^d(\mathbf{n})=\tilde{K}^{-1}\prod_{i=0}^{(n_{1}+o_{21}-1)}\prod_{j=0}^{(n_{2}+o_{12}+o_{23}-1)}\frac{\tilde{\lambda}_{1i}}{\tilde{\mu}_{1(i+1)}}\cdot\frac{\tilde{\lambda}_{2j}}{\tilde{\mu}_{2(j+1)}},$$ and $$\tilde{\pi}(\mathbf{n})=\tilde{\pi}^a(\mathbf{n})\tilde{\varphi}_\mathbf{n},$$ where $\tilde{\lambda}_{1i}$, $\tilde{\mu}_{1i}$, $\tilde{\lambda}_{2i}$, $\tilde{\mu}_{2i}$ and $\tilde{\varphi}_i$ are defined by the equations in (\[eq7.10\]) for the NICU-HDU and SCBU-TC, respectively, and $$\tilde{K} = \sum_{\mathbf{n}\in S}\prod_{i=0}^{(n_{1}+o_{21}-1)}\prod_{j=0}^{(n_{2}+o_{12}+o_{23}-1)}\frac{\tilde{\lambda}_{1i}}{\tilde{\mu}_{1(i+1)}}\cdot\frac{\tilde{\lambda}_{2j}}{\tilde{\mu}_{2(j+1)}}.$$ The rejection probability at the $i$th level of care for a level $3$/level $2$ neonatal unit is computed as $$R_{i} = \sum_{\mathbf{n}\in T_{i}}\tilde{\pi}(\mathbf{n})\Big{/}\sum_{\mathbf{n}\in S}\tilde{\pi}(\mathbf{n}), \label{eq7.12}$$ where $$T_{1}=\big\{\mathbf{n}\in S\mid(n_{1}+o_{21}=c_{1}\;\;\text{and}\;\; o_{12}+n_{2}=c_{2})\big\},$$ and $$T_{2}=\big\{\mathbf{n}\in S\mid(o_{12}+n_{2}=c_{2},\;n_{1}+o_{21}=c_{1}\;\;\text{and}\;\; o_{23}=c_{3})\big\}.$$ The overflow probability $O_{i}$, $i=1, 2$, at the $i$th level of care for a level $3$/level $2$ unit can also be computed from equation (\[eq7.12\]) by substituting $T_{i}$ with $\{T_{i}^{*}\setminus T_{i}\}$, $i=1,2$, where $$T_{1}^{*}=\big\{\mathbf{n}\in S\mid(n_{1}=c_{1}\;\;\text{and}\;\; o_{12}+n_{2}<c_{2})\big\},$$ and $$T_{2}^{*}=\big\{\mathbf{n}\in S\mid(n_{2}+o_{12}=c_{2}\;\;\text{and}\;\;n_{1}+o_{21}<c_{1})\;\;\text{or}\;\;(o_{12}+n_{2}=c_{2}, n_{1}+o_{21}=c_{1}\;\;\text{and}\;\; o_{23} < c_{3})\big\}.$$ \[th2\] The approximate steady state distribution for a level $3$ or level $2$ neonatal unit is exact for exponential inter-arrival time and length of stay distributions at each level of care.
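To make the computation concrete, here is a minimal Python sketch of the two-moment quantities in (\[eq7.10\]) as printed, together with the summation over the blocking sets in equation (\[eq7.12\]). All function names are ours; reading the two-stream weight $\tilde{\varphi}_\mathbf{n}$ as the product of the two per-stream weights is our interpretation of the text, so this is a sketch under that assumption, not a definitive implementation.

```python
from itertools import product

def stream_factors(mA, cA2, mL, c):
    """Per-stream ingredients of the two-moment approximation, eq. (7.10):
    F[n] = prod_{i=0}^{n-1} lam~_i / mu~_{i+1} and the weights phi~_n."""
    lam = 1.0 / mA
    qA = (1 + cA2) * mA / 2                                         # eq. (7.7)
    inv_mu = [0.0] + [mL - i * (mA - qA) for i in range(1, c + 1)]  # 1/mu~_i
    inv_lam = [(i + 1) * qA for i in range(c - 1)] + [c * mA]       # 1/lam~_i
    F = [1.0]
    for n in range(1, c + 1):
        F.append(F[-1] * inv_mu[n] / inv_lam[n - 1])
    phi = [lam * qA]
    for i in range(1, c):
        phi.append(lam * (qA + (mA - qA) * inv_lam[i - 1] / inv_mu[i]))
    phi.append(lam * (mA + (mA - qA) * inv_lam[c - 1] / inv_mu[c]))
    return F, phi

def level32_rejection(F1, phi1, F2, phi2, c1, c2, c3):
    """Rejection probabilities (R1, R2) of eq. (7.12), summing the
    unnormalized pi~ over T1 and T2.  F2/phi2 must cover occupancies
    up to c2 + c3, since SCBU neonates may also occupy TC cots."""
    tot = r1 = r2 = 0.0
    for n1, o12, n2, o21, o23 in product(range(c1 + 1), range(c2 + 1),
                                         range(c2 + 1), range(c1 + 1),
                                         range(c3 + 1)):
        if n1 + o21 > c1 or o12 + n2 > c2:
            continue                                  # outside S
        k1, k2 = n1 + o21, o12 + n2 + o23
        p = F1[k1] * phi1[k1] * F2[k2] * phi2[k2]
        tot += p
        if k1 == c1 and o12 + n2 == c2:
            r1 += p                                   # state in T1
            if o23 == c3:
                r2 += p                               # state in T2 (subset)
    return r1 / tot, r2 / tot

# Single-stream (level 1) check with the Chase Farm figures of Table [tab1]
# under Markovian arrivals: the weights reduce to the Erlang loss formula.
F, phi = stream_factors(mA=1.05, cA2=1.0, mL=8.03, c=10)
w = [f * p for f, p in zip(F, phi)]
print(round(w[-1] / sum(w), 4))    # -> 0.106, cf. the M/M/c/0 row of Table [tab2]
```

The overflow probabilities $O_i$ can be accumulated in the same loop by testing the $T_i^{*}$ conditions instead of the $T_i$ conditions.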
In the case of exponential inter-arrival time and length of stay distributions, the arrival and departure related parameters reduce to the corresponding mean inter-arrival time and length of stay: $$\begin{aligned} m_{1A,n}^d &=q_{1A}=m_{1A}=\frac{1}{\lambda_{1}}, & m_{1L,n}^a &= m_{1L,n}^d=q_{1L}=m_{1L}=\frac{1}{\mu_{1}},\end{aligned}$$ $$\begin{aligned} m_{2A,n}^d &=q_{2A}=m_{2A}=\frac{1}{\lambda_{2}}, & m_{2L,n}^a &= m_{2L,n}^d=q_{2L}=m_{2L}=\frac{1}{\mu_{2}},\end{aligned}$$ and $$\varphi_\mathbf{n}=1.$$ Then the steady state solution becomes $$\pi^a(\mathbf{n})=\pi^d(\mathbf{n})=K^{-1}\prod_{i=0}^{(n_{1}+o_{21}-1)}\frac{\lambda_{1}}{(i+1)\mu_{1}} \prod_{j=0}^{(n_{2}+o_{12}+o_{23}-1)} \frac{\lambda_{2}}{(j+1)\mu_{2}}.$$ Hence we obtain $$\pi(\mathbf{n})=K^{-1}\frac{{{{\Big(\frac{\lambda_{1}}{{\mu_{1}}}\Big)}^{{(n_{1}}+o_{21})}}}{{{{\Big(\frac{\lambda_{2}}{{\mu_{2}}}\Big)}}^{{(o_{12}}+n_{2}+o_{23})}}}}{{(n_{1}+o_{21})!}{(o_{12}+n_{2}+o_{23})!}},$$ where $$K = \sum_{\mathbf{n}\in S}\frac{{{{\Big(\frac{\lambda_{1}}{{\mu_{1}}}\Big)}^{{(n_{1}}+o_{21})}}}{{{{\Big(\frac{\lambda_{2}}{{\mu_{2}}}\Big)}}^{{(o_{12}}+n_{2}+o_{23})}}}}{{(n_{1}+o_{21})!}{(o_{12}+n_{2}+o_{23})!}},$$ which is the steady state solution for a level $3$ unit obtained by [@AsadrssA11] for Markovian arrival and discharge patterns. Adding back transfers, we can easily obtain the steady state distribution for a level $2$ unit.

Application of the model {#section4}
========================

The case study
--------------

We apply the model to a perinatal network in London, the North Central London Perinatal Network (NCLPN). The network consists of five neonatal units: UCLH (level $3$), Barnet (level $2$), Whittington (level $2$), Royal Free (level $1$ with ITU) and Chase Farm (level $1$). The underlying aim of the network is to achieve sufficient capacity so that 95% of women and neonates may be cared for within the network. Data on admissions and length of stay were provided by each of the units. Since the data did not contain the actual arrival rate and the rejection probability for the units, we estimated the actual arrival rates using SIMUL8 [@Simul8], a computer simulation package designed to model and measure the performance of stochastic service systems. Table \[tab1\] presents the mean length of stay and the estimated mean inter-arrival times for each level of care at the UCLH, Barnet, Whittington, Royal Free and Chase Farm neonatal units for the year $2008$. We also used simulation (SIMUL8) to estimate the rejection probabilities for each level of care of the units for various arrival and discharge patterns. We refer to these estimates as ‘observed’ rejection probabilities.

Numerical results and discussion
--------------------------------

In this section rejection probabilities are estimated for all five units in the NCLPN through the application of the model formulae in Section \[section3\]. An extensive numerical investigation has been carried out for a variety of inter-arrival and length of stay distributions to test the performance of the model and the approximation method. Table \[tab2\] compares the ‘observed’ and estimated rejection probabilities at each level of care for the UCLH, Barnet, Whittington, Royal Free and Chase Farm neonatal units for various combinations of inter-arrival time and length of stay distributions; namely, exponential (M), two-phase hyper-exponential (H$_2$) and two-phase Erlang (E$_2$) distributions are considered.
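The only distributional inputs the approximation needs are the means and squared coefficients of variation: $c^2=1$ for M, $c^2=1/2$ for E$_2$, and $c^2>1$ for H$_2$, whose exact value depends on the mixing parameters. A small sketch of how these feed equations (\[eq7.7\])–(\[eq7.8\]); the H$_2$ parameter values below are illustrative only, as the paper does not report the ones used.

```python
def q_residual(m, c2):
    """Mean residual time q = (1 + c^2) m / 2, as in eqs. (7.7)-(7.8)."""
    return (1 + c2) * m / 2

def h2_mean_scv(p, l1, l2):
    """Mean and squared coefficient of variation of a two-phase
    hyper-exponential: density p*l1*exp(-l1*t) + (1-p)*l2*exp(-l2*t)."""
    m1 = p / l1 + (1 - p) / l2
    m2 = 2 * (p / l1**2 + (1 - p) / l2**2)
    return m1, m2 / m1**2 - 1

print(q_residual(0.58, 1.0))       # M arrivals at the UCLH NICU-HDU (Table [tab1])
print(q_residual(0.58, 0.5))       # E2 arrivals: more regular, smaller residual
print(h2_mean_scv(0.4, 1.0, 3.0))  # an illustrative H2 with c^2 > 1
```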
To compare ‘observed’ rejection probabilities with estimated rejection probabilities when at least one of these probabilities is $0.05$ or more, we define the ‘absolute percentage error’ (APE) as the absolute deviation between the ‘observed’ and estimated rejection probabilities divided by the ‘observed’ rejection probability and multiplied by 100. Rejection probabilities below $0.05$ are normally considered satisfactory; for this reason we have not reported the APE when both the ‘observed’ and estimated rejection probabilities are less than $0.05$. The ‘observed’ and estimated rejection probabilities are close for the UCLH unit. At the NICU-HDU, the highest ‘observed’ rejection probability occurs for E$_2$/E$_2$/c/0, and the estimated rejection probability is also highest for the same arrival and discharge patterns, with an APE of $4.73\%$. The lowest ‘observed’ rejection probability is $0.1848$ for H$_2$/E$_2$/c/0, while the estimated rejection probability is $0.1726$, with an APE of $4.98\%$. At the SCBU for E$_2$/M/c/0, the ‘observed’ and estimated rejection probabilities are $0.1332$ and $0.1652$, respectively, with an APE of $24.02\%$. At the Barnet NICU-HDU, the ‘observed’ and estimated rejection probabilities are close, with APEs varying from $0.95\%$ to $15.31\%$. For the Barnet SCBU the ‘observed’ and estimated rejection probabilities are all less than $0.05$ and relatively close to each other. Both the UCLH NICU-HDU and SCBU and the Barnet NICU-HDU would require additional cots to keep the rejection level low and achieve a $0.05$ target. Rejection probabilities for both the NICU-HDU and SCBU at the Whittington neonatal unit are below $0.05$ regardless of the combination of inter-arrival time and length of stay distributions, which indicates that the neonatal unit is performing well with 12 NICU, 16 SCBU and 5 TC cots. The ‘observed’ and estimated rejection probabilities at the Royal Free ITU and SCBU and the Chase Farm SCBU are close to each other. The results in Table \[tab2\] suggest that the Royal Free ITU and SCBU and the Chase Farm SCBU require extra cots to decrease the rejection level. Through our extensive numerical investigations we observe that the rejection probability often varies greatly according to arrival and discharge patterns. The number of cots required will also vary depending upon arrival and discharge patterns. Therefore, one should take into account the actual arrival and discharge patterns for accurate capacity planning of neonatal units, rather than approximating them by Markovian arrival and discharge patterns. To achieve a ‘95%’ admission acceptance target, the UCLH NICU-HDU and SCBU, the Barnet NICU-HDU, the Royal Free ITU and SCBU, and the Chase Farm SCBU need to increase their numbers of cots. We have also observed that the performance of the proposed generalized capacity planning model improves as the squared coefficients of variation of the inter-arrival time and length of stay get closer to $1$ (recall that our approximation is exact in the Markovian inter-arrival and length of stay case, in which both squared coefficients of variation equal $1$) and as $\lambda/\mu$ gets larger (i.e., under heavy traffic). A possible explanation is that as $\lambda/\mu$ gets larger, the periods during which all the cots are busy tend to get longer. As such a busy period gets longer, the arrival and departure instants within it tend to become more and more like arbitrary points in time, and so the approximation is likely to become more accurate.
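For concreteness, the APE just defined amounts to the following one-liner (a trivial sketch, ours); the example reproduces the UCLH E$_2$/E$_2$/c/0 figure quoted above.

```python
def ape(observed, estimated):
    """Absolute percentage error between 'observed' (simulated) and
    model-estimated rejection probabilities; reported only when at
    least one of the two is 0.05 or more."""
    return abs(observed - estimated) / observed * 100

print(round(ape(0.2260, 0.2367), 2))   # -> 4.73, cf. UCLH NICU-HDU, E2/E2/c/0
```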
Conclusion {#section5}
==========

Planning capacity accurately is an important issue in the neonatal sector, particularly because of the high cost of care. Markovian assumptions on arrivals and length of stay can provide only approximate estimates, which may often underestimate or overestimate the required capacity. Underestimating the number of cots may increase the rejection level, which in turn may be life-threatening or cause expensive transfers of high risk neonates, and hence increase the risk for vulnerable babies. On the other hand, overestimation may cause under-utilization of cots and a potential waste of resources. In this paper a generalized framework for determining the cot capacity of a perinatal network was derived. After decomposing the whole network into neonatal units, each unit was analyzed separately. Expressions for the stationary distribution and for the rejection probabilities were derived for each neonatal unit, and an approximation method was suggested to obtain the steady state rejection probabilities. The model formulation was then applied to the neonatal units in the NCLPN. A variety of inter-arrival and length of stay distributions in the neonatal units was considered for the numerical experimentation. The ‘observed’ and estimated rejection probabilities were close (APE typically less than 20%) for all hospital units when rejection probabilities were $0.05$ or more. When ‘observed’ rejection probabilities were less than $0.05$, as for the Barnet SCBU and both the Whittington NICU-HDU and SCBU, the APE increased rapidly to beyond 50%. However, since these values are less than or close to $0.05$, they do not have an impact on management decisions regarding the number of cots. In contrast, when ‘observed’ rejection probabilities are high, the estimated values are close to the ‘observed’ ones. The ‘observed’ and estimated rejection probabilities were, in general, close for high traffic intensities; as the traffic intensity drops, the absolute percentage error increases quickly. In most cases, the absolute percentage error becomes small for Markovian arrival and length of stay patterns. It is well known that the Erlang loss system is insensitive to the length of stay distribution, beyond its mean, when the arrival process is Poisson; this insensitivity no longer holds when the arrival process is not Poisson. The model results as seen in Table \[tab2\] also confirm this sensitivity property. The main advantage of the model framework is that the arrival and discharge patterns do not need to hold the Markov property. The model is based on the first two moments and requires no further distributional assumption. The two-moment approximation technique performs reasonably well in terms of accuracy (APE) and is fast, and it is exact in the Markovian case, in which the squared coefficients of variation of the inter-arrival time and length of stay both equal $1$. The numerical results show that the model can be used as a capacity planning tool for perinatal networks for non-Markovian as well as Markovian arrival and discharge patterns. If good estimates of the first two moments are available, the generalized model can be used to determine the required cot capacity in a perinatal network for a given level of rejection probability. Although we applied the model framework to the hospital case, the model formulation can also be applied to plan capacity in other areas such as computer, teletraffic and other communication networks.

Asaduzzaman, M., T. J. Chaussalet, N. J. Robertson. 2010. A loss network model with overflow for capacity planning of a neonatal unit. [*Annals of Operations Research*]{} [**178**]{} 67–76.

Asaduzzaman, M., T. J. Chaussalet. 2011.
An overflow loss network model for capacity planning of a perinatal network. [*Journal of the Royal Statistical Society, Series A*]{} [**174**]{} 403–417.

Asaduzzaman, M., T. J. Chaussalet, S. Adeyemi, S. Chahed, J. Hawdon, D. Wood, N. J. Robertson. 2011. Towards effective capacity planning in a perinatal network centre. [*Archives of Disease in Childhood*]{} [**95**]{} F283–F287.

Atkinson, J. B. 2008. Two new heuristics for the GI/G/n/0 queueing loss system with examples based on the two-phase Coxian distribution. [*Journal of the Operational Research Society*]{} [**60**]{} 818–830.

Bliss. 2007. Too Little, Too Late? Bliss – The Premature Baby Charity, London. Retrieved April 20, 2011, http://www.bliss.org.uk/page.asp?section=677&sectionTitle=Too+little%2C+too+late%3F.

Brandt, A., B. Lisek. 1980. On the approximation of GI/GI/m/0 by means of GI/GI/$\infty$. [*Journal of Information Processing and Cybernetics*]{} [**16**]{} 597–600.

Choi, D. W., N. K. Kim, K. C. Chae. 2005. A two-moment approximation for the GI/G/c queue with finite capacity. [*INFORMS Journal on Computing*]{} [**17**]{} 75–81.

Franken, P., D. König, U. Arndt, V. Schmidt. 1982. [*Queues and Point Processes*]{}. Wiley.

Griffiths, J. D., N. Price-Lloyds, M. Smithies, J. Williams. 2006. A queueing model of activities in an intensive care unit. [*IMA Journal of Management Mathematics*]{} [**17**]{} 277–288.

Hsin, W. J., A. van de Liefvoort. 1996. The teletraffic analysis of the multi-server loss model with renewal distributions. [*Telecommunication Systems*]{} [**5**]{} 303–321.

Kelly, F. P. 1979. [*Reversibility and Stochastic Networks*]{}. Wiley.

Kelly, F. P. 1991. Loss networks. [*Annals of Applied Probability*]{} [**1**]{} 319–378.

Kim, N. K., K. C. Chae. 2003. Transform-free analysis of the GI/G/1/K queue through the decomposed Little’s formula. [*Computers and Operations Research*]{} [**30**]{} 353–365.

Kimura, T. 2000. Equivalence relations in the approximations for the M/G/s/s+r queue. [*Mathematical and Computer Modelling*]{} [**31**]{} 215–224.

Kimura, T. 2003. A consistent diffusion approximation for finite-capacity multiserver queues. [*Mathematical and Computer Modelling*]{} [**38**]{} 1313–1324.

Klimenok, V., C. S. Kim, D. Orlovsky, A. Dudin. 2005. Lack of invariant property of the Erlang loss model in case of MAP input. [*Queueing Systems*]{} [**49**]{} 187–213.

Litvak, N., M. van Rijsbergen, R. J. Boucherie, M. van Houdenhoven. 2008. Managing the overflow of intensive care patients. [*European Journal of Operational Research*]{} [**185**]{} 998–1010.

Miyazawa, M., H. C. Tijms. 1993. Comparison of two approximations for the loss probability in finite-buffer queues. [*Probability in the Engineering and Informational Sciences*]{} [**7**]{} 19–27.

National Audit Office. 2007. Caring for Vulnerable Babies: The Reorganisation of Neonatal Services in England. Retrieved April 20, 2011, http://www.nao.org.uk/publications/0708/caring\_for\_vulnerable\_babies.aspx.

SIMUL8. 2000. [*SIMUL8 Manual and Simulation Guide*]{}. Glasgow: Visual Thinking International Limited.

Takács, L. 1956. On the generalization of Erlang’s formula. [*Acta Mathematica Hungarica*]{} [**7**]{} 419–433.

Takács, L. 1962. [*Introduction to the Theory of Queues*]{}. Oxford University Press.

Van Dijk, N. M., N. Kortbeek. 2009. Erlang loss bounds for OT-ICU systems. [*Queueing Systems*]{} [**63**]{} 253–280.

Whitt, W. 1984. Heavy-traffic approximations for service systems with blocking. [*AT&T Bell Laboratories Technical Journal*]{} [**63**]{} 689–708.
Whitt, W. 2004. A diffusion approximation for the G/GI/n/m queue. [*Operations Research*]{} [**52**]{} 922–941.

--------------------- -------------------- ---------------------
Unit                  Mean inter-arrival   Mean length of stay
[**UCLH**]{}
NICU-HDU              0.58                 11.51
SCBU-TC               0.24                 5.83
[**Barnet**]{}
NICU-HDU              1.12                 6.78
SCBU-TC               0.83                 9.71
[**Whittington**]{}
NICU-HDU              1.11                 5.16
SCBU-TC               0.88                 14.61
[**Royal Free**]{}
ITU                   2.77                 2.21
SCBU                  0.91                 9.99
[**Chase Farm**]{}
SCBU                  1.05                 8.03
--------------------- -------------------- ---------------------

: Inter-arrival and length of stay for the neonatal units in the NCLPN in 2008 \[tab1\]

---------------------------------- ------------------------------- ----------------------- ----------------- ----------------
                                   System notation                 ‘Observed’ rej. prob.   Est. rej. prob.   Abs. per. err.
[**UCLH**]{} (17 NICU, 12 SCBU and 8 TC cots)
NICU-HDU                           M/M/c/0                         0.1895                  0.1962            3.54
SCBU-TC                                                            0.1319                  0.1271            3.64
NICU-HDU                           M/H$_\text{2}$/c/0              0.1989                  0.1933            2.82
SCBU-TC                                                            0.1186                  0.1313            10.71
NICU-HDU                           H$_\text{2}$/M/c/0              0.2123                  0.1706            19.64
SCBU-TC                                                            0.1214                  0.1010            16.80
NICU-HDU                           M/E$_\text{2}$/c/0              0.2096                  0.1987            5.20
SCBU-TC                                                            0.1405                  0.1235            12.10
NICU-HDU                           E$_\text{2}$/M/c/0              0.2179                  0.2347            7.71
SCBU-TC                                                            0.1332                  0.1652            24.02
NICU-HDU                           H$_\text{2}$/H$_\text{2}$/c/0   0.1852                  0.1669            9.88
SCBU-TC                                                            0.1255                  0.1077            14.18
NICU-HDU                           H$_\text{2}$/E$_\text{2}$/c/0   0.1848                  0.1726            4.98
SCBU-TC                                                            0.0996                  0.0970            2.61
NICU-HDU                           E$_\text{2}$/H$_\text{2}$/c/0   0.2155                  0.2332            8.21
SCBU-TC                                                            0.1512                  0.1672            10.58
NICU-HDU                           E$_\text{2}$/E$_\text{2}$/c/0   0.2260                  0.2367            4.73
SCBU-TC                                                            0.1353                  0.1626            20.18
[**Barnet**]{} (6 NICU, 14 SCBU and 4 TC cots)
NICU-HDU                           M/M/c/0                         0.1644                  0.1508            8.27
SCBU-TC                                                            0.0142                  0.0076            \*
NICU-HDU                           M/H$_\text{2}$/c/0              0.1496                  0.1614            7.89
SCBU-TC                                                            0.0117                  0.0111            \*
NICU-HDU                           H$_\text{2}$/M/c/0              0.1411                  0.1513            7.23
SCBU-TC                                                            0.0147                  0.0097            \*
NICU-HDU                           M/E$_\text{2}$/c/0              0.1653                  0.1433            13.31
SCBU-TC                                                            0.0141                  0.0051            \*
NICU-HDU                           E$_\text{2}$/M/c/0              0.1326                  0.1529            15.31
SCBU-TC                                                            0.0055                  0.0020            \*
NICU-HDU                           H$_\text{2}$/H$_\text{2}$/c/0   0.1586                  0.1571            0.95
SCBU-TC                                                            0.0125                  0.0134            \*
NICU-HDU                           H$_\text{2}$/E$_\text{2}$/c/0   0.1508                  0.1473            2.32
SCBU-TC                                                            0.0142                  0.0072            \*
NICU-HDU                           E$_\text{2}$/H$_\text{2}$/c/0   0.1691                  0.1752            3.61
SCBU-TC                                                            0.0034                  0.0037            \*
NICU-HDU                           E$_\text{2}$/E$_\text{2}$/c/0   0.1269                  0.1355            6.78
SCBU-TC                                                            0.0059                  0.0007            \*
---------------------------------- ------------------------------- ----------------------- ----------------- ----------------

: Comparison of rejection probabilities for different distributions at all five neonatal units in the NCLPN \[tab2\]

\*APEs are ignored for rejection probabilities $<0.05$

Continuation of Table \[tab2\]\

---------------------------------- ------------------------------- ----------------------- ----------------- ----------------
                                   System notation                 ‘Observed’ rej. prob.   Est. rej. prob.   Abs. per. err.
[**Whittington**]{} (12 NICU, 16 SCBU and 5 TC cots)
NICU-HDU                           M/M/c/0                         0.0216                  0.0007            \*
SCBU-TC                                                            0.0138                  0.0018            \*
NICU-HDU                           M/H$_\text{2}$/c/0              0.0009                  0.0026            \*
SCBU-TC                                                            0.0003                  0.0128            \*
NICU-HDU                           H$_\text{2}$/M/c/0              0.0042                  0.0000            \*
SCBU-TC                                                            0.0110                  0.0011            \*
NICU-HDU                           M/E$_\text{2}$/c/0              0.0097                  0.0015            \*
SCBU-TC                                                            0.0029                  0.0054            \*
NICU-HDU                           E$_\text{2}$/M/c/0              0.0006                  0.0000            \*
SCBU-TC                                                            0.0010                  0.0011            \*
NICU-HDU                           H$_\text{2}$/H$_\text{2}$/c/0   0.0053                  0.0035            \*
SCBU-TC                                                            0.0091                  0.0225            \*
NICU-HDU                           H$_\text{2}$/E$_\text{2}$/c/0   0.0002                  0.0026            \*
SCBU-TC                                                            0.0236                  0.0134            \*
NICU-HDU                           E$_\text{2}$/H$_\text{2}$/c/0   0.0003                  0.0000            \*
SCBU-TC                                                            0.0002                  0.0024            \*
NICU-HDU                           E$_\text{2}$/E$_\text{2}$/c/0   0.0018                  0.0000            \*
SCBU-TC                                                            0.0005                  0.0005            \*
[**Royal Free**]{} (2 ITU and 12 SCBU cots)
ITU                                M/M/c/0                         0.1468                  0.1504            2.45
SCBU                                                               0.1558                  0.1580            1.41
ITU                                M/H$_\text{2}$/c/0              0.1714                  0.1504            12.25
SCBU                                                               0.1476                  0.1580            7.05
ITU                                H$_\text{2}$/M/c/0              0.1667                  0.1556            6.66
SCBU                                                               0.1509                  0.1476            2.19
ITU                                M/E$_\text{2}$/c/0              0.1560                  0.1504            3.59
SCBU                                                               0.1393                  0.1580            13.42
ITU                                E$_\text{2}$/M/c/0              0.1756                  0.1504            14.35
SCBU                                                               0.1516                  0.1685            11.15
ITU                                H$_\text{2}$/H$_\text{2}$/c/0   0.1681                  0.1351            19.63
SCBU                                                               0.1452                  0.1476            1.65
ITU                                H$_\text{2}$/E$_\text{2}$/c/0   0.1481                  0.1556            5.06
SCBU                                                               0.1680                  0.1476            12.14
ITU                                E$_\text{2}$/H$_\text{2}$/c/0   0.1252                  0.1347            7.59
SCBU                                                               0.1384                  0.1685            21.75
ITU                                E$_\text{2}$/E$_\text{2}$/c/0   0.1315                  0.1579            20.08
SCBU                                                               0.1619                  0.1685            4.08
[**Chase Farm**]{} (10 SCBU cots)
SCBU                               M/M/c/0                         0.1078                  0.1060            1.67
SCBU                               M/H$_\text{2}$/c/0              0.1094                  0.1060            3.11
SCBU                               H$_\text{2}$/M/c/0              0.1474                  0.1233            16.35
SCBU                               M/E$_\text{2}$/c/0              0.1047                  0.1060            1.24
SCBU                               E$_\text{2}$/M/c/0              0.0719                  0.0792            10.15
SCBU                               H$_\text{2}$/H$_\text{2}$/c/0   0.1418                  0.1233            13.00
SCBU                               H$_\text{2}$/E$_\text{2}$/c/0   0.1469                  0.1233            16.00
SCBU                               E$_\text{2}$/H$_\text{2}$/c/0   0.0817                  0.0792            3.06
SCBU                               E$_\text{2}$/E$_\text{2}$/c/0   0.0700                  0.0792            13.14
---------------------------------- ------------------------------- ----------------------- ----------------- ----------------

\*APEs are ignored for rejection probabilities $<0.05$
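As a sanity check on the tables: in the M/M/c/0 rows the approximation reduces to the Erlang loss formula (cf. Theorem \[th2\]), which can be evaluated with the standard recursion. The sketch below (ours) reproduces two estimated entries of Table \[tab2\] from the Table \[tab1\] inputs.

```python
def erlang_b(a, c):
    """Erlang loss probability B(a, c) for offered load a = m_L / m_A
    and c cots, computed via the standard recursion."""
    b = 1.0
    for n in range(1, c + 1):
        b = a * b / (n + a * b)
    return b

print(round(erlang_b(2.21 / 2.77, 2), 4))    # Royal Free ITU   -> 0.1504
print(round(erlang_b(8.03 / 1.05, 10), 4))   # Chase Farm SCBU  -> 0.106
```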
Amityville, New York

Amityville is a village in the town of Babylon in Suffolk County, New York, in the United States. The population was 9,523 at the 2010 census.

History

Huntington settlers first visited the Amityville area in 1653 due to its proximity to a source of salt hay for use as animal fodder. Chief Wyandanch granted the first deed to land in Amityville in 1658. The area was originally called Huntington West Neck South (it is on the Great South Bay and Suffolk County, New York border in the southwest corner of what was once called Huntington South, now the Town of Babylon). According to village lore, the name was changed in 1846 when residents were working to establish its new post office. The meeting turned into bedlam, and one participant is said to have exclaimed, "What this meeting needs is some amity". Another version says the name was first suggested by mill owner Samuel Ireland to name the town for his boat, the Amity. The place name is, strictly speaking, an incidental name, marking an amicable agreement on the choice of a place name. The village was formally incorporated on March 3, 1894. In the early 1900s, Amityville was a popular tourist destination with large hotels on the bay and large homes. Annie Oakley was said to be a frequent guest of vaudevillian Fred Stone. Will Rogers had a home across Clocks Boulevard from Stone. Gangster Al Capone also had a house in the community. Amityville has been twinned with Le Bourget, France, since 1979.

The Amityville Horror

Amityville is the setting of the book The Amityville Horror by Jay Anson, which was published in 1977 and has been adapted into a series of films made between 1979 and 2017. The story of The Amityville Horror can be traced back to a real-life murder case in Amityville in November 1974, when Ronald DeFeo, Jr. shot all six members of his family at 112 Ocean Avenue. In December 1975 George and Kathy Lutz and Kathy's three children moved into the house, but left after twenty-eight days, claiming to have been terrorized by paranormal phenomena produced by the house. Jay Anson's novel is said to be based on these events but has been the subject of much controversy. The house featured in the novel still exists but has been renovated and the address changed in order to discourage tourists from visiting it. The Dutch Colonial Revival house, built in 1927, was put on the market in May 2010 for $1.15 million and sold in September for $950,000.

Geography

According to the United States Census Bureau, the village has a total area of , of which is land and is water; the total area is 15.38% water. The Village of Amityville is bordered to the west by East Massapequa (in Nassau County), to the north by North Amityville, to the east and south by Copiague, and to the south by the Great South Bay.

Points of interest

The Triangle - The fork of Broadway and Park Avenue, along with Ireland Place, creates a triangular plot of land at the center of the village. The Triangle building was built in 1892, the same year that Ireland Place opened. A gazebo was added to the north point of The Triangle prior to 1988. In 1994, The Triangle was officially designated "Memorial Triangle" in memory of all who have served the village.

The Lauder Museum - Located at the corner of Broadway and Ireland Place, just south of The Triangle. The historic building was built for the Bank of Amityville in 1909. The Amityville Historical Society opened the Lauder Museum in 1972.
The Mike James Courts at Bolden Mack Park

The Amityville Beach

Sand Island - an island in the Great South Bay directly south of The Amityville Beach, accessible only by boat.

Demographics

As of the census of 2010, there were 9,523 people and 3,107 households in the village, with 2.61 persons per household. The population density was 4,506.9 people per square mile. There were 3,997 housing units, of which 28.2% were in multi-unit structures. The homeownership rate was 71.8%. The median value of owner-occupied housing units was $443,500. 3.6% of housing units were vacant, and 20.7% of occupied housing units were occupied by renters.

The racial makeup of the village was 81.7% White, 9.7% African American, 0.3% Native American, 1.8% Asian, 0.0% Pacific Islander, 4.1% from other races, and 2.5% from two or more races. Hispanic or Latino residents of any race were 13.1% of the population. The village was 74.5% non-Hispanic White.

There were 3,107 households, of which 23.8% had children under the age of 18 living with them, 32.6% had individuals over the age of 65, 47.3% were married couples living together, 10.2% had a female householder with no husband present, and 38.1% were non-families. 30.4% of all households were made up of individuals, and 13.1% had someone living alone who was 65 years of age or older. The average household size was 2.43 and the average family size was 3.02.

In the village, the population was relatively old, with 4.5% under the age of 5, 17.7% under the age of 18, 5.3% from 20 to 24, 23.0% from 25 to 44, 32.2% from 45 to 64, and 19.9% who were 65 years of age or older. The median age was 46.4 years. 78.7% of the population had lived in the same house for one year or more. 14.9% of the population were foreign-born, and 21.6% of residents at least 5 years old spoke a language other than English at home. 90.1% of residents at least 25 years old had graduated from high school, and 30.7% of residents at least 25 years old had a bachelor's degree or higher. The mean travel time to work for workers aged 16 and over was 27.8 minutes. The median income for a household in the village was $74,366. The per capita income for the village was $35,411. 6.5% of the population were below the poverty line.

Public schools

All of the village is served by the Amityville Union Free School District, which also serves large portions of North Amityville and East Massapequa and a small portion of Copiague (however, this part of Copiague is served by the Amityville post office and is probably thought to be part of Amityville). As of the 2010-2011 school year, the Amityville Union Free School District had 2,780 students. The racial demographics were 0% American Indian or Alaska Native, 54% non-Hispanic black or African-American, 35% Hispanic or Latino, 1% Asian or Native Hawaiian/Other Pacific Islander, 8% non-Hispanic white, and 2% multiracial. 51% of students were eligible for free lunch, 10% for reduced-price lunch, and 11% of students were Limited English Proficient. 16.5% of students were classified as "Special Ed". The school district had a graduation rate of 79%, and 2% of students did not complete school. 87% of graduates received a Regents Diploma and 31% received a Regents Diploma with Advanced Designation. Of the 2011 completers, 35% planned to move on to a 4-year college, 52% to a 2-year college, 4% to other post-secondary education, 3% to the military, 5% to employment, 1% to adult services, 0% had other known post-secondary plans, and 1% had no known post-secondary plan.
The district currently has:

One elementary school serving Pre-K and Kindergarten: Northeast Elementary School
One elementary school serving grades 1-3: Northwest Elementary School
One elementary school serving grades 4-6: Park Avenue Memorial Elementary School
One junior high school (grades 7-9): Edmund W. Miles Middle School
One high school (grades 10-12): Amityville Memorial High School

For the 2011-2012 school year, the accountability status for Northeast and Northwest Elementary Schools and the high school was "In Good Standing", while Park Avenue Memorial Elementary School was "In Need of Corrective Action (year 2) Focused" and the middle school was "In Need of Restructuring (year 1) Comprehensive". The accountability status for the district overall was "In Good Standing".

Until recently, Amityville Memorial High School served grades 9-12, Edmund W. Miles Middle School served grades 6-8, Park Avenue Memorial Elementary School served grades 3-5, and Northwest Elementary School served grades 1-2. The first part of the change was implemented at the start of the 2009-2010 school year, when new 9th graders were kept at Edmund W. Miles Middle School and new 6th graders were kept at Park Avenue Memorial Elementary School. At the start of the 2012-2013 school year, new 3rd graders were kept at Northwest Elementary School.

Transportation

Amityville is served by the Babylon Branch of the Long Island Rail Road. The station is a hub for buses in the area:

S1: Amityville - Halesite via New York State Route 110
1A: Amityville - North Amityville
S20: Sunrise Mall - Babylon
S33: Sunrise Mall - Hauppauge

Notable people

Henry Austin - 19th-century baseball player; died in Amityville
Alec Baldwin - actor
Christine Belford - actress
Benjamin Britten - British classical composer, resident from 1939 to 1942
De La Soul - hip hop trio
Rik Fox - bass guitarist
Tony Graffanino - MLB player
Mike James - NBA player
Kevin Kregel - astronaut
Ronald DeFeo - mass murderer
George Lutz - owner of 112 Ocean Avenue from 1975-1976
Tre Mason - NFL running back for the Los Angeles Rams
Donnie McClurkin - gospel singer
Bill McDermott - CEO of SAP SE
John Niland - NFL player
Robert Phillips - classical guitarist
A. J. Price - NBA player
Eddie Reyes - founder of Taking Back Sunday
George Ross - baseball player
David Torn - composer, guitarist, and music producer
Dave Weldon - U.S. Congressman
Darrel Young - NFL player

External links

Flag of Amityville, New York (Flags of the World)
525 S.E.2d 83 (2000) 271 Ga. 890 EDWARDS et al. v. DEPARTMENT OF CHILDREN & YOUTH SERVICES. No. S99G0900. Supreme Court of Georgia. January 18, 2000. Watkins, Lourie & Roll, Joseph W. Watkins, Lance D. Lourie, Atlanta, Langdale, Vallotton, Linahan & Threlkeld, William P. Langdale III, William P. Langdale, Jr., Valdosta, for appellants. Capers, Dunbar, Sanders & Bruckner, Paul H. Dunbar III, Ziva P. Bruckner, Augusta, Shivers & Associates, Patricia Guilday, Alpharetta, Thurbert E. Baker, Attorney General, Kathleen M. Pacious, Deputy Attorney General, for appellee. Middleton, Mathis, Adams & Tate, Charles A. Mathis, Jr., Mills, Moraitakis, Kushel & Pearson, Glenn E. Kushel, Atlanta, amici curiae. FLETCHER, Presiding Justice. We granted certiorari to consider whether state employees perform a "discretionary function" under the Georgia Tort Claims Act when they make decisions on the emergency medical treatment of juveniles in state custody.[1] Adhering to our previous opinions that the discretionary function exception to the tort claims act requires the exercise of a *84 policy judgment, we hold that the decision of state employees on the type of emergency medical treatment to provide incarcerated juveniles is not a discretionary function as that term is defined in the statute. As a result, the state is not immune from liability under the discretionary function exception in this case. Therefore, we reverse. Fifteen-year-old Latasha Edwards died from a subdural hematoma while incarcerated at the Macon Youth Development Center. Her parents sued the Georgia Department of Children & Youth Services under the Georgia Tort Claims Act, alleging that YDC employees were negligent in failing to provide proper medical care to Edwards. The trial court ruled that the claims were barred under the discretionary function exception of OCGA § 50-21-24(2) and granted summary judgment to the state. The Court of Appeals of the State of Georgia affirmed on the grounds that the state employees exercised a discretionary decision in determining the type of medical care to provide and were therefore subject to immunity.[2] In granting the writ of certiorari, we asked the parties to address whether the court of appeals improperly expanded the meaning of "discretionary function" to decisions of state employees that are not related to policy judgments. Ambiguous Legislative History of Georgia Tort Claims Act The Georgia Tort Claims Act grants a "limited waiver" of the state's sovereign immunity.[3] Under the act, the state waives its sovereign immunity for the torts of state employees while acting within the scope of their official duties "in the same manner as a private individual or entity would be liable under like circumstances" subject to the act's exceptions and limitations.[4] In construing a statute, we must look at the legislative intent, "keeping in view at all times the old law, the evil, and the remedy" and give ordinary "signification" to all words.[5] The General Assembly states its intent in the second section of the act. In that section, it acknowledges that the strict application of sovereign immunity produces "inherently unfair and inequitable results."[6] It further states that, unlike private enterprise, state government does not have the flexibility to choose its activities and control its exposure to liability, but instead must provide a broad range of services and perform a variety of functions. 
As a result, state government should not have the duty to do everything possible, and the state's exposure to tort liability must be limited. In conclusion, "it is declared to be the public policy of this state that the state shall only be liable in tort actions within the limitations of this article and in accordance with the fair and uniform principles established in this article."[7] A major exception to state liability under the act is the "discretionary function" exception. Under this exception, the state shall have no liability for losses resulting from the "exercise or performance of ... a discretionary function or duty on the part of a state officer or employee."[8] The act defines "discretionary function or duty" to mean "a function or duty requiring a state officer or employee to exercise his or her policy judgment in choosing among alternate courses of action based upon a consideration of social, political, or economic factors."[9] A review of the legislative history provides limited insight into the legislative intent in enacting the discretionary function exception.[10] On the one hand, it appears that the *85 legislature intended to limit the state's overall exposure to tort liability; on the other hand, the legislature enacted a relatively narrow definition of discretionary function as an exception to state liability. The state act was patterned "in most respects" after the Federal Tort Claims Act, and the discretionary function exception is similar to the federal exception as developed by case law.[11] The purpose of the exception under the federal act is to prevent the courts from substituting their own judgment for the policy decisions of the executive and legislative branches of government.[12] Interpreting the Discretionary Function Exception to the Act In Department of Transportation v. Brown,[13] this Court first considered the scope of the discretionary function exception in the Georgia Tort Claims Act. Because the statute included a definition of discretionary function or duty, we explicitly rejected our prior case law that distinguished between the discretionary and ministerial acts of state employees as the basis for state liability. Instead, we approved of decisions from other jurisdictions holding that the discretionary function exception applies only to basic governmental policy decisions, rejecting the state's argument that the exception includes any decision affected by social, political, or economic factors.[14] Last term, we again considered the application of the discretionary function exception in Brantley v. Department of Human Resources.[15] In that case, we reiterated that the more narrow statutory definition of discretionary function controls in claims against state employees over the definition that had been developed in prior case law and applied in several cases by the court of appeals.[16] The plain meaning of the statutory exception is that the state employee must exercise a "policy judgment" in choosing among various alternative actions based on social, political, and economic factors. We concluded that the foster parent's decision to leave a two-year-old child unattended in a swimming pool was not a basic policy decision entitled to protection from review under the tort claims act. 
In their briefs, the Edwards do not challenge the department's failure to promulgate appropriate policies and procedures for the diagnosis and referral of sick inmates or its failure to train and supervise personnel in implementing those policies and procedures; instead, the Edwards contend that the department's nursing staff and other employees failed to properly assess their daughter's condition and seek adequate medical care for her. The staff's medical decisions about the proper diagnosis and treatment of Edwards do not involve policy judgments based on social, political, or even economic factors.[17] As other state courts have held, decisions on emergency medical care are not the type of basic governmental policy decision that the tort claims act intended to protect from liability.[18] Because we find this reasoning both *86 persuasive and consistent with the purpose of our act, we hold that the decision of state employees on the type of emergency medical care to provide incarcerated juveniles does not fall within the discretionary function exception to the Georgia Tort Claims Act. Judgment reversed. All the Justices concur. NOTES [1] See OCGA § 50-21-24(2) (discretionary function exception). [2] See Edwards v. Department of Children & Youth Services, 236 Ga.App. 696, 512 S.E.2d 339 (1999). [3] See OCGA § 50-21-23. [4] Id. [5] OCGA § 1-3-1. [6] OCGA § 50-21-21(a). [7] Id. [8] OCGA § 50-21-24(2). [9] OCGA § 50-21-22(2). [10] See Charles N. Kelley, Jr., Georgia Tort Claims Act: Provide a Limited Waiver of Sovereign Immunity, 9 GA. ST. U.L.REV. 349, 352 n. 32 (1992) (mentioning "discretionary acts" as one of a "long lists of acts" for which the state will not accept liability). [11] See David J. Maleski, The 1992 Georgia Tort Claims Act, 9 GA. ST. U.L.REV. 431, 448 (1993). [12] See Brantley v. Department of Human Resources, 271 Ga. 679, 523 S.E.2d 571 (1999); Maleski, supra note 11, at 448. [13] 267 Ga. 6, 471 S.E.2d 849 (1996). [14] Id. at 7, 471 S.E.2d 849. [15] 271 Ga. 679, 523 S.E.2d 571. [16] See id. at 681-682, 523 S.E.2d 571 (discussing court of appeals cases); see also Northwest Ga. Reg'l Hosp. v. Wilkins, 220 Ga.App. 534, 469 S.E.2d 786 (1996) (noting that statutory definition of "discretionary" in tort claims act is more narrow than earlier case law defining "discretionary acts" of state employees). [17] See Magee v. United States, 121 F.3d 1 (1st Cir.1997) (decisions about specific medical treatment fall outside protection of discretionary function exception); Rise v. United States, 630 F.2d 1068, 1072 (5th Cir.1980) (failure to provide proper medical care cannot be considered the exercise of a discretionary function); Jackson v. Kelly, 557 F.2d 735 (10th Cir.1977) (discretionary function exception does not absolve government from liability for negligent medical care). [18] See, e.g., Darling v. Augusta Mental Health Inst., 535 A.2d 421 (Me.1987) (distinguishing between psychiatrist's decision concerning involuntary commitment to state mental hospital, which constitutes a discretionary function, and doctor's negligent medical treatment of patient); Kelley v. Rossi, 395 Mass. 659, 481 N.E.2d 1340 (1985) (treating physician in hospital emergency room was not engaged in a discretionary function under exception to state tort claims act); Peterson v. 
Traill County, 601 N.W.2d 268 (N.D.1999) (jailer's failure to transfer inmate suffering from alcohol withdrawal "is an ordinary individualized judgment made by jailers as part of their routine work duties" and not a discretionary function).
A natural oils massage performed with different techniques in order to suit all individual constitutions. It strengthens the body's vital functions, frees lymphatic blocks, cleanses, relaxes, improves sleep and boosts the immune system. Time: 50 min. Price: 65,00 €

Patrasveda - hot pads massage

A special massage technique coming from the Far East. Initially, the whole body is anointed with special warm oils; the treatment then continues with warm herbal pads expertly massaged over the body with rapid movements. It slims, tones up the skin (especially on the belly), and generally acts as a beauty and rejuvenation treatment in case of scars and orange-peel skin. Time: 50 min. Price: 75,00 €

Udvartana - detoxifying massage with exfoliation

With a mixture of barley flour, chickpea flour and pure sesame oil, the body is massaged and at the same time given an exfoliating treatment. It stimulates the metabolism and has a deep cleansing effect.

This massage consists of a set of ancient techniques: cupping, massage with Tibetan bowls, energy points, etc. It makes the body vibrate and activates it at a cellular level; the sounds stimulate the body and help it to reach a point of equilibrium. The result is perceptible harmony at both the physical and psychic level. Our masseuse brings all her skill, sensitivity and delicacy to the treatment. An essential appointment for those who want to keep fit or simply indulge in this pleasure. Time: 50 min. Price: 55,00 €

Facial and head massage

A draining and stimulating massage that counteracts swelling and relaxes the hair and scalp, using essential oils and acupressure; it helps to eliminate stress and stimulates blood circulation, improving the vitality of the hair. Time: 25 min. Price: 35,00 €

Partial massage

A specific massage for different areas of the body (legs or back), with particular attention to localised problem areas. Time: 25 min. Price: 30,00 €

Head and neck massage

This treatment loosens up all the tensions of daily life that accumulate in the cervical area and cause headaches, migraines and neck rigidity. Ideal for people who spend a lot of time driving or sitting at a desk, or people with a high level of professional responsibility. Time: 25 min. Price: 35,00 €

Drainage massage

This manual massage, with its slow, rhythmic movements, helps lymph to reach the lymphatic stations, preventing fluid build-up in the tissues where toxins would otherwise thicken. Particularly recommended for oedema, water retention, cellulite and all problems where the immune system needs reinforcing. Time: 50 min. Price: 55,00 €

Legs drainage massage

It stimulates the blood circulation and helps to prevent water retention. Particularly recommended during pregnancy. Time: 25 min. Price: 35,00 €

Sport massage

An intense massage for the muscle bands. Before training it helps to obtain good results; after training it helps you relax and eliminates fatigue. Time: 50 min. Price: 58,00 €

Connective tissue massage

The technique of connective tissue massage works on the layers of subcutaneous tissue and muscle tissue: it is here that toxins and metabolic waste products build up, causing changes in the adipose tissue and adipocytes (cellulite). It also has an analgesic action on the tension and muscle spasms caused by the stress to which our body is constantly subject. This technique dissolves tension and relaxes the muscle tissue, resulting in the release of toxins and improved blood circulation and oxygenation. Time: 50 min.
Price: 60,00 €

Energetic massage with cupping

An effective drainage technique with suction glass cups in combination with essential oils and natural active ingredients, which stimulate the metabolism and infuse new energy into the tissues. The body regains its energy balance and harmony, to the benefit of aesthetics. Time: 50 min. Price: 60,00 €

Relaxing massage with aromatic oils

This massage rebalances the disharmony between body and mind through aromatherapy (essential oils). It gives a sensation of relaxation and peace thanks to very soft movements. Time: 50 min. Price: 58,00 €

Chocolate massage

The chocolate aroma delights the mind and body, leaving a sweet scent of cocoa while the skin becomes smooth. Chocolate stimulates the nervous system and has a positive effect on mental concentration and mental and physical readiness. Cocoa is rich in minerals, and its applications range from remineralising to draining treatments; its vitamins have moisturizing effects for all skin types, even sensitive ones. Time: 50 min. Price: 65,00 €

Candle massage

The latest trendy massage, which lets you enter the world of multi-sensory massage candles, giving a new light to your skin. Immerse yourself in the light, dive among the scents and colours, and refresh the skin. Let yourself be pampered by the pleasant sensation of drops of warm, fluid vegetable oil falling onto your skin: it relieves tension and gives pleasure, tone and vigour to the body, freeing it from inhibitions and fatigue and recharging well-being and energy.

This new men's line is the result of long study and accurate research. It contains products formulated specifically for men's skin and its needs. This treatment has a triple action: it protects, treats and hydrates the skin, giving the face a young and healthy look. Time: 60 min. Price: 60,00 €

Facial mask and massage

A specific facial massage with a high concentration of active natural principles: this treatment moisturizes the skin for a sensation of freshness and complete well-being. Time: 30 min. Price: 30,00 €

Thalasso facial treatment

An ideal treatment both for women and for men. The ocean's algae have a lasting, deep-down moisturizing effect on the skin, providing minerals, vitamins and hydration. Time: 60 min. Price: 70,00 €

Masque Modelant

A clay mask which reactivates the blood circulation to favour the absorption of the product's nutritive substances. Time: 60 min. Price: 80,00 €

Soin Profilift Treatment

A natural lifting effect which guarantees immediate results. Ideal for reducing fine wrinkles, strengthening elasticity and giving brightness to grey, asphyxiated skin. It tones up the facial muscles and gives deep oxygenation to the skin. Time: 60 min. Price: 80,00 €

Eyes Masque Modelant

An energetic eye treatment to refresh and relax the skin around the eyes, soothing swelling and shadows. This modelling mask is a cutting-edge combination of marine algae and plant extracts with a revitalising, firming effect that eliminates all traces of tiredness and stress.

Restore the brightness and splendour of your skin with exfoliating sea salts enriched with essential oils, which remove the excess dead skin cells. As these delightful aromas revive your senses, your body's skin will be gently polished, becoming silky and smooth, ready to receive further treatments. Time: 20 min.
Price: 35,00 €

BIO hay peeling

The hay peeling, enriched with pure pink salt crystals, deeply cleanses the skin and stimulates its regeneration, giving the skin a cleaner, clearer and brighter appearance. Time: 20 min. Price: 38,00 €

Body masque modelant

An effective anti-cellulite treatment: a stimulating and toning masque with minerals, which gives a pleasant heat while drying and dilates the pores, making the absorption of the active principles easier. Time: 60 min. Price: 70,00 €

Soin silhouette sculpant in 3 phases

An involving sensorial experience based on natural active extracts, effective in reactivating the blood circulation and stimulating a diuretic, detoxifying action. A thalasso peeling, an atomized seaweed wrap and finally a draining massage, for a treatment aimed at reducing adiposity and the imperfections of cellulite.

Let the loving "nuvola bed" embrace and cuddle you: the dry floating tub favours a sort of emotional voyage recalling the sensations of the maternal womb. Lie down on a soft mattress filled with warm water and be wrapped in sheets imbued with the chosen pack. The slow movement of the mattress and its heat, the sensation of weightlessness and the colour therapy favour the absorption of the active principles contained in the sheets, and the relaxation of the whole body and psychic sphere. Wrap with partial massage: 60,00 €

Sea mud wrap

This marine mud derives from brackish deposits dating back about 2500 years, laid down in underwater layers 5 to 20 metres thick. The delicate warming of the skin stimulates the micro-circulation. The sea mud helps prevent arthrosis and joint pain; above all, it eases problems caused by muscular back tension. Time: 20 min. Price: 38,00 €

Seaweed wrap

Seaweeds store up trace elements like iron, copper, zinc and iodine, and vitamins which favour cell renewal and oxygenation, as well as the disposal of fat in the tissues. The skin acquires more elasticity and smoothness, improving its texture and visibly reducing the "orange peel" effect. Time: 20 min. Price: 38,00 €

Hay wrap

A treatment conceived by the ancient rural tradition, rich in curative herbs with great relaxing and healing effects on the skin. After 20 minutes lying on the hay, tiredness and arthritic pains vanish. Time: 20 min. Price: 38,00 €

Cleopatra wrap

The milk produced in local dairies and mountain huts is useful for contracting the pores. Lactic acid acts as a natural preserver and, highly concentrated, produces a light peeling effect. Lactose favours moistness, while the milk albumin gives more elasticity and relaxes stressed skin. Time: 20 min. Price: 38,00 €

Arnica and St. John's wort wrap

Two harmonious products which grow on our mountains: arnica and St. John's wort. A balsam for the soul and a burst of new vitality for joints and tired muscles. Time: 20 min. Price: 38,00 €

Swiss pine essence wrap

Effective above all after tiring training, it favours breathing and stimulates the immune system. Swiss pine oil has been known for centuries as a home remedy against muscular and joint pains. Time: 20 min. Price: 38,00 €

Honey wrap

A treatment indicated for dry and stressed skin, which finds immediate firmness and hydration.

Sensory bath with prosecco flutes and fruit skewers: water jets massage all your body gently, with refreshing bubbles at a pleasant temperature that make this treatment very enjoyable. We suggest following the baths with a massage.

Sea salts bath

It stimulates epidermal cell renewal and has a relaxing and detoxifying effect. Time: 20 min.
Price: 30,00 €

Cleopatra bath
For a soft and silky skin: immerse yourself in a bath of milk and honey and you will be fascinated. Time: 20 min. Price: 30,00 €

Relaxing bath
With essential oils of pine, lavender, sweet orange, sour orange and savory. A bath that produces a beneficial calming effect over the whole body thanks to its high content of essential oils, relieving tension in the muscle tissue. Time: 20 min. Price: 30,00 €

Regenerating bath
With essential oils of thyme, lavender, rosemary, lemon, sage, eucalyptus, geranium and nutmeg. A mixture of eight precious essential oils that act synergistically to produce a regenerating, purifying and stimulating effect. After the bath the skin feels firm and compact. Time: 20 min. Price: 30,00 €

Draining bath
With essential oils of pine, thyme, lemon and geranium. A bath that promotes the removal of water from the subcutaneous tissue. An ideal accompaniment to treatments for cellulite and water retention.

An exclusive anti-age treatment for hands and feet with a visible effect thanks to its strong formula. The treatment provides a soft peeling followed by specific masks and creams gently massaged in, for a velvety, bright effect.
---
author:
- 'S. Fratini, D. Feinberg and M. Grilli'
date: 'Received: date / Revised version: date'
title: 'Jahn-Teller, Charge and Magnetic Ordering in half-doped Manganese Oxides'
---

Manganese perovskite oxides are currently the object of intense activity. Motivated initially by the colossal magnetoresistance phenomena, more recent studies have revealed an extremely rich phase diagram originating from the interplay of charge, lattice, orbital and magnetic degrees of freedom [@rev]. The general formula is A$_{1-x}$A$^\prime_{x}$MnO$_3$, where A is in general a trivalent rare-earth element (La, Pr, Nd) and A$^\prime$ a divalent alkaline-earth element (Sr, Ca). Substitutional doping makes it possible to explore the full phase diagram, from $x = 0$ to $x = 1$. At the extremes, LaMnO$_3$ and CaMnO$_3$ are antiferromagnetic insulators. The former is a layered antiferromagnet, which can be explained by the large Jahn-Teller couplings of the $e_g$ electrons of Mn$^{3+}$ ions [@us]. The latter shows Néel ordering due to the antiferromagnetic exchange of $t_{2g}$ electrons [@wollan]. With doping, the double-exchange phenomena originating from Hund's coupling between $e_g$ and $t_{2g}$ electron spins can stabilize a metallic ferromagnetic phase [@Zen; @And; @DeGen]: coherent band motion occurs for ferromagnetic ordering, while strong inelastic scattering takes place in the high-temperature paramagnetic phase. Very large magnetoresistance is obtained when the applied magnetic field is able to align the $t_{2g}$ spins, thereby favouring the metallic phase. Nevertheless, it has been pointed out that spin scattering alone is not sufficient to explain the phenomenon quantitatively. Millis [*et al.*]{} [@Millis] suggested that a large electron-lattice coupling is involved, with the formation of Jahn-Teller polarons in the insulating phase. Such large couplings are to be expected from the very large cooperative Jahn-Teller distortions existing in LaMnO$_3$: those deformations involve variations of more than ten per cent in the $Mn-O$ bond lengths around all Mn$^{3+}$ ions. Local deformations have indeed been revealed in charge-disordered phases by X-ray and neutron spectroscopy, as well as by optical measurements. They consist of Jahn-Teller deformations around Mn$^{3+}$ ions, and "breathing-mode" deformations with shorter $Mn-O$ bonds around Mn$^{4+}$ ions. The role of these deformations becomes even more important in the charge-ordered (CO) phases of doped manganites. These phases strongly compete with the ferromagnetic metallic (FM) one at sufficient doping. Besides the Coulomb interaction between electrons on Mn ions, the electron-phonon interaction should play a prominent role in this phenomenon. This is exemplified by the nature of the charge ordering at half-doping, for instance in La$_{0.5}$Ca$_{0.5}$MnO$_3$: while Mn$^{3+}$ and Mn$^{4+}$ ions alternate in two directions (say, a and b), in the third direction (which we here define as the c-axis) one finds rows of Mn$^{3+}$ or Mn$^{4+}$ ions. If the CO were exclusively due to the intersite Coulomb interaction, one would on the contrary expect a Wigner-crystal ordering, alternating in all directions. This shows that cooperative lattice distortions are an essential ingredient for understanding charge ordering [@yunoki].
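To make the electrostatic argument concrete, here is a small counting sketch (ours, not from the paper): with a purely nearest-neighbour repulsion $V$ and the two charge species represented as $s_i=\pm 1/2$, the Wigner-crystal pattern alternating in all three directions is electrostatically cheaper than the observed pattern with like-ion rows along the c-axis, so the latter cannot be explained by the intersite Coulomb repulsion alone.

```python
import numpy as np

# Nearest-neighbour repulsion energy per site, E = V * sum_<ij> s_i s_j,
# with s_i = +1/2 (Mn3+) and -1/2 (Mn4+), for two candidate charge patterns
# on a periodic cubic lattice (V set to 1; only the comparison matters).
def nn_energy_per_site(pattern, L=8):
    s = np.fromfunction(pattern, (L, L, L), dtype=int)
    # each bond is counted exactly once by rolling along each of the 3 axes
    return sum(np.sum(s * np.roll(s, 1, axis=ax)) for ax in range(3)) / L**3

wigner = lambda x, y, z: 0.5 * (-1.0) ** (x + y + z)  # alternating in a, b and c
observed = lambda x, y, z: 0.5 * (-1.0) ** (x + y)    # like-ion rows along c

print("Wigner crystal :", nn_energy_per_site(wigner))    # -0.75 V (6 unlike bonds)
print("Row stacking   :", nn_energy_per_site(observed))  # -0.25 V (4 unlike, 2 like)
```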
Charge ordering at $x = 0.5$ is accompanied by CE-type antiferromagnetic order: in the ab directions, it involves ferromagnetic and antiferromagnetic zigzag chains crossing each other. A qualitative explanation was given long ago by Goodenough [@Good], following the pioneering structural analysis of Wollan and Koehler [@wollan]: the cooperative Jahn-Teller distortions are accompanied by orbital ordering, and induce the magnetic structure. Moreover, away from half-doping, this CE structure appears as an elementary "brick" from which more complicated charge-ordering patterns such as "stripes" are built [@mori]. It is thus especially robust and calls for a detailed explanation. A few models have been proposed to explain CE ordering, putting the emphasis either on intersite Coulomb interactions [@pandit], or on magnetism and orbital ordering [@solovyev; @jackeli]. Mizokawa et al. [@mizokawa], and Yunoki and coworkers [@yunoki], have underlined the prominent role of Jahn-Teller deformations. Let us first list and roughly estimate the various energy scales in the system. The on-site Hubbard repulsion $U$ and the atomic level difference between the $e_g$ orbitals of manganese and the $2p$ orbitals of the oxygen are of the order of several $eV$, larger than the total conduction bandwidth ($W \sim 3 eV$). The Hund coupling $J_H$ is of order $1 eV$, while the intersite Coulomb repulsion seems not to be larger than $0.5 eV$. The Jahn-Teller splitting in the insulating LaMnO$_3$ phase is comparable, as shown by spectroscopy and optical absorption measurements [@dessaushen; @Jung]. In terms of a local electron-phonon coupling, it is reasonable to think of energies of the order of $0.2-0.3 eV$, comparable to the intersite $e_g$ hopping integrals $t_0 \sim 0.1-0.4 eV$, depending on the d-orbitals involved. On the other hand, the magnetic couplings (which in a cubic lattice give rise to critical temperatures $T_c$ between $100K$ and $400K$) are in the range of a few $meV$. This holds for the superexchange (antiferromagnetic) couplings as well as, more surprisingly, for the (ferromagnetic) double-exchange ones. It has been shown by Zener [@Zen] that $T_c^{DE} \sim \alpha t_0$, i.e. it is proportional to the total kinetic energy of the carriers. As will be shown below, $\alpha$ is quite small and the actual values of $T_c^{DE}$ can easily be explained with a realistic $t_0$, for instance within de Gennes's mean-field picture [@DeGen]. This hierarchy of energy scales is completed by the one set by the external magnetic field needed to turn the FM phase into the CE (AFCO) phase: it ranges from a few teslas to $20$ teslas or more. In terms of energy per atom this is very small, of the order of $0.4-4 meV$. It is thus consistent with the values of the magnetic exchange constants, but much smaller than all the other scales. This points towards an important conclusion: the stringent competition between the above phases requires that their free energies be very close, within a few $meV$ per atom. Owing to the much larger electron-phonon and Coulomb interactions, it is reasonable to suppose that these play a dominant role in stabilizing the low-temperature CE phase. The necessary conclusion is that the CE and FM phases are (meta)stable minima of the free energy, separated by rather high barriers. This is consistent with the fact that the phase transitions (with temperature or magnetic field) between charge-ordered and charge-disordered phases are first-order, with strong hysteresis under magnetic field.
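As a quick arithmetic check of the last estimate (our sketch; the choice $g = 2$ for the $t_{2g}$ core spin $S = 3/2$ is our assumption):

```python
# Zeeman energy per Mn site, E = g * mu_B * S * H, for the field range quoted
# above (g = 2 and S = 3/2 for the t2g core spin are our assumptions).
MU_B = 5.788e-2  # Bohr magneton in meV/T

def zeeman_mev(H_tesla, g=2.0, S=1.5):
    """Energy (meV) gained by fully aligning a spin S in a field of H tesla."""
    return g * MU_B * S * H_tesla

for H in (2.0, 20.0):
    print(f"H = {H:4.0f} T  ->  {zeeman_mev(H):.2f} meV per site")
# prints ~0.35 meV at 2 T and ~3.5 meV at 20 T, matching the 0.4-4 meV range
```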
Tendencies toward phase separation between FM and CO phases have been demonstrated in La$_{0.5}$Ca$_{0.5}$MnO$_3$, Pr$_{0.7}$Sr$_{0.3}$MnO$_3$ and other compositions. One should also notice that charge ordering is always strong when it exists: fine tuning of the chemical composition between CO and FM low-temperature phases [@tokura2] does not make it possible to stabilize "weak" charge ordering. This points towards strong interactions (electron-phonon or Coulomb) in the insulating phase, which are screened in the metallic phase. This feature is overlooked by mean-field treatments, but can be recovered by taking into account exchange-correlation corrections to the intersite Coulomb repulsion, as shown by Sheng and Ting [@sheng]. Since the lattice distortions also originate from Coulomb interactions (between $Mn$ and $O$ ions), we propose here to generalize the screening idea to the electron-phonon interactions, using for this purpose a phenomenological approach. Given the complexity of the overall Hamiltonian, we restrict ourselves to a single-orbital model in two dimensions, which qualitatively reproduces the various phase diagrams and their tuning by subtle variations of the bandwidth. Our goals are: i) obtaining, for realistic values of the parameters, FM, CE and paramagnetic phases; ii) exploring, by small variations of those parameters, the different kinds of phase diagrams with temperature and magnetic field: of the type of La$_{0.5}$Sr$_{0.5}$MnO$_3$ (no charge ordering, FM-PM transition with increasing $T$); of the type of Nd$_{0.5}$Sr$_{0.5}$MnO$_3$ (CE-FM-PM transitions with $T$, CE-FM with $H$); of the type of Pr$_{0.5}$Ca$_{0.5}$MnO$_3$ (CE-PMCO-PM transitions with $T$, CE-FM with $H$); iii) obtaining first-order transitions between the CE and FM phases. Taking orbital ordering explicitly into account should not change the results qualitatively, since it works in the same direction [@yunoki; @jackeli], but may lead to quantitative improvement.

Model and approximations
========================

Hamiltonian
-----------

According to the arguments given in the introduction, we assume an infinite repulsion ($U=\infty$) between electrons on the same lattice site, and an infinite Hund coupling ($J_H=\infty$) between the localized $t_{2g}$ spins and the itinerant $e_g$ spins. One can therefore consider spinless electrons, their spin degree of freedom being unequivocally defined by the direction of the local $t_{2g}$ spins $\vec{S}$. Furthermore, we consider in this work a two-dimensional plane of the structure, with a half-filled band made of a single $e_{g}$ orbital. The effective model Hamiltonian is then: $$\label{H} H= H_{DE}+H_{Coul}+H_{ph}+H_{SE}+H_{H}$$ with $$\begin{aligned} H_{DE}&=& - \sum_{<ij>} \tilde{t}_{ij} c^\dagger_{i} c_{j} \\ H_{Coul} &=& \sum_{<ij>} V (n_i-n) (n_j-n)\\ H_{ph}&=& \frac{1}{2} \sum_i [K_b Q_{bi}^2 +K_2 Q_{2i}^2 +K_s Q_{si}^2 ] \\ && \hspace{-1.5cm} -\sum_i g_2 Q_{2i} (n_i-n) + \sum_i g_b Q_{bi} (n_i-n) - L_s \sum_{<ij>} Q_{si} Q_{2j} \\ H_{SE}&=& \sum_{<ij>} [J_1-J_2 Q_s] \vec{S}_i\cdot \vec{S}_j \\ H_{H}&=& -g\mu_{b}\vec{H} \sum_{i}\vec{S}_i\end{aligned}$$ The first term $H_{DE}$ represents the double-exchange hopping of electrons on a square lattice. Here $c^\dagger_{i}$ and $c_{i}$ are respectively the creation and annihilation operators for spinless electrons in a single band, and $\tilde{t}_{ij}=t \cos (\theta_{ij}/2)$ is the transfer integral between neighboring Mn sites whose ionic spins $\mathbf{S}_i$ and $\mathbf{S}_j$ make an angle $\theta_{ij}$ [@DeGen].
The second term $H_{Coul}$ describes the Coulomb repulsion between nearest neighbors ($n_i=c^\dagger_i c_i$, and $n$ is the average electron density, equal to $1/2$ in the present case). The third term $H_{ph}$ is the elastic part, which includes the coupling of electrons to a Jahn-Teller (JT) mode $Q_2$ and of holes to a "breathing" mode $Q_b$ ($g_2$ and $g_b$ are the coupling strengths, $K_2$ and $K_b$ the spring constants). In the planar geometry considered here, the other Jahn-Teller mode $Q_1$ is not relevant. We have also introduced a shear mode $Q_s$, which is driven by $Q_2$. Such a shear deformation, which is experimentally observed at low temperatures, is essential to reconcile the alternating Mn$^{4+}$ breathing and Mn$^{3+}$ JT distortions which develop in the ordered phases. A substantial shear deformation is indeed observed in La$_{0.5}$Ca$_{0.5}$MnO$_3$ [@radaelli]. It results in some $Mn-O-Mn$ bonds being shorter and others longer ("zig-zag" chains, see Fig. 1). The term $H_{SE}$ represents the antiferromagnetic (AF) superexchange interaction $J_1$ between the ionic spins on neighboring sites, which are treated as classical. The additional term $J_2Q_s$ is a phenomenological implementation of the Goodenough rule: it can either enhance or reduce the AF coupling depending on the sign of the shear deformation, which accounts for the fact that longer (shorter) Mn-Mn bonds have a more (less) antiferromagnetic character [@Good]. The last term $H_{H}$ takes into account the external magnetic field. We shall study the Hamiltonian (\[H\]) in the mean-field approximation, describing the charge-ordered (CO) phase as a charge density wave (CDW) with momentum $(\pi,\pi)$. Let us call $\bar{n}^A$ and $\bar{n}^B$ the average electron densities in the two resulting sublattices, which correspond respectively to the Mn$^{3+}$ and Mn$^{4+}$ ions. We shall further assume that the JT coupling is only active on A sites, while the breathing deformations arise on B sites. With these approximations, the terms in the Hamiltonian which depend explicitly on $(n_i-n)$ reduce to $$H_{MF}=-\Delta \sum_{i \in A or B} (n_i^A-n_i^B) + const$$ where the order parameter $\Delta$ is defined as $$\label{Delta} \Delta=2V(\bar{n}^A-\bar{n}^B) + (g_bQ_b+g_2Q_2)/2$$ and the chemical potential has been set to zero by adding a term $ \Delta\mu=-(g_2Q_2-g_bQ_b)/2 $ to recover particle-hole symmetry (with these notations, the choice $\bar{n}^A \ge \bar{n}^B$ corresponds to the $Q$'s being all positive). The magnetic part is also treated in mean field, according to de Gennes' procedure [@DeGen], using a gaussian distribution for the angle of the classical spins with respect to the mean-field direction. We consider the following magnetic phases: ferromagnetic (F), paramagnetic (P), Néel antiferromagnetic (NAF), and CE-type ordering, with ferromagnetic zig-zag chains coupled antiferromagnetically (CE). The most general unit cell which allows one to describe all these phases is made of 8 nonequivalent Mn sites in a plane (Fig. 1). In each of these magnetic configurations, the total free energy is minimized with respect to the following parameters: i) the magnetization on the non-equivalent magnetic sites, ii) the average electron density $\bar{n}^A$ on sublattice A ($\bar{n}^B$ being just $1-\bar{n}^A$), and iii) the lattice displacements.
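To illustrate the structure of this self-consistency, here is a minimal sketch (ours, not the authors' code) restricted to the charge sector at $T=0$: the magnetic and shear degrees of freedom are dropped, and the lattice modes are eliminated at their equilibrium values $Q_2=g_2(\bar{n}^A-n)/K_2$ and $Q_b=g_b(n-\bar{n}^B)/K_b$, so that the gap equation closes on $\Delta$ alone, with an effective coupling $V_{eff}=2V+(g_2^2/K_2+g_b^2/K_b)/4$.

```python
import numpy as np

# Minimal sketch (ours): self-consistent (pi,pi) CDW gap for spinless fermions
# on a square lattice at half filling and T = 0.  The lattice modes are
# eliminated at their equilibrium values, so Delta = V_eff * (nA - nB) with
# V_eff = 2V + (g2**2/K2 + gb**2/Kb)/4.  Magnetic sector and screening omitted.
def cdw_gap(V_eff, t=1.0, nk=256, max_iter=2000, tol=1e-9):
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    kx, ky = np.meshgrid(k, k)
    eps = -2.0 * t * (np.cos(kx) + np.cos(ky))  # bare square-lattice band
    delta = 0.5 * t                             # initial guess for the gap
    for _ in range(max_iter):
        # sublattice occupation difference nA - nB of the filled lower band
        dn = np.mean(delta / np.sqrt(eps**2 + delta**2))
        new = V_eff * dn
        if abs(new - delta) < tol:
            break
        delta = new
    return delta, dn

for V_eff in (1.0, 2.0, 4.0):
    delta, dn = cdw_gap(V_eff)
    print(f"V_eff = {V_eff:.1f} t :  Delta = {delta:.4f} t,  nA - nB = {dn:.4f}")
```

Plain fixed-point iteration suffices here because the right-hand side is monotonic in $\Delta$; the full calculation of the paper instead minimizes the free energy simultaneously over the magnetizations, the sublattice density and the displacements, with the exchange-correlation correction of the next subsection feeding back on $\Delta$.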
Phenomenological treatment of screening effects
-----------------------------------------------

As mentioned in the Introduction, to have a realistic description of the phase diagram which includes both the metallic and charge-ordered phases at half-filling, it is necessary to go beyond the simple mean-field approach described in the preceding section by including the effects of exchange and correlation. Such effects on the intersite Coulomb repulsion in the half-doped manganites have recently been analyzed within an RPA-like calculation [@sheng], which is known to be appropriate for interacting electron systems at metallic densities. Since a detailed study goes beyond the scope of this paper, we propose here a semi-phenomenological treatment of screening which allows a qualitative description of the transition between CO and metallic states, and which correctly reproduces the results of reference [@sheng]. The method is further generalized to describe the screening of the electron-lattice interactions. The latter are in fact due to the Coulomb repulsion between $Mn$ and $O$ ions, and are therefore also screened in the metallic phase. This screening should be weaker than that of the $Mn-Mn$ interactions, since it involves $Mn-O$ rather than $Mn-Mn$ charge fluctuations, but it should be sizeable. The procedure is carried out in two successive steps. The first step consists in writing a reasonable estimate for the exchange-correlation energy $E_{xc}$, which is defined as the correction to the ground-state energy beyond the Hartree mean-field result. In the second step, we shall define an effective Hamiltonian $H_{xc}$, to be treated in the mean-field approximation, such that $$\label{Hav} \langle H_{xc} \rangle = E_{xc}$$ This results in a modification of the atomic energy levels $\pm \Delta$ (i.e. in a reduction of the CDW gap), and it yields a correction to the free energy which is precisely of the form $E_{xc}$.

### Exchange-correlation energy

Let us start by analyzing the simple ferromagnetic case at $T=0$, where the electron hopping is not renormalized by the mechanism of double exchange. In the metallic phase, which corresponds to a vanishing order parameter $\Delta$, the leading correction comes from the exchange (Fock) terms. These terms are responsible for an increase of the carrier itinerancy, which can be viewed as a renormalization of the hopping parameter $t\rightarrow t+V\langle c^\dagger_ic_{i+\delta}\rangle $. Hence the kinetic energy is lowered by a quantity proportional to the interaction potential $V$, and one can write $E_{xc}= -a V$ (the parameter $a$ is related to the dielectric constant of the system). On the other hand, in the CO phase, i.e. at strong $\Delta$, the correlation energy corresponds to the interaction between density fluctuations on neighboring sites, each of them being proportional to $\delta n \sim t/\Delta$. Therefore, in this case the appropriate limiting formula is $E_{xc}\sim -V (t/\Delta)^2$. These results can be generalized to the screening of the electron-lattice interactions by replacing $V\rightarrow g^2/K$ and by introducing the corresponding order parameter $\Delta$ as given by eq. (\[Delta\]).
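To make the two limiting regimes concrete before the interpolation formula is introduced below, here is an illustrative sketch (ours; $a = 0.2$ and $V = 0.5\,t$ are placeholder values, not fitted parameters):

```python
# Limiting forms of the exchange-correlation energy discussed above:
# metallic (Delta -> 0): E_xc = -a*V, a constant exchange (Fock) gain;
# charge-ordered (large Delta): E_xc ~ -V*(t/Delta)**2, from weak density
# fluctuations dn ~ t/Delta on neighboring sites.
t = 1.0
a, V = 0.2, 0.5  # placeholder screening parameter and n.n. repulsion (ours)

for delta in (0.5, 1.0, 2.0, 4.0, 8.0):
    e_metal = -a * V
    e_co = -V * (t / delta) ** 2
    print(f"Delta = {delta:3.1f} t :  metallic limit {e_metal:+.3f} t,"
          f"  CO limit {e_co:+.3f} t")
```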
A smooth interpolation between the weak- and strong-coupling behaviors can be obtained by writing the following expression for the exchange-correlation energy: $$\label{Exc} E_{xc} = - \frac{aV+b(g_b^2/K_b+g_2^2/K_2)} {1+c \left( \frac{\Delta}{t}\right)^2}$$ where $a$, $b$ and $c$ are phenomenological parameters (the ratio $a/c=1.44$ can be deduced from ref. [@sheng], and $b/a=1/10$ is chosen according to the ionic distances). As was stated at the beginning of this section, this formula is only appropriate in the ferromagnetic case. It does not account for the fact that the mobility of the carriers taking part in the screening process is affected by the magnetic structure through the DE mechanism. We shall give here the arguments which allow a generalization of eq. (\[Exc\]) to the different kinds of magnetic orderings. In the free-electron limit ($\Delta\rightarrow 0$), where screening is due to coherent band motion, one expects the correlation energy to be reduced by a factor $\tilde{t}/t$, where $$\label{teff} \tilde{t}=t \langle \cos(\theta_{ij}/2) \rangle$$ is the effective DE hopping parameter averaged over all space directions (this gives respectively $1$, $8/9$ and $1/2$ in the F, P and CE phases). The situation is slightly more complicated in the charge-ordered phases, because $E_{xc}$ comes from incoherent hopping of the carriers to neighboring sites. According to Hund's rule, such processes are allowed only between sites with parallel spins, which defines an effective number of neighbors $\tilde{z}\le z$. In the CE phase, for instance, the lattice can be divided into U (up) and D (down) sites, according to the spin direction. Since each site has 2 U and 2 D neighbors, a given electron can only hop to the 2 neighbors with the same spin direction, and consequently $\tilde{z}/z=1/2$. At finite temperatures, however, thermal fluctuations reduce the absolute value of the local magnetization $m$. Accordingly, there is a finite probability that a given U site carries a $\downarrow$ spin, given by $n_U^{\downarrow}=(1-m_U)/2$ (an equivalent expression holds for D sites). The total probability for hopping away from a U site is therefore $$2 n_U^\uparrow n_U^\uparrow +2 n_U^\downarrow n_U^\downarrow + 2 n_U^\uparrow n_D^\uparrow +2 n_U^\downarrow n_D^\downarrow$$ where obviously $n_U^{\uparrow}=1-n_U^{\downarrow}$. By adding the contributions for hopping processes starting from both U and D sites and dividing by 2, we obtain $$\label{zeff} \tilde{z}=\frac{1}{2} \left[ 4+ (m_U+m_D)^2\right]$$ which correctly gives $\tilde{z}/z=1$, $1/2$, $1/2$ for the F, P, CE phases at $T=0$. Here the factors (\[teff\]) and (\[zeff\]) introduce a feedback on the itinerancy of the electrons in the case of an applied magnetic field, which tends to align the spins ferromagnetically. This effect is essential in reducing the critical $H$ for the CE-FM transition at low temperatures, as we shall see below. For each given magnetic configuration, instead of eq. (\[Exc\]), we shall use the following formula for the exchange-correlation energy: $$\tilde{E}_{xc} = - \frac{\left[\tilde{a}V+\tilde{b}(g_b^2/K_b+g_2^2/K_2)\right]} {1+\tilde{c} \left( \frac{\Delta}{t}\right)^2}$$ where the screening parameters $a$, $b$, $c$ have been modified according to $$\tilde a= a \frac{\tilde{t}}{t}, \hspace{1cm} \tilde{b}= b \frac{\tilde{t}}{t}, \hspace{1cm} \tilde{c}= c \frac{z}{\tilde{z}}\frac{\tilde{t}}{t}$$ We emphasize here that the terms in the numerator of Eq.
(\[Exc\]) are rescaled by the $\tilde{t}/t$ factor since they arise from the coherent screening processes (mostly active when $\Delta \to 0$). On the other hand, the local (incoherent) screening processes related to the term in the denominator of Eq. (\[Exc\]) are also rescaled by the effective number of accessible nearest-neighbor sites.

### Mean-field potential from exchange and correlation

We wish to define an effective Hamiltonian $H_{xc}$, to be treated in the mean-field approximation, such that the correction to the free energy is equal to $E_{xc}$. To this purpose, we replace $\Delta$ by an operator $\hat{\Delta}$ (e.g. the mean-field parameter $n^A$ is replaced by $\sum_{i \in A} n^A_i$), and linearize the resulting expression. This gives $$\begin{aligned} \label{Hexop} H_{xc}= B \tilde{c} \frac{V}{t}\frac{\Delta}{t} \sum_{i\in A or B} (n^A_i-n^B_i) + const\end{aligned}$$ where we have defined $$B= \left[\tilde{a}V+\tilde{b}(g_b^2/K_b+g_2^2/K_2)\right]/ \left[1+\tilde{c} \left( \frac{\Delta}{t}\right)^2\right]^2$$ The constant part in eq. (\[Hexop\]) is $$-B \left\lbrace 1+ \tilde{c} \frac{\Delta}{t^2} \left[ 6V(\bar{n}^A-\bar{n}^B) + (g_bQ_b+g_2Q_2)/2 \right] \right\rbrace$$ It is easy to verify that eq. (\[Hav\]) holds when $\bar{n}^A = \langle n^A \rangle$ and $\bar{n}^B = \langle n^B \rangle$. One notices that a dielectric constant can be deduced from the screening of the gap, by writing $$\Delta \rightarrow \Delta_{eff}=\Delta -B\tilde{c}\frac{V\Delta}{t^2}$$ which gives $$\varepsilon=\frac{\Delta}{\Delta_{eff}}=\frac{1}{1-c \frac{V}{t^2} \frac{\tilde{a}V+\tilde{b}(g_b^2/K_b+g_2^2/K_2)}{ 1+\tilde{c} \left( \frac{\Delta}{t}\right)^2}}$$

Results
=======

The phase diagram: Existence of a CE phase
------------------------------------------

The Hamiltonian in Eq. (\[H\]) is formed by several competing terms, and the corresponding phase diagram contains several phases, each one dominating in some region of the parameter space. To make the analysis simpler, we choose to vary together those parameters having similar physical effects. In particular, the electron-phonon couplings generically reduce the electron mobility and, at the mean-field level, tend to give rise to a staggered charge ordering, acting similarly to the n.n. electron-electron repulsion $V$. Therefore, in varying $V$ we keep the ratio $V/(g^2/K)$ constant. For the sake of simplicity, we also keep a fixed $J_2/J_1$ ratio, although this is not the only possible choice. Figure 2 reports typical phase diagrams at various temperatures as a function of the magnetic coupling $J = J_1 S^{2}/t$, with $J_{1} = J_{2}$ ($S = 3/2$), and of the repulsive e-e interaction $V=0.5(g^2/K)$. At low temperature (left panel) there is a metallic (i.e. without charge ordering) ferromagnetic phase (FM) when the charge-ordering (CO) terms $V$ and $g^2/K$ are not too large. This FM phase is naturally suppressed by the increase of the antiferromagnetic (AF) superexchange coupling $J_1$. When the charge mobility is suppressed by the CO terms, one finds two distinct possible phases. At low values of the AF coupling the pure CO effects dominate, and a ferromagnetic (F) CO phase occurs at sufficiently large values of $V$ (F-CO). The transition is first order, as found in Ref. [@sheng], due to the exchange-correlation terms. On the other hand, by increasing the AF coupling, the CO ferromagnetism is destabilized and a CE phase takes place.
This latter phase naturally realizes the best compromise between the electron mobility favored by the ferromagnetic bonds, the CO, and the AF interactions increasing with $J_1$. The CE ordering arises due to competing lattice displacements. In particular, a substantial shear mode is induced in the lattice to reconcile the (asymmetrical) JT deformations occurring at the Mn$^{3+}$ ions with the (centrosymmetric) breathing deformations around the Mn$^{4+}$ ions. The resulting lattice structure displays zig-zag chains formed by long bonds interlaced with zig-zag chains of short bonds. Then the peculiar CE magnetic structure naturally appears. In particular, according to Goodenough [@Good], orbital ordering correlates the sign of the magnetic couplings with the length of the bonds, with AF (F) magnetic couplings corresponding to short (long) bonds. Therefore, the lattice-driven chains with short bonds and with long bonds naturally translate into a lattice-driven CE magnetic structure. The temperature evolution is represented in the three panels of figure 2. By increasing $T$, the weak ferromagnetism surviving in the CO phase at small $J$ is rapidly suppressed in favor of a CO paramagnetic phase. Also the CE region, being due to a delicate balance between CO, FM, and AF interactions, shrinks rather rapidly. The FM phase at small values of $J_1$ is instead based on the double-exchange mechanism, which is more robust and, upon increasing $T$, is only weakly "invaded" by the CO paramagnetic phase. One also observes that compounds with a low-temperature CE magnetic structure, but lying near the CE-FM phase boundary, can undergo a first-order CE-to-FM transition upon increasing the temperature. It is worth pointing out that, within our formal scheme, the CO paramagnetic phase is the only possible mean-field description of an insulating non-magnetic phase at moderate temperatures. The explored temperature range is indeed much too low to allow for a thermal disruption of the CO, which would instead occur at temperatures of the order of $V$. However, in strong coupling, one may speculate that a more refined description would allow for the disordering of the charge (possibly without spoiling the local lattice deformations around the charges), thus producing the disordered paramagnetic (and polaronic) phase which is observed in all manganites above a few hundred kelvin.

Sensitive role of the electronic bandwidth
------------------------------------------

A key issue is the role of the kinetic energy in the competition between the different phases. An extended experimental analysis of the phase diagrams of the various manganites [@tokura; @tokura2] suggests that the electronic bandwidth, among other parameters such as lattice disorder, plays a primary role in determining the stability of, and the competition between, the FM and the insulating phases. In our model we investigated this point by varying the bare hopping amplitude in front of the double-exchange term in the Hamiltonian (\[H\]). We also assume that the same mechanism inducing the variation of the nearest-neighbour hopping of the itinerant electrons in the $e_g$ orbitals is responsible for variations of the hopping of the $t_{2g}$ electrons as well.
This affects the superexchange couplings $J_1$ and $J_2$; in particular, $J_1$ is expected to arise from second-order hopping processes of the $t_{2g}$ electrons, $J_1\propto t^2/U_{t_{2g}}$ (where $U_{t_{2g}}$ is some effective repulsion between electrons in the same doubly occupied $t_{2g}$ orbital). According to this assumption, when the hopping $t$ is increased without changing the intersite repulsion $V$, one moves downwards in the phase diagrams of Fig. 2, where the variable $V/t$ is reported on the $y$-axis. At the same time, however, the increasing $t$ produces an increase in the $x$-axis variable $J \propto t$. Therefore, by keeping all the Hamiltonian parameters fixed except $t$ and $J_1=J_2$, one moves in the phase diagram along the dotted curves $V/t=A/J$ reported in Fig. 2. These curves correspond to similar physical systems, in which only the nearest-neighbour electronic hopping amplitudes have been varied. It immediately appears that systems differing only slightly in the electronic hopping amplitude can have different magnetic structures at low temperature (Fig. 2, left panel). In particular, by increasing $t$ a first-order transition can occur at low temperature from a CE to a FM phase. Furthermore, the temperature evolution of systems with different bandwidths, but lying near a phase boundary, can be different. This is made apparent in Fig. 3, where we report the temperature-magnetic field phase diagrams for three systems with different hopping (and therefore also different magnetic couplings $J_1$) lying on the same dotted curve $(V/t)=0.04/J$. The three phase diagrams correspond to systems with slightly different magnetic couplings (differing at most by ten percent). Nevertheless, already at zero magnetic field, the three systems display a completely different evolution in temperature. The most insulating (i.e., smallest $t$) system, having $J=0.037$ (so that $V/t=1.081$), never becomes metallic at zero field, but only undergoes a first-order transition from a low-temperature CE phase to a paramagnetic insulating phase at $T\simeq 0.032 t$ (left panel, see also Figs. 4, 5). In the more metallic system (center panel), having $J=0.038$ (and $V/t=1.053$), the CE phase disappears at a lower temperature $T\simeq 0.02 t$ and is replaced by an intermediate FM phase. The ferromagnetic order and the metallicity are then destroyed at a higher temperature $T\simeq 0.032t$, where a paramagnetic CO phase takes place. We reiterate that this latter phase is best understood as the mean-field representation of a disordered paramagnetic insulating phase. Finally, at even larger values of $J=0.04$, corresponding to $V/t=1$, the metallic phase is present already at low temperature and survives up to $T\simeq 0.05 t$. The relevant role of the kinetic energy in stabilizing the uniform metallic FM phase at the expense of the CO phases, and particularly of the one with CE magnetic order, is made even more apparent in the presence of a magnetic field. This is particularly clear in the first panel of Fig. 3, where the FM phase, which would be absent at zero field, becomes the most stable solution at large enough $H$. It is also worth mentioning that, due to the presence of screening, the metallic uniform solution is always a (local) minimum of the free energy. Therefore an (at least metastable) metallic solution is present even at zero field. The existence of a (local) minimum is a necessary condition for the occurrence of hysteretic behavior at the transition.
Of course, the extent of the hysteresis region also depends on the height of the free-energy barrier between the minima, on domain walls, and on other non-equilibrium properties. Nevertheless, the region in $T$ and $H$ where two minima exist provides an upper estimate for the hysteresis region experimentally observed in half-doped Pr$_{0.5}$Sr$_{0.5}$MnO$_3$, Nd$_{0.5}$Sr$_{0.5}$MnO$_3$, (Nd$_{1-y}$Sm$_y$)$_{0.5}$Sr$_{0.5}$MnO$_3$ and Pr$_{0.5}$Ca$_{0.5}$MnO$_3$ [@tokura; @tokura2].

Discussion
----------

The previous subsection illustrated the main results of the present work:

i) The CE phase does arise in the present one-orbital model and is crucially related to the competition between the JT and breathing distortions involved in the charge-ordered state on the $Mn^{3+}$ and $Mn^{4+}$ sites respectively. The shear lattice deformation results from this competition and couples to the magnetic degrees of freedom, namely through orbital ordering.

ii) Exchange-correlation corrections are essential to stabilize a metallic ferromagnetic phase, due to the substantial screening of both Coulomb and electron-lattice interactions.

iii) The kinetic energy is a most effective parameter in determining the relative stability of the various phases upon varying the temperature and the magnetic field.

As far as point i) is concerned, in the present work we show that the CE phase does not only arise from electronic mechanisms based on the presence of (at least) two orbitals per Mn site. The existence of a CE phase in a model [*without*]{} orbital degrees of freedom is quite remarkable. It is indeed repeatedly stated in the literature [@solovyev; @jackeli; @okamoto] that the CE phase is stabilized by the kinetic-energy gain arising from the orbital ordering forming ferromagnetic chains. Our results are not in contrast with this viewpoint, but underline that the above purely electronic mechanism is not the only possible one, and that the coupling with lattice degrees of freedom can be of primary importance. In this regard our results are related to previous Hartree-Fock [@mizokawa] and quantum Monte Carlo [@yunoki] calculations, where the JT deformations were claimed to be relevant for the occurrence of a CE phase. Our low-temperature phase diagrams are qualitatively similar to the one reported in Fig. 2(c) of Ref. [@yunoki], once the distinction between order and disorder in the orbital degrees of freedom is discarded [@notascalaAF]. Our contribution in this framework is to show that the lattice shear deformation is a relevant ingredient in its own right, even in the absence of cooperative mechanisms due to electronic or JT-induced orbital ordering. Regarding point iii), we notice that a systematic analysis of the role of the hopping is relevant for the general understanding of the manganites. In the real materials, of the general form R$_{1-x}$A$_x$MnO$_3$ (where R and A are trivalent rare-earth and divalent alkaline-earth ions respectively), the bandwidth can be varied by changing the radius of the perovskite A site (where the R and A ions are located). Depending on the average ionic radius, the Mn-O-Mn bond angle deviates from 180$^\circ$ in the orthorhombic lattice. The smaller the radius of the A site, the larger is this deviation, which reduces the Mn-O overlap and the effective Mn-Mn hopping. A systematic experimental analysis of this effect is reported in Ref. [@tokura2]. The results summarized in Fig. 3 allow for a unified qualitative description of different half-doped materials. In particular, the most insulating behavior in Fig.
3(a) is consistent with the generic features of Pr$_{0.5}$Ca$_{0.5}$MnO$_3$. On the other hand, the center panel of Fig. 3 shows the same qualitative behavior as La$_{0.5}$Ca$_{0.5}$MnO$_3$ or (Nd$_{1-y}$Sm$_y$)$_{0.5}$Sr$_{0.5}$MnO$_3$. Finally, the most metallic system, in the right panel, is a good qualitative description of La$_{0.5}$Sr$_{0.5}$MnO$_3$. Nevertheless, it has been pointed out [@note] that a rapid change in the lattice constant $K$, rather than necessarily small changes of $t$, could be the clue to the very different behaviours of the (A,A$^\prime$)$_{0.5}$MnO$_3$ systems. In our case this would correspond to an abrupt change along a vertical line in Fig. 2, and would enhance the first-order character of the transitions. A semi-quantitative agreement can even be reached. In fact, the pure double-exchange ferromagnetic critical temperature ($J/t = 0$) is, from our 2D mean-field calculation, $T_c\simeq 0.085t$. A 3D estimate enhances this value by a factor $3/2$, owing to the number of nearest neighbours, yielding $T_{c}^{DE} \simeq 0.13 t$. For an average value $t = 0.3 eV$ one gets a transition temperature $\simeq 450K$. It is reduced by the presence of the antiferromagnetic coupling: for instance, in panel (c) of Fig. 3, the zero-field $T_{c} \simeq 0.05t$, i.e. $\simeq 270K$ in 3D, thus supporting de Gennes's simple mean-field picture. One then obtains in the center panel the value $T_{c}^{CE} \simeq 180K$, and in the left panel $T_{c}^{CE} \simeq 170K$. These values are reasonable compared with the experimental ones; in particular, one notices that $T_{c}^{CE}$ is strongly reduced compared to $T_{c}^{DE}$. This is due to the competition between the two order parameters (ferro and antiferro). Another way to understand it is to notice that in the CE phase the effective dimensionality is reduced by chain formation; together with charge localization, this reduces the effective strength of the double exchange. On the other hand, the effective antiferromagnetic exchange is close to that of stoichiometric CaMnO$_{3}$, with a $T_{c}$ of $120K$. We have also systematically investigated the role of the magnetic field in stabilizing the uniform FM phase. The typical energy differences involved in the first-order transitions are so small that accessible magnetic fields are sufficient to drive the transition from the insulating to the FM phase. Specifically, taking a typical value $t \simeq 0.3 eV$, one can see that $H/t = 0.015$ (where $H$ stands for the Zeeman energy $g \mu_{b} S H_{ext}$) roughly corresponds to ten teslas. This value agrees well with the typical fields experimentally used to investigate the (T,H) dependence of the low-temperature CE insulating phase and the intermediate-temperature uniform FM phase [@tokura; @tokura2].

Conclusion
==========

Let us compare our approach with other models which have been proposed to describe the half-doped compounds. In our treatment, the main ingredient responsible for the CE-type magnetic ordering is the appearance of a shear deformation, with the consequent modification of the magnetic coupling along certain directions. In ref. [@yunoki], two different orbitals are retained for the $e_g$ electrons, but the shear deformation is absent. In their approach, the CE order arises because the orbitals prefer to have large overlaps along certain directions, thus favoring the kinetic energy along zig-zag chains where the spins are aligned ferromagnetically. In ref.
[@jackeli], the electron-lattice interaction is absent, and it is again the anisotropic $e_g$ transfer amplitude of the two-orbital model which drives the CE state. In both cases, however, the AF coupling $J\sim 0.1 t$ necessary to achieve the CE state is one order of magnitude larger than what is estimated from experiments, signaling that some additional effect must contribute to the CE ordering. Finally, we reiterate that self-consistent screening is necessary to explain that phases with marked charge order come into very close competition with metallic phases. We believe that this is a crucial feature of doped manganites, which further models addressing the coexistence and texturing of those phases at small scales must take into account.

For reviews see: Colossal Magnetoresistance, Charge Ordering, and Related Properties of Manganese Oxides, edited by C. N. R. Rao and B. Raveau (World Scientific, Singapore, 1998); M. Imada, A. Fujimori and Y. Tokura, Rev. Mod. Phys. **70**, 1039 (1998); Colossal Magnetoresistance Oxides, ed. Y. Tokura, Gordon and Breach, Monographs in Cond. Matt. Science (1999).

D. Feinberg, P. Germain, M. Grilli and G. Seibold, Phys. Rev. B **57**, 5583 (1998); M. Capone, D. Feinberg and M. Grilli, Eur. Phys. J. B **17**, 103 (2000).

E. O. Wollan and W. C. Koehler, Phys. Rev. **100**, 545 (1955).

C. Zener, Phys. Rev. **82**, 403 (1951).

P. W. Anderson and H. Hasegawa, Phys. Rev. **100**, 675 (1955).

P. G. de Gennes, Phys. Rev. **118**, 141 (1960).

A. J. Millis, P. B. Littlewood and B. I. Shraiman, Phys. Rev. Lett. **74**, 5144 (1995).

S. Yunoki, T. Hotta and E. Dagotto, Phys. Rev. Lett. **84**, 3714 (2000).

J. B. Goodenough, Phys. Rev. **100**, 564 (1955).

S. Mori, C. H. Chen and S-W. Cheong, Phys. Rev. Lett. **81**, 3972 (1998).

S. K. Mishra, R. Pandit and S. Satpathy, Phys. Rev. B **56**, 3184 (1997).

I. V. Solovyev and K. Terakura, Phys. Rev. Lett. **83**, 2825 (1999).

G. Jackeli, N. B. Perkins and N. M. Plakida, Phys. Rev. B **62**, 372 (2000).

T. Mizokawa and A. Fujimori, Phys. Rev. B **56**, R493 (1997).

D. S. Dessau and Z.-X. Shen, in Colossal Magnetoresistance Oxides, ed. Y. Tokura, Gordon & Breach, Monographs in Cond. Matt. Science (1999).

J. H. Jung, K. H. Kim, D. J. Eom, T. W. Noh, E. J. Choi, J. Yu, Y. S. Kwon and Y. Chung, Phys. Rev. B **55**, 15489 (1997).

Y. Tokura, H. Kuwahara, Y. Moritomo, Y. Tomioka and A. Asamitsu, Phys. Rev. Lett. **76**, 3184 (1996).

L. Sheng and C. S. Ting, Phys. Rev. B **57**, 5265 (1998).

P. G. Radaelli et al., Phys. Rev. B **55**, 3015 (1997).

Y. Tokura et al., J. Appl. Phys. **79**, 5288 (1996).

S. Okamoto, S. Ishihara and S. Maekawa, Phys. Rev. B **61**, 14647 (2000).

Despite the qualitative resemblance of our phase diagrams to the one reported in Ref. [@yunoki], we also emphasize that our approach considers a strong local e-e repulsion. Possibly this major difference is responsible for the much lower values of the AF coupling $J_1$ in our results in comparison with the rather large values of $J_{AF}$ in Ref. [@yunoki].

T. Egami and Despina Louca, J. of Supercond. **13**, 247 (2000).
A/N: Well, it appears I'll have to make this clear, so here it goes. MEGADIMENSION IS NOT HAPPENING IN THIS STORY! Seriously, this story was started at the beginning of June last year (or thereabouts by the time this is done…hopefully.) That was a few months AFTER the Japanese release and I didn't know about it at the time. This story is intended to tie in Victory, MK II and the first game, and only those three. Hell, I've never seen Megadimension, so I don't know what the hell it's about. And no, I'm not going to spend however many hours of a playthrough it takes and add more to the story. I intend to finish this with what I have planned for it, and that's final. Besides, like Neptune said, this story is meant to be a somewhat fun story apart from the usual crossover stories I have (you know the ones I'm talking about). That and this story is almost three quarters done, so it would be pointless to add some dramatic villain scheme that wouldn't fit into the story. "Sighs." Damn, that's a long Author's Note. Let's get the game started already.

Disclaimer: I do not own RWBY and/or Hyperdimension Neptunia. They belong to their original creators.

RWBY: Press Start

Level Twenty Seven

Game Over

The roar of a boat's engine could be heard out over the sea. As of now, Neptune and the rest of the gang were making their way over to Leanbox. It was a journey Ruby and her team were extremely thankful for, since they'd had to cross through the Gigo Main Entrance to get to said boat. They had to fight through several of the monsters that jumped them. Thankfully, they were able to avoid the A Class monster that resided in the area. So now, they were using this time to relax and recover their HP, all while finally telling Neptune and the rest of the group what had happened to them during the festival. Naturally, Neptune was ecstatic about it. "You're finally together after how many chapters of this story now? Well, not together as a whole team but two people individually. You know what I mean." Of course they did, and to be honest, they couldn't have felt happier about it. Yang made her affections obvious by having her arm around Blake and grinning. "I have to say, I've never felt more alive." Her sister had to agree, since she and Weiss were holding hands (much to the heiress' embarrassment). "I know, right? This is so much better than all the cookies in the world." Their respective girlfriends loved this as well. After everything they had gone through, it all felt like nothing more than a distant memory. In fact, they felt better than ever, and that much was obvious from how they had fought against the monsters earlier. A team that was almost torn apart had been made stronger than ever before. Of course, they weren't the only ones who had reconciled. Noire and Plutia were finally together but decided to tell their friends a little later on. Right now, the Lastation CPU was just happy to be with the girl she had secretly longed for. Now that everything was settled, Neptune would of course ask the kind of question only she would think of. "So, now that the Whiterose and Bumblebee pairings are established, who's the neko and who's the tachi?" And cue the record scratch as the team was left very confused by the question. Noire, on the other hand, blushed at the question. "Why the hell are you asking that all of a sudden?!" The other CPU shrugged. "Why not? Don't you want to know too, Noire?" It was the last thing on the other girl's mind. "No, I do not!" Peashy was just confused by all of this. "Neptuna?
What does that mean?" "Now now, Peashy. It's something you don't need to know," Plutia stated. The team was still wondering about what Neptune had asked before. "Just what are you going on about this time?" Weiss asked. Apparently, they didn't know the terms. "Well, if you don't know, there's no sense in telling you. Although in Yang's case, Blake is definitely the neko, if you know what I'm saying." Somehow, the Faunus girl knew what Neptune was talking about. There was still the matter of the elephant in the room. "Well, if we can switch topics for a bit…" Weiss looked over at Blanc, who was reading a book to pass the time. "Why is she coming along with us?" Neptune smiled at the question. "Because why not? It'll be fun to have the old Victory team back together again for a nice reunion." Blanc wasn't as excited, since she was going to her least favorite place, ruled over by her least favorite CPU. "I'm not overly excited to see Thunder Tits again." Yang raised an eyebrow at that. "If you don't like Vert that much, then why come along anyway?" "Because I asked her to." Everyone turned to Plutia. "It would be so fun if Blanny could come with us." Blanc sighed, as the CPU knew she couldn't say no to Plutia. "Why are we heading over to Leanbox by boat again anyway? We could just fly over there." She brought up a good point, but there was a very good reason they were traveling by boat. "Do you really want Sadie to appear and scare the living daylights out of Blake?" Everyone dreaded the thought of Iris Heart appearing. Blake especially. Even the name sent shivers down her spine. Noire did wonder about something the other girl had said. "And why would Blake, in particular, be afraid of her?" "Because she's an adorable catgirl," Plutia answered. Now the other two CPUs were confused. "Catgirl?" Neptune thought to clear this up. "Yeah. Blake here is something called a Faunus from her world, and she's your genuine catgirl. You should've seen her when she first saw Sadie. It took us a bit to find her, and she was up in a tree." A funny moment to her, but to Blake, seeing Iris Heart for the first time was still one of the most terrifying moments of her life. Even the thought of her still scared the Faunus girl. Noire took a closer look at Blake and did see the resemblance. "Now that you've mentioned it, the gold eyes do kinda give it away. I'm guessing the bow is hiding the ears." Blanc saw it as well. "Well, when it comes to catgirls, some features do stick out more than others. The golden eyes are a staple." Blake couldn't believe these girls had deduced her that quickly. Then again, they were deities. Yang looked out over the sea in thought. "So, when are we arriving in Leanbox? It's been a pretty long boat ride so far." Blanc flipped over another page in her book. "About another half hour." That was more than enough time for the blonde. She stretched out her arms and lay down with her head on Blake's lap. "Well, I'm going to take a little catnap. You don't mind, right, Blake?" Her new girlfriend smiled at the other girl. "Not at all." And with that, Yang closed her eyes and enjoyed the feeling of having Blake's lap as a pillow. Plutia had the same idea as she lay down with her head on Noire's lap. She sighed contentedly at the feeling. "This feels so good. Nighty night, Noire." She quickly fell asleep. Noire couldn't help but be embarrassed by this, but that didn't stop her from loving it inside her head. It did capture the attention of the other girls. Neptune in particular. "So, Noire. What's the deal with you and Plutie?"
Noire didn't reply, but she did see the jealousy Blanc had. In return, she gave the other CPU a smirk, as if to say she'd won.

Loading…

After the hour-and-forty-five-minute boat ride, the group had finally arrived at Leanbox and were walking through the streets. Team RWBY was…less than amazed by it. They were still impressed, but not as much as they had been by the last three nations they'd been in. "Well, it looks different than the Leanbox back in the previous world," Weiss stated. Yang wasn't as convinced. "I don't know. It looks the same to me. Just with fewer highways." Ruby still liked it though. "I think it's still cool." Blake looked around the nation a little closer. "So, how is this Leanbox different?" Neptune had to agree with the latter's opinion. "Beats me. It's Leanbox and still ruled over by Vert. Now that I think about it, being here with all of you traveling like this brings me back to the first time we arrived here. Ah, good memories." Noire highly doubted those counted as good memories for the other CPU. "Yeah, nothing says good memories like Vert suddenly showing up out of nowhere, saying that she'll beat us all by taking our shares, and kinda kidnapping your sister." "That about sums up our encounter with her," Blanc stated. Something Noire mentioned caught Weiss' attention. "Wait, this Vert actually kidnapped Nepgear?" Neptune thought she should clear that up. "Uh, not exactly. Let's just say Vert thinks that if a CPU is born in her nation, it means that she's her little sister. Of course, I took offense to that. Nepgear is my kid sister!" That explanation was clear as mud to the team. Noire thought to explain further. "Basically, when Nepgear arrived in our world, she wasn't a CPU, so Neptune had the idea of going to Vert to see if she had some CPU Memory Cores. She did, and Nepgear did become a CPU, but…Vert automatically declared that she was her sister since she became one in her nation." Yang already found a similarity between the two Verts. "Apparently, Vert really wants a sister." Blake was wondering something else about this world's Vert. "So, what was with the taking-shares detail?" Neptune laughed when she remembered that failed fiasco. Apparently, it was something funny to her. "Vert thought she could take our shares by introducing bigger hardware to our nations. However, that failed miserably and there were a lot of complaints about them. She honestly thought she had us," Blanc explained. Judging from how they were reminiscing about all of this, it must've been somewhat of a good memory for them. "And you're friends with her now?" Ruby asked. She got four different responses. "Yep/maybe/something like that/uh-huh." Noire thought to clarify a little more. "I mean, we're more like friendly rivals that can be civilized. We do still compete for shares." Neptune thought otherwise. "Whatever. You know that we're all friends despite what we had to go through in the third game." Noire wasn't going to deny that. "In any case, enough about the good old times. Let's get to Vert's place and drop by to say hello."

Loading…

The group arrived at Vert's basilicom, which didn't look all that different from the other ones: a very large-looking mansion. Neptune slammed the door open. "Hey Vert! Guess who's here!" Her response was silence. Now the CPU was confused. "Vert? Olly olly oxen free!" She entered the building with the rest of the group following suit. Already Weiss noticed something off. "Why would anyone leave the front door unlocked?"
"Depends on who's stupid enough to go breaking into the home of a CPU." Blanc brought up a good point. Peashy was looking around as well. "Bert!? Where are you!?" Yang laughed at the name Peashy yelled out. "Bert? Really?" Her sister thought differently. "I think it's cute she calls her that." Noire agreed to that. "Vert thinks the same. Speaking of her, where is she?" That was the millionnep question for all of them. The group soon made their way to Vert's bedroom and once again, Neptune slammed the door open. "Dramatic entrance!" Weiss should be doing this but she questioned the other girl's action. "What was the point of that?" Everyone looked inside the room and found that it was empty which was very unusual since Vert was equal to Neptune when it came to playing video games. There was also something else very unusual about the room. "It looks like she has the same taste as boys being a little close to each other like the other Vert." Blake stated. Noire crossed her arms and began to think. "I don't it. It's not like Vert to go up and disappear." Neptune had a few ideas. "There are three scenarios for something like this to happen. It's either A: She's out getting the latest video game. B: She was abducted by some evil cult. Or C: She's out questing." Weiss doubted one of the options. "One of those is definitely out." One of the other two seemed more plausible to Blanc. "It wouldn't surprise me if she was out buying a game but the quest makes sense as well. It isn't uncommon for a goddess to do quests every once in a while." It became obvious of what their next plan of action was. "Alright then troops! This is how it'll work. We'll split up into two groups. One to check out the game store and the other to the guild." For once in a very rare time, Neptune had the right idea. There was only one problem. "And who's going to be in what group?" Noire asked. That much was obvious. "Well, Ruby and her team could go to the game stores while we hit the guild. We're still part of a team FYI. We can just check what kind of quest Vert's doing." Again, very rare for Neptune to plan this out. Now if only she applied this to her CPU duties. Ruby walked up to her "In that case…" She handed the other girl her scroll. "If you do find out, contact us." Neptune gladly accepted it. "Hey thanks and no problem. We'll make contact ASAP." "One problem though." Everyone turned to Noire. "They don't know the layout of the nation nor do they know where the guild is." That was a slight problem for them. Fortunately, Neptune already thought of the solution. "Well, I guess one of us will just have to show them around. Noire, you're up for the job." And of course, Noire had a problem with the other CPU giving out the orders. "Why the hell do I have to do what you say?!" Loading… Neptune, Blanc, and Peashy made their way to the guild while Noire and Plutia showed team RWBY around Leanbox to find the missing CPU. It was a 50/50 shot so either party had a chance of finding out. Of course, there were a lot of game stores but they managed to minimize it to stores that are selling the most recent popular game. The first group finally arrived at the guild. Neptune couldn't help but feel nostalgic. "Ahh the guild. It's been a while since last came here." "That's because you barely do any work at all." Blanc commented. The other CPU decided to ignore that. "Whatever. Let's go inside." The group did just that. Once they were inside, Neptune walked up to the terminal and touched the screen. Two options appeared. "Welcome. 
Are you a returning guild member or a new guild member?" Neptune chose the first option. "Please insert card for verification." This was the easy part. CPUs had special cards, so Neptune pulled hers out and placed it on the screen. At least it wasn't any different from the guilds back in her world. "Welcome, CPU Neptune. Are you returning from a quest or selecting one?" Of course, she chose the second option. "Please select a quest." Neptune scrolled through the quests. Ones that were highlighted had already been taken up by someone. For CPUs, however, the quests they take up are highlighted in their own color so people can see which one they're doing. After some more scrolling, Neptune finally found the quest Vert was on, along with its location. "Bingo! Looks like she's off questing somewhere. The Kobaba ruins? That's weird, but hey." After getting what she wanted, Neptune chose to cancel. "Would you like to select a quest?" The CPU pressed no. "Thank you and come again." Now that that was done, she walked over to Blanc and Peashy. "So, did you find out where Vert is?" Neptune pulled out Ruby's scroll and started to text the other group. "Yep. Letting the others know. Now, let's go catch ourselves a Vert."

Loading…

After meeting up with Ruby and the others, the group found themselves in the Kobaba ruins, where Vert currently was. The location itself was more impressive than Leanbox. Out of all of them, Blake was the most amazed. Yang liked it second only to her Faunus girlfriend. "This place is so rad. I mean, look at it." While it did look pretty, they still had to be on guard. "Just remember to stick together. I've heard that some people who take up quests in this place get lost, never to return." Ruby was a little frightened by that. Weiss looked into that a little deeper. "Being lost in a place like this, with the monsters you have to constantly fight…" Yang felt slightly uncomfortable with that. "Man, that's dark right there. I guess you can say it's even…Grimm?" Everyone groaned at the pun. Except for Neptune and Plutia. They just laughed. "Good one." Plutia giggled some more. "That was funny, Yangy." Blake looked around the ruins a little more. "So, what is Vert doing here in the first place?" Neptune had briefly seen the details of the quest Vert was currently on. "Just your usual monster hack and slash. No biggie." They walked a little further into the ruins until they heard a loud crash from up ahead. Once they approached the site, the group saw dust kicked up and the shadow of a coughing figure. "Well, that felt rather unfortunate." The dust cleared up and they finally saw Vert in her HDD form. Peashy giggled happily as she ran over to the other CPU. "Bert!" Vert heard her voice and turned to Peashy, looking surprised. "Peashy? What are you doing here?" Peashy gave Vert a happy hug. "Hey there, Vert!" The Leanbox CPU looked ahead to see a face she hadn't seen in a while. "Neptune? And everyone else too? What are you doing here?" Neptune smiled at her. "Here to see you, duh. Of course, when we dropped by your basilicom, you weren't there. Kinda weird how you're taking up a quest all of a sudden." Vert was touched that they wanted to see her. "It's not strange at all. Can't a goddess help out her nation once in a while?" "What in the world!?" Everyone looked over to Weiss and the rest of the team. Like before, the heiress was appalled by Vert's HDD attire. "What is it with you and revealing outfits!? Your front is nearly exposed!"
Yang had to be honest that she couldn't quite stop staring at Vert's breasts. Something Blake caught, and she elbowed her girlfriend. "Ow." Ruby just waved at her. "Uh, hi there. It's nice to meet you." Vert stared at the team in confusion. "I'm sorry. Who are all of you?" Neptune laughed nervously. "Oh right. Vert, this is Ruby and her team RWBY. Team RWBY, this is Vert, who you already know, except she's the CPU of this world's Leanbox. You see, Vert, they're a little out of town. By out of town, I mean they're from a different world and trying to get back to it." A quick summary, but Vert understood it clearly. "I see. Well, I would say it's nice to meet all of you, but this is no place for humans to be at the moment." Ruby raised an eyebrow. "Why is that?" Her answer came in the form of a loud roar from where Vert had crashed. Naturally, everyone covered their ears. Once it stopped, they uncovered their ears, but fear now came over them. "What…was that?" Yang asked in a scared voice. They felt the ground shake a few times while hearing thunderous footsteps, and they were coming toward them. Vert summoned her lance. "I must confess. While on the job, I'd stumbled across something very vile. In fact, that was the reason why I crashed through here in the first place." They felt another tremor and soon saw a very large figure approaching them. It was the biggest monster Ruby and her team had seen thus far. Once it came into full view, it roared again at them. Ruby was almost too scared to speak. "W-W-What is that?" The CPUs instantly knew that everyone was in trouble now. "No way. That's a…a…" "Class S monster. Sealed Disaster." Neptune finished what Noire started. Ruby and her team heard that clearly but couldn't believe it. "Class…S?" The dragon growled at all of them. Yang took a step back. "I think that's a little out of our league." Neptune and the other CPUs ran up in front of them. "Girls, I believe you may want to step back a bit." She transformed into her HDD form. Blanc did the same. "Class S monsters are rare and are very powerful. They're not to be underestimated." Noire went into her HDD form. "Strong as you girls are, this is way over your heads." Plutia transformed as well, much to Blake's displeasure. "All of you run off like the good little girls that you are. I would hate to see cute little Rosy and friends get caught in the crossfire." For once, the team was actually glad that Iris Heart was here. Lastly, Peashy went into her HDD and was ready to fight. "We'll beat up the bad dragon!" Well, there really wasn't anything Ruby and her team could do. "Right. Good luck." The team ran off away from what looked to be a difficult battle. If a monster like that was enough to worry even goddesses, it was an opponent never to be taken lightly. Once they were far enough away, all they could do was watch. "So, you think they can take it on?" Yang hoped so. "Maybe. I know they're goddesses and everything, but that monster looked like it could eat King Taijitus for breakfast, Death Stalkers for lunch and Nevermores for dinner." Everyone agreed to that. Out of all the monsters they had seen in this world, this was the strongest yet. The Sealed Disaster roared at the CPUs as it raised its claw at them. The monster slammed its claw down at the goddesses, but they all scattered into the air to avoid the attack. The result was the ground shaking violently and strong winds. Ruby and her team felt the power behind the attack and took shelter behind two pillars. Two of them behind each one. 
This was much more than what Weiss anticipated. "What…incredible power." Something about this bothered Neptune. "How did you even run into this in the first place, Vert?" It was a somewhat embarrassing story for the Leanbox CPU. "Would you believe me if I said I encountered it by accident?" Blanc scoffed at that. "Seriously? This is one hell of an accident!" "We can argue about Vert's screw-up later. Right now, let's deal with this first!" Noire argued. Plutia giggled menacingly. "My sweet Noire is correct. Let's teach this monster a painful lesson it won't ever forget." And everyone turned to her looking confused while Noire panicked a little. "My sweet Noire?" they all said at the same time. "Plutia!" Of all the times for that to slip out. However, embarrassment would have to wait, as Noire saw the next attack from the Sealed Disaster coming right at her. She flew up to avoid the attack and was above the monster. Noire suddenly started to drop right toward its head while a red aura covered her body. "Volcano Dive!" Her kick struck the top of the head and fire flared up from the attack with a resounding boom. The Lastation CPU jumped off only to see that her attack had little effect. "I knew it was going to take more than that." "Out of the way!" Noire looked over to Blanc, who tossed up an orb of blue energy. As it came down, Blanc slammed it forward with her ax. "Gefahrlichtern!" The orb separated into countless lights heading right at the monster and struck it all over. Like Noire's attack from before, it had little effect. "Damn." "Sylhet Spear!" Several lances of green light struck the monster from the side. Vert wasn't finished yet as she charged right at it. "Rainy Ratnapura!" She delivered countless attacks from her lance all over the front of the monster right before sending one final attack at the Sealed Disaster as she passed by it. She turned around as the monster did the same and roared at her. "Plutia!" Neptune flew right at the monster while it was distracted. Plutia followed the other CPU closely. "Victory Slash!" She slashed at the monster while passing it, dealing a considerable amount of damage. Plutia wasn't too far behind with her attack. "Fighting Viper 2!" She struck down at the monster with her sword and then up, all while delivering electric damage. Peashy was up last and delivered a series of punches to its center that pushed the monster back a little. She did a backflip while rising higher in the air before performing a drop kick right to it. "Hard Break Kick!" She kicked it in the center of the chest and the monster roared as it fell down to the ground while Peashy laughed. "That was fun!" Of course, this being an S-Class monster, it was going to take more than that to bring it down. The monster was already starting to stand up and roared at the CPUs. Neptune couldn't help but chuckle. "I guess we're going for round two." Blanc raised her ax to her. "Good, because I have a lot more to dish out." Vert was grateful for their help, but there was still a small problem. "You do realize this is still my quest." Of course, they took that into account. "Look, if it'll make you feel any better, you can still take credit. Deal?" The other CPU smiled at the proposition. "That'll be fine." All of the CPUs charged at the Sealed Disaster at once and attacked it with everything they had. A little ways away, Ruby and her team continued to watch the battle. Already they could tell the CPUs were treating this monster differently from the ones before. 
This monster was a legitimate threat. The young leader couldn't take her eyes off the battle. "Oh wow. They're incredible." Plutia and Noire struck at the monster with their swords at once and then flew back to distance themselves from it. "This thing has to be at half health right now." Her childhood friend only laughed at the challenge in front of them. "It does make things more exciting, don't you think?" The CPUs were going for another attack barrage until they saw the monster wrap its arms around its body, its wings covering it as well. Neptune had a bad feeling about what it was planning. "Heads up. It's up to something." Blanc wasn't going to wait around. "Then let's attack it while we still can!" Everyone agreed to that. This was their chance and they were going to take it. However, the monster began to glow, and that immediately spelled danger. Blake certainly sensed it. "We have to get away, now!" Her team thought the same and began to run away. The monster glowed brighter until it stopped gathering light. Just as the CPUs were near it, the Sealed Disaster bellowed as it opened up and unleashed a powerful explosion that enveloped them all. The attack didn't stop there, as it obliterated everything around it in its path. Unfortunately, Ruby and her team were about to be caught in it as well. All of them screamed when the explosion reached them, and they were separated from each other. Once it subsided, Ruby was in an intense amount of pain even with her aura at full power protecting her from the explosion. She slowly opened her eyes to see the rest of her team in the same state as her. The CPUs were in a similar state, embedded into some of the buildings of the ruins. Just when it seemed the situation couldn't be any worse, Ruby heard those same heavy footsteps coming toward them. The young leader looked back, as painful as it was, to see that the monster was coming over to them. Ruby struggled to stand up. "We have to…get away." She saw where her team was at the moment. Yang and Blake were lying on the ground together. Ruby figured her sister must've grabbed the Faunus girl to protect her from the explosion. She searched for Weiss and found her, but to the young leader's horror, the Sealed Disaster was approaching her first. The heiress started to stir as well and opened her eyes to see the monster closing in. Weiss struggled to stand up to get away from the monster but found little success. The monster towered over her and raised its claw. There was no way even her strongest glyph would be able to block the attack. She feared for her life. The monster started to swing at the heiress. At that moment, time seemed to stand still. Then she saw rose petals in front of her eyes. The next thing Weiss noticed was that she was thrown back. She saw that it was Ruby who had pulled her away. Ruby pulled out Crescent Rose and tried to block the attack herself, but as the claw hit the weapon, it easily swatted the young leader to the side and sent her crashing through several pillars. It took Weiss a few seconds to register what had just happened in front of her. For a moment, her heart stopped. "RUBY!" Yang started to come to and opened her eyes as she heard Weiss' cry. From the sound of it, she herself was worried. She sat up and looked around. Fear started to overcome her as she couldn't find her younger sister anywhere. It was then she realized the horrible truth. "Ruby…no…" Blake opened her eyes to see her girlfriend with a look she had never seen from her before. 
As the monster was about to attack Weiss, something wrapped around its arm and it was pulled back. "How dare you…" The Sealed Disaster looked back to see a very pissed off Iris Heart. "I'll make you pay with your life!" Behind her were the other CPUs, who shared her anger. Neptune, however, was also feeling guilt. "This is my fault. I should've known better. This falls on me." Blanc flicked her ax. "We'll pay this bastard back tenfold!" All of them charged at the monster. Loading… Away from the fight, Ruby lay against the base of a pillar. She was bleeding profusely from her head and she couldn't even move. Even breathing hurt. Her aura hadn't been enough to protect her. She was barely conscious as it was. Of course, the only thing going through her mind was Weiss. "Weiss…I hope…you're okay. I'm…sorry." One thing she couldn't believe was that this happened right after they were finally together. Her eyes began to feel heavy. "I'm…so tired. Maybe after a…little nap…I'll…get back…to…them…" She closed her eyes. Little did she know, something small fell out from a crack in the pillar above and dropped into her mouth. Her body began to glow shortly after. Loading… Yang roared in anger as she tried to make her way over to the monster that swatted her sister. However, Blake was holding her from behind despite the other girl being in her awakened semblance form. Something her blonde girlfriend didn't like. "Let me go! Let me go! I have to…I have to make that monster pay!" Blake understood how Yang felt, but it was clear she wasn't thinking straight. "Yang, please! What are you going to do? Even goddesses are having a hard time with that monster." Her girlfriend was beyond listening to reason. "I don't care! That bastard hurt my sister! I have to…have to…" She stopped and began to cry as she exited her semblance. "Ruby…" Yang dropped to her knees while Blake still held her. The Faunus girl shared the same dreadful feeling her girlfriend was experiencing. Blake looked ahead to Weiss, who must've been feeling the same way. She was right. Weiss was on her knees feeling the heaviest guilt she had ever felt. "It's my fault. It's my fault Ruby's gone. Damn it! If only…if only I moved away, she wouldn't have done that. You…stupid dolt. Why? Why did you have to do that? And to think…we've finally…we were finally…." She heard the monster roar and saw the CPUs continuing to fight the Sealed Disaster. Blanc screamed as she dropped her ax on the monster. "Getter Ravine!" The attack struck the top of the head with explosive force. The monster rebounded as it swiped at the Lowee CPU and struck her. Blanc spun mid-air for a short time before regaining her balance. She growled at the monster. "Bastard!" Noire went past her as her sword began to glow in a rainbow color. "Tornado Sword!" She slashed downward at the monster, then flew away from it. "Damn it." "Cross Combo!" Neptune made a series of slashes at the monster one after another, kicking it away with the last attack. She held out her hand and four large swords appeared from behind her. "32-bit Mega Blade!" All four swords launched at the monster at once and struck it one after another. She still wasn't finished. "Critical Edge!" The Sealed Disaster recovered fast enough and retaliated against Neptune's attack. Both attacks hit each other, but the monster overpowered her and sent her back. She stopped and growled at the monster. Neptune tried to give it another go. "Neptune." 
She stopped and turned to Vert, and wasn't too pleased. "Vert-" "I know you must feel responsible, but charging in recklessly isn't the way." She had noticed the way Neptune was acting after Ruby had been swatted away. Even if Vert had a point, it was the only option Neptune had. "It's my fault, Vert. I brought them along in this. I'm responsible for what happened to Ruby. It all falls on me." Vert may not have known the girl, but judging by how Neptune was feeling about this, she must've been an important friend to her. She looked over the rest of the girls. "As of now, their safety is first priority." Plutia was already on that. "Come now, my darlings. This isn't the place for children such as yourselves." Usually, Blake would be scared shitless with Iris Heart being so near, but concern for her girlfriend overcame that. She carried Yang away from the fight. Plutia looked over to the heiress, who was looking at the fight in front of her. "Time to go now, Weissy, or do you want me to drag you through the dirt?" Weiss stood up after that. "What about Ruby?" Iris Heart narrowed her eyes. "Once the monster is dealt with, we'll search for her." The heiress gritted her teeth. "That's not good enough! I need to know if she's okay!" She grabbed her own shoulders and shuddered. "She has to be…she has to be…" She felt a hand on her own and looked back to see Plutia standing right behind her. "Complaining about it won't do anything. If you wish to see your rose again, move." Weiss looked away. She couldn't stop worrying about Ruby. Iris Heart's patience was running out as she grabbed the heiress by the collar and dragged her. "Let's go." Weiss struggled to break free. "H-Hey! Let me go!" Plutia couldn't help but like this side of the other girl. "Oh, I'll let you go. Only to punish you for being a disrespectful brat, that is." And Weiss stopped when she heard that. Over with the CPUs, they were still fighting against the Sealed Disaster. Peashy gave it another hit, which sent the monster back a step, and Vert followed suit. She thrust her lance at the Sealed Disaster. Neptune and Blanc were up next. "Let's go, Blanc!" The other CPU nodded. "Right!" They flew right at the monster. All of a sudden, countless energy shots were fired randomly from the sky at the Sealed Disaster and the two stopped. "What the hell was that!?" Plutia and everyone with her looked over, wondering the same thing. After the shooting stopped, the monster was suddenly attacked by a streak of red light striking it all over. After hitting it from underneath the chin, a shot rang out as it hit the center. It was a sound Weiss knew all too well. "That shot. It can't be…" The light faded and everyone saw the figure within. They couldn't take their eyes off the new arrival. "Who…is that?" Noire asked. As the figure landed, they saw what she looked like. Her attire was the same as a CPU's: red boots above her knees with black running along the top, and detached sleeves as well. The same red and black scheme ran through her main outfit, with an opening that revealed her cleavage; her breasts were as large as Vert's. She had bright, long red hair in a ponytail tied by a clip in the form of a symbol Weiss recognized. Part of the hair in front covered her right eye. Four wings of light, two at each side, were behind her back; they were red too, and curved. In her hand was a scythe that looked similar to another scythe Weiss easily recognized, except it looked more cybernetic. Her appearance, the color scheme, even the weapon. 
It was like the figure they were all looking at was an older version of… "Ruby?" The figure looked over to Weiss and gasped. The heiress couldn't believe what she was seeing in front of her. The figure smiled at her. "Indeed I am." She spoke in a deeper, more mature voice. "However…" She looked back at the Sealed Disaster in front of her and summoned a second scythe into her other hand. "You may also wish to address me as…CPU Crimson Heart." A/N: I swear if the Sealed Disaster isn't what I think it is, the last quarter of this chapter just got very awkward for me.
Waking up on Christmas morning away from home for the first time in my life didn't actually feel as awkward as you might have thought. As you get older Christmas starts to lose a little of its shine and growing up takes some of the childish excitement out of the day. Many reasons, including my Grandma passing away on Christmas day, have made me not the biggest fan of the holiday. I enjoy seeing family and watching everyone open their gifts, but for me the worst part for the last few years has been those first few hours of the day when I wake up early as always yet there is no one about. Partners spending time with their parents, house mates returning home for a while and me alone in the house with nothing really to do. I don't say this to try and get sympathy from anyone at all as I don't wallow around the house and cry or anything; to me it is just another morning, except I can't do anything as the country has shut down. For the last few years I have always tried to take the edge off this by doing a big challenge on Christmas day that is just for me. A few years ago I challenged myself to complete a 10km trail run on Christmas day by myself around some fields and rivers near where I used to live. Last year I upped my game to "20k Xmas day", this time taking an easier option and opting to complete the distance as a 15km bike ride and only a smaller 5km run. The point of it was though that it took my mind away from the mundane and let me do something I wanted to do just for me. No one else really knew or cared what I was up to on those early Christmas mornings and that's the way I liked it. To be honest if I had told most people they would have looked at me like I was a fool for even thinking such a thing but this was the way I liked it. The best bit was that last year on my ride at 8am I was heading down a steep hill by the lakeside and as I looked out onto the water I spotted another adventurer after my own heart. Out on the water was a woman on a stand-up paddle board wearing a wetsuit and a Santa hat. We spotted each other and waved and I knew that I had made the right choice in the way I was spending my Christmas morning. I don't know why I have just rambled on so much and none of it is really relevant to the story of my road trip but I think it at least sets the scene and gives a little bit of understanding about what makes me tick early in the morning on days when most people want to be in bed. Anyway, back to what actually happened this year on Christmas day. Waking up in a dorm room is something that I am used to now. I crept out of bed, got dressed and headed out towards Lake Wanaka, all while trying not to wake any of my room-mates. With my first mission successful it was time to call home and rub it in that I live in the future and was spending the day in the sun and having a BBQ while everyone at home froze and pigged out on turkey. My calls home were well received with most people still awake and having a cheeky Christmas Eve drink. I even managed to catch my brother out with some of my cousins so got to have a chat to all of them and take part in the merriment. Many positive comments were made about my newly shaved head that I was still getting to grips with myself so they were all appreciated. After a hearty English breakfast cooked up by the tag team of Craig and Dan we got our gear together for our Christmas morning hike. 
This year I had managed to rope in willing recruits to my usual day's stupidity; however, out here it seemed like the right thing to do anyway. While on the calls home I had already seen dozens of people walking, running or biking about, enjoying their Christmas morning in the best of ways. The attitude over here is a much more active one, with everyone embracing the idea of "let's get a head start on this Christmas fat I'm going to put on!" Our challenge for the day was to climb up Mount Iron to sit there and take in the view. We grabbed our stuff and headed out for the short walk to the hill. From afar it doesn't look too big, but as you get closer it does start to look like a much bigger challenge than people who were drinking tequila shots a few hours before really needed to be engaging in. It is by no means a mountain as the name might suggest but is a steep hill that takes about 30 mins to walk to the top. As we started to ascend we all had a mix of regret and relief. Regret that we had decided to hike up the hill that morning but relief that we hadn't decided that we should take on the 6 hour round trip of Roy's Peak! Thirty agonising minutes later we sat at the top and looked out over the town below. The view made us quickly forget about our aches, pains and hangovers as we stared out over the lake below, softly lapping against the snow-capped mountains in the distance. Just when everyone thought the moment couldn't get any better, Craig and I surprised Merle with our pièce de résistance, 3 bottles of chilled fruit cider that I had hauled up the hill in my backpack. With bottles in hand we sat there in silence, drinking our drinks and taking in the amazing view of our surroundings. Sometimes nothing needs to be said between friends enjoying a moment and this was one of those times. Initially I was going to write all about Christmas day in a single post, however it seems like it might be a better idea to split things up a bit and save some for later. I know my writing can end up rather long-winded and drawn out at times but you know what, I don't care. It's not my profession and I'm not paid to get it perfect (hence not spell checking enough or proofreading ever!). I write to have a record to look back on for the friends who shared the time with me and those who wished they could, so hopefully everyone just appreciates that for what it is. I hope everyone has a magical Christmas such as that in their lives. Simple yet spectacular. Yes, I am aware that this is a month late. No, I don't care because whoever wants to read it will read it anyway. Enjoy 🙂 Wow. That is all I can start this post with. When we were planning this road trip and thinking about what we could get up to we knew it was going to be something pretty special but that didn't really prepare me for the amazing trip we have ended up on and the awesome things that I have seen on this trip. I know all of these posts have been a long time in the making, however I got straight back from the trip and got busy with life again and so it got put on the back burner. I am now on a whole new adventure having not yet published the one before so now is the time for me to get caught up and say what I wanted to say. Even if no one reads any of this I still need to get it all down as these are my memories and times to look back on that I otherwise might not remember to the best of my ability. 
If I was a smarter writer then I would have written a bit each day and chipped away at it, but then again I am not a smart writer (or a smart man at times) so instead I have left it and am now trying to do it all at once. To at least make a start on it I am currently sat in a bar in Kaiteriteri, near the Abel Tasman National Park, drinking a cider and looking out at the ocean. Craig is taking a dip in the sea while I use this rare time that I have brainpower and no hay-fever (YEY!) to at least get something down on "paper". I suppose the best place to start is the beginning. At this point it is hard to even remember when that was. It has only been a week since we left home yet it seems like forever since I was last in Dunedin. The run-up to Christmas was a pretty hectic one with lots of little extra jobs that needed attention at work, always with the worst possible timing. Luckily (or rather unluckily depending how you look at it) I was still in the area and able to deal with lots of the customer issues that happened. Craig had to work all the way up to Christmas Eve so even if I had wanted to run away sooner I would have just had to come back for him anyway so there wasn't much point. Another friend, Merle, who I met in Thailand back in April, has also come to New Zealand and didn't have any plans, so was joining us on our Christmas adventure, just for a little while but at least for a few days. She came and stayed with me the weekend before Christmas but then left to have a mini adventure in Queenstown and Milford Sound before we picked her up to get to Wanaka for Christmas. As soon as I got the message that Craig had finished work I hopped in the car, picked him up and day one of our Christmas adventure began. For once in our lives we were actually prepared and Craig had all his stuff ready to throw in the car and start our long drive to Wanaka via Queenstown. To be perfectly honest the first part of the drive was a little underwhelming. We had finally broken free from work and were on our road trip… but it just didn't feel like anything special yet. This was the same road we had driven down multiple times before and scenery we had already seen. Added to that was the fact that the radio stopped working and we didn't have any CDs. I probably should have mentioned before now that we were not actually in my usual car. My boss had let us take one of the other vehicles, a 4WD Mitsubishi Outlander, to go on our road trip. This made it much easier to fit everything into the back and proved later to be really useful when on the gravel tracks that New Zealand often calls "roads" so thanks for that Kevin if you're reading this! Anyway, back to the adventure… About 2 hours into the drive we did start getting excited as it was at that point that we knew we were actually on our mission and not going home any time soon. To detour to Queenstown on our trip instead of going straight to Wanaka added about an hour to the journey and Merle had said she was happy to get there herself, but it was also kind of an excuse for us to go there and take in the view for a moment. An added bonus to this little impromptu trip was seeing another old friend from Thailand, the infamous Red. I met Red in Pai, the same place I met Merle, yet somehow they had never met each other. Normally that wouldn't seem like a big thing but anyone that has been to Pai knows that it is a pretty small place. 
I still can't get my head around them not knowing each other even though it seems like most of my time there I saw them all constantly. It was really good to see Red anyway, even if we only had enough time to take a photo for posterity before continuing our mission for Christmas in Wanaka. Knowing Red like I do I expect that will have been one of his last memories before the rest of his Christmas turned into a blur of drinking and partying. The phrase "party like a rock star" was coined after this man. Wanaka was just how I remembered it except a hell of a lot warmer. Having spent 2 months there on and off over the last year it is a town I have a special place in my heart for, as I know many other travellers do. Getting back there didn't feel new or daunting, it just felt right, which is all I could ever ask for, especially at Christmas. Step one after checking into our hostel was to go and find CB. He knew that I would be coming but I was pretty sure he would be at work, so what better way to say hi than to go and get dinner and grab a beer, all while he worked. As predicted he was there propping up the bar when I arrived and his face lit up as only a happy CB can. It was awesome to see him after so long, even if we could only chat briefly while he served other customers around him. Craig and I grabbed food alone as Merle was still back at the hostel getting ready, which seemed to take an age. After a while we eventually got a message from her that she and another girl had set off to meet us but had found a band playing at another bar and so wanted us all to go there. After a little persuasion we all headed to the local Irish bar to watch a cover band play and see many people get increasingly drunk as the night went on. In New Zealand all pubs have to close at midnight on Christmas Eve so there was a strict cut-off time as to when everyone had to go home. We all played it right down to the wire, drinking and enjoying the merriment before wishing everyone a Merry Christmas and heading back to our hostel to sleep, the start of my first ever Christmas away from home.
+ (sqrt(112) + 0 + sqrt(112) + sqrt(112))**2 + sqrt(112))*-1. 20*sqrt(7) + 5050 Simplify -3 + ((-4 + sqrt(1331) + 0 + sqrt(1331) - sqrt(1331)) + -3)*-5. -55*sqrt(11) + 32 Simplify (sqrt(1053) - (3 + (sqrt(1053) + 1 - sqrt(1053)))**2 - (3*sqrt(1053) + -5)**2)*3. -28554 + 837*sqrt(13) Simplify -1*(2*sqrt(17)*-3 - -2*(sqrt(17) + -1)) - ((((sqrt(833) - sqrt(833)*2) + sqrt(833) - sqrt(833)) + 2)**2 - sqrt(833) - (-1*sqrt(833) + -3)**2). 7 + 81*sqrt(17) Simplify ((sqrt(528)/sqrt(176))/(sqrt(64)*1 + sqrt(64)))**2. 3/256 Simplify ((-1*sqrt(1134))/sqrt(7) - -4*(1 + sqrt(2) + -2 - sqrt(2)))**2. 72*sqrt(2) + 178 Simplify (3*sqrt(100)*-6)/(sqrt(405) + sqrt(405) + sqrt(405)*2 + sqrt(405) + sqrt(405)). -2*sqrt(5)/3 Simplify (-4*(sqrt(192) + sqrt(192) + -5))**2 - ((-1 + sqrt(192))*6)**2. -1984*sqrt(3) + 5740 Simplify 1*(5*-1*sqrt(228))/((-1*sqrt(144))/sqrt(12)). 5*sqrt(19) Simplify 6*(0 + 3 + -4 + (sqrt(68))**2). 402 Simplify 6*(5 + (sqrt(363) - (-4*(sqrt(363) + 1) - sqrt(363) - sqrt(363)) - (sqrt(363) + 1 + 3))**2). 78438 Simplify (sqrt(560)/sqrt(5) + (sqrt(63) - sqrt(7))*-3)**2 + 2 + 1. 31 Simplify (1*(sqrt(65) + (sqrt(65) - (-4*sqrt(65) - sqrt(65))*5)))/(sqrt(405)*-2*-2). 3*sqrt(13)/4 Simplify ((sqrt(264) + sqrt(264)*2)/sqrt(11) + sqrt(1536)*2)/(3*-2*sqrt(288)). -19*sqrt(3)/36 Simplify 1*(sqrt(108) + (-1*sqrt(108))**2) - (4 + -2*sqrt(108))**2. -340 + 102*sqrt(3) Simplify (sqrt(7) - (sqrt(7)*2*-5)**2 - (-4 + sqrt(7) + 5)) + -3. -704 Simplify -5*(-3 + (-2*(3*(sqrt(242) + 1) - (1 + sqrt(242))*-1))**2). -77745 - 7040*sqrt(2) Simplify (-5*sqrt(88)*-3)/(sqrt(56)/(sqrt(567) + sqrt(7))). 150*sqrt(11) Simplify (sqrt(96)/(1*sqrt(6)))/(sqrt(8)*2*-3) - (sqrt(16) + sqrt(96)/(sqrt(24)/sqrt(4)))/(sqrt(800) + (sqrt(800) + sqrt(800)*1 - sqrt(800)) + sqrt(800)). -7*sqrt(2)/30 Simplify (sqrt(3630)*-5 - sqrt(30))/(sqrt(30)/(sqrt(3) - (sqrt(27)/sqrt(9) - sqrt(3))))*3. -168*sqrt(3) Simplify (4 + 2*(1 + (sqrt(2299) + sqrt(19))*-4))**2. -1152*sqrt(19) + 175140 Simplify 3*((sqrt(306) + (3*1*sqrt(306) - sqrt(306) - sqrt(306)))/sqrt(6))/(sqrt(21)/(sqrt(7) + sqrt(7) + sqrt(21)/sqrt(3)*6 - sqrt(7))). 42*sqrt(17) Simplify ((-1*sqrt(891)*2 - 2*sqrt(891)*2) + -3 + -3)**2. 648*sqrt(11) + 32112 Simplify ((2*sqrt(72)*-1)/sqrt(6))/(sqrt(30)/(2*sqrt(80))). -16*sqrt(2) Simplify 2 + (-3*(sqrt(1300)*-1 + -4)*3*2)**2. 25920*sqrt(13) + 426386 Simplify -6*(1 + sqrt(1377)*-1 + sqrt(1377) - (sqrt(1377)*2*6 - sqrt(1377)))**2*-1. -1188*sqrt(17) + 999708 Simplify (-1*sqrt(4455))/(sqrt(55)/sqrt(275)). -45*sqrt(11) Simplify (-4*(-3*2*sqrt(1539) - (sqrt(1539) - ((sqrt(1539) - (-1 + sqrt(1539) + sqrt(1539) - sqrt(1539))) + -2))))**2. 2016*sqrt(19) + 1206592 Simplify (-1*sqrt(91)*-2 + sqrt(91) - sqrt(91) - sqrt(91)*2*1)/(-4*(sqrt(7) - (-6*sqrt(175) + sqrt(7)))). 0 Simplify -5*((6*(sqrt(2057) + 2) + 4*(-2 + sqrt(2057)))**2 + 4). -1028600 - 4400*sqrt(17) Simplify ((sqrt(125) + -1)*-3 - -6*sqrt(125)*-2)**2 + -1 + (3 + sqrt(125) + 1 + sqrt(125))**2. -370*sqrt(5) + 28649 Simplify (sqrt(33)/(sqrt(11) - sqrt(275)*-1))**2 + 3*1*sqrt(243). 1/12 + 27*sqrt(3) Simplify (-1*(sqrt(1008) + 0)*-5 + (sqrt(1008) + sqrt(1008) + 3 + 1*sqrt(1008) - (0 + 0 + sqrt(1008))))**2. 504*sqrt(7) + 49401 Simplify (sqrt(88) + (sqrt(88) + (sqrt(88) - (sqrt(88) + sqrt(88)*-2 + sqrt(88))))*-5)/(-2*(sqrt(128) + 2*sqrt(128))) + -2. -2 + 3*sqrt(11)/8 Simplify (sqrt(42)/sqrt(48) - sqrt(1134)*-1)/(4*(sqrt(2) - (sqrt(2) + sqrt(18)))*6). -37*sqrt(7)/288 Simplify (sqrt(30)/(sqrt(8)/sqrt(4) + sqrt(2)) + sqrt(45)/sqrt(3)*-1)/(sqrt(108) + (sqrt(108)*-2 - sqrt(108))*-4)*-1. 
sqrt(5)/156 Simplify (((sqrt(272) + -5)*-1 + -3)*2)**2. -64*sqrt(17) + 1104 Simplify ((0 + 4*6*-1*sqrt(605))*-2)**2. 1393920 Simplify (5*sqrt(245))**2*-6 - ((-6*sqrt(245) + 0)**2 + sqrt(245)). -45570 - 7*sqrt(5) Simplify (4*1*sqrt(77) - 1*sqrt(77)*1)/(sqrt(33)/(-1*sqrt(108))). -18*sqrt(7) Simplify (((sqrt(98) + 0 + sqrt(98))**2 - (sqrt(8)/sqrt(4)*5)**2) + ((sqrt(12)/sqrt(2) - sqrt(6))/sqrt(3))**2 + -5)*4. 1348 Simplify -5*(0 + -5*(3*sqrt(343))**2)*-4. -308700 Simplify ((sqrt(275) + sqrt(275) + -1 + 4 + sqrt(275) + sqrt(275) + -2)*5)**2. 1000*sqrt(11) + 110025 Simplify (sqrt(15)/(-2*sqrt(5)) + -4)**2 - (-2 + sqrt(15)/sqrt(5))*-5. 27/4 + 9*sqrt(3) Simplify 0 + 3 + ((-4*2*sqrt(21))/(sqrt(7) - (5*sqrt(7)*3 - sqrt(7))))**2. 699/169 Simplify ((sqrt(126) + -2*sqrt(126))/sqrt(2))/(sqrt(9) - sqrt(27)/sqrt(3)*1 - sqrt(9)). sqrt(7) Simplify ((-6*(sqrt(272)*-1 + sqrt(272))*1 + -1)*-4)**2. 16 Simplify -4 + (sqrt(768) - (sqrt(3) + (4 + sqrt(3))*6))**2. -432*sqrt(3) + 815 Simplify (sqrt(95)*1)/sqrt(5) + (1*sqrt(57))/sqrt(3) + sqrt(57)/(sqrt(12)*5 - sqrt(3)). 19*sqrt(19)/9 Simplify ((sqrt(16) - (sqrt(16) + -1*sqrt(1296) + sqrt(1296)) - -1*sqrt(1024))/((-3*sqrt(3200) + sqrt(3200))/sqrt(4)))**2. 8/25 Simplify -2*(sqrt(2057) - (1*sqrt(2057)*-2 + sqrt(2057)) - sqrt(2057) - (sqrt(2057) + -1 + -4)**2) + -4. -242*sqrt(17) + 4160 Simplify (-6*(((1*sqrt(147))/sqrt(7))/sqrt(3)*-2 + -1))**2. 144*sqrt(7) + 1044 Simplify 5*(4 + sqrt(500))**2 + sqrt(500) + 4*(3*sqrt(80) + sqrt(80)) + -1. 474*sqrt(5) + 2579 Simplify (-2 + ((sqrt(3500) - sqrt(3500)*-1)/(sqrt(125) + (sqrt(125) - (sqrt(125)*1 - sqrt(125)))))**2)*-3*-6. 468 Simplify (sqrt(114)*-3*-5)/((4*sqrt(48))/sqrt(8))*1. 15*sqrt(19)/4 Simplify (3 + (-1*(sqrt(500) + -3) - sqrt(500)*1*6) + 5)**2. -1540*sqrt(5) + 24621 Simplify (1*(4 + sqrt(68)*1 - (sqrt(68) + 3 + -2 + sqrt(68) - sqrt(68))) + 2)**2. 25 Simplify (-5 + (sqrt(2299) - (-1 + sqrt(2299))) + 5)**2 - (-2*sqrt(57)/sqrt(3) - sqrt(304)*-3). -10*sqrt(19) + 1 Simplify (-3 + -2*sqrt(637) - (2*sqrt(637) + -3)) + (5 + sqrt(637)*1 - ((sqrt(637) + -1 - sqrt(637)) + -2)). -21*sqrt(13) + 8 Simplify ((sqrt(408) + sqrt(408)*2)/sqrt(8) + sqrt(51) - 3*sqrt(459))/(-5*sqrt(432)). sqrt(17)/12 Simplify (-6*(3*sqrt(34) - sqrt(34)))/sqrt(2)*-5*5. 300*sqrt(17) Simplify (sqrt(440)/(sqrt(10)*-2)*6)/(3*sqrt(2916) - sqrt(36)). -sqrt(11)/26 Simplify (-5*(sqrt(95) - (sqrt(95) - (sqrt(95) + -2*sqrt(95))*-2)))/(-5*sqrt(45) + sqrt(45)*-1). 5*sqrt(19)/9 Simplify (sqrt(2) + 3*sqrt(8))**2 + sqrt(24)/(-2*sqrt(12)) - ((sqrt(2) + 1)*6 + sqrt(8)/sqrt(4)*2). -17*sqrt(2)/2 + 92 Simplify ((sqrt(8) - sqrt(32)/sqrt(4))**2 - sqrt(8)) + 0 + 3 + -5 + 3. -2*sqrt(2) + 1 Simplify ((sqrt(567) + 0 + -5 + -4)*5*6 + -2)**2. -146880*sqrt(7) + 584284 Simplify ((3*sqrt(75)/sqrt(5)*-3)/(-1*sqrt(147)*-3))**2. 45/49 Simplify ((4 + -1*sqrt(17) + sqrt(17) - (sqrt(153)*-1 - sqrt(153) - sqrt(17))) + sqrt(204)/(-3*sqrt(12)*-4))**2. 170*sqrt(17)/3 + 125129/144 Simplify (sqrt(1216)*-2 + sqrt(19))*-4 + (sqrt(19) + (-2*sqrt(1539))**2 - (-3 + sqrt(171))). 58*sqrt(19) + 6159 Simplify (sqrt(18)/((3*sqrt(36))/sqrt(6)) + 1*sqrt(363) + sqrt(48)*1)**2. 2116/3 Simplify -2 + 3*(4 + sqrt(112))**2*4. 384*sqrt(7) + 1534 Simplify (sqrt(176)*2*5)/(sqrt(44)/sqrt(275)). 100*sqrt(11) Simplify 5 + ((sqrt(1088) + sqrt(1088) + 1 + -1)**2 - ((sqrt(1088)*-1 + sqrt(1088) + sqrt(1088))*-5)**2) + -2. -22845 Simplify (2 + (3*sqrt(77)*-6)/(sqrt(7) - 3*(sqrt(7) + sqrt(7)*4 + sqrt(7) + sqrt(7))) + 4)**2. 54*sqrt(11)/5 + 4491/100 Simplify (2*(sqrt(350)*-2)/sqrt(7))/(2*sqrt(60)/sqrt(6)*5). 
-2*sqrt(5)/5 Simplify ((1*sqrt(2) + -5 + 4 + -2*sqrt(2) + -4 + -4)*-3)**2. 162*sqrt(2) + 747 Simplify (-3*(sqrt(90)/(sqrt(45)/sqrt(5)))/(sqrt(162) - -1*sqrt(162)))**2. 5/36 Simplify (-1 + sqrt(833) + 0)**2 + sqrt(833) + 1*(sqrt(1700)*2)**2. -7*sqrt(17) + 7634 Simplify 5 + (sqrt(68) - (sqrt(68)*1 + sqrt(68))*-5)/sqrt(4)*5. 5 + 55*sqrt(17) Simplify (-4*(sqrt(3) + sqrt(9)/sqrt(3))**2 + 6*(sqrt(3) + 0)**2 + 1 + (sqrt(432)*-1)**2)*6. 2418 Simplify 5 + (sqrt(1700) - (0 + sqrt(1700))**2*3) + (sqrt(1700) + 1)*-6 + 2 + sqrt(1700) + sqrt(1700) + 1 + (-3*(sqrt(1700) - (0 + sqrt(1700))))**2. -5098 - 30*sqrt(17) Simplify ((-6*sqrt(32) - sqrt(128)*1) + 6*sqrt(128)*3 + -5)**2. -1120*sqrt(2) + 25113 Simplify sqrt(200)*2 + 0 + ((-2*sqrt(12))/sqrt(6) + 0)**2. 8 + 20*sqrt(2) Simplify (sqrt(88) + -2*-2*sqrt(88) + 2*-1*sqrt(88))/((sqrt(72)*-1 - sqrt(72) - sqrt(72))*-2). sqrt(11)/6 Simplify -3*(sqrt(198)/(sqrt(11)*-1))/(sqrt(600)*-1). -3*sqrt(3)/10 Simplify (-2 + (sqrt(637) - (-1*sqrt(637) + 1)) + sqrt(637) + -3)**2 + 0. -252*sqrt(13) + 5769 Simplify (4*(((sqrt(110) + (sqrt(110) - sqrt(110)*1) + sqrt(110) + sqrt(110))*3 - sqrt(110))*1)/
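Exercises like the ones above can be sanity-checked by evaluating them symbolically. Below is a minimal spot-check of one problem from the list, assuming SymPy is available; the script and its use of SymPy are an illustration added here, not part of the original exercise set.

from sympy import sqrt, simplify

# One exercise taken verbatim from the list above:
#   Simplify (5*sqrt(245))**2*-6 - ((-6*sqrt(245) + 0)**2 + sqrt(245)).
expr = (5 * sqrt(245))**2 * -6 - ((-6 * sqrt(245) + 0)**2 + sqrt(245))

# SymPy reduces sqrt(245) to 7*sqrt(5) automatically, so simplify() returns
# -45570 - 7*sqrt(5) (terms possibly reordered), matching the listed answer.
print(simplify(expr))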
188 Ill. App.3d 533 (1989)
544 N.E.2d 1032
MOHAMMAD ALTAF, Plaintiff, v. HANOVER SQUARE CONDOMINIUM ASSOCIATION NO. 1 et al., Defendants (Hanover Square Condominium Association No. 1 et al., Third-Party Plaintiffs-Appellants; Economy Preferred Insurance Company, Third-Party Defendant-Appellee).
No. 1-88-2871.
Illinois Appellate Court — First District (3rd Division).
Opinion filed September 6, 1989.
*534 *535 Stuart D. Gordon, of Moss & Bloomberg, Ltd., of Bolingbrook, for appellants.
Orner & Wasserman, Ltd., of Chicago (Esther Joy Schwartz, of counsel), for appellee.
Judgment affirmed.
PRESIDING JUSTICE FREEMAN delivered the opinion of the court:
Third-party plaintiffs appeal from the trial court's grant of a motion for summary judgment in favor of third-party defendant, Economy Preferred Insurance Company (Economy Preferred), and the denial of the third-party plaintiffs' cross-motion for summary judgment. The third-party complaint sought a declaratory judgment regarding whether Economy Preferred had a duty to defend third-party plaintiffs, Hanover Square Condominium Association No. 1 and five members of its board of directors, and whether Economy Preferred had a duty to indemnify the third-party plaintiffs, regarding an underlying property damage action. For the reasons stated below, we affirm the judgment of the circuit court. The record indicates that the underlying suit was brought by Mohammad Altaf, a condominium unit owner, against the third-party plaintiffs, the condominium association and members of the association's board of directors. Altaf's complaint seeks recovery for property damage to his unit resulting from a fire which began in an adjacent unit. At the time of the occurrence, Economy Preferred provided liability insurance to the condominium association under "Special Multi-Peril Policy No. SP-08865." In addition, Economy Preferred provided errors and omissions coverage to the board members under a directors and officers liability supplement endorsement to the policy. In count I of the underlying complaint, plaintiff cites section 12 of the Condominium Property Act (Ill. Rev. Stat. 1985, ch. 30, par. 312), which sets forth the authority of the board of managers to obtain insurance for the property against loss or damage by fire or other hazards. Plaintiff asserts that the condominium association and board of *536 directors had a duty to obtain insurance which would fully insure replacement costs; process with diligence any claims declared under the policy; and oversee the insurance company's response to claims made under the insurance policy. Plaintiff alleges that defendants breached an implied contract by failing to assist him in having his premises restored as required by statute. Count II of the complaint, also entitled "Breach of Contract," cites the bylaws of the condominium association and an enabling declaration filed with the Cook County recorder of deeds. The enabling declaration provides, among other things, that the association or manager will obtain and continue in effect blanket insurance, comprehensive public liability insurance, and other liability insurance it deems desirable. The bylaws state, among other things, that the association or board of directors are responsible for providing for the maintenance and repair of the common elements. 
Plaintiff alleges that defendants failed to assist him in processing his claim with defendants' insurance company and that defendants' insurance company failed to replace furnishings and clothing and other things, and delayed the repair work being performed on plaintiff's unit. Count III, entitled "Negligence," alleges that defendants failed in their duties to provide an insurance company "which processed and completed all insurance claims" and to see that all claims filed with the insurance company were processed quickly and diligently. Plaintiff also alleges that defendants failed to replace damaged property with like property or compensation. Plaintiff alleges that defendants failed also in their implied duties inherent in their positions as board members. A fourth count, also entitled "Negligence," eventually was dismissed on plaintiff's motion. Defense of the underlying suit was tendered to Economy Preferred, which declined coverage and denied owing a duty to defend the association or the board members regarding the allegations of the underlying suit. Defendants/third-party plaintiffs then filed their third-party complaint. Cross-motions for summary judgment were filed, briefed and argued. The trial court granted summary judgment in favor of Economy Preferred, finding that it had no duty to defend or indemnify the third-party plaintiffs. The record indicates that the special multiperil insurance policy issued by Economy Preferred provided general liability coverage for the association's common areas. The additional directors and officers liability supplement endorsement excluded coverage under exclusion "E," which excludes claims that are: "E. Based on or attributable to any Wrongful Act in procuring, *537 effecting and maintaining insurance, or with respect to amount, form, conditions or provisions of such insurance." The endorsement defines "Wrongful Act" as the following: "A. `Wrongful Act' means any negligent act, any error, omission or breach of duty of Directors or Officers of the Named Insured while acting in their capacity as such." After the defense was tendered to Economy Preferred, Economy Preferred sent a letter to counsel for the association and Board members, indicating that it had no contractual duty to defend or indemnify the defendants. Further, Economy Preferred indicated that coverage was provided only for damages "caused by an occurrence," defined in the policy as "an accident, including continuous or repeated exposure to conditions, which results in bodily injury or property damage neither expected nor intended from the standpoint of the insured." Economy Preferred asserted that the allegations of the complaint were not based upon an "occurrence." In addition, Economy Preferred indicated that coverage is not provided under the directors and officers endorsement, since paragraph "E" negates coverage for the claims asserted in the complaint. The third-party complaint which defendants then filed was drawn in four counts. Count I cites policy language, which provides that Economy Preferred would: "Defend any civil suit against the insured or any of them, alleging a Wrongful Act which is covered under the terms of this supplement, even if such suit is groundless, false or fraudulent." Third-party plaintiffs allege that Economy Preferred's refusal to appear on behalf of the association and board members is contrary to the terms of the policy in that Economy Preferred has a contractual duty to defend. 
Count II of the third-party complaint seeks indemnification for legal fees and costs incurred in the suit and for any judgment which might be entered against third-party plaintiffs in the underlying action. Count III seeks indemnification from Economy Preferred in the event that plaintiff recovers on his complaint against third-party plaintiffs. Count IV of the third-party complaint eventually was dismissed. In its order the trial court found and declared that: (1) Economy Preferred had no duty to defend the third-party plaintiffs regarding the underlying complaint; (2) Economy Preferred had no duty to indemnify the third-party plaintiffs under the policy regarding any claimed losses by plaintiff arising from the fire; and (3) Economy Preferred *538 is not obligated to pay any fees or costs incurred by the third-party plaintiffs in defending the suit. Third-party plaintiffs then appealed. At oral argument of this appeal, this court raised the issue of jurisdiction, since the notice of appeal contained a file stamp date which was outside of the 30-day filing period set forth in Supreme Court Rule 303(a)(1). (107 Ill.2d R. 303(a)(1).) The final order granting summary judgment in favor of Economy Preferred and denying the appellants' cross-motion for summary judgment was entered on August 17, 1988. The order contained the required language under Supreme Court Rule 304(a) (107 Ill.2d R. 304(a)) that the order was appealable. The notice of appeal was received and stamped by the clerk's office on September 19, 1988, a Monday, which constitutes the 31st day after the final order was entered. See Ill. Rev. Stat. 1987, ch. 1, par. 1012. • 1 Third-party plaintiffs filed a supplemental brief, citing the recent Illinois Supreme Court case of Harrisburg-Raleigh Airport Authority v. Department of Revenue (1989), 126 Ill.2d 326, 533 N.E.2d 1072, which held that notices of appeal mailed within the 30-day period and received thereafter are timely filed. (Harrisburg-Raleigh Airport Authority, 126 Ill.2d at 340.) Appellants also submitted an affidavit of Stuart D. Gordon, a former attorney of third-party plaintiffs, who stated that he mailed the notice of appeal to the clerk of the circuit court on September 13, 1988. That date is within the 30-day filing period. (107 Ill.2d R. 303(a)(1).) The record indicates that Gordon mailed copies of the notice of appeal to counsel for the other parties on September 13, 1988. We find that under Harrisburg-Raleigh Airport Authority, and based upon the affidavit of Gordon and the record previously prepared on appeal, the notice of appeal was timely filed. This court therefore has jurisdiction to consider this appeal. Third-party plaintiffs initially contend on appeal that Economy Preferred had a duty to defend pursuant to its agreement under the policy. They argue that the trial court erred in granting summary judgment in favor of Economy Preferred to the extent it relied on Economy Preferred's argument that plaintiff's claimed losses were not covered since the claims failed to arise from an "occurrence" as defined by the policy. Further, they contend that the policy language providing coverage for claims arising out of a "wrongful act" applies to the plaintiff's claims. In addition, third-party plaintiffs argue that even if coverage for claims is limited to those arising out of an "occurrence," the fire which damaged plaintiff's property constitutes such an "occurrence," and therefore coverage has been provided. 
*539 Economy Preferred responds that initially, in its letter refusing to defend third-party plaintiffs and in its original motion for summary judgment, it asserted that plaintiff's claims were not covered because the claims did not arise from an "occurrence" as defined under the policy. Subsequently, however, in its response to the cross-motion for summary judgment and at the hearing before the trial court, Economy Preferred abandoned this argument. Further, Economy Preferred asserts that this argument was not a factor in the court's granting of summary judgment in favor of Economy Preferred. • 2 We note that third-party plaintiffs, the appellants in this matter, failed to include in the record on appeal a copy of the transcript of proceedings from the hearing on the cross-motions for summary judgment. An appellant has the burden to present a sufficiently complete record of the proceedings at trial to support a claim of error. Supreme Court Rule 321 (107 Ill.2d R. 321) provides, in pertinent part: "The record on appeal shall consist of the judgment appealed from, the notice of appeal, and the entire original common law trial court record * * *. The trial court record includes any report of proceedings prepared in accordance with Rule 323 and every other document filed and judgment and order entered in the cause." In the instant case there was no report of proceedings filed. Nor is there a bystander's report which is authorized under Rule 323(c) (107 Ill.2d R. 323(c)). Further, appellants failed to file an agreed statement of facts in lieu of a report of proceedings pursuant to Rule 323(d) (107 Ill.2d R. 323(d)). • 3 In the absence of a complete record on appeal, and upon a claim of error, it will be presumed that the order entered by the trial court was in conformity with law and had a sufficient factual basis. (Foutch v. O'Bryant (1984), 99 Ill.2d 389, 459 N.E.2d 958.) Any doubts which may arise from the incompleteness of the record will be resolved against the appellant. (Foutch, 99 Ill.2d at 392.) In the absence of a report of proceeding, particularly when the judgment order states that the court is fully advised in the premises, a reviewing court "will indulge in every reasonable presumption favorable to judgment, order or ruling from which an appeal is taken" (In re Pyles (1978), 56 Ill. App.3d 955, 957, 372 N.E.2d 1139, 1141) and must presume that the evidence heard by the trial court was sufficient to support the judgment absent any contrary indications in the record (In re Marriage of Macaluso (1982), 110 Ill. App.3d 838, 846, 443 N.E.2d 1). *540 • 4 Since the written order of the trial court fails to indicate that the court relied on the initial argument of Economy Preferred that the "occurrence" language of the policy did not provide coverage for plaintiff's claims, we may presume that the trial court did not rely on this argument in reaching its decision. The absence of the transcript of proceedings allows us to indulge in this presumption, particularly in view of the representation by Economy Preferred that it abandoned that argument before the trial court. Further, as discussed below, even if it is arguable that the fire which caused the property damage to plaintiff in the underlying action may be considered an "occurrence" under the policy, we find that the exclusionary language of endorsement "E" excludes plaintiff's claims from coverage. 
Third-party plaintiffs next contend that some of the plaintiff's claims in the underlying complaint do not fall within the "wrongful act" policy exclusion. They cite, from count I, allegations that the directors failed to obtain insurance which would "process with diligence any claims declared under said policy" and which would "oversee the insurance company's response to claims made under the insurance policy." Further, count I alleges that the board members failed to assist plaintiff in having his premises restored as required by the Condominium Property Act. From count II, third-party plaintiffs cite the allegation that defendants failed to assist plaintiff in the processing of his claim with defendants' insurance company. Finally, count III alleges that defendants had a duty to see that all claims filed with the insurance company were processed diligently and that defendants failed to compel completion of plaintiff's unit. Third-party plaintiffs contend that the cited allegations do not relate to the defendants' failure to procure, effect or maintain insurance. Rather, they contend that the allegations relate to defendants' purported failure to assist plaintiff in his claim, their failure to push Economy Preferred to act with dispatch, and their failure to monitor Economy Preferred's adjustment of the claim. Therefore, these allegations fall outside of the policy exclusion. • 5 An insurance company's obligation to represent its insured depends on the allegations of the complaint and the provisions of the insurance policy. (Tuell v. State Farm Fire & Casualty Co. (1985), 132 Ill. App.3d 449, 477 N.E.2d 70.) An insurer has a duty to defend an action brought against the insured if the complaint alleges facts within, or potentially within, coverage. (Tuell, 132 Ill. App.3d at 452, citing Thornton v. Paul (1978), 74 Ill.2d 132, 144, 384 N.E.2d 335.) The duty to defend applies where the complaint alleges several causes of action or theories of recovery against an insured, even if only one *541 or some of them are within policy coverage. (Maryland Casualty Co. v. Peppers (1976), 64 Ill.2d 187, 194, 355 N.E.2d 24.) The complaint must be liberally construed and all doubts resolved in favor of the insured. Maryland Casualty Co. v. Chicago & North Western Transportation Co. (1984), 126 Ill. App.3d 150, 466 N.E.2d 1091. • 6 The cited allegations set forth a failure to provide diligent and efficient processing and monitoring of claims. We find that these allegations, when read in the context of the complaint as a whole and in the context of each count of the complaint, relate to "procuring, effecting and maintaining" insurance. Therefore, the allegations come within the exclusionary language of exclusion "E" of the policy endorsement. Accordingly, Economy Preferred did not have a duty to defend third-party plaintiffs against the claims alleged in the underlying complaint. Third-party plaintiffs cite case law for the proposition that the exclusionary language of an insurance policy must be strictly construed against the insurer. (Herrera v. Benefit Trust Life Insurance Co. (1984), 126 Ill. App.3d 355, 466 N.E.2d 1172.) Further, they cite the rule that if a policy provision is ambiguous, the ambiguity must be construed in favor of the insured. (Simioni v. Continental Insurance Cos. (1985), 135 Ill. App.3d 916, 482 N.E.2d 434.) Third-party plaintiffs assert that there is no ambiguity in exclusion "E" and that the language clearly does not exclude the claims made by plaintiff. 
We agree that the exclusionary language is not ambiguous. We find, however, that the allegations of the complaint fail to set forth facts which bring the plaintiff's claims outside of exclusion "E." See Menke v. Country Mutual Insurance Co. (1980), 78 Ill.2d 420, 401 N.E.2d 539. Third-party plaintiffs also contend that since plaintiff's claims are arguably within coverage, Economy Preferred had an absolute duty to defend and third-party plaintiffs were entitled to summary judgment in their favor. Third-party plaintiffs assert that even if Economy Preferred believed that it had a valid defense of exclusionary coverage, Economy Preferred had three options: (1) to secure a declaratory judgment while defending under a reservation of rights; (2) to defend under a reservation of rights and seek a declaratory judgment in a subsequent suit; or (3) to defend without a reservation of rights. We need not address, however, the issue of Economy Preferred's alleged absolute duty to defend, under a reservation of rights or otherwise, since we have found that the exclusionary language of the policy shows that Economy Preferred had no duty to defend. Finally, third-party plaintiffs contend that Economy Preferred is *542 required to indemnify them for any damages awarded to plaintiff for damages resulting from conduct not excluded from coverage by the policy. Economy Preferred responds that the issue of indemnification has been raised prematurely and is not yet ripe for determination by this court. • 7 The court in Maryland Casualty Co. v. Chicago & North Western Transportation Co. (1984), 126 Ill. App.3d 150, 466 N.E.2d 1091, stated that a declaratory judgment action to determine an insurer's duty to indemnify its insured, brought prior to a determination of the insured's liability, is premature since the question to be determined is not then ripe for adjudication. In view of our finding that the exclusionary language of the policy shows that Economy Preferred owes no duty to defend third-party plaintiffs, we must also find that Economy Preferred cannot be liable to indemnify third-party plaintiffs regarding the underlying litigation. Accordingly, we hold that the trial court properly determined that Economy Preferred had no duty to indemnify. For the foregoing reasons, the judgment of the circuit court of Cook County is affirmed. Judgment affirmed. WHITE and CERDA, JJ., concur.
Q: Cannot install mpi4py on CentOS 7

I have CentOS 7 and I have installed mpicc (it works and compiles for OpenMPI in C). I also have Python 2.7.5 and just installed pip. I'm running this command and get the following errors:

sudo pip install mpi4py

Collecting mpi4py
  Using cached mpi4py-2.0.0.tar.gz
Installing collected packages: mpi4py
  Running setup.py install for mpi4py ... error
    Complete output from command /usr/bin/python2 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-x5jD4O/mpi4py/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-mpMoZO-record/install-record.txt --single-version-externally-managed --compile:
    running install
    running build
    running build_src
    running build_py
    creating build
    creating build/lib.linux-x86_64-2.7
    creating build/lib.linux-x86_64-2.7/mpi4py
    copying src/__main__.py -> build/lib.linux-x86_64-2.7/mpi4py
    copying src/__init__.py -> build/lib.linux-x86_64-2.7/mpi4py
    creating build/lib.linux-x86_64-2.7/mpi4py/include
    creating build/lib.linux-x86_64-2.7/mpi4py/include/mpi4py
    copying src/include/mpi4py/mpi4py.MPI.h -> build/lib.linux-x86_64-2.7/mpi4py/include/mpi4py
    copying src/include/mpi4py/mpi4py.MPI_api.h -> build/lib.linux-x86_64-2.7/mpi4py/include/mpi4py
    copying src/include/mpi4py/mpi4py.h -> build/lib.linux-x86_64-2.7/mpi4py/include/mpi4py
    copying src/include/mpi4py/__init__.pxd -> build/lib.linux-x86_64-2.7/mpi4py/include/mpi4py
    copying src/include/mpi4py/libmpi.pxd -> build/lib.linux-x86_64-2.7/mpi4py/include/mpi4py
    copying src/include/mpi4py/MPI.pxd -> build/lib.linux-x86_64-2.7/mpi4py/include/mpi4py
    copying src/include/mpi4py/__init__.pyx -> build/lib.linux-x86_64-2.7/mpi4py/include/mpi4py
    copying src/include/mpi4py/mpi.pxi -> build/lib.linux-x86_64-2.7/mpi4py/include/mpi4py
    copying src/include/mpi4py/mpi4py.i -> build/lib.linux-x86_64-2.7/mpi4py/include/mpi4py
    copying src/MPI.pxd -> build/lib.linux-x86_64-2.7/mpi4py
    copying src/libmpi.pxd -> build/lib.linux-x86_64-2.7/mpi4py
    running build_clib
    MPI configuration: [mpi] from 'mpi.cfg'
    checking for library 'lmpe' ...
    gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -c _configtest.c -o _configtest.o
    gcc -pthread _configtest.o -llmpe -o _configtest
    /bin/ld: cannot find -llmpe
    collect2: error: ld returned 1 exit status
    failure.
    removing: _configtest.c _configtest.o
    building 'mpe' dylib library
    creating build/temp.linux-x86_64-2.7
    creating build/temp.linux-x86_64-2.7/src
    creating build/temp.linux-x86_64-2.7/src/lib-pmpi
    gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -c src/lib-pmpi/mpe.c -o build/temp.linux-x86_64-2.7/src/lib-pmpi/mpe.o
    creating build/lib.linux-x86_64-2.7/mpi4py/lib-pmpi
    gcc -pthread -shared -Wl,-z,relro build/temp.linux-x86_64-2.7/src/lib-pmpi/mpe.o -o build/lib.linux-x86_64-2.7/mpi4py/lib-pmpi/libmpe.so
    checking for library 'vt-mpi' ...
    gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -c _configtest.c -o _configtest.o
    gcc -pthread _configtest.o -lvt-mpi -o _configtest
    /bin/ld: cannot find -lvt-mpi
    collect2: error: ld returned 1 exit status
    failure.
    removing: _configtest.c _configtest.o
    checking for library 'vt.mpi' ...
    gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -c _configtest.c -o _configtest.o
    gcc -pthread _configtest.o -lvt.mpi -o _configtest
    /bin/ld: cannot find -lvt.mpi
    collect2: error: ld returned 1 exit status
    failure.
    removing: _configtest.c _configtest.o
    building 'vt' dylib library
    gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -c src/lib-pmpi/vt.c -o build/temp.linux-x86_64-2.7/src/lib-pmpi/vt.o
    gcc -pthread -shared -Wl,-z,relro build/temp.linux-x86_64-2.7/src/lib-pmpi/vt.o -o build/lib.linux-x86_64-2.7/mpi4py/lib-pmpi/libvt.so
    checking for library 'vt-mpi' ...
    gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -c _configtest.c -o _configtest.o
    gcc -pthread _configtest.o -lvt-mpi -o _configtest
    /bin/ld: cannot find -lvt-mpi
    collect2: error: ld returned 1 exit status
    failure.
    removing: _configtest.c _configtest.o
    checking for library 'vt.mpi' ...
    gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -c _configtest.c -o _configtest.o
    gcc -pthread _configtest.o -lvt.mpi -o _configtest
    /bin/ld: cannot find -lvt.mpi
    collect2: error: ld returned 1 exit status
    failure.
    removing: _configtest.c _configtest.o
    building 'vt-mpi' dylib library
    gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -c src/lib-pmpi/vt-mpi.c -o build/temp.linux-x86_64-2.7/src/lib-pmpi/vt-mpi.o
    gcc -pthread -shared -Wl,-z,relro build/temp.linux-x86_64-2.7/src/lib-pmpi/vt-mpi.o -o build/lib.linux-x86_64-2.7/mpi4py/lib-pmpi/libvt-mpi.so
    checking for library 'vt-hyb' ...
    gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -c _configtest.c -o _configtest.o
    gcc -pthread _configtest.o -lvt-hyb -o _configtest
    /bin/ld: cannot find -lvt-hyb
    collect2: error: ld returned 1 exit status
    failure.
    removing: _configtest.c _configtest.o
    checking for library 'vt.ompi' ...
    gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -c _configtest.c -o _configtest.o
    gcc -pthread _configtest.o -lvt.ompi -o _configtest
    /bin/ld: cannot find -lvt.ompi
    collect2: error: ld returned 1 exit status
    failure.
    removing: _configtest.c _configtest.o
    building 'vt-hyb' dylib library
    gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -c src/lib-pmpi/vt-hyb.c -o build/temp.linux-x86_64-2.7/src/lib-pmpi/vt-hyb.o
    gcc -pthread -shared -Wl,-z,relro build/temp.linux-x86_64-2.7/src/lib-pmpi/vt-hyb.o -o build/lib.linux-x86_64-2.7/mpi4py/lib-pmpi/libvt-hyb.so
    running build_ext
    MPI configuration: [mpi] from 'mpi.cfg'
    checking for MPI compile and link ...
    gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/include/python2.7 -c _configtest.c -o _configtest.o
    _configtest.c:2:17: fatal error: mpi.h: No such file or directory
     #include <mpi.h>
                     ^
    compilation terminated.
    failure.
    removing: _configtest.c _configtest.o
    error: Cannot compile MPI programs. Check your configuration!!!

I have tried every solution I found so far and none seemed to work. Does anyone have any idea about this problem, please? Thank you

A: I ran into the same issue and solved it with:

yum install openmpi-devel
export CC=/usr/lib64/openmpi/bin/mpicc
pip install mpi4py
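To confirm the resulting installation actually works, a minimal smoke test (an addition to the accepted answer above, not part of the original post; it assumes OpenMPI's mpirun, e.g. from the /usr/lib64/openmpi/bin directory mentioned above, is on your PATH):

# hello_mpi.py -- print one line per MPI rank
from mpi4py import MPI

comm = MPI.COMM_WORLD
print("Hello from rank %d of %d" % (comm.Get_rank(), comm.Get_size()))

Running it with, for example, mpirun -n 2 python hello_mpi.py should print two "Hello from rank ..." lines, which indicates that mpi4py was built against a working MPI installation.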
Firefox version 52 ESR, Win XP OS. The Silverlight plug-in stopped working just yesterday on Netflix on my Win XP Pro OS. I have uninstalled the older version and downloaded the correct version I need, 5.1.4.1.2, and it is installed, but it still doesn't work with Netflix. I cannot find in the registry the Netflix and Silverlight permission to run the plug-in. This is a problem and needs to be fixed. I don't have the money or means to upgrade my OS at this time. I have no TV either. I have been in contact with Microsoft and Netflix and they determined that the problem lies with Mozilla. I cannot use any other browser, as Chrome does not work with Netflix and Win XP either, due to being unable to update WidevineCdm on XP.

Yesterday I got a report from one company about a Firefox update being blocked by their firewall. Once the firewall is on, the update fails. Turning it off fixes the update problem, but it also opens their network, which is not desired. I advised them to whitelist the domains mentioned in the article https://developer.mozilla.org/en-US/docs/Mozilla/Setting_up_an_update_server (aus2.mozilla.org and download.mozilla.org), but it seems that did not help. The Firefox update still failed with an "Update failed" message. So which servers does Firefox need access to during the update process, so that they can whitelist them? EDIT: Just to add, before whitelisting the mentioned domains, Firefox wasn't even able to find the update. Now the update is found, but it fails when downloading.

I'm using Firefox on Ubuntu LTS 16.04 as a kiosk, making use of a very handy add-on. The system was locked down to prevent anyone from browsing anything but localhost. Now anyone can jump on my kiosk and browse or download anything they want. It is in no way more secure than it was. Even after using the about:config options, the updates still happen. I will find a more appropriate browser option for the kiosk, but for now I'd really like a way to stop the updates short of shutting everything down.

Firefox hangs when I open a new tab or new webpage. It's really annoying! Every time I do the automatic updates, this happens. I used to go to one of the articles with instructions on how to reset my preferences, but that thread has since been modified; it now says to delete my pref.js, but that doesn't fix the issue anymore. So I'm resorting to actually making an account and asking why this keeps happening. I uninstalled Firefox and re-installed as well, but it saved some of my data, which I kind of need (bookmarks). That doesn't help. I know that resetting my preferences back to default after every update used to fix it, but I forget how now. Can someone tell me how to fix it so I can update without having to do all this again? I have only one extension, and that is Adblock Plus. It is not likely the cause, as this happens without it as well.

I have been happy with FF v43 until I decided to update to v50. It installed but would not launch, saying "mozglue.dll" was missing. I found on Google that some other users had the same problem, with some advice that AVG might be the problem. I found the mozglue.dll was in fact in Program Files, and repeated uninstalls and reinstalls of FF v50, with AVG also uninstalled, made no difference. I (mistakenly) found that reusing my original install file of v43 then produced the same result, until I discovered that FF was updating my v43 to v47 within seconds of launching. I finally beat FF's automatic updating to get to the option "do not update," so I have a working, albeit old, version of Firefox. Any advice would be welcome while I fiddle with various workarounds for annoying warnings about my FF being out of date and its knock-on effect on add-ons etc. PS: I am very happily (and stubbornly) on Windows XP Professional 64-bit SP2, but I suggest it's not my operating system at fault, because other sufferers use a variety of OSes. Thanks, Kevin in the UK

Running Firefox 63.0.3 (64-bit) on Windows 10 ver 1803, I click on "Restart to update Firefox" and get "Firefox is installing your updates and will start in a few moments." But when it does start, nothing has changed: I am still running 63.0.3 and it still says "Restart to update Firefox." I have been round this loop a half dozen times. What next?

When I started Firefox, it said that it was "installing updates..."; an hour and a half later, it was still installing (I have never had this happen before). I closed the program and shut down the computer, then restarted. When I tried to restart Firefox, I got "unable to load XPCOM". So I uninstalled my Firefox and attempted a download; it only took me a dozen or so tries, as it kept getting interrupted at the very end. I finally got to a site where I was able to download it, but when I attempted to install it, I got a little box that said "0% extracting" and nothing happened. An hour later, the little box STILL says "0% extracting" and nothing has happened. I cannot seem to get Firefox to install, and I'm ready to take a hammer to the computer. How do I fix this? I appreciate all help offered. Thank you!

I upgraded Firefox and now have 3 critical issues. 1. It does not display most pages correctly; see the screenshot attached. I had to go on Chrome just to get into support. 2. It tells me every site I visit is insecure; once I say I will take the risk, the site is just text, like the screenshot. 3. I no longer have Firebug, which I need immediately. Please help! If you could send me the old version of Firefox for Mac OS 10.10, that should work for now!

I am currently using Firefox version 42. I noticed that a new version 43 is available and wanted to update my Firefox. Traditionally I would simply navigate to the "About Mozilla Firefox" window, where Firefox automatically checks for a new version, downloads it, and lets me install the update with a single click. However, when I open the About window, Firefox checks for a new version but insists my browser is up to date. I recognize the obvious workaround of installing the new version manually. This issue has, however, plagued me since several versions ago. I always hoped it would work itself out after enough updates, but since it still persists today, I wanted to seek the collective help of the Mozilla community. My system is Windows 10 (the error was also present during Windows 8.1) and I also use Thunderbird, which updates itself just fine.

I have been using Firefox on my Vista PC with no problem until I ran the update from 52.7.2 to 52.7.3. Now I cannot start Firefox and I get the message "could not load XPCOM". I have downloaded 52.7.3esr ready to install, but I am unable to uninstall 52.7.2 even if I use the helper.exe file. Please note I am sending this on a W8.1 computer, not the one that is giving me the problem.
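For readers trying to stop Firefox from updating itself, as in the kiosk post above: on builds that support enterprise policies (Firefox 60 / ESR 60 and later, so newer than some of the versions discussed in this thread), one documented approach is a policies.json file placed in a "distribution" directory inside the Firefox installation directory. A minimal sketch, offered here as an assumption about the reader's Firefox version rather than a fix for the older releases above:

{
  "policies": {
    "DisableAppUpdate": true
  }
}

Older releases predate this mechanism, which is why about:config toggles were the usual, and less reliable, advice at the time.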
Church & State

Sticker Shock
Georgia School District Drops Evolution Disclaimer Fight, As Creationist Forces Lose Another Round In The War Over Science Education

Jeffrey Selman never intended to be such a public proponent of the separation of church and state, even though he has always considered himself an advocate for that long-cherished American principle. But when the suburban Atlanta computer programmer learned that his son's public school district planned to place stickers questioning evolution in science textbooks, his circumstances were swiftly altered.

The Cobb County school system is the second largest public school district in Georgia, but in 2002, the board of education surrendered to a religious pressure campaign and announced its plan to warn students that evolution is "a theory, not a fact" and that it should be "critically considered."

Up until that point, Selman told Church & State, it had felt as if he had been "sleepwalking through a democracy." He had assumed that it was "a done deal" that religion could not be taught in the science curriculum and that public schools could not take actions to undermine the teaching of evolution for religious reasons.

After reading about the school board's move in a local newspaper, Selman placed a call to the American Civil Liberties Union of Georgia. He also made appearances before the school board, urging its members to reverse their decision. Selman said his pleas to the board were shunned, and it soon became apparent that litigation was the only alternative. When the Georgia ACLU called to ask if he was interested in being a plaintiff, he readily agreed.

"I said, yeah, go for it," recalled Selman. Other Cobb County parents later joined the Selman v. Cobb County School Board lawsuit.

Selman's very public foray into activism on behalf of church-state separation – the case drew national and international media attention – ended with a victory last December, when Cobb County school officials entered a broad-based agreement promising to abide by constitutional mandates.

The Dec. 19 settlement states that Cobb County school officials are barred "from restoring to the science textbooks of students in the Cobb County schools any stickers, labels, stamps, inscriptions, or other warnings or disclaimers bearing language substantially similar to that used on the sticker that is the subject of this action." Moreover, it prohibits school officials from taking any other actions that would "prevent or hinder the teaching of evolution in the School District."

Selman, who during the course of litigation also was elected president of the Georgia chapter of Americans United for Separation of Church and State, lauded the school board. "The settlement brings to an end a long battle to keep our science classes free of political or religious agendas," he said. "I am very pleased that the Cobb school board has dropped its defense of the anti-evolution policy."

The settlement, which was brought about with substantial legal help from Americans United, is another big setback for Religious Right activists who are waging war on public schools. They have long sought to force the teaching of creationism or its latest variant, "intelligent design," in science classes. The federal courts have repeatedly rejected that crusade, ruling that the teaching of religion in public schools is unconstitutional. In the most recent ruling in December 2005, a federal judge invalidated the Dover, Pa., school district's attempt to teach intelligent design.

To circumvent the courts, some Religious Right forces are trying a new maneuver, urging public schools to question the validity of evolution without publicly putting forward a religious alternative to the scientific concept.

The Discovery Institute, one of the nation's leading proponents of intelligent design, weighed in heavily on the side of the Georgia anti-evolution stickers, saying that a final decision in the case would "be at least as important, if not more important, than the Dover school district case last year."

Casey Luskin, a Discovery Institute attorney, said in a May 25, 2006, statement that he hoped the 11th U.S. Circuit Court of Appeals would rule that the disclaimers are constitutional and create precedent for the states covered by the 11th Circuit – Georgia, Alabama and Florida. "Eventually it's likely that a decision will be handed down from this federal appellate court governing legal decisions in multiple states," Luskin said.

The Cobb County sticker was a perfect example of the Discovery Institute's latest gambit. It read: "This textbook contains material on evolution. Evolution is a theory, not a fact, regarding the origins of living things. This material should be approached with an open mind, studied carefully, and critically considered."

The Cobb County school officials' decision to settle the case squelched the hopes of Religious Right activists and rendered the Discovery Institute speechless. Although the group's Web site includes several press releases about the four-year Georgia battle, it has yet to issue a statement about the controversy's abrupt conclusion.

Although the Discovery Institute tried to pretend that its interests were purely scientific, the push for the evolution disclaimer was clearly religious in character. Marjorie Rogers, a Cobb County parent with a staunch literal belief in the Bible's creation story, led the crusade. With the help of her friends and her church, she launched a petition drive urging the school board to adopt the anti-evolution stickers. Rogers, according to court documentation, was incensed that textbooks only covered evolution, and she repeatedly condemned the books for not examining creationism.

In a 2005 interview with The Washington Post, Rogers said evolution, which the vast majority of the world's scientists describe as the cornerstone of biology, offends her and is actually a religion. She complained that evolution belittles humans.

After Rogers triumphed and the stickers were placed in the Cobb County science textbooks in 2002, Selman and other parents sued, charging that the board was promoting religion in violation of the First Amendment. U.S. District Judge Clarence Cooper concurred, concluding in 2005 that the stickers have "already sent a message that the school board agrees with the beliefs of Christian fundamentalists and creationists." He held that the disclaimer "misleads students regarding the significance and value of evolution in the scientific community for the benefit of religious alternatives."

Cooper's ruling was in line with federal court precedent, including decisions from the U.S. Supreme Court. In 1968, the Supreme Court invalidated an Arkansas statute that prohibited public schools from teaching evolution. In Epperson v. Arkansas, the justices found that lawmakers created the state law with the sole motivation of appeasing the "fundamentalist sectarian conviction" of many of their constituents.

Following Epperson, creationist groups tried a different tactic. The Louisiana legislature passed a so-called "balanced treatment" law that required the public schools to teach "creation science" whenever they taught evolution. In 1987, the Supreme Court struck down the measure. In Edwards v. Aguillard, the justices concluded that the lawmakers did not have a secular purpose. The high court also held that the Louisiana law "advances a religious doctrine by requiring either the banishment of the theory of evolution from public school classrooms or the presentation of a religious viewpoint that rejects evolution in its entirety."

After Judge Cooper's ruling, the Cobb County school district paid students and teachers $10 an hour to scrape the stickers from textbooks. According to The Atlanta Journal-Constitution, $14,243 was spent on the project. The school board, however, also continued to defend the stickers and filed an appeal.

Last spring, a three-judge panel of the 11th Circuit refused to rule on the constitutionality of the disclaimers and sent the case back to Cooper for further consideration. The panel complained that documentation was insufficient for it to decide whether to reverse or sustain Cooper's decision.

Following the 11th Circuit action, the ACLU of Georgia reached out to Americans United for help with the legal battle. Americans United, along with the Pennsylvania ACLU and the Philadelphia law firm of Pepper Hamilton, had represented Dover, Pa., parents in their successful legal challenge to intelligent design. Before the case could go back into the courtroom, however, the Cobb County board of education decided to enter the settlement and end the lawsuit.

Friends of the First Amendment were pleased. "Students should be taught sound science, and the curriculum should not be altered at the behest of aggressive religious groups," said Americans United Executive Director Barry W. Lynn. "Cobb County school officials have taken the right step to ensure that their students receive a quality education."

It appears that changes in the make-up of the school board and weariness over the controversy played into the decision to reach an agreement. According to press accounts, Cobb County board members had decided that they no longer wanted to spend public funds on the fight and endure the glare of media attention.

The new chairwoman of the Cobb County education board said members decided to settle the controversy to avoid a prolonged legal battle. "We faced the distraction and expense of starting all over with more legal actions and another trial," board chairwoman Teresa Plenge told the Associated Press. "With this agreement, it is done and we now have a clean slate for the new year."

Some commentators found the attention damaging to the north metro Atlanta district that serves a diverse population of more than 106,000 students. Maureen Downey, an editorial board writer for The Atlanta Journal-Constitution, wrote that the controversy was expensive and that "it was also embarrassing, turning the prestigious school system into a punch line." In a Dec. 20 article, the newspaper noted that the disclaimers were "the subject of jokes on blogs and e-mails, including one that linked to a Web site offering alternative stickers. One read: 'This textbook contains material on gravity. Gravity is a theory, not a fact, regarding a force that cannot be seen….'"

Moreover, as the newspaper reported, the Cobb County board of education had undergone some unusual turnover. The board had rarely seen change, but the 2006 elections saw the departure of three board members who were proponents of the disclaimers. They were replaced by individuals who publicly campaigned on ending the legal fight. Indeed, one of the new board members, John Crooks, a Baptist minister, lauded the settlement, telling the press, "Moving on to more important educational matters is essential."

Kathie Johnstone, the board's chairwoman during the adoption of the stickers, didn't even make it to the general election. She lost overwhelmingly to a Republican challenger in the summer GOP primary.

Cobb school officials and attorneys also may have realized that they faced an increasingly daunting challenge. Richard Katskee, Americans United's assistant legal director and its primary attorney in the Dover, Pa., case, had quickly started rounding up some of the same expert witnesses from the Dover case to help in the Cobb County dispute. These included Kenneth Miller, a biology professor at Brown University, Brian Alters, a professor of science education at McGill University, and Eugenie Scott, executive director of the National Center for Science Education.

Americans United's Katskee was pleased with the settlement and said the school district would now be in the position to "focus squarely on providing a sound education to Cobb County students."

Mike King, a member of the editorial board for The Journal-Constitution, wrote in a Dec. 21 column that the Cobb school board had finally come to its senses. "Truth is," King wrote, "there never has been widespread support within the county to change the way human biology should be taught. It has always been the work of a handful of anti-evolution zealots who would be better off home-schooling their children."

Two of the parents who pushed the Cobb board to adopt the stickers lashed out at the settlement. Rogers told the Journal-Constitution that she was disappointed and that without the evolution disclaimer stickers, "the textbooks are inaccurate and biased and unconstitutional." Larry Taylor, who has three children in the Cobb schools, said "terrorist organizations like the ACLU" are "hijacking our country's educational system by imposing their own secular agenda on the rest of us."

Religious Right leaders also are bitter. On his Dec. 21 "700 Club," television preacher Pat Robertson launched into a tirade. "Evolution is a theory – not a fact," Robertson claimed. "Who knows what happened 500 million or a billion years ago? Who knows? Who can say for certain?

"You know none of us were there," he continued. "These are all speculations and they're based on incomplete science. So to say it is a fact is bad science. [Cobb County school officials] should have fought. We would have fought beside them. I think it's time the good people stand up and fight for what they believe in.

"And this business of evolution is based essentially on atheism – that there was no God and that higher life emerged from primordial ooze…that paramecium and protozoa are our ancestors. That's nonsense. Why should schools say, well, we will never question that? Of course, they should question that."

Selman, however, scoffed at the notion that the legal action against the evolution disclaimers was any kind of plot to force religious believers to abandon their faith. Indeed, he noted that his understanding of evolution has not caused him to renounce his Jewish faith. "Simply learning about evolution does not mean that you have to give up your religious beliefs," he said.
Selman added that he always hoped to avoid litigation. He urged the Cobb board of education on a number of occasions to reverse its decision on the stickers. “I was just trying to do what is right,” he said, “and the board’s refusal to abandon the sticker left us no recourse.”
#include "playerInfo.h" #include "gameGlobalInfo.h" #include "spaceObjects/playerSpaceship.h" #include "tacticalScreen.h" #include "preferenceManager.h" #include "screenComponents/combatManeuver.h" #include "screenComponents/radarView.h" #include "screenComponents/impulseControls.h" #include "screenComponents/warpControls.h" #include "screenComponents/jumpControls.h" #include "screenComponents/dockingButton.h" #include "screenComponents/alertOverlay.h" #include "screenComponents/customShipFunctions.h" #include "screenComponents/missileTubeControls.h" #include "screenComponents/aimLock.h" #include "screenComponents/shieldsEnableButton.h" #include "screenComponents/beamFrequencySelector.h" #include "screenComponents/beamTargetSelector.h" #include "screenComponents/powerDamageIndicator.h" #include "gui/gui2_keyvaluedisplay.h" #include "gui/gui2_label.h" #include "gui/gui2_rotationdial.h" TacticalScreen::TacticalScreen(GuiContainer* owner) : GuiOverlay(owner, "TACTICAL_SCREEN", colorConfig.background) { // Render the radar shadow and background decorations. background_gradient = new GuiOverlay(this, "BACKGROUND_GRADIENT", sf::Color::White); background_gradient->setTextureCenter("gui/BackgroundGradientSingle"); background_crosses = new GuiOverlay(this, "BACKGROUND_CROSSES", sf::Color::White); background_crosses->setTextureTiled("gui/BackgroundCrosses"); // Render the alert level color overlay. (new AlertLevelOverlay(this)); // Short-range tactical radar with a 5U range. radar = new GuiRadarView(this, "TACTICAL_RADAR", &targets); radar->setPosition(0, 0, ACenter)->setSize(GuiElement::GuiSizeMatchHeight, 750); radar->setRangeIndicatorStepSize(1000.0)->shortRange()->enableGhostDots()->enableWaypoints()->enableCallsigns()->enableHeadingIndicators()->setStyle(GuiRadarView::Circular); // Control targeting and piloting with radar interactions. radar->setCallbacks( [this](sf::Vector2f position) { targets.setToClosestTo(position, 250, TargetsContainer::Targetable); if (my_spaceship && targets.get()) my_spaceship->commandSetTarget(targets.get()); else if (my_spaceship) my_spaceship->commandTargetRotation(sf::vector2ToAngle(position - my_spaceship->getPosition())); }, [this](sf::Vector2f position) { if (my_spaceship) my_spaceship->commandTargetRotation(sf::vector2ToAngle(position - my_spaceship->getPosition())); }, [this](sf::Vector2f position) { if (my_spaceship) my_spaceship->commandTargetRotation(sf::vector2ToAngle(position - my_spaceship->getPosition())); } ); radar->setAutoRotating(PreferencesManager::get("tactical_radar_lock","0")=="1"); // Ship statistics in the top left corner. energy_display = new GuiKeyValueDisplay(this, "ENERGY_DISPLAY", 0.45, tr("Energy"), ""); energy_display->setIcon("gui/icons/energy")->setTextSize(20)->setPosition(20, 100, ATopLeft)->setSize(240, 40); heading_display = new GuiKeyValueDisplay(this, "HEADING_DISPLAY", 0.45, tr("Heading"), ""); heading_display->setIcon("gui/icons/heading")->setTextSize(20)->setPosition(20, 140, ATopLeft)->setSize(240, 40); velocity_display = new GuiKeyValueDisplay(this, "VELOCITY_DISPLAY", 0.45, tr("Speed"), ""); velocity_display->setIcon("gui/icons/speed")->setTextSize(20)->setPosition(20, 180, ATopLeft)->setSize(240, 40); shields_display = new GuiKeyValueDisplay(this, "SHIELDS_DISPLAY", 0.45, tr("Shields"), ""); shields_display->setIcon("gui/icons/shields")->setTextSize(20)->setPosition(20, 220, ATopLeft)->setSize(240, 40); // Weapon tube loading controls in the bottom left corner. 
tube_controls = new GuiMissileTubeControls(this, "MISSILE_TUBES"); tube_controls->setPosition(20, -20, ABottomLeft); radar->enableTargetProjections(tube_controls); // Beam controls beneath the radar. if (gameGlobalInfo->use_beam_shield_frequencies || gameGlobalInfo->use_system_damage) { GuiElement* beam_info_box = new GuiElement(this, "BEAM_INFO_BOX"); beam_info_box->setPosition(0, -20, ABottomCenter)->setSize(500, 50); (new GuiLabel(beam_info_box, "BEAM_INFO_LABEL", tr("Beams"), 30))->addBackground()->setPosition(0, 0, ABottomLeft)->setSize(80, 50); (new GuiBeamFrequencySelector(beam_info_box, "BEAM_FREQUENCY_SELECTOR"))->setPosition(80, 0, ABottomLeft)->setSize(132, 50); (new GuiPowerDamageIndicator(beam_info_box, "", SYS_BeamWeapons, ACenterLeft))->setPosition(0, 0, ABottomLeft)->setSize(212, 50); (new GuiBeamTargetSelector(beam_info_box, "BEAM_TARGET_SELECTOR"))->setPosition(0, 0, ABottomRight)->setSize(288, 50); } // Weapon tube locking, and manual aiming controls. missile_aim = new AimLock(this, "MISSILE_AIM", radar, -90, 360 - 90, 0, [this](float value){ tube_controls->setMissileTargetAngle(value); }); missile_aim->hide()->setPosition(0, 0, ACenter)->setSize(GuiElement::GuiSizeMatchHeight, 800); lock_aim = new AimLockButton(this, "LOCK_AIM", tube_controls, missile_aim); lock_aim->setPosition(250, 20, ATopCenter)->setSize(110, 50); // Combat maneuver and propulsion controls in the bottom right corner. (new GuiCombatManeuver(this, "COMBAT_MANEUVER"))->setPosition(-20, -390, ABottomRight)->setSize(200, 150); GuiAutoLayout* engine_layout = new GuiAutoLayout(this, "ENGINE_LAYOUT", GuiAutoLayout::LayoutHorizontalRightToLeft); engine_layout->setPosition(-20, -80, ABottomRight)->setSize(GuiElement::GuiSizeMax, 300); (new GuiImpulseControls(engine_layout, "IMPULSE"))->setSize(100, GuiElement::GuiSizeMax); warp_controls = (new GuiWarpControls(engine_layout, "WARP"))->setSize(100, GuiElement::GuiSizeMax); jump_controls = (new GuiJumpControls(engine_layout, "JUMP"))->setSize(100, GuiElement::GuiSizeMax); (new GuiDockingButton(this, "DOCKING"))->setPosition(-20, -20, ABottomRight)->setSize(280, 50); (new GuiCustomShipFunctions(this, tacticalOfficer, ""))->setPosition(-20, 120, ATopRight)->setSize(250, GuiElement::GuiSizeMax); } void TacticalScreen::onDraw(sf::RenderTarget& window) { if (my_spaceship) { energy_display->setValue(string(int(my_spaceship->energy_level))); heading_display->setValue(string(fmodf(my_spaceship->getRotation() + 360.0 + 360.0 - 270.0, 360.0), 1)); float velocity = sf::length(my_spaceship->getVelocity()) / 1000 * 60; velocity_display->setValue(tr("{value} {unit}/min").format({{"value", string(velocity, 1)}, {"unit", DISTANCE_UNIT_1K}})); warp_controls->setVisible(my_spaceship->has_warp_drive); jump_controls->setVisible(my_spaceship->has_jump_drive); shields_display->setValue(string(my_spaceship->getShieldPercentage(0)) + "% " + string(my_spaceship->getShieldPercentage(1)) + "%"); targets.set(my_spaceship->getTarget()); } GuiOverlay::onDraw(window); } bool TacticalScreen::onJoystickAxis(const AxisAction& axisAction){ if(my_spaceship){ if (axisAction.category == "HELMS"){ if (axisAction.action == "IMPULSE"){ my_spaceship->commandImpulse(axisAction.value); return true; } else if (axisAction.action == "ROTATE"){ my_spaceship->commandTurnSpeed(axisAction.value); return true; } else if (axisAction.action == "STRAFE"){ my_spaceship->commandCombatManeuverStrafe(axisAction.value); return true; } else if (axisAction.action == "BOOST"){ 
my_spaceship->commandCombatManeuverBoost(axisAction.value); return true; } } } return false; } void TacticalScreen::onHotkey(const HotkeyResult& key) { if (key.category == "HELMS" && my_spaceship) { if (key.hotkey == "TURN_LEFT") my_spaceship->commandTargetRotation(my_spaceship->getRotation() - 5.0f); else if (key.hotkey == "TURN_RIGHT") my_spaceship->commandTargetRotation(my_spaceship->getRotation() + 5.0f); } if (key.category == "WEAPONS" && my_spaceship) { if (key.hotkey == "NEXT_ENEMY_TARGET") { bool current_found = false; foreach(SpaceObject, obj, space_object_list) { if (obj == targets.get()) { current_found = true; continue; } if (current_found && sf::length(obj->getPosition() - my_spaceship->getPosition()) < my_spaceship->getShortRangeRadarRange() && my_spaceship->isEnemy(obj) && my_spaceship->getScannedStateFor(obj) >= SS_FriendOrFoeIdentified && obj->canBeTargetedBy(my_spaceship)) { targets.set(obj); my_spaceship->commandSetTarget(targets.get()); return; } } foreach(SpaceObject, obj, space_object_list) { if (obj == targets.get()) { continue; } if (my_spaceship->isEnemy(obj) && sf::length(obj->getPosition() - my_spaceship->getPosition()) < my_spaceship->getShortRangeRadarRange() && my_spaceship->getScannedStateFor(obj) >= SS_FriendOrFoeIdentified && obj->canBeTargetedBy(my_spaceship)) { targets.set(obj); my_spaceship->commandSetTarget(targets.get()); return; } } } if (key.hotkey == "NEXT_TARGET") { bool current_found = false; foreach(SpaceObject, obj, space_object_list) { if (obj == targets.get()) { current_found = true; continue; } if (obj == my_spaceship) continue; if (current_found && sf::length(obj->getPosition() - my_spaceship->getPosition()) < my_spaceship->getShortRangeRadarRange() && obj->canBeTargetedBy(my_spaceship)) { targets.set(obj); my_spaceship->commandSetTarget(targets.get()); return; } } foreach(SpaceObject, obj, space_object_list) { if (obj == targets.get() || obj == my_spaceship) continue; if (sf::length(obj->getPosition() - my_spaceship->getPosition()) < my_spaceship->getShortRangeRadarRange() && obj->canBeTargetedBy(my_spaceship)) { targets.set(obj); my_spaceship->commandSetTarget(targets.get()); return; } } } if (key.hotkey == "AIM_MISSILE_LEFT") { missile_aim->setValue(missile_aim->getValue() - 5.0f); tube_controls->setMissileTargetAngle(missile_aim->getValue()); } if (key.hotkey == "AIM_MISSILE_RIGHT") { missile_aim->setValue(missile_aim->getValue() + 5.0f); tube_controls->setMissileTargetAngle(missile_aim->getValue()); } } }
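The heading readout in onDraw() above maps the ship's internal rotation (0 degrees = the +X axis) onto a 0-360 compass heading. A standalone sketch of just that conversion, with a helper name invented for illustration (not part of the original file):

#include <cassert>
#include <cmath>

// Mirrors the onDraw() expression: the net effect is a +90 degree shift
// (two full turns are added before subtracting 270 to keep the operand
// positive), then the result is wrapped into [0, 360).
static float rotationToHeading(float rotation)
{
    return std::fmod(rotation + 360.0f + 360.0f - 270.0f, 360.0f);
}

int main()
{
    assert(rotationToHeading(-90.0f) == 0.0f);   // facing "north" on the radar
    assert(rotationToHeading(0.0f) == 90.0f);    // +X axis shows as heading 090
    assert(rotationToHeading(180.0f) == 270.0f); // -X axis shows as heading 270
}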
/**
Good tutorial: http://www.informit.com/articles/article.aspx?p=673259
*/

#include <algorithm>
#include <cassert>
#include <fstream>
#include <iostream>
#include <set>
#include <vector>

#include <boost/config.hpp> //-lboost_graph
#include <boost/graph/graph_traits.hpp>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/dijkstra_shortest_paths.hpp>
#include <boost/property_map/property_map.hpp>

int main() {
    /*
    #Graph

    The following class hierarchy exists:

        BidirectionalGraph -------- Incidence ---------+
                                                       |
                                    Adjacency ---------+
                                                       |
        VertexAndEdgeList ----+---- VertexList --------+---- Graph
                              |                        |
                              +---- EdgeList ----------+
                                                       |
                                    AdjacencyMatrix ---+
    */
    {
        /*
        #properties

        Properties are values associated to edges and vertices.
        */
        {
            /*
            There are a few predefined properties which you should use whenever
            possible as they are already used in many algorithms, but you can
            also define your own properties.

            Predefined properties include:

            - `edge_weight_t`. Used for most algorithms that have a single value
              associated to each edge such as Dijkstra.
            - `vertex_name_t`
            */
            {
                typedef boost::property<boost::vertex_name_t, std::string> VertexProperties;
                typedef boost::property<boost::edge_weight_t, int> EdgeProperties;
            }

            /*
            Multiple properties can be specified either by:

            - using a custom class as the property type. TODO is there any limitation to this?
            - chaining multiple properties
            */
            {
            }

            /*
            The absence of a property is specified by boost::no_property.
            */
            {
                typedef boost::no_property VertexProperties;
            }
        }

        typedef boost::property<boost::vertex_name_t, std::string> VertexProperties;
        typedef boost::property<boost::edge_weight_t, int> EdgeProperties;
        typedef boost::adjacency_list<
            // Data structure to represent the out edges for each vertex.
            // Possibilities:
            //
            // #vecS selects std::vector.
            // #listS selects std::list.
            // #slistS selects std::slist.
            // #setS selects std::set.
            // #multisetS selects std::multiset.
            // #hash_setS selects std::hash_set.
            //
            // `S` stands for Selector.
            boost::vecS,
            // Data structure to represent the vertex set.
            boost::vecS,
            // Directed type.
            // #bidirectionalS: directed graph with access to in and out edges
            // #directedS: directed graph with access only to out-edges
            // #undirectedS: undirected graph
            boost::bidirectionalS,
            // Optional.
            VertexProperties,
            // Optional.
            EdgeProperties
        > Graph;

        //typedef boost::graph_traits<Graph>::vertex_iterator VertexIter;
        //typedef boost::graph_traits<Graph>::vertex_descriptor Vertex;
        //typedef boost::property_map<Graph, boost::vertex_index_t>::type IndexMap;

        // Fix number of vertices, and add one edge at a time.
        int num_vertices = 3;
        Graph g(num_vertices);
        boost::add_edge(0, 1, g);
        boost::add_edge(1, 2, g);

        // Fix number of vertices, and add one edge array.
        {
            int num_vertices = 3;
            typedef std::pair<int, int> Edge;
            std::vector<Edge> edges{
                {0, 1},
                {1, 2},
            };
            Graph g(edges.data(), edges.data() + edges.size(), num_vertices);
        }

        // It is also possible to add vertices with #add_vertex.

        //#vertices
        {
            // Number of vertices.
            boost::graph_traits<Graph>::vertices_size_type num_vertices = boost::num_vertices(g);
            assert(num_vertices == 3u);

            //#vertices() Returns a begin() end() vertex iterator pair so we know where to stop.
            {
                typedef std::vector<boost::graph_traits<Graph>::vertex_descriptor> Vertices;
                Vertices vertices;
                vertices.reserve(num_vertices);
                //IndexMap
                auto index = boost::get(boost::vertex_index, g);
                //std::pair<vertex_iter, vertex_iter> vp
                for (auto vp = boost::vertices(g); vp.first != vp.second; ++vp.first) {
                    // Vertex
                    auto v = *vp.first;
                    vertices.push_back(index[v]);
                }
                assert((vertices == Vertices{0, 1, 2}));
            }

            // The iterator is a random access iterator.
            {
                auto index = boost::get(boost::vertex_index, g);
                auto it = boost::vertices(g).first;
                assert(index[it[2]] == 2);
                assert(index[it[1]] == 1);
            }
        }

        //#edges
        {
            // It seems that only AdjacencyMatrix has a method to get an edge given two vertices:
            //edge(u, v, g)
        }
    }

    //#source is also a global function: <http://stackoverflow.com/questions/16114616/why-is-boost-graph-librarys-source-a-global-function>

    //#dijkstra
    std::cout << "#dijkstra" << std::endl;
    {
        typedef boost::adjacency_list<
            boost::listS,
            boost::vecS,
            boost::directedS,
            boost::no_property,
            boost::property<boost::edge_weight_t, int>
        > Graph;
        typedef boost::graph_traits<Graph>::vertex_descriptor vertex_descriptor;
        typedef boost::graph_traits<Graph>::edge_descriptor edge_descriptor;
        typedef std::pair<int, int> Edge;

        // Model inputs.
        const int num_nodes = 5;
        const int source_vertex = 0;
        std::vector<Edge> edges{
            {0, 2}, {1, 1}, {1, 3}, {1, 4}, {2, 1},
            {2, 3}, {3, 4}, {4, 0}, {4, 1}
        };
        std::vector<int> weights{1, 2, 1, 2, 7, 3, 1, 1, 1};

        // Solve.
        Graph g(edges.data(), edges.data() + edges.size(), weights.data(), num_nodes);
        std::vector<vertex_descriptor> p(num_vertices(g));
        std::vector<int> d(num_vertices(g));
        vertex_descriptor s = vertex(source_vertex, g);
        dijkstra_shortest_paths(g, s,
            predecessor_map(boost::make_iterator_property_map(
                p.begin(), boost::get(boost::vertex_index, g)
            )).distance_map(boost::make_iterator_property_map(
                d.begin(), boost::get(boost::vertex_index, g)
            ))
        );

        // Print solution to stdout.
        std::cout << "node | distance from source | parent" << std::endl;
        boost::graph_traits<Graph>::vertex_iterator vi, vend;
        for (boost::tie(vi, vend) = vertices(g); vi != vend; ++vi)
            std::cout << *vi << " " << d[*vi] << " " << p[*vi] << std::endl;
        std::cout << std::endl;

        // Generate a .dot graph file with shortest path highlighted.
        // To PNG with: dot -Tpng -o outfile.png input.dot
        boost::property_map<Graph, boost::edge_weight_t>::type weightmap = boost::get(boost::edge_weight, g);
        std::ofstream dot_file("dijkstra.dot");
        dot_file << "digraph D {\n"
            << " rankdir=LR\n"
            << " size=\"4,3\"\n"
            << " ratio=\"fill\"\n"
            << " edge[style=\"bold\"]\n"
            << " node[shape=\"circle\"]\n";
        boost::graph_traits<Graph>::edge_iterator ei, ei_end;
        for (std::tie(ei, ei_end) = boost::edges(g); ei != ei_end; ++ei) {
            edge_descriptor e = *ei;
            boost::graph_traits<Graph>::vertex_descriptor
                u = boost::source(e, g),
                v = boost::target(e, g);
            dot_file << u << " -> " << v << "[label=\"" << boost::get(weightmap, e) << "\"";
            if (p[v] == u)
                dot_file << ", color=\"black\"";
            else
                dot_file << ", color=\"grey\"";
            dot_file << "]";
        }
        dot_file << "}";

        // Construct forward path to a destination.
        int dest = 4;
        int cur = dest;
        std::vector<int> path;
        path.push_back(cur);
        while (cur != source_vertex) {
            cur = p[cur];
            path.push_back(cur);
        }
        std::reverse(path.begin(), path.end());

        // Print.
        std::cout << "Path to node " << std::to_string(dest) << ":" << std::endl;
        for (auto& node : path) {
            std::cout << node << std::endl;
        }
    }
}
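A small follow-up to the #add_vertex and #edges notes in the tutorial above, as a self-contained sketch (not part of the original file; descriptor names are illustrative):

#include <cassert>
#include <utility>

#include <boost/graph/adjacency_list.hpp>

int main() {
    typedef boost::adjacency_list<boost::vecS, boost::vecS, boost::bidirectionalS> Graph;
    Graph g;
    // Grow the graph one vertex at a time instead of fixing the count up front.
    auto u = boost::add_vertex(g); // returns the new vertex_descriptor
    auto v = boost::add_vertex(g);
    // add_edge returns std::pair<edge_descriptor, bool>; .second reports insertion.
    std::pair<boost::graph_traits<Graph>::edge_descriptor, bool> e = boost::add_edge(u, v, g);
    assert(e.second);
    // edge(u, v, g) is also available on adjacency_list (it scans the out-edge
    // list of u, so it is linear in the out-degree), not just on AdjacencyMatrix.
    assert(boost::edge(u, v, g).second);
}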
/*
 * Copyright (C) 2014 Freddie (Musenkishi) Lust-Hed
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.musenkishi.wally.dataprovider;

import android.app.DownloadManager;
import android.content.Context;
import android.net.Uri;
import android.os.Environment;

import com.musenkishi.wally.dataprovider.models.DataProviderError;
import com.musenkishi.wally.dataprovider.models.SaveImageRequest;
import com.musenkishi.wally.dataprovider.util.Parser;
import com.musenkishi.wally.models.ExceptionReporter;
import com.musenkishi.wally.models.Filter;
import com.musenkishi.wally.models.Image;
import com.musenkishi.wally.models.ImagePage;
import com.musenkishi.wally.models.filters.FilterGroupsStructure;

import java.io.File;
import java.util.ArrayList;

import static com.musenkishi.wally.dataprovider.NetworkDataProvider.OnDataReceivedListener;

/**
 * <strong>No threading shall take place here.</strong>
 * Use this class to get and set data.
 * Created by Musenkishi on 2014-02-28.
 */
public class DataProvider {

    private static final String TAG = "DataProvider";

    private SharedPreferencesDataProvider sharedPreferencesDataProvider;
    private DownloadManager downloadManager;
    private Parser parser;

    public interface OnImagesReceivedListener {
        abstract void onImagesReceived(ArrayList<Image> images);
        abstract void onError(DataProviderError dataProviderError);
    }

    public interface OnPageReceivedListener {
        abstract void onPageReceived(ImagePage imagePage);
        abstract void onError(DataProviderError dataProviderError);
    }

    public DataProvider(Context context, ExceptionReporter.OnReportListener onReportListener) {
        sharedPreferencesDataProvider = new SharedPreferencesDataProvider(context);
        parser = new Parser(onReportListener);
        downloadManager = (DownloadManager) context.getSystemService(Context.DOWNLOAD_SERVICE);
    }

    public SharedPreferencesDataProvider getSharedPreferencesDataProviderInstance(){
        return sharedPreferencesDataProvider;
    }

    public DownloadManager getDownloadManager() {
        return downloadManager;
    }

    public void getImages(String path, String query, String color, int index, FilterGroupsStructure filterGroupsStructure, final OnImagesReceivedListener onImagesReceivedListener) {
        new NetworkDataProvider().getData(path, query, color, index, filterGroupsStructure, new OnDataReceivedListener() {
            @Override
            public void onData(String data, String url) {
                ArrayList<Image> images = parser.parseImages(data);
                if (onImagesReceivedListener != null) {
                    onImagesReceivedListener.onImagesReceived(images);
                }
            }

            @Override
            public void onError(DataProviderError dataProviderError) {
                if (onImagesReceivedListener != null) {
                    onImagesReceivedListener.onError(dataProviderError);
                }
            }
        });
    }

    public ArrayList<Image> getImagesSync(String path, int index, FilterGroupsStructure filterGroupsStructure){
        String data = new NetworkDataProvider().getDataSync(path, index, filterGroupsStructure);
        if (data != null) {
            return parser.parseImages(data);
        } else {
            return null;
        }
    }

    /** */
    public void getImages(String path, int index, FilterGroupsStructure filterGroupsStructure, final OnImagesReceivedListener onImagesReceivedListener) {
        new NetworkDataProvider().getData(path, index, filterGroupsStructure, new OnDataReceivedListener() {
            @Override
            public void onData(String data, String url) {
                ArrayList<Image> images = parser.parseImages(data);
                if (onImagesReceivedListener != null) {
                    if (!images.isEmpty()){
                        onImagesReceivedListener.onImagesReceived(images);
                    } else {
                        DataProviderError noImagesError = new DataProviderError(DataProviderError.Type.LOCAL, 204, "No images");
                        onImagesReceivedListener.onError(noImagesError);
                    }
                }
            }

            @Override
            public void onError(DataProviderError dataProviderError) {
                if (onImagesReceivedListener != null) {
                    onImagesReceivedListener.onError(dataProviderError);
                }
            }
        });
    }

    public ImagePage getPageDataSync(String imagePageUrl){
        String data = new NetworkDataProvider().getDataSync(imagePageUrl);
        return parser.parseImagePage(data, imagePageUrl);
    }

    public void getPageData(String imagePageUrl, final OnPageReceivedListener onPageReceivedListener) {
        new NetworkDataProvider().getData(imagePageUrl, new OnDataReceivedListener() {
            @Override
            public void onData(String data, String url) {
                ImagePage imagePage = parser.parseImagePage(data, url);
                if (onPageReceivedListener != null) {
                    onPageReceivedListener.onPageReceived(imagePage);
                }
            }

            @Override
            public void onError(DataProviderError error) {
                if (onPageReceivedListener != null) {
                    onPageReceivedListener.onError(error);
                }
            }
        });
    }

    public void setTimeSpan(String tag, Filter<String, String> timespan){
        sharedPreferencesDataProvider.setTimespan(tag, timespan);
    }

    public Filter<String, String> getTimespan(String tag){
        return sharedPreferencesDataProvider.getTimespan(tag);
    }

    public void setBoards(String tag, String paramValue){
        sharedPreferencesDataProvider.setBoards(tag, paramValue);
    }

    public String getBoards(String tag) {
        return sharedPreferencesDataProvider.getBoards(tag);
    }

    public void setPurity(String tag, String paramValue){
        sharedPreferencesDataProvider.setPurity(tag, paramValue);
    }

    public String getPurity(String tag) {
        return sharedPreferencesDataProvider.getPurity(tag);
    }

    public void setAspectRatio(String tag, Filter<String, String> aspectRatio){
        sharedPreferencesDataProvider.setAspectRatio(tag, aspectRatio);
    }

    public Filter<String, String> getAspectRatio(String tag) {
        return sharedPreferencesDataProvider.getAspectRatio(tag);
    }

    public void setResolutionOption(String tag, String paramValue){
        sharedPreferencesDataProvider.setResolutionOption(tag, paramValue);
    }

    public String getResolutionOption(String tag) {
        return sharedPreferencesDataProvider.getResolutionOption(tag);
    }

    public void setResolution(String tag, Filter<String, String> resolution){
        sharedPreferencesDataProvider.setResolution(tag, resolution);
    }

    public Filter<String, String> getResolution(String tag) {
        return sharedPreferencesDataProvider.getResolution(tag);
    }

    public SaveImageRequest downloadImageIfNeeded(Uri path, String filename, String notificationTitle){
        FileManager fileManager = new FileManager();
        if (fileManager.fileExists(filename)){
            File file = fileManager.getFile(filename);
            Uri fileUri = Uri.fromFile(file);
            return new SaveImageRequest(fileUri);
        } else {
            String type = ".png"; //fallback to ".png"
            if (path.toString().lastIndexOf(".") != -1) { //-1 means there are no punctuations in the path
                type = path.toString().substring(path.toString().lastIndexOf("."));
            }
            DownloadManager.Request request = new DownloadManager.Request(path);
            request.setTitle(notificationTitle);
            request.setVisibleInDownloadsUi(false);
            request.setNotificationVisibility(DownloadManager.Request.VISIBILITY_VISIBLE);
            request.allowScanningByMediaScanner();
            request.setDestinationInExternalPublicDir(Environment.DIRECTORY_PICTURES, "/Wally/" + filename + type);
            return new SaveImageRequest(downloadManager.enqueue(request));
        }
    }

    public Uri getFilePath(String filename) {
        FileManager fileManager = new FileManager();
        if (fileManager.fileExists(filename)) {
            File file = fileManager.getFile(filename);
            Uri fileUri = Uri.fromFile(file);
            return fileUri;
        }
        return null;
    }
}
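A sketch of how a caller might drive the asynchronous page API above (a hypothetical call site, not part of the original file; context, onReportListener, and pageUrl are assumed to exist in the caller):

// Fetch and parse a single image page, reacting through the supplied callbacks.
DataProvider dataProvider = new DataProvider(context, onReportListener);
dataProvider.getPageData(pageUrl, new DataProvider.OnPageReceivedListener() {
    @Override
    public void onPageReceived(ImagePage imagePage) {
        // The parsed page is ready; hand it to the UI layer.
    }

    @Override
    public void onError(DataProviderError dataProviderError) {
        // Network or parsing failure; surface it to the user.
    }
});

Since the class javadoc promises that no threading takes place here, the caller is responsible for dispatching this work off the main thread and marshalling the callbacks back to it.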
461 F.2d 265
172 U.S.P.Q. 385
ILLINOIS TOOL WORKS, INC., Plaintiff-Appellant, v. SOLO CUP COMPANY, Inc., Defendant-Appellee.
No. 18960.
United States Court of Appeals, Seventh Circuit.
Jan. 26, 1972. Rehearing Denied March 24, 1972. Certiorari Denied June 12, 1972. See 92 S.Ct. 2441.

James P. Hume, Granger Cook, Jr., Hume, Clement, Hume & Lee, Ltd., Chicago, Ill., Richard R. Trexler, Robert W. Beart, Michael Kovac, Chicago, Ill., of counsel, for plaintiff-appellant.
John F. Flannery, Francis A. Even, Fitch, Even, Tabin & Luedeka, Chicago, Ill., for defendant-appellee.
Before DUFFY and HASTINGS, Senior Circuit Judges, and SPRECHER, Circuit Judge.
DUFFY, Senior Circuit Judge.

1 In this suit, plaintiff, Illinois Tool Works, Inc. (ITW), charges defendant, Solo Cup (Solo), with infringement of ITW's Edwards' Patents Nos. 3,139,213 ('213) and 3,091,360 ('360). Both patents at issue relate to the design and manufacture of nestable, expandable thin-wall plastic containers of unitary construction, whose usual use is in applications which require dependable automatic dispensing of such containers, singly and in an upright position.

2 The '213 patent discloses and claims a thin-walled plastic container including a side wall, a bottom, a rim and a continuous Z-shaped stacking facility located in the side wall below the rim.

3 The '360 patent discloses and claims a similar thin-walled plastic container but with an interrupted stacking facility located in the side wall below the rim.

4 These two patents have been before us on previous occasions. The District Court found the '213 patent to be valid and infringed in Illinois Tool Works, Inc. v. Continental Can Company, 273 F.Supp. 94 (N.D.Ill.1967). We affirmed, 397 F.2d 517 (7 Cir., 1968).

5 The patents ('213 and '360) were before us in Illinois Tool Works, Inc. v. Sweetheart Plastics, Inc., 436 F.2d 1180 (7 Cir., 1971). The District Court, in the Sweetheart decision, determined that each of the patents was valid and infringed. (306 F.Supp. 364). We affirmed (436 F.2d 1180).

6 In 436 F.2d, at page 1182, we pointed out that the '213 patent had been challenged in a previous infringement suit in which, after full consideration of both anticipation and obviousness defenses raised therein, the District Court held the patent valid, and we affirmed in 397 F.2d 517 (7 Cir., 1968). We stated: "We again hold that the '213 patent is valid, unanticipated and nonobvious." (436 F.2d 1180, 1182). We further held ". . . The Edwards '360 patent was not anticipated by any of the inventions cited by defendant here or in the district court." (436 F.2d 1180).

7 With reference to the defense of obviousness, we stated: "Application of the foregoing method convinces us that the '360 patent is valid and nonobvious from the state of the prior art." (436 F.2d 1180, 1183).

8 We also said: "As is apparent from the descriptions of the asserted prior art which follow, none of these items sufficiently approximates the '360 invention to satisfy the narrow anticipation defense. Moreover, none of them, independently or in combination, renders the invention obvious." (436 F.2d 1180, 1183).

9 As to infringement, we said: "We agree with the district court that 'the accused . . . devices embody each of the elements specified in the patent."' (436 F.2d 1180, 1187). We then affirmed the District Court in all respects.
10 In the instant case, on January 23, 1970, Solo filed a motion in the District Court seeking partial summary judgment as to Claims 1, 2 and 3 of the '360 patent, on the ground that these claims were invalid for obviousness. The District Court, 317 F.Supp. 1169, denied Solo's motion for summary judgment but decided that the '213 cups delivered to Automatic Canteen in April 1958 and resold for public use in April and May 1958 constituted prior art to be considered on the trial on the question of validity of the '360 patent.

11 The District Court adopted Solo's interpretation of 35 U.S.C. Sec. 102(a) and held the sale to and the use of the '213 cups by Automatic Canteen rendered Edwards' own invention "known or used by others" within the meaning of Section 102(a), therefore available as prior art against the '360 cups.

12 The District Court then granted ITW's alternative motion to certify the "prior art" issue. ITW then petitioned this Court for leave to file this appeal and we granted that petition.

13 The important issue to be decided here is whether the District Court was in error in holding that one's own invention once disclosed to the public is "prior art" against the same inventor's later related invention on which an application was filed less than one year from such public disclosure.

14 We must also consider whether public knowledge, sale and use by others of the '213 plastic cups is "prior art" against the '360 patent even though the '360 patent was filed within one year of such public knowledge, sale and use and the inventive subject matter common to both the '213 and '360 inventions was developed by Edwards prior to the public knowledge and use of the '213 patent.

15 On November 29, 1957, ITW filed a patent application describing several species of thin-walled plastic nestable containers having a continuous Z-shaped stacking facility.

16 In December 1957, ITW submitted samples of plastic drinking cups to Automatic Canteen Company. In the same month, Automatic Canteen placed an order for one million of these cups. By April 1958, 50,000 of these cups had been delivered to Automatic Canteen and were, presumably, placed in vending machines used by the public in April and May 1958.

17 In June 1958, Edwards developed a thin-walled plastic cup having an interrupted side-wall Z-shaped stacking facility.

18 On October 29, 1958, ITW filed a continuation-in-part application comprising all of the subject matter of the original application plus drawings and claims directed to plastic containers having different forms of interrupted side-wall Z-shaped stacking facilities.

19 The original application was intentionally abandoned on November 10, 1958 after the filing of the continuation-in-part application. Thereafter, in response to the Patent Office's requirement for restriction of May 1, 1959, ITW filed a divisional application claiming the container species embodying the continuous Z-shaped stacking facility. The divisional application eventually matured into the '213 patent in suit.

20 The continuation-in-part application retained claims embodying the interrupted Z-shaped stacking facility and eventually matured into the '360 patent.

21 The District Court placed primary reliance on a court decision not cited by either party. This was a decision by the Court of Customs and Patent Appeals (C.C.P.A.). Application of Jaeger, 241 F.2d 723, 44 C.C.P.A. 767 (1957). We think that Court overlooked a fact which sharply distinguishes that case from the one at bar.
22 In the Jaeger case, the earlier prior art patent of one of the co-inventors had been issued more than two years prior to the filing of the application in question. Therefore, there was a statutory bar under 35 U.S.C. Sec. 102(b). In that case, the subject application was not filed within one year of the publication or issue date of the earlier patent. The broad language used by the Court in Jaeger must be limited to a case where the prior art reference is a statutory bar under 35 U.S.C. Sec. 102(b).

23 The District Court, in the case at bar, quoted from an unsubstantiated statement by the Court of Customs and Patent Appeals in Jaeger, supra, in arriving at its opinion:

24 ". . . (t)he law makes no distinction between prior art of an applicant's own making and the prior art of others."

25 Upon closer examination, this statement appeared first in Dix-Seal Corporation v. New Haven Trap Rock Company, 236 F.Supp. 914 (D.C.Conn., 1964), where the Jaeger Court used this reasoning without including the determinative sentence preceding it. At page 920, the Dix-Seal Court said:

26 "Once the year in which to prepare and file his application has passed, the employment of a standard of patentability less stringent against the first inventor than against . . . others would seem to impair, if not defeat congressional policy. There should be no distinction between prior art of the inventor's own making and that of others."

27 Therefore, the reliance of the District Court on Jaeger as applicable to the case at bar seems misplaced.

28 In the appeal before our Court, ITW alleges the District Court erred in its ruling on the statutory interpretation of 35 U.S.C. Sec. 102(a). The District Court reasoned in its memorandum opinion of June 24, 1970, that to give the statute the construction ITW desires would provide the inventor of two similar devices a "preferential exemption" from the prior art statutes as to the later invention. We believe the District Court erred in its interpretation of 35 U.S.C. Sec. 102(a).

29 ITW has urged upon appeal that 35 U.S.C. Sec. 102(a) pertains only to the originality or novelty of an application for a patent and the subject matter covered under said application; that a patent issue only to the first and original inventor of the subject matter. The District Court rejected ITW's argument that the language of 35 U.S.C. Sec. 102(a) must be read to mean "invented by others" as well as "known or used by others", as not persuasive.

30 The Supreme Court in Alexander Milburn Co. v. Davis-Bournonville Co., 270 U.S. 390, 46 S.Ct. 324, 70 L.Ed. 651 (1926) contemplated the Congressional intent of R.S. 4886 which preceded 35 U.S.C. Sec. 102(a). In Milburn, supra, one Whitford's patent was invalidated on the premise that one Clifford's patent containing a full description of the Whitford claim earlier than any date of invention claimed by Whitford, was evidence that Whitford was not the first inventor. The statutory basis of that decision was that Whitford's invention was not patentable because "known or used by others in this country, before his invention." The "others" in the Milburn, supra, case was Clifford, an inventor unrelated and unknown to Whitford.

31 The pertinent part in the Milburn case with respect to the case before us is the evidence that his invention was known to others before he claimed to have made his invention. "What he had invented lacked novelty, to put it one way, or he was not the first inventor, to put it another."
Application of Land, 368 F.2d 866, 877, 54 C.C.P.A. 806 (1966) (emphasis supplied).

32 In Ex parte Lemieux, 1957 C.D. 47, 115 U.S.P.Q. 148, the Board of Patent Appeals cited the appeal of Ex parte Powell and Davis, 37 U.S.P.Q. 285 (Bd. App. 1938) where it was held in a well drafted opinion that an applicant's own British specification, published a few weeks before the filing of his patent application, was not prior art and concluded that subsequent legislation had not altered that rule, that the reference (the applicant's own published specification) asserted to show prior art must disclose the work of someone other than the applicant. (Notice there existed no statutory bar under Section 102(b), because the published specification was within one year of the application).

33 The Court of Customs and Patent Appeals in Application of Facius, 408 F.2d 1396 (1969) discussed how a reference may be "overcome" if the reference is not a statutory bar under 35 U.S.C. Sec. 102(b). The Court, in Facius, supra, discussed two cases where references for anticipation rejections under 35 U.S.C. Sec. 102(e) were deemed to have been "overcome", In re Blout, 333 F.2d 928, 52 C.C.P.A. 751 (1964) and In re Mathews, 408 F.2d 1393 (Cust. & Pat.App., 1969). The Court, in Facius, supra, reasoning how a reference available as prior art can be and was overcome in the two related cases, said at pages 1405-1406:

34 "Moreover, appellants further showed that they themselves had made the inventions [10] upon which the relevant disclosures in the patents had been based. This is a significant fact. If all the appellants had done was to bring the invention of another to the attention of the patentees, then that disclosure in the patent would have been the invention of another and still available as prior art. . . .

35 In the case before us, ITW urges that the '213 cups sold to Automatic Canteen should not be considered prior art against the '360 patent because the subject matter of that sale was Edwards' own invention. The '213 cups were not the invention of another available as a reference under 35 U.S.C. Sec. 102(a). It has been established that Edwards did apply and did receive a patent for the '213 cups. Edwards was the first inventor of the subject matter, a fact not in dispute. Therefore, we feel Solo's contention that the sale of the '213 cups constituted prior art because the "invention was known or used by others in this country . . ., before the invention thereof by the applicant for patent" under 35 U.S.C. Sec. 102(a), is without merit.

36 Pertinent is the Court's statement in Facius, supra, at page 1406:

37 "But certainly one's own invention, whatever the form of disclosure to the public, may not be prior art against oneself, absent a statutory bar."

38 Obviously, the statutory bar referred to above pertains to the one-year period during which Congress has allowed an inventor to perfect, develop and apply for a patent pursuant to 35 U.S.C. Sec. 102(b).

39 We are of the opinion that Section 102(a) dictates that Edwards had to be the first and original inventor of the '360 invention and absent a reference patent or prior invention by another, the requirements of Section 102(a) have been satisfied. As indicated in Application of Land, supra, and in the opinion of the Court of Patent Appeals, Ex parte Powell and Davis, supra, we are convinced that ITW has overcome any possible reference under Section 102(a) by showing Edwards was the first and original inventor of the subject matter of the '360 patent.
Congress indicated by its language in Section 102(a) that the subject matter must have been "invented by others" as well as "known or used by others."

40 Article I, Section 8, Clause 8, of the Constitution provides that Congress has been granted the power ". . . To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries." Congress, in the Patent Act of 1952, has interpreted the "limited Time(s)" to extend for a period of one year during which an inventor is permitted to prepare and submit an application for a patentable advance in technology and still be protected by the patent laws.

41 Section 102(b) concerns the action an inventor must follow once he has developed a novel and original invention (these qualifications are considered in Section 102(a), rendering it imperative that the inventor claiming under the Act be the first inventor of the claimed subject matter, as well as the subject matter being in itself a novel and original advance, Application of Land, 368 F.2d 866 (1966)). Section 102(b) imposes the time limitation of one year on the inventor which limitation is recognized by Congress as implied in the Constitution that an inventor act with deliberate speed in filing his patent application, or his rights to a legal monopoly will be statutorily barred. Dix-Seal, supra.

42 Undoubtedly, a majority of inventions are made in stages. Modifications and improvements of the original idea are common. If the ruling of the District Court is allowed to stand, it would be natural for an inventor to withhold any disclosure to the public until the innovative development is completed. We feel any such delays in disclosure of prospective inventions would seem contrary to the purpose of the patent laws to have new inventions promptly before the public.

43 The balancing process between the encouragement of technological advancement by the granting of legal monopolies and permitting a one-year grace period for application, and the interest of the public in having new advances readily accessible, has been considered by Congress in Section 102(b). The policy to expedite free public use and public disclosure by inventors was the overriding consideration of Congress in passing Section 102(b), which expressly bars an inventor who does not file an application within one year after his invention has been in public use.

44 Once in public use, that invention becomes prior art and as to all later discoveries in that field, anyone else must show some "patentable" change to obtain the legal monopoly. Once the year in which to prepare and file his application has passed, the employment of a standard of patentability less stringent against the first inventor than against these others would seem to impair, if not defeat, congressional policy. Dix-Seal, supra. However, in the instant case, the one year period had clearly not passed.

45 This Court has long recognized a patentee's right to carry the date of his invention behind the date of an apparent reference or patent and thus eliminate that reference or patent from consideration. Moline Plow Co. v. Rock Island Plow Co., 212 F. 727, 731 (7 Cir. 1914); Pleatmaster, Inc. v. J. L. Golding Mfg. Co., 240 F.2d 894, 898 (7 Cir., 1957).

46 Of course, if the "reference" has an effective date more than one year before the application date of the claims under consideration, the statutory bar of 35 U.S.C. Sec.
102(b) prevents the applicant, even though he be the original inventor, from obtaining the patent.

47 In the case before us, the evidence is uncontroverted with respect to the time of filing the applications and the public use of the '213 cup. On November 29, 1957, Edwards filed a patent application which eventually matured into the '213 patent. In April 1958, the knowledge and use of the continuous ring cups by the general public was realized through the sale of cups to Automatic Canteen. On October 29, 1958, eleven months after the initial application, Edwards filed a continuation-in-part application comprising all the claims of the original application plus the additional description and claims which matured into the '360 patent.

48 We think the District Court was in error in concluding that the public knowledge and use and offer for sale and the sale of the '213 cups in December 1957 and April-May 1958 constitutes "prior art" against the '360 patent.

49 We hold that the District Court erroneously interpreted "knowledge or use by others" in 35 U.S.C. Sec. 102(a) to mean the knowledge and use of Edwards' own invention.

50 We also hold that the District Court's reliance on In re Jaeger was misplaced, as that case involves a statutory bar under 35 U.S.C. Sec. 102(b) which does not exist in the instant case.

51 Furthermore, there is a well established line of decisions in the Court of Customs and Patent Appeals to the effect that one's own invention, whatever the form of the disclosure to the public may be, cannot be prior art against oneself, absent a statutory bar.

52 The order of the District Court here in issue is reversed and the cause is remanded to the District Court for further proceedings not inconsistent with our decision herein.

53 Reversed and remanded.

[10] Appellants having shown that they originated the inventions disclosed in the reference, lack of novelty and/or obviousness was immaterial to overcome the references."
59 Ill. App.3d 904 (1978)
375 N.E.2d 914
FRANK J. HAHN, Plaintiff-Appellee, v. NORFOLK AND WESTERN RAILWAY COMPANY, Defendant-Appellant.
No. 77-196.
Illinois Appellate Court — Fifth District.
Opinion filed April 17, 1978.

*905 Thomas W. Alvey, Jr., of Pope and Driemeyer, of Belleville, for appellant.
Charles W. Chapman, of Chapman & Chapman, of Granite City, for appellee.
Judgment affirmed.

Mr. JUSTICE GEORGE J. MORAN delivered the opinion of the court:

Defendant Norfolk and Western Railway Company appeals from a judgment of the circuit court of Madison County entered on a jury verdict in favor of the plaintiff Frank Hahn for personal injuries sustained while an employee of defendant. Plaintiff brought this negligence action pursuant to the Federal Employers' Liability Act (45 U.S.C. § 51 et seq.).

On May 11, 1973, plaintiff, a railroad car inspector in the defendant's employ, reported to work at the Luther Yard in St. Louis for his usual 11 p.m. to 7 a.m. shift. Customarily two inspectors and two airmen were assigned to this shift; however, due to cut-backs, etc., only one other worker, an airman, reported to work that night.

Among the prescribed duties of the plaintiff was the inspection of all arriving railroad cars to ascertain that all plug doors were closed. There was abundant evidence that a plug door is much heavier than a regular box car door, weighing from 500 to 1000 pounds. These doors are situated on rollers and tightly seal the compartment when closed. The normal procedure was for the two inspectors to walk along opposite sides of the stationary cars inspecting for various abnormalities, including open plug doors. Upon finding an open door, the inspector was instructed to first attempt to close it himself. Because of the size and weight of the doors this was a difficult task. Should the inspector be unable to close the door *906 without help, the co-inspector would cross over the track to assist. Plaintiff testified that while one worker held the lock open with one hand and pushed with the other, the second worker would push with both hands. Occasionally, a fork-lift was required to provide sufficient strength to close a plug door.

Prior to proceeding to the yard on the night in question, plaintiff remarked to the foreman that only he and Virgil Hensley, an airman, had reported for work on the third shift. Plaintiff requested additional help from the second shift foreman who acknowledged that the general foreman was aware of the situation but nevertheless instructed the plaintiff to conduct his usual inspection and Hensley to "work air."

Equipped with a lantern, a metal scratching hook, and a two-way radio, plaintiff set out on his inspection tour of the yard. At approximately 1:40 a.m. plaintiff discovered an open plug door. After his first attempt to close the door failed, plaintiff placed the scratching hook at one end of the car to hold the lock open and pulled at the door from the center of the car. When this method also proved unsuccessful, plaintiff secured a piece of wood to keep the lock open and pushed the door three times. With the third push the door closed but the ballast rolled under plaintiff's feet causing him to tumble headfirst on the ground. Since plaintiff immediately experienced considerable pain in his neck and back, he notified the foreman in charge of locomotives who arranged to have plaintiff transported to the hospital. Upon examination at the hospital emergency room, plaintiff was given pain pills and instructed to remain at home for a few days.
After consulting his family physician for the gradually worsening pain in his back and neck, plaintiff was finally referred to a neurosurgeon in 1975. Shortly thereafter, two successful operations known as facet rhizotomies were performed on the lumbar and cervical regions of plaintiff's spine. On this evidence the jury returned a verdict in favor of the plaintiff in the amount of $115,000.

The defendant contends that the trial court erred in several respects. First it contends that it was error for the court to have directed a verdict at the conclusion of all the evidence in favor of the plaintiff on the issue of contributory negligence. While we agree that such matters are usually for the jury to determine, there are instances when the evidence, viewed in the light most favorable to the defendant, so overwhelmingly favors the plaintiff that no contrary verdict could ever stand. (Thatch v. Missouri Pacific R.R. Co., 47 Ill. App.3d 980, 362 N.E.2d 1064; Pedrick v. Peoria & Eastern R.R. Co., 37 Ill.2d 494, 229 N.E.2d 504.) We believe this is such a case. While there was considerable evidence adduced at trial that the defendant had expended 6.5 million dollars in reconditioning the Luther Yard facility, including the installation of the new ballast along the tracks, *907 there was a total lack of evidence that plaintiff had in any manner contributed to his own injury.

Defendant's reliance on Thatch v. Missouri Pacific is misplaced. There we held that, under facts quite dissimilar to this case, defendant had established a submissible issue as to plaintiff's contributory negligence. The plaintiff in Thatch testified that while walking on a concrete platform he was aware of a forklift moving closer to pass him but nevertheless failed to move over far enough to avoid being struck by the machine. There is no evidence in this case that had plaintiff Hahn acted differently he would not have fallen on the rock. In Dixon v. Penn Central Co., 481 F.2d 833 (6th Cir. 1973), the court held that the trial court had erred in submitting the issue of contributory negligence to the jury in an FELA action where there was no evidence that had the plaintiff properly gripped a lever on the machine he would not have been injured.

• 1 The complaint in this case alleged that the defendant was negligent, inter alia, in failing to furnish plaintiff with a reasonably safe place to work and in failing to furnish plaintiff with adequate assistance with which to do his work. Plaintiff testified that while performing his assigned task of closing the open plug door under the conditions provided that night, i.e., without the assistance of another inspector, he was injured. There was considerable testimony as to the inferior footing afforded by the 3/4" rock used in plaintiff's work area, vis-a-vis the small "pea gravel" used in the area where the switchmen work. Finally, there was evidence that defendant had previously received other complaints concerning the dangerous propensity of the rock in this particular area. Defendant, on the other hand, produced no evidence that plaintiff had been inattentive or otherwise negligent in actually pushing the plug door. Its only assertion of contributory negligence was that plaintiff should have radioed Hensley for assistance. However, without some evidence that had plaintiff requested assistance Hensley would have been required to respond, we think the defendant failed to present a submissible issue for the jury on contributory negligence.
In fact, there is no reason he would have called for assistance, as he could not have been charged with knowledge that the 3/4" rock would "roll" as he was attempting to close the door. Indeed, it would appear that no assistance was needed, as the door did close.

The defendant next contends that the trial court erred in excluding certain evidence relating to a prior work related injury to plaintiff's back for which plaintiff was compensated by the defendant railroad. In its answer, defendant interposed as an affirmative defense that plaintiff's injuries were not caused in whole or in part by the alleged negligent acts set forth in the complaint.

In 1966 plaintiff suffered an injury to his back which caused him to be absent from his work for 12 to 14 months. At that time plaintiff consulted Dr. Deyton, since deceased, and his associate Dr. *908 Schaerer, who treated plaintiff in 1975 and whose deposition was read into evidence at trial. In 1967 Dr. Schaerer conducted a discogram and myelogram on plaintiff, both with negative results. Plaintiff recovered and returned to work in 1967. From 1968 until 1973 when this accident occurred, plaintiff missed no work, and consulted no doctors concerning his back.

Plaintiff made a pretrial motion in limine to exclude evidence of the prior injury as well as the subsequent claim and its settlement. The trial court denied plaintiff's motion insofar as it related to the prior injury; however, it later ruled that evidence concerning the claim and settlement arising out of that incident would be excluded.

• 2 We consider first defendant's allegations concerning Dr. Deyton's report. Defendant argues that since Dr. Schaerer, the treating physician, kept Dr. Deyton's report in his file, the findings contained in the report are admissible. In our opinion the trial court properly excluded Dr. Schaerer's testimony concerning this report since there was no evidence that Dr. Schaerer had relied upon Deyton's report in diagnosing or treating the plaintiff. Furthermore, Dr. Schaerer stated that the injury in 1966 was a "one time affair which cleared up." Under these circumstances, the court correctly excluded that portion of Dr. Schaerer's testimony concerning the Deyton report.

At trial defendant was allowed great latitude in examining plaintiff relative to his prior injuries. Defendant now argues, however, that the medical report of Dr. Deyton should have been admitted into evidence. We disagree. Our review of the record in this case leaves us with a serious doubt that the report was properly authenticated. Moreover, the report was clearly hearsay and as defendant has failed to bring it within one of the exceptions to the hearsay rule, the report was properly held to be inadmissible.

• 3, 4 Defendant also offered evidence of another doctor's findings concerning plaintiff's injuries of 1966. It seems that a Dr. Lansche had examined the plaintiff prior to his return to work in 1967. Once again defendant sought to introduce a medical report (Lansche's) through the testimony of another doctor, Dr. Wagner, who expressly denied that he relied upon the report. In addition, Dr. Wagner, who testified on defendant's behalf at trial, was merely an examining physician and as such could only testify as to objective symptoms. (Powers v. Browning, 2 Ill. App.2d 479, 119 N.E.2d 795.) Furthermore, in this instance, there is no indication that Dr. Lansche was unavailable to testify at trial. Thus, we feel the evidence was properly excluded.
• 5 Defendant further contends that it was error to exclude evidence of the plaintiff's prior claim and settlement. We fail to understand how evidence that plaintiff had been compensated by defendant for a prior *909 injury could be relevant to any of the issues of this case. Defendant has cited no cases and we have found none in support of its position that evidence of plaintiff's claim and settlement is admissible under the circumstances. Nor has defendant explained how the admission of the release into evidence would tend to resolve any of the underlying issues of the case. The burden was on the defendant to show a connection with the prior injury. (Scheck v. Evanston Cab Co., 93 Ill. App.2d 220, 236 N.E.2d 258.) The jury was instructed that if it found that the plaintiff's injuries were not caused in whole or in part by the allegations set forth in the complaint, then it should render a verdict in favor of the defendant. Although the defendant was afforded great latitude in examining plaintiff and plaintiff's witnesses concerning the 1966 injury, it was unable to causally link the two injuries. We find no error in the exclusion of any of the above-mentioned evidence.

Defendant next contends that the court erred in refusing to strike that portion of Dr. Schaerer's testimony relating to what the witness termed a thoracic outlet syndrome or chronic back pain. During the evidentiary deposition, plaintiff's counsel questioned the doctor concerning his evaluation of any residual disability. The doctor responded that seven months after the rhizotomies plaintiff experienced an episode of pain in his left shoulder which radiated into three fingers on the left hand. The witness explained that this condition is known as a thoracic outlet syndrome and that it is precipitated by reaching overhead. Defense counsel made no objection to this continuous line of questioning. Finally, the following colloquy occurred between Dr. Schaerer and plaintiff's counsel:

"Q: Do you have an opinion, first, as to whether or not he [plaintiff Hahn] will continue to have that type of difficulty in performing certain activities, Dr. Schaerer, again to a reasonable degree of medical certainty?
MR. ALVEY: Well, I'm going to object to that question unless it is specified what condition you are talking about. I think there's been about three or four different conditions. Would you clarify what conditions you're talking about?
Q: I'm talking about the after effects of the facet conditions that Dr. Schaerer treated him for.
MR. ALVEY: All right."

• 6 Although we agree with defendant that a causal connection between the May 1973 accident and the thoracic outlet syndrome had not been clearly demonstrated, we believe that defendant waived this error in failing to make an objection at the time of the deposition. Supreme Court Rule 211(c)(1) provides that objections to the admission of testimony which might have been corrected if made during the taking of the *910 deposition are waived by a failure to present them at that time. (Ill. Rev. Stat. 1975, ch. 110A, par. 211(c)(1); Bireline v. Espenscheid, 15 Ill. App.3d 368, 304 N.E.2d 508; Moore v. Jewel Tea Co., 46 Ill.2d 288, 263 N.E.2d 103.) Defendant may not acquiesce in allowing the uncertainty of a causal connection between plaintiff's accident and the thoracic outlet syndrome to persist and later seek to take advantage of that uncertainty.
In this instance, had the defendant made a proper objection, further questions could have been asked to obviate any ambiguity concerning the causal connection.

• 7, 8 Finally, defendant contends that certain remarks made by plaintiff's counsel during closing argument were especially prejudicial to the defendant. The objectionable comment concerned the potentially negative effect the instant suit might have upon plaintiff's continued employment by the defendant railroad. Counsel stated:

"And you can take all of these things into account, what he told you, the manner he told you, and the fact he worked and has worked, and he's worked for the railroad for twenty years and hopefully he is going to be able to work for another twenty, but we don't know what's going to happen when this * * * we don't know what happens to Frank Hahn when the case is over. He goes out of here tonight after you reach your verdict and bring your verdict in, and he goes back to work hopefully with no consequences from the railroad."

Defense counsel's objection to this remark was promptly sustained by the court. In addition, the jury was reminded on several occasions that argument of counsel did not constitute evidence and finally IPI Civil No. 1.01 was given instructing the jury to disregard testimony to which an objection had been sustained.

There is no doubt that counsel's argument was wholly improper. Yet not all errors occurring at trial require reversal by an appellate court. Here the error was immediately cured by the court in sustaining defendant's objection and by instructing the jury in the language of IPI Civil No. 1.01. Defendant argues that the prejudicial effect of the statement was not rendered harmless as shown by the excessive verdict. We are unable to say that the amount awarded by the jury in this case is excessive in light of the nature and permanency of the plaintiff's injuries as explained by Dr. Schaerer. In short, the error in this case is insufficient to justify reversal of this cause.

Accordingly, we affirm the judgment of the circuit court of Madison County.

Affirmed.

KARNS, J., concurs.

EBERSPACHER, P.J., dissents without opinion.
/*
Copyright (c) 2008, Yahoo! Inc. All rights reserved.
Code licensed under the BSD License:
http://developer.yahoo.net/yui/license.txt
version: 2.6.0
*/
/**
 * Utilities for cookie management
 * @namespace YAHOO.util
 * @module cookie
 */
YAHOO.namespace("util");

/**
 * Cookie utility.
 * @class Cookie
 * @static
 */
YAHOO.util.Cookie = {

    //-------------------------------------------------------------------------
    // Private Methods
    //-------------------------------------------------------------------------

    /**
     * Creates a cookie string that can be assigned into document.cookie.
     * @param {String} name The name of the cookie.
     * @param {String} value The value of the cookie.
     * @param {Boolean} encodeValue True to encode the value, false to leave as-is.
     * @param {Object} options (Optional) Options for the cookie.
     * @return {String} The formatted cookie string.
     * @method _createCookieString
     * @private
     * @static
     */
    _createCookieString : function (name /*:String*/, value /*:Variant*/, encodeValue /*:Boolean*/, options /*:Object*/) /*:String*/ {

        //shortcut
        var lang = YAHOO.lang;

        var text /*:String*/ = encodeURIComponent(name) + "=" + (encodeValue ? encodeURIComponent(value) : value);

        if (lang.isObject(options)){

            //expiration date
            if (options.expires instanceof Date){
                text += "; expires=" + options.expires.toGMTString();
            }

            //path
            if (lang.isString(options.path) && options.path != ""){
                text += "; path=" + options.path;
            }

            //domain
            if (lang.isString(options.domain) && options.domain != ""){
                text += "; domain=" + options.domain;
            }

            //secure
            if (options.secure === true){
                text += "; secure";
            }
        }

        return text;
    },

    /**
     * Formats a cookie value for an object containing multiple values.
     * @param {Object} hash An object of key-value pairs to create a string for.
     * @return {String} A string suitable for use as a cookie value.
     * @method _createCookieHashString
     * @private
     * @static
     */
    _createCookieHashString : function (hash /*:Object*/) /*:String*/ {

        //shortcuts
        var lang = YAHOO.lang;

        if (!lang.isObject(hash)){
            throw new TypeError("Cookie._createCookieHashString(): Argument must be an object.");
        }

        var text /*:Array*/ = new Array();

        for (var key in hash){
            if (lang.hasOwnProperty(hash, key) && !lang.isFunction(hash[key]) && !lang.isUndefined(hash[key])){
                text.push(encodeURIComponent(key) + "=" + encodeURIComponent(String(hash[key])));
            }
        }

        return text.join("&");
    },

    /**
     * Parses a cookie hash string into an object.
     * @param {String} text The cookie hash string to parse. The string should already be URL-decoded.
     * @return {Object} An object containing entries for each cookie value.
     * @method _parseCookieHash
     * @private
     * @static
     */
    _parseCookieHash : function (text /*:String*/) /*:Object*/ {

        var hashParts /*:Array*/ = text.split("&"),
            hashPart /*:Array*/ = null,
            hash /*:Object*/ = new Object();

        if (text.length > 0){
            for (var i=0, len=hashParts.length; i < len; i++){
                hashPart = hashParts[i].split("=");
                hash[decodeURIComponent(hashPart[0])] = decodeURIComponent(hashPart[1]);
            }
        }

        return hash;
    },

    /**
     * Parses a cookie string into an object representing all accessible cookies.
     * @param {String} text The cookie string to parse.
     * @param {Boolean} decode (Optional) Indicates if the cookie values should be decoded or not. Default is true.
     * @return {Object} An object containing entries for each accessible cookie.
     * @method _parseCookieString
     * @private
     * @static
     */
    _parseCookieString : function (text /*:String*/, decode /*:Boolean*/) /*:Object*/ {

        var cookies /*:Object*/ = new Object();

        if (YAHOO.lang.isString(text) && text.length > 0) {

            var decodeValue = (decode === false ? function(s){return s;} : decodeURIComponent);

            if (/[^=]+=[^=;]?(?:; [^=]+=[^=]?)?/.test(text)){

                var cookieParts /*:Array*/ = text.split(/;\s/g);
                var cookieName /*:String*/ = null;
                var cookieValue /*:String*/ = null;
                var cookieNameValue /*:Array*/ = null;

                for (var i=0, len=cookieParts.length; i < len; i++){

                    //check for normally-formatted cookie (name-value)
                    cookieNameValue = cookieParts[i].match(/([^=]+)=/i);
                    if (cookieNameValue instanceof Array){
                        cookieName = decodeURIComponent(cookieNameValue[1]);
                        cookieValue = decodeValue(cookieParts[i].substring(cookieNameValue[1].length+1));
                    } else {
                        //means the cookie does not have an "=", so treat it as a boolean flag
                        cookieName = decodeURIComponent(cookieParts[i]);
                        cookieValue = cookieName;
                    }
                    cookies[cookieName] = cookieValue;
                }
            }
        }

        return cookies;
    },

    //-------------------------------------------------------------------------
    // Public Methods
    //-------------------------------------------------------------------------

    /**
     * Returns the cookie value for the given name.
     * @param {String} name The name of the cookie to retrieve.
     * @param {Function} converter (Optional) A function to run on the value before returning
     * it. The function is not used if the cookie doesn't exist.
     * @return {Variant} If no converter is specified, returns a string or null if
     * the cookie doesn't exist. If the converter is specified, returns the value
     * returned from the converter or null if the cookie doesn't exist.
     * @method get
     * @static
     */
    get : function (name /*:String*/, converter /*:Function*/) /*:Variant*/{

        var lang = YAHOO.lang;
        var cookies /*:Object*/ = this._parseCookieString(document.cookie);

        if (!lang.isString(name) || name === ""){
            throw new TypeError("Cookie.get(): Cookie name must be a non-empty string.");
        }

        if (lang.isUndefined(cookies[name])) {
            return null;
        }

        if (!lang.isFunction(converter)){
            return cookies[name];
        } else {
            return converter(cookies[name]);
        }
    },

    /**
     * Returns the value of a subcookie.
     * @param {String} name The name of the cookie to retrieve.
     * @param {String} subName The name of the subcookie to retrieve.
     * @param {Function} converter (Optional) A function to run on the value before returning
     * it. The function is not used if the cookie doesn't exist.
     * @return {Variant} If the cookie doesn't exist, null is returned. If the subcookie
     * doesn't exist, null is also returned. If no converter is specified and the
     * subcookie exists, a string is returned. If a converter is specified and the
     * subcookie exists, the value returned from the converter is returned.
     * @method getSub
     * @static
     */
    getSub : function (name /*:String*/, subName /*:String*/, converter /*:Function*/) /*:Variant*/ {

        var lang = YAHOO.lang;
        var hash /*:Variant*/ = this.getSubs(name);

        if (hash !== null) {

            if (!lang.isString(subName) || subName === ""){
                throw new TypeError("Cookie.getSub(): Subcookie name must be a non-empty string.");
            }

            if (lang.isUndefined(hash[subName])){
                return null;
            }

            if (!lang.isFunction(converter)){
                return hash[subName];
            } else {
                return converter(hash[subName]);
            }
        } else {
            return null;
        }
    },

    /**
     * Returns an object containing name-value pairs stored in the cookie with the given name.
     * @param {String} name The name of the cookie to retrieve.
     * @return {Object} An object of name-value pairs if the cookie with the given name
     * exists, null if it does not.
     * @method getSubs
     * @static
     */
    getSubs : function (name /*:String*/) /*:Object*/ {

        //check cookie name
        if (!YAHOO.lang.isString(name) || name === ""){
            throw new TypeError("Cookie.getSubs(): Cookie name must be a non-empty string.");
        }

        var cookies = this._parseCookieString(document.cookie, false);
        if (YAHOO.lang.isString(cookies[name])){
            return this._parseCookieHash(cookies[name]);
        }
        return null;
    },

    /**
     * Removes a cookie from the machine by setting its expiration date to
     * sometime in the past.
     * @param {String} name The name of the cookie to remove.
     * @param {Object} options (Optional) An object containing one or more
     * cookie options: path (a string), domain (a string),
     * and secure (true/false). The expires option will be overwritten
     * by the method.
     * @return {String} The created cookie string.
     * @method remove
     * @static
     */
    remove : function (name /*:String*/, options /*:Object*/) /*:String*/ {

        //check cookie name
        if (!YAHOO.lang.isString(name) || name === ""){
            throw new TypeError("Cookie.remove(): Cookie name must be a non-empty string.");
        }

        //set options
        options = options || {};
        options.expires = new Date(0);

        //set cookie
        return this.set(name, "", options);
    },

    /**
     * Removes a sub cookie with a given name.
     * @param {String} name The name of the cookie in which the subcookie exists.
     * @param {String} subName The name of the subcookie to remove.
     * @param {Object} options (Optional) An object containing one or more
     * cookie options: path (a string), domain (a string), expires (a Date object),
     * and secure (true/false). This must be the same settings as the original
     * subcookie.
     * @return {String} The created cookie string.
     * @method removeSub
     * @static
     */
    removeSub : function(name /*:String*/, subName /*:String*/, options /*:Object*/) /*:String*/ {

        //check cookie name
        if (!YAHOO.lang.isString(name) || name === ""){
            throw new TypeError("Cookie.removeSub(): Cookie name must be a non-empty string.");
        }

        //check subcookie name
        if (!YAHOO.lang.isString(subName) || subName === ""){
            throw new TypeError("Cookie.removeSub(): Subcookie name must be a non-empty string.");
        }

        //get all subcookies for this cookie
        var subs = this.getSubs(name);

        //delete the indicated subcookie
        if (YAHOO.lang.isObject(subs) && YAHOO.lang.hasOwnProperty(subs, subName)){
            delete subs[subName];

            //reset the cookie
            return this.setSubs(name, subs, options);
        } else {
            return "";
        }
    },

    /**
     * Sets a cookie with a given name and value.
     * @param {String} name The name of the cookie to set.
     * @param {Variant} value The value to set for the cookie.
     * @param {Object} options (Optional) An object containing one or more
     * cookie options: path (a string), domain (a string), expires (a Date object),
     * and secure (true/false).
     * @return {String} The created cookie string.
     * @method set
     * @static
     */
    set : function (name /*:String*/, value /*:Variant*/, options /*:Object*/) /*:String*/ {

        var lang = YAHOO.lang;

        if (!lang.isString(name)){
            throw new TypeError("Cookie.set(): Cookie name must be a string.");
        }

        if (lang.isUndefined(value)){
            throw new TypeError("Cookie.set(): Value cannot be undefined.");
        }

        var text /*:String*/ = this._createCookieString(name, value, true, options);
        document.cookie = text;
        return text;
    },

    /**
     * Sets a sub cookie with a given name to a particular value.
     * @param {String} name The name of the cookie to set.
     * @param {String} subName The name of the subcookie to set.
     * @param {Variant} value The value to set.
     * @param {Object} options (Optional) An object containing one or more
     * cookie options: path (a string), domain (a string), expires (a Date object),
     * and secure (true/false).
     * @return {String} The created cookie string.
     * @method setSub
     * @static
     */
    setSub : function (name /*:String*/, subName /*:String*/, value /*:Variant*/, options /*:Object*/) /*:String*/ {

        var lang = YAHOO.lang;

        if (!lang.isString(name) || name === ""){
            throw new TypeError("Cookie.setSub(): Cookie name must be a non-empty string.");
        }

        if (!lang.isString(subName) || subName === ""){
            throw new TypeError("Cookie.setSub(): Subcookie name must be a non-empty string.");
        }

        if (lang.isUndefined(value)){
            throw new TypeError("Cookie.setSub(): Subcookie value cannot be undefined.");
        }

        var hash /*:Object*/ = this.getSubs(name);

        if (!lang.isObject(hash)){
            hash = new Object();
        }

        hash[subName] = value;

        return this.setSubs(name, hash, options);
    },

    /**
     * Sets a cookie with a given name to contain a hash of name-value pairs.
     * @param {String} name The name of the cookie to set.
     * @param {Object} value An object containing name-value pairs.
     * @param {Object} options (Optional) An object containing one or more
     * cookie options: path (a string), domain (a string), expires (a Date object),
     * and secure (true/false).
     * @return {String} The created cookie string.
     * @method setSubs
     * @static
     */
    setSubs : function (name /*:String*/, value /*:Object*/, options /*:Object*/) /*:String*/ {

        var lang = YAHOO.lang;

        if (!lang.isString(name)){
            throw new TypeError("Cookie.setSubs(): Cookie name must be a string.");
        }

        if (!lang.isObject(value)){
            throw new TypeError("Cookie.setSubs(): Cookie value must be an object.");
        }

        var text /*:String*/ = this._createCookieString(name, this._createCookieHashString(value), false, options);
        document.cookie = text;
        return text;
    }
};

YAHOO.register("cookie", YAHOO.util.Cookie, {version: "2.6.0", build: "1321"});
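A brief usage sketch of the API above may help; the calls follow the documented signatures, and the cookie names and values here are illustrative only:

// Illustrative usage of YAHOO.util.Cookie (names and values are made up).
// Set a simple cookie that expires in 30 days.
var expires = new Date();
expires.setDate(expires.getDate() + 30);
YAHOO.util.Cookie.set("sitePrefs", "compact", { path: "/", expires: expires });

// Read it back; returns the string value or null if the cookie is absent.
var prefs = YAHOO.util.Cookie.get("sitePrefs"); // "compact" or null

// Subcookies pack several name-value pairs into one cookie.
YAHOO.util.Cookie.setSub("session", "theme", "dark", { path: "/" });
var theme = YAHOO.util.Cookie.getSub("session", "theme"); // "dark" or null

// Remove the whole cookie when done (path must match the original).
YAHOO.util.Cookie.remove("sitePrefs", { path: "/" });

Note the design choice documented above: removal is implemented by rewriting the cookie with an expiration date in the past (new Date(0)), which is why remove() delegates to set().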
Actress and left-wing activist Jane Fonda poured praise on former San Francisco 49ers quarterback Colin Kaepernick Sunday night at the ACLU of Southern California's annual Bill of Rights Dinner. ... Kaepernick became the face of what many football fans considered an anti-American demonstration when he began to take a knee during the playing of the National Anthem before the start of NFL games ... Fonda, of course, became the face of opposition to the Vietnam War in 1972 when she visited North Vietnam at the height of the war, earning the nickname "Hanoi Jane" and infamously denouncing American soldiers...

ANALYSIS/OPINION: By Oliver North, October 16, 2017
Richard Nixon kept his promises, Ken Burns did not
When Richard Nixon was in the White House, I was in Vietnam and he was my commander in chief. When I was on Ronald Reagan's National Security Council staff, I had the opportunity to brief former President Nixon on numerous occasions and came to admire his analysis of current events, insights on world affairs and compassion for our troops. His preparation for any meeting or discussion was exhaustive. His thirst for information was unquenchable and his tolerance for fools was nonexistent. Mr. Nixon's...

"An essay published in Newsweek, The Washington Spectator and at least one other website argues that a flag honoring U.S. troops who have been captured or gone missing is actually 'racist' and deserves to be treated with the same hostility as the Confederate flag. 'You know that racist flag?' writes Rick Perlstein. 'The one that supposedly honors history but actually spreads a pernicious myth? And is useful only to venal right-wing politicians who wish to exploit hatred by calling it heritage? It's past time to pull it down.' No, Perlstein isn't talking about the Confederate flag, but actually the POW/MIA...

Jane Fonda said she hoped for an open dialogue with veterans after about 50 former military members and supporters protested the actress's appearance Friday evening at the Weinberg Center for the Arts. "Whenever possible I try to sit down with vets and talk with them, because I understand and it makes me sad," Fonda told a relatively full theater, responding to a submitted question. "It hurts me and it will to my grave that I made a huge, huge mistake that made a lot of people think I was against the soldiers." In 1972 Fonda visited Hanoi, North Vietnam, where...

A U.S. Justice Department document that says America can legally order the killing of its citizens if they are believed to be al-Qaida leaders uses the devastating and illegal bombing of Cambodia in the 1960s and '70s to help make its case. American broadcaster NBC News first reported on the "white paper"—a summary of classified memos by the U.S. Justice Department's Office of Legal Counsel—on Monday. The 16-page paper makes a legal case for the U.S. government's highly controversial use of unmanned drones to kill suspected terrorists, including some U.S. citizens. In making its argument, the document brings up the...

Memory hole reminder: The first documentary evidence that Vietnamese communists were directly steering John Kerry's group Vietnam Veterans Against the War has been discovered in a U.S. archive, according to a researcher who spoke with WorldNetDaily. Read more at http://www.wnd.com/2004/10/27207/#615wufvA5oZXKuWK.99

N. Korea's participation in Vietnam War specified in new dossier
By Lee Chi-dong
WASHINGTON, Dec.
4 (Yonhap) -- North Korea dispatched dozens of pilots to the Vietnam War decades ago, with its communist ally short of specialists to operate MiG-17 and MiG-21 fighter jets in battles against the United States, according to a recently released dossier. "On 21 September 1966 an official North Korean request to be allowed to send a North Korean Air Force regiment to help defend North Vietnam against U.S. air attacks was officially reviewed and approved by the Vietnamese Communist Party's Central Military Party Committee, chaired...

It should be obvious to anyone familiar with the venomous UN speech just delivered by Palestinian Authority leader Mahmoud Abbas, calling for Palestinian statehood, that the so-called two state solution is dead. The idea of two states was first proposed to a delegation of PLO terrorists visiting North Vietnam in 1973, according to recently de-classified Soviet era documents. Abu Iyad, a member of that delegation visiting Hanoi, wrote in his memoir "Palestinian without a Motherland" that the North Vietnamese suggested that the PLO "stop talking about annihilating Israel and instead turn your terror war into a struggle for human rights…Then...

When Hornets Growl
The new, supersonic face of e-warfare.
By D.C. Agle, Air & Space Magazine, March 01, 2011
No soft underbelly here: The EA-18G Growler hauls missiles, fuel tanks, and electronic warfare pods. Ted Carlson/Fotodynamics
Two hours north of Seattle, Washington, at the eastern end of the Strait of Juan de Fuca, the entrance to Puget Sound is guarded by a citadel dedicated to the aerial mastery and manipulation of one of the universe's fundamental particles—the electron. The site, Naval Air Station Whidbey Island, was originally envisioned as little more than a waypoint for patrol aircraft scanning the Sound...

"South Vietnam" ceased to exist in 1975 when the US Senate, led by Rep. Jackson Lee's Democratic Party, refused to provide aid and support to the government as tanks from North Vietnam rolled into Saigon.

Sarah Palin has accused presidential candidate Barack Obama of "palling around" with terrorists - referring to his acquaintance with a former member of the Weather Underground. So who were the Weather Underground? Embroiled in an unpopular war in Vietnam, with many of the grievances of the civil-rights movement still unanswered, the US government was facing widespread protests in the late 1960s. Often those who rebelled were rich in idealism but unable or unwilling to take concrete action. On 8 October 1969, all that changed. A newly-formed group of left-wing extremists, dubbed the Weathermen, went on the rampage in a well-planned...

Brainchild of the KGB
As Ion Mihai Pacepa, onetime director of the Romanian espionage service (DIE), later explained, the PLO was conceived at a time when the KGB was creating "liberation front" organizations throughout the Third World. Others included the National Liberation Army of Bolivia, created in 1964 with help from Ernesto "Che" Guevara, and the National Liberation Army of Colombia, created in 1965 with help from Fidel Castro. But the PLO was the KGB's most enduring achievement. In 1964, the first PLO Council, consisting of 422 Palestinian representatives handpicked by the KGB, approved the Soviet blueprint for a Palestinian...

Vietnamese leader Nong Duc Manh was on his way home last Monday after a three-day visit to Cuba that featured a meeting with ailing President Fidel Castro and a joint oil-exploration agreement.
Cuba was the final stop of his nine-day visit to Latin America, which also included Chile, Brazil and Venezuela. Besides visiting with Cuba's interim president, Raul Castro, who took over after his brother had gastrointestinal surgery in late July of last year, Manh also met with Vice-President Ricardo Lage. But it was an unannounced two-hour meeting early Sunday with Fidel Castro, 80, at the hospital where he is...

Globalization: Venezuela's Hugo Chavez is having a grand time cavorting around the world on his Axis Of Evil tour. But we notice he's disgusting as many countries as he's wooing. Vietnam is the most interesting. Chavez blew into Hanoi on Monday and right away began praising Vietnam's government in exactly the way it didn't want: by hailing communism. "Vietnam, with its valor, defeated imperialism not only on the battlefield, but also has maintained socialism in the ideological arena," the South American dictator intoned. Uh-huh. To Vietnam's officials, who've been trying diligently to integrate their nation into the world economy, that's...

The Legacy of Tet
By J.R. Dunn, December 20th, 2005
It was with Tet '68 that the American media first knew sin. Anyone seeking to understand the character of consistently negative media coverage of the Global War on Terror must understand Tet. The Tet offensive of February 1968 is widely regarded as one of the turning points of the Vietnam War – though not for the customary military reasons. Tet had its origins in the plans of North Vietnamese commander Vo Nguyen Giap, a competent general given to flights of overconfidence. Giap decided to throw all available assets, both PAVN (People's Army...

Last week, John Walker Lindh petitioned the president to commute his 20-year sentence for fighting with the Taliban, imposed in 2002. It's a shame that this pampered child of Marin County is sitting in a cell for something as trivial as treason. Under a plea bargain, Walker Lindh (AKA: Abdul Hamid, AKA: Sulayman Al-Lindh) pleaded guilty to supplying services to the Taliban regime and carrying explosives for Afghanistan's former rulers. Which is like saying that Benedict Arnold supplied services to George III. Johnny Jihad trained in an al-Qaeda camp – where he learned to fire an AK-47 and rubbed elbows...

Lord, Keep our Troops forever in Your care
Give them victory over the enemy...
Grant them a safe and swift return...
Bless those who mourn the lost.

FReepers from the Foxhole join in prayer for all those serving their country at this time.

U.S. Military History, Current Events and Veterans Issues
Where Duty, Honor and Country are acknowledged, affirmed and commemorated.
Our Mission: The FReeper Foxhole is dedicated to Veterans of our Nation's military forces and to others who are affected in their relationships with Veterans. In the FReeper Foxhole, Veterans or their family members should...

An American Traitor: Guilty As Charged
By Henry Mark Holzer and Erika Holzer, FrontPageMagazine.com | June 10, 2005
For three decades Jane Fonda obfuscated, distorted and lied about virtually everything connected with her wartime trip to North Vietnam: her motive, her acts, her intent, and her contribution to the Communists' war effort. With the aid of clever handlers, she so successfully suppressed and spun her conduct in Hanoi that many Americans didn't know what she had done there, and, more important, the legal significance.
Three years ago, our book, “Aid and Comfort”: Jane Fonda in North Vietnam (McFarland & Co.), laid bare... MIDI - THE TWELFTH OF NEVER You're asking for forgiveness for things you've done But there's no way that you will be fooling anyone Your pictures with the commies are for all time There will be no forgiveness...Jane Fonda, you are slime Rot in hell...Fonda, rot in hell We don't believe phony words you try to sell We know that as an actress you cry on cue To get us to believe you, there's nothing you can do Your pictures with the commies are for all time There will be no forgiveness...Jane Fonda, you are slime There can be...
Introduction {#s1}
============

Together, poor drinking water quality, sanitation, hygiene (WASH) and nutrition are leading risk factors for morbidity and mortality among children \<5 years.[@R1] Despite substantive progress spurred by the millennium development goals to reduce these poverty-related risks, millions of children are born each year into environmental conditions that hinder their ability to achieve their full potential. Repeated insults from infection and undernutrition in the first years of life are believed to have profound negative consequences on health, cognitive development and human capital that span the life course.[@R2; @R3; @R4] The WASH Benefits study includes cluster randomised trials in Bangladesh and Kenya to address three important research questions related to the early life impacts of WASH and nutritional interventions. The first question is whether WASH and nutritional interventions can prevent linear growth faltering in the first 2 years of life. The second is whether greater reductions in diarrhoea can be achieved by combining individual WASH interventions compared to delivering them in isolation. The third is whether the combined WASH and nutritional interventions jointly reduce diarrhoea or improve linear growth more than each component alone. Below, we briefly summarise the rationale for the conduct of randomised trials to address each of these areas of scientific uncertainty.

Question 1: Can WASH and nutritional interventions prevent early life linear growth faltering? {#s1a}
----------------------------------------------------------------------------------------------

Children in low-income countries experience severe linear growth faltering in the first 18--24 months of life that is thought to be preventable, at least in part, by postnatal interventions.[@R5] [@R6] Interventions designed to improve nutrition among very young children measure length for age because it is a reliable, objective measure associated with subsequent child development at older ages.[@R7] During this early window, undernutrition and infection likely influence child development and human capital through additional pathways besides linear growth.[@R8; @R9; @R10] Unfortunately, measuring child development at very young ages is difficult[@R11] and documenting the full range of intervention impact thus requires longer term follow-up.[@R4] In the first years of life, intervention trials and observational studies have implicated poor diet and infectious diseases as likely causes for a large share of child undernutrition.[@R8] [@R12] [@R13] Interventions to promote breastfeeding, improve complementary feeding practices, or provide nutritional supplements can lead to small improvements in nutritional indicators and length for age,[@R14; @R15; @R16] particularly among children who are at highest risk for severe stunting.[@R17] [@R18] Nevertheless, effects of nutritional interventions on linear growth (upper bound of 95% CI +0.79 Z-scores)[@R19] fall far short of the median growth deficits observed in Sub-Saharan Africa and Southeast Asia, which are on the order of --2.0 Z-scores.[@R6] One hypothesis for the inability of nutritional interventions alone to prevent a large share of growth faltering by age 24 months is that symptomatic and asymptomatic infections are important contributors to undernutrition.
Symptomatic infection is common during the first years of life in low-income countries: on average, children under 24 months suffer from three to four episodes of acute diarrhoea each year[@R20]; respiratory infections and other infectious diseases, such as malaria, are also common in many settings. Observational studies show that repeated episodes of diarrhoea or parasitic infection are associated with increased risk of stunting[@R8] [@R21; @R22; @R23; @R24; @R25; @R26; @R27] and subsequent cognitive deficits in childhood and later in life.[@R4] [@R28] [@R29] Possible mechanisms for enteric infections leading to growth faltering include reduced nutrient absorption through lower intestinal contact time during episodes of acute diarrhoea, greater nutrient losses from persistent diarrhoea (eg, zinc) or intestinal bleeding (eg, hookworm infection), reduced appetite, and diversion of energy and nutrients from growth to the immune system to fight the infection. In addition to symptomatic infection, a subclinical condition called environmental enteropathy (EE), also known as tropical enteropathy, may also contribute to early life growth faltering.[@R30; @R31; @R32] The aetiology of EE remains unknown, but the condition is generally characterised by a set of physiological changes to the small intestine\'s epithelial layer, which include villous atrophy, crypt hyperplasia, reduced absorptive capacity, increased permeability and inflammatory cell infiltration.[@R33] The causes are most likely related to repeated ingestion of pathogenic bacteria and an altered composition of the intestinal microbiota, which together lead to chronic enteric inflammation.[@R32] Children with EE are believed to have impaired growth through two mechanisms: (1) reduced nutrient absorption due to decreased surface area in the small (upper) intestine and (2) elevated intestinal permeability, which increases translocation of antigenic molecules that stimulate the immune system and divert energy from growth. The combined effect of these two processes may impair a child\'s ability to effectively utilise nutrients in the existing diet for growth and development. 
EE is thought to be highly prevalent in low-income countries[@R34] and develops early in life: by age 8 months, 95% of a birth cohort in the Gambia showed signs of EE and on average children in the cohort exhibited signs of EE during 75% of their first year of life.[@R31] Studies of Peace Corps volunteers and immigrant populations have demonstrated that intestinal malabsorption and permeability typically return to normal levels within 1--2 years after individuals move from highly contaminated environments to cleaner environments.[@R35] [@R36] Since community-based studies that measure intestinal structure through biopsies would be extremely difficult, investigators typically rely on biomarkers of intestinal permeability, inflammation and immune system stimulation as measures of subclinical EE.[@R31] [@R37] [@R38] It is possible that improved nutrition alone can reduce the negative effects of a limited number of episodes of infection on growth due to the improved ability of better-nourished children to fight off enteric infections and exhibit catch-up growth during the convalescent period.[@R21] [@R28] [@R39; @R40; @R41; @R42] Effective nutritional interventions may be able to prevent or shorten the duration of EE via several mechanisms, such as (1) strengthening epithelial barrier integrity and the immune response; (2) compensating for malabsorption, reallocation or losses of key nutrients during infection; (3) accelerating gut repair following infection; and (4) favouring the growth of beneficial gut microorganisms.[@R39] While it is possible that nutritional interventions alone may prevent or shorten the duration of EE, the limited evidence to date has been mixed,[@R33] with some evidence for improvements in gut function following vitamin A,[@R43] alanyl-glutamine supplementation[@R44] and zinc supplementation,[@R45] [@R46] but no evidence for gut function improvement in trials that delivered probiotics,[@R47] glutamine supplementation,[@R48] omega-3 fatty acids[@R49] or richly fortified complementary foods.[@R50] As noted above, in many studies nutritional interventions have been insufficient to completely prevent growth faltering in low-income populations, and in the context of repeated or chronic infection, improved nutrition may only be able to mitigate---but not necessarily overcome---some of the effects of enteric infection on growth. If acute infections and subclinical EE contribute significantly to growth faltering, then interventions to reduce enteric infections during the first years of life would be expected to improve linear growth, perhaps independently of nutritional interventions. Unlike the large literature on child nutritional interventions, we are aware of only 10 studies that measure the effect of WASH interventions on child growth; a forthcoming systematic review[@R51] may identify more. Four studies have found no improvement in linear growth as a result of WASH interventions, despite demonstrating reductions in caregiver-reported diarrhoea in most cases.[@R9] [@R52; @R53; @R54; @R55; @R56] A small randomised trial that enrolled children \<12 months and delivered handwashing promotion in Kathmandu slums additionally found no improvements in EE biomarkers.[@R53] The authors hypothesised that handwashing alone provided insufficient protection from the slum environment to change intestinal physiology and suggested that more comprehensive environmental improvements may be necessary to reduce EE and improve growth.
Six studies have found positive associations between improved WASH conditions and child growth. Multiple cross-sectional or case--control studies found that young children living in households with improved sanitation and water supply had better linear growth.[@R26] [@R57] [@R58] A prospective birth cohort study in periurban Peru found that children living in households with home water supply and sewerage connections were 1 cm taller by age 24 months compared with children in households without them, and the effects of water supply and sewerage conditions were not mediated entirely by reductions in diarrhoea.[@R59] A water quality intervention trial in rural Kenya found an average linear growth increase of 0.8 cm among children \<5 years old after 1 year of exposure.[@R60; @R61; @R62] A prospective cohort from rural Bangladesh enrolled in a pilot for this study found that children raised in households with improved sanitation, hygiene and water quality conditions had lower levels of parasite infection, better growth and improved EE biomarkers compared to children raised in households without such access.[@R63] A trial to assess the impact of rural sanitation on diarrhoea includes length for age as a secondary outcome but is still underway.[@R64] Taken together, the mixed evidence to date does not conclusively link improved WASH conditions with improved child growth, and the field would benefit from additional efficacy studies.

Question 2: Are combined WASH interventions more effective than single interventions? {#s1b}
-------------------------------------------------------------------------------------

In addition to quantifying the independent effects of WASH interventions, an important question is whether and how to combine sanitation, water quality and handwashing promotion interventions to cost-effectively achieve health gains. Many implementing groups have publicly embraced the notion that combining interventions to improve water quantity, water quality, sanitation, and hygiene results in added benefits. This claim is based, in part, on observational studies[@R26] [@R58] [@R65] [@R66] and theoretical modelling of pathogen transmission pathways.[@R67] [@R68] However, the limited available evidence from randomised trials does not support this approach. In the only randomised controlled trial specifically designed to evaluate combined interventions, the two interventions evaluated were point-of-use water treatment and handwashing promotion with soap; individually, each intervention reduced child diarrhoea (51% and 64% reduction), but there was no additional reduction in diarrhoea among children exposed to both interventions (55% reduction).[@R54] These findings are consistent with the results of a meta-analysis of published interventions to improve WASH, which found that combined interventions led to no greater reduction in diarrhoeal disease than single interventions.[@R69] For WASH programmes, single interventions are less expensive and easier to scale than combined interventions.
By complicating communication and behaviour change, combined interventions can potentially diminish the overall effect achievable from a single intervention.[@R70] Understanding the marginal benefits of sanitation, water treatment and handwashing in the absence and presence of each of the other interventions will, therefore, be important for policy-makers (1) when deciding overall budgets for sanitation, water and handwashing; and (2) when weighing the trade-offs between allocating resources to an intense, expensive approach combining multiple interventions in a single site, or choosing the most cost-effective interventions and rolling them out at scale. This same reasoning applies to our third research question.

Question 3: Are there larger effects on diarrhoea or linear growth from combining (A) nutritional interventions with (B) a combined water, sanitation and handwashing intervention compared to each component alone? {#s1c}
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

In the 1960s, Scrimshaw *et al*[@R71] proposed a theory that repeated infections interact with poor nutrition to cause a cycle of infection and malnutrition. Consistent with this earlier work, McDade[@R72] outlined a life history theory of immune function in which he posited that infants face a resource allocation trade-off between maintenance (fighting infection and physiological repair) and growth. During infection, the immune system diverts energy and nutrients away from growth; a developing infant prioritises survival and maintenance over growth. When resources are limited, the absolute level of energy or nutrients available to infants can be a major determinant of growth and physiological repair. An impaired gut in a child without access to sufficient energy or nutrients will further suffer from impaired healing, with subsequent decline in gut function and nutrient absorption for growth; thus begins a vicious cycle between infection and malnutrition.[@R71] [@R73] [@R74] The potential contribution of infection to malnutrition and mortality risk was recently illustrated in a dramatic 35% reduction in all-cause mortality among severely malnourished Malawian children after the provision of prophylactic antibiotics.[@R75] Dewey and Mayers[@R39] reviewed the evidence for the potential interaction between nutrition and infection on early child growth. The review identified just one study that suggested that infections could reduce the effectiveness of nutritional interventions and four trials that demonstrated that improved nutrition could limit the negative consequences of infection. The authors concluded that the potential interaction between nutrition and infection control should be a priority for research, which echoes earlier calls for additional research in this area.[@R33] [@R34] The only study to date that we are aware of that was explicitly designed to test for interaction between infection control and improved nutrition was the Narangwal Nutrition Project, conducted in Punjab, India, between 1968 and 1973.[@R10] [@R76; @R77; @R78] The 10-village study (2900 newborns) was a factorial trial that randomised villages to control, improved medical services, improved nutrition or their combination. The nutrition intervention included growth monitoring, food supplementation for children who were not growing well and nutrition education.
The medical care intervention improved access to vaccines and morbidity surveillance for acute illness. Both nutritional and medical service villages also received prenatal care for pregnant mothers, which included iron and folic acid supplements as well as food supplements for mothers who were underweight. The study found that the medical services intervention improved height and weight compared to control, and that the nutritional services intervention improved height and weight even more. The study found no additional benefit in combining nutrition and medical services above the nutritional services alone with respect to height and weight. Although international guidelines for infant and young child-feeding practices published by UNICEF, WHO and the Alive and Thrive initiative all include handwashing recommendations,[@R79; @R80; @R81] the degree to which additional infection control measures could complement nutrition programmes remains an important knowledge gap.

Objectives of the WASH Benefits study {#s1d}
-------------------------------------

Given the likely long-term negative consequences of undernutrition and infection during a child\'s first years, the global development community would benefit from rigorous evidence about the effects of single and combined WASH and nutritional interventions on child illness and growth. As outlined above, there remains substantial uncertainty about which interventions or combination of interventions are most effective. The WASH Benefits study includes two highly comparable cluster randomised trials in rural Bangladesh and Kenya to help fill these knowledge gaps. The intervention trials include single and combined interventions in sanitation, water quality, handwashing and nutrition. Each intervention has been developed over multiple years of formative research. The two trials share the following scientific objectives, which will contribute evidence towards the identified gaps.

Primary scientific objectives:

1. Measure the impact of sanitation, water quality, handwashing and nutrition interventions on child diarrhoea and linear growth after 2 years of exposure.
2. Determine whether there are larger reductions in child diarrhoea when providing a combined water, sanitation and handwashing intervention compared to each component alone.
3. Determine whether there are larger effects on child diarrhoea and linear growth from combining (A) a comprehensive child nutrition intervention with (B) a combined water, sanitation and handwashing intervention compared to each component alone.

Secondary scientific objectives:

1. Measure the impact of a child nutritional intervention and household environmental interventions on EE biomarkers, and more clearly elucidate this potential pathway between environmental interventions and child growth and development.
2. Measure the impact of sanitation, water quality, handwashing and nutritional interventions on intestinal parasitic infection prevalence and intensity.
3. Measure the association between parasitic infection and other measures of enteric health, including acute diarrhoea and EE biomarkers.

To achieve these objectives, the studies will enrol pregnant women and their children born within approximately 6 months of the baseline survey. The study will measure linear growth and caregiver-reported diarrhoea, biological markers of EE, intestinal parasite infections and child development in the cohort over the first 24 months of exposure to the intervention.
Methods and analysis {#s2}
====================

Overview of the design {#s2a}
----------------------

The Bangladesh trial is led by the International Center for Diarrheal Disease Research, Bangladesh (ICDDR,B); the Kenya trial is led by Innovations for Poverty Action (IPA) and the Kenya Medical Research Institute (KEMRI). Both trials include six intervention arms and a double-sized control arm ([figure 1](#BMJOPEN2013003476F1){ref-type="fig"}). In Bangladesh, the unit of randomisation is a group of compounds visited by a single local promoter and separated by at least a 15 min walk. Bangladesh clusters consist of eight proximate household compounds that meet our eligibility criteria within a village. In Kenya, clusters consist of one or two adjoining administrative villages with at least six eligible pregnant women. The studies enrol pregnant women and their children who are born within approximately 6 months of the baseline survey. We will follow the closed cohort longitudinally and measure primary outcomes at 12 and 24 months after initiating the intervention. ![Summary of the overall study design in both countries, including cluster and target child enrolment in each arm. Growth and diarrhoea measurements will take place at 15 and 27 months following enrolment, which corresponds to 12 and 24 months following initial intervention delivery due to a 3-month lag between enrolment and intervention implementation. C, control; H, improved handwashing; N, improved nutrition; S, improved sanitation; W, improved water quality; WSH, combined improvements in water quality, sanitation and handwashing; WSH+N, combined improvements in water quality, sanitation, handwashing and nutrition.](bmjopen2013003476f01){#BMJOPEN2013003476F1} The design includes a large number of clusters per arm with a small number of children per cluster, which was motivated by three inter-related considerations: (1) WASH interventions need to be delivered at the cluster level because the promotion activities are inherently community level, (2) there are potential interactions between adjacent households with respect to behaviour and infectious disease and we wish to maintain independent units for randomisation, and (3) at the time our study enrols a cluster and initiates an intervention, pregnant women are relatively scarce. The large study population spread over a wide geographic area means that we will measure intervention effects over heterogeneous environmental conditions.[@R82] The design is optimised to measure group-level differences in our primary outcomes. The infrequent measurements in WASH Benefits will mean that we will not characterise infectious outcomes (eg, diarrhoea and parasitic infections) well for individual children if the outcomes vary temporally within children.[@R83]

Participant eligibility criteria, study setting and enrolment strategy {#s2b}
----------------------------------------------------------------------

### Participant eligibility criteria {#s2b1}

In both countries, the trials enrol pregnant women identified in community-based surveys who expect to deliver in the 6 months following enrolment based on date of last menstruation. The study will enrol all children born in study clusters in the 6 months following the baseline survey (some target children will be born after 6 months due to inaccuracies in gestational age using reported date of last menstruation). Our target sample size of pregnant women at enrolment is 5760 in Bangladesh and 8000 in Kenya.
The Kenya cohort will be larger because we expect to find more variation in child length for age than in Bangladesh (sample size details below). Within study compounds, the study enrols all children \<36 months at baseline to measure diarrhoea outcomes over the study period; the study measures diarrhoea outcomes in a wider age group because older children are still at high risk for diarrhoeal disease.[@R20] In both countries, compounds consist of multiple households (typically 3--10 in Bangladesh and 1--4 in Kenya), usually comprising blood relatives, who share a common courtyard. Compounds are eligible to participate if (1) they have a pregnant woman and (2) the woman plans to stay in the village for the next 12 months. The study excludes households who do not own their home to help mitigate attrition during follow-up. The Kenya trial excludes villages that have chlorine dispensers at water sources installed by programmes separate from the present study. In Bangladesh, the study excludes households who report high iron in their drinking water most of the year because pilot studies showed it was difficult to maintain the appropriate chlorine residual for continued disinfection in high-iron water. In cases in which the respondent is unsure about iron content, field staff check the water\'s chlorine demand using Aquatabs and a digital Hach Pocket Colorimeter II; if residual chlorine is below 0.2 mg/L after 30 min, staff exclude the household. Within a study compound, the studies enrol pregnant women and children from the following age groups:

1. *Children in utero at enrolment (target children)*: all children born to enrolled mothers within approximately 6 months of the baseline survey.
2. *Children 18--27 months old at enrolment (specimen collection)*: older children living in the compound and aged 18--27 months at enrolment will be eligible for stool and blood specimen collection. This age window reflects the age window of the target children at the final study measurement and serves as a baseline measure for the study population.
3. *Children aged \<36 months at enrolment (diarrhoea)*: all children aged \<36 months living in the compound are eligible for caregiver-reported diarrhoea measurement.
4. *Additional children born into study compounds after 6 months*: we will enrol children born into study compounds who are too young to meet our enrolment criteria (group 1, above), deliver interventions to them according to randomised assignment and measure anthropometry and diarrhoea at follow-up surveys. These additional enrolees will not be included in the primary analysis because very young children may not be exposed to intervention for a sufficient amount of time to expect to see impact on our primary outcomes (particularly length for age). However, the additional young children will provide information (in exploratory analyses) about the effect of established interventions on very young infants.

Field staff discuss the prospect for participation in the study with adults in each compound, including the mother/caregiver of the target infants. After providing time for discussion among the compound residents, a member of the field team seeks formal informed consent from pregnant women.

### Bangladesh setting and enrolment {#s2b2}

The Bangladesh trial is located in Gazipur, Mymensingh and Tangail districts. These three districts are located in the floodplain of central Bangladesh where the majority of the rural population is engaged in agriculture.
The majority of the population uses shallow tubewells for drinking water, which are known to be frequently contaminated with faecal indicator bacteria.[@R84] Enrolment commenced in June 2012. The study has enrolled compounds in communities that meet the following criteria:

- Located in a rural area.
- Drinking water with low levels of iron (\<1 mg/L on average) and arsenic (\<50 µg/L on average) as documented in the collaborative assessments by the Government of Bangladesh and the British Geological Survey. Water chemistry eligibility criteria were used because pilot studies indicated that when iron or arsenic levels were high, the chlorine demand for household water treatment was unpredictable.
- The Government of Bangladesh, international non-government organisations working in Bangladesh and local government authorities report that no major water, sanitation or focused nutrition programmes are currently operating or planned in the area in the next 2 years.
- Not located in haor areas (areas completely submerged during the monsoon season).

Each study cluster includes a group of compounds with eight eligible pregnant women. The compounds within a cluster are located sufficiently closely together so that a single promoter can reach each of the participating compounds by walking. If the compounds are too dispersed for a promoter to reach all of them on foot, they are not enrolled in the study. More than one cluster can be enrolled in a single village, but clusters within the same village need to be separated from each other by a minimum of 15 min walking distance.

### Kenya setting and enrolment {#s2b3}

The Kenya trial is located in rural areas of 10 districts in Bungoma, Kakamega and Vihiga counties in the western part of the country. The region is populated mainly by subsistence farmers. Unimproved latrine coverage is high (at least 85%) and our pilot study in the region estimated that among children \<27 months old, 11% had diarrhoea in the preceding 2 days. Very few (\<5%) households have piped water and the majority of households report obtaining drinking water from sources, such as protected springs, where chlorination has previously been shown to be effective.[@R85] Enrolment commenced in November 2012. The study region contains over 2000 villages, from which study villages were selected to form clusters using the following criteria:

- Located in a rural area (defined as villages with \<25% residents living in rental houses, \<2 gas/petrol stations and \<10 shops);
- Not enrolled in ongoing WASH or nutrition programmes;
- Majority (\>80%) of households do not have access to piped water into the home;
- At least six eligible pregnant women in the cluster at baseline.

Description of the interventions {#s2c}
--------------------------------

### Overview of the intervention approach and assumptions {#s2c1}

The WASH Benefits study has focused on identifying and testing water, sanitation, handwashing and nutritional interventions that have strong potential to reduce infection and malnutrition during the first years of life. WASH Benefits is designed to measure intervention effects under conditions of high uptake in our target populations since our central hypotheses have not been tested rigorously in randomised studies. The enabling technologies and behavioural intervention packages were developed in the target populations over a 2-year period before the start of the trials.
Details of the behaviour change theoretical frameworks and methods used in each country will be published in separate, forthcoming articles. Local promoters who are residents of the study villages deliver the interventions at the cluster level; each promoter completes at least 5 days of training and also attends refresher courses periodically throughout the study period. Promoters visit and counsel study compounds weekly in the early phase of intervention, with visits declining in frequency over time; we anticipate visits as infrequent as one per month after 1 year of intervention. The environmental interventions in this study focus on modifying the compound environment to reduce infant exposure to enteric pathogens. The interventions focus on compound-level modifications because we assume that the dominant transmission pathways for the infants in our study will be within the compound. Since we expect on average 8--10 household-compounds with eligible children per study cluster, we expect to intervene in a small fraction of each community. While point-of-use water quality, hygiene and nutrition interventions operate at a household level, some sanitation interventions may require wider coverage in a neighbourhood, community or other larger environment in order to effectively mitigate personal exposure. However, cost and logistical limitations prevented us from extending implementation beyond the compound. Furthermore, a pilot study suggested that the compound was a relevant unit of intervention for modifying infant exposure to environmental conditions.[@R63]

### Control {#s2c2}

It is possible that the simple act of regular visits by intervention promoters could lead to improvements in the primary outcomes through unknown channels that are independent of WASH or nutrition interventions. The WASH Benefits team discussed this possibility extensively in the year preceding the trials and the teams agreed to pursue slightly different strategies in the two countries. The Bangladesh team concluded that their intervention behaviour change model is so tightly integrated into the enabling technology components that the effect of a visit is inseparable from the WASH and nutrition interventions themselves; moreover, it is fairly common for mothers in the study area to be visited by community promoters associated with other programmes. The control arm in Bangladesh will be a 'passive' control, meaning there is no promotion or intervention activity during the study. The Kenya team was more concerned about the possibility of the promotion visits leading to changes in behaviours not related to WASH or nutrition that could nonetheless affect the primary outcomes, since promoter visits are atypical in the Kenyan study area. For this reason, the Kenya team decided to include promoters in their control arm and to add a simple activity across all arms of the study: monthly measurement of the child\'s mid-upper arm circumference (MUAC), or of the pregnant woman\'s belly circumference prior to the birth. The key assumption for the Kenya design is that whatever non-WASH-related or nutrition-related behaviour changes occur in the intervention arms will also occur in the control arm. The Kenya control arm promoters do not promote any WASH or nutrition messages, and strictly engage in measuring child MUAC and mother belly circumference. In all arms, children \>6 months old with MUAC \<115 mm are classified as severely malnourished and are referred to treatment (details below in Referral guidelines).
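For illustration, the MUAC referral rule just described can be encoded in a few lines; the following Python sketch is not part of the study protocol, and the function name and data layout are hypothetical:

```python
def refer_for_treatment(age_months: float, muac_mm: float) -> bool:
    """Encode the referral rule described above: children older than
    6 months with mid-upper arm circumference (MUAC) below 115 mm are
    classified as severely malnourished and referred for treatment.
    Illustrative only; not the study's data-collection software."""
    return age_months > 6 and muac_mm < 115

# Example: a 14-month-old with MUAC of 112 mm triggers a referral;
# a 5-month-old does not fall under this particular rule.
assert refer_for_treatment(14, 112) is True
assert refer_for_treatment(5, 112) is False
```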
### Water quality {#s2c3}

The Bangladesh study delivers a 10 L, insulated water storage vessel and a free supply of chlorine tablets (Aquatabs brand, sodium dichloroisocyanurate) to enrolled households to improve the microbiological quality of their drinking water.[@R86] The Kenya study installs chlorine dispensers within the cluster boundary at public water sources used by study participants. All community members will be able to use the dispensers. After filling their water collection container (typically a 20 L plastic jerry can) at the source, users can place the container under the dispenser and turn a knob to release 3 mL of 1.25% sodium hypochlorite, an amount designed to yield 2 mg/L of free chlorine residual after 30 min for 20 L of water (3 mL of a 1.25% solution carries roughly 37.5 mg of chlorine, a dose of about 1.9 mg/L in 20 L of water).[@R87] The Kenya study also includes community-level promotion of dispenser use and all households in the study compound receive bottles of sodium hypochlorite (6 months\' supply) to facilitate householders\' water treatment during periods when they rely on rainwater harvesting (common during the rainy season) or if they use a water source in which a dispenser has not been installed. In both countries, the behaviour change strategies target the consistent provision of treated water to all children living in the household.

### Sanitation {#s2c4}

Both the Bangladesh and Kenya studies include three enabling technologies in the compound-level sanitation intervention with the goals of reducing children\'s exposure to faeces in the household environment and increasing latrine use: (1) a locally developed sani-scoop dedicated to the removal of child and animal faeces from the compound,[@R88] (2) plastic child potties for children aged 6 months and older until they use the latrine and (3) a new or upgraded latrine for each household in the compound. In Bangladesh, latrines are upgraded to a dual pit latrine with a water seal and superstructure. In Kenya, plastic latrine slabs that include a tightly fitting hole-cover are installed to improve existing latrines that have a mud or wood floor. Simple pit latrines (unlined pits with an earthen superstructure and the plastic slab) are constructed in the compounds of study participants who do not have access to a latrine. The behaviour change strategies in both countries target the use of the latrine for defaecation and the safe disposal of faeces by all households in the compound to prevent contact by young children.

### Handwashing {#s2c5}

Both country studies install two handwashing stations for enrolled households: one near the latrine and one near the cooking area. In Bangladesh, handwashing stations include a locally made bucket with a tap fitting (40 L near the latrine and 16 L near the cooking area), a stool, a bowl and a bottle to dispense soapy water. In Kenya, handwashing stations are constructed from locally available materials and include a dual tippy-tap design with independent pedals attached to two 5 L jerry cans of clean water and soapy water.[@R89] In both countries the studies provide soap to families free of charge to replenish the handwashing stations.
The behaviour change strategies of the intervention target handwashing with soapy water messaging at two critical times for caregivers: after defaecation/cleaning the child\'s anus and before food preparation.[@R90] Promoters frame the concept of handwashing as a nurturing behaviour facilitated by the ease and convenience of a nearby handwashing station.[@R91]

### Combined water+sanitation+handwashing {#s2c6}

In both countries, the combined water+sanitation+handwashing (WSH) intervention integrates all intervention components from the water quality, sanitation and handwashing arms. Intervention promoters sequence the interventions so that they are not introduced at the same time. In Bangladesh, the interventions are delivered sequentially in the following order: sanitation, handwashing and water treatment, with a minimum of 21 days between each start date. In Kenya, all intervention technologies aside from latrine construction are provided at the same time, but the behaviour change counselling is rolled out in the following sequence, spaced approximately 2 weeks apart: handwashing and basic water treatment, sanitation, in-depth water treatment. The provision of latrines can occur from one to several weeks after the start of work in a cluster in Kenya. The behaviour change strategy emphasises the interconnected aspect of WASH and the need to practice all behaviours in order to benefit from them.

### Nutrition {#s2c7}

In both countries, the nutrition intervention strategy targets age-appropriate behaviours (pregnancy to 24 months) including use of lipid-based nutrient supplements (LNSs; aged 6--24 months). The behaviour change counselling is modelled after the Guiding Principles for Complementary Feeding of the Breastfed Child,[@R80] the UNICEF Program Guide for Infant and Young Child Feeding Practices[@R81] and the Alive and Thrive initiative.[@R79] Target behaviours include (1) practice exclusive breastfeeding from birth to 6 months of age and introduce complementary foods at 6 months of age while continuing to breastfeed; (2) continue breastfeeding as you did before receiving study-provided nutritional supplements; (3) provide your child micronutrient-rich foods, such as meat, fish, eggs, and vitamin A rich fruits and vegetables (adapted to locally available food examples); and (4) feed your child complementary foods at least 2--3 times per day when 6--8 months old and 3--4 times per day when 9--24 months old. When target children are between 6 and 24 months old, intervention promoters will deliver monthly supplies of LNS. The LNS used in the study is a next-generation version of Nutributter.[@R92] Online supplementary appendix 1 includes the specific LNS formulation. LNS is administered daily using 10 g sachets that can be mixed into pre-prepared meals (eg, porridge) or consumed directly from the sachet; a child eats two sachets per day. LNS is intended to supplement---and not replace---breastfeeding and locally available complementary foods, by providing 118 kcal/day and including a broad suite of essential fatty acids and micronutrients at dosages appropriate for children in this age group.[@R92] It has an 18-month shelf life, does not spoil at high temperatures and costs as little as US\$0.08/day. Reported adherence has been 88% of days in controlled trials,[@R14] in part due to the ease of incorporating it into existing feeding routines.
Breastfeeding is highly prevalent in both populations based on pilot studies and so we have focused on supplements that would not replace this essential source of nutrition.[@R93] [@R94] In Kenya, the trial will provide LNS to older, age-eligible siblings (6--24 months) living in study households to prevent potential sharing of LNS with older siblings. The Bangladesh trial will deliver LNS only to target children because older, age-eligible siblings are rare in the study population.

### Nutrition+combined WSH {#s2c8}

In both countries, the nutrition+combined WSH arm will include the interventions delivered in the nutrition and combined WSH arms. The nutrition intervention is delivered in parallel with the WSH interventions according to the stage of pregnancy and age of the target child.

### Intervention monitoring {#s2c9}

Given the importance of good uptake (also called take-up or compliance) for the success of the trial, it is essential for the team to have early and frequent feedback on intervention uptake. If an intervention has poor uptake, the team then needs to consider modifying or redoubling implementation efforts in that arm. To preserve external validity, each country team will document any adaptive changes used to modify the intervention. Investigators will be blinded to outcomes from the trial, so any adaptation to intervention will be based solely on information about intervention implementation and uptake. Both country teams have in place a detailed implementation monitoring system. One of the outputs from the monitoring system is a summary of whether the implementation has achieved a limited set of critical benchmarks (see online supplementary appendix 2); benchmarks are intended to flag serious problems in implementation. If any of the uptake measures falls below its critical benchmark, then a qualitative team will review the monitoring and process documentation in the low-performing area, visit the site of the low uptake, meet with intervention promoters, supervisors and study participants and troubleshoot the cause of the low uptake. Because the interventions have each been piloted and the pilots achieved these benchmarks of uptake, we expect that uptake below the benchmark will indicate a problem where the intervention was not implemented as planned, and the investigation will identify what additional training or other support is required to achieve high intervention uptake. Additional principles that we will follow with respect to adapting the interventions include:

- If we identify easily fixable problems in an intervention that we expect will improve uptake, then we will make the change uniformly in the study population.
- If we identify a problem in an intervention arm and devise a solution, the solution must be implemented in all clusters assigned to that intervention to ensure that we do not differentially modify the intervention on a subsample of the population.
- Since WASH Benefits is an efficacy trial, we will replace broken hardware in our study population.
- We will maintain a detailed record of the timing and scope of any changes to the interventions (if any).

Outcomes {#s2d}
--------

### Primary outcomes {#s2d1}

Primary outcomes include length-for-age Z-scores (LAZ) measured 24 months after intervention initiation in target children and diarrhoea prevalence in compound children \<36 months old at enrolment. Child age will be determined using birthdates verified when possible using vaccination cards.
Following standard protocols for anthropometric outcomes measurement,[@R95] [@R96] pairs of trained anthropometrists will measure recumbent length (accurate to 0.1 cm) and weight without clothing (accurate to 0.1 kg) in triplicate. The median of the three measurements will be used in the analysis.[@R97] We will measure diarrhoea at baseline among children \<36 months old and again 12 and 24 months after intervention initiation using a definition of ≥3 loose or watery stools in 24 h or ≥1 stool with blood based on caregiver-reported symptoms[@R98]; we will use a 7-day recall period unless we find differential recall errors by the randomised group, in which case we will use a 2-day recall period.[@R99] [@R100]

### Secondary outcomes {#s2d2}

Secondary outcomes include two additional measures of linear growth, child development measures and measures of EE. We will calculate differences between groups in LAZ at the 12-month measurement and stunting prevalence (LAZ\<--2) at the 24-month measurement. At the 24-month visit, we will measure child development in communication, gross motor and personal/social domains using the Extended Ages and Stages Questionnaire[@R11] [@R101]; the instrument has been adapted to each study population, relies on caregiver\'s report and has been used in many low-income countries.[@R102] We will compare groups for each domain independently and overall by summing scores across domains. In a subsample of up to 1500 children across four arms of each trial, we will measure EE biomarkers at 3, 12 and 24 months following intervention initiation ([figure 2](#BMJOPEN2013003476F2){ref-type="fig"}); assays planned include: urinary lactulose-to-mannitol ratio,[@R103] faecal myeloperoxidase,[@R104] faecal α-1-antitrypsin,[@R105] faecal neopterin[@R106] and plasma total IgG.[@R37] ![Summary of EE subsample in both countries, including cluster and target child enrolment in each arm. The EE subsample includes an equal number of clusters and target children from four arms of the study. C, control; EE, environmental enteropathy; H, improved handwashing; N, improved nutrition; S, improved sanitation; W, improved water quality; WSH, combined improvements in water quality, sanitation and handwashing; WSH+N, combined improvements in water quality, sanitation, handwashing and nutrition.](bmjopen2013003476f02){#BMJOPEN2013003476F2}

### Additional outcomes {#s2d3}

The study will collect stool specimens from seven target children per cluster at the 24-month visit and from an older child living in the compound ([figure 3](#BMJOPEN2013003476F3){ref-type="fig"}), and will test specimens for soil-transmitted helminths (*Ascaris lumbricoides*, *Trichuris trichiura*, hookworm) using the Kato-Katz method[@R107] and protozoans (*Giardia lamblia*, *Cryptosporidium parvum*, *Entamoeba histolytica*) using PCR methods (Bangladesh) and commercial ELISA kits (Kenya). Online supplementary appendix 3 includes a full list of tertiary outcomes. In a subsample of households in which the study measures EE biomarkers, we will also measure markers of environmental faecal contamination to help trace the causal path between the interventions and outcomes. Environmental contamination measures will include enumeration of faecal indicator bacteria (*Escherichia coli*) in household-stored drinking water, on child toy balls and child hand rinses. In addition, the study will collect quantitative measures of fly density at the latrine and the food preparation area.
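For concreteness, the caregiver-reported diarrhoea case definition and the median-of-triplicate length rule described under Primary outcomes above can be expressed as a short Python sketch; the helper names are illustrative, not part of the study's data systems:

```python
import statistics

def diarrhoea_case(loose_watery_stools_24h: int, stools_with_blood: int) -> bool:
    """Case definition used above: >=3 loose or watery stools in 24 h,
    or >=1 stool with blood, based on caregiver report."""
    return loose_watery_stools_24h >= 3 or stools_with_blood >= 1

def analysis_length_cm(triplicate_cm: list) -> float:
    """Recumbent length is measured in triplicate (to 0.1 cm); the median
    of the three measurements is carried forward into the analysis."""
    assert len(triplicate_cm) == 3
    return statistics.median(triplicate_cm)

print(diarrhoea_case(2, 0))                     # False: below both thresholds
print(analysis_length_cm([78.4, 78.6, 78.5]))   # 78.5
```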
![Summary of enteric parasite measurement in both countries, including cluster and target child enrolment in each arm. At enrolment stool specimens will be collected from an older sibling aged 18--27 months if present and will be tested for protozoan infections. At the final measurement, specimens will be collected from the same older siblings plus seven target children per cluster in each country, and analysed for protozoan infections and soil-transmitted helminth infections. C, control; EE, environmental enteropathy; H, improved handwashing; N, improved nutrition; S, improved sanitation; W, improved water quality; WSH, combined improvements in water quality, sanitation and handwashing; WSH+N, combined improvements in water quality, sanitation, handwashing and nutrition.](bmjopen2013003476f03){#BMJOPEN2013003476F3}

Referral guidelines {#s2e}
-------------------

The study will refer participants for treatment at appropriate local government healthcare providers if we observe any of the following three outcomes: soy or nut allergies related to LNS, acute malnutrition and intestinal parasite infection (described below).

### Soy or nut allergies related to LNS {#s2e1}

In the LNS arms, intervention promoters will recommend that caregivers stop using LNS and notify one of the study staff immediately should their child have any adverse reactions shortly after ingesting the supplement (such as vomiting, stomach pain, rash and breathing problems with wheezing). In the event of an adverse reaction, study staff will assess the child\'s condition and, if necessary, provide transport to the closest medical facility for treatment.

### Acute malnutrition {#s2e2}

In the anthropometry and enteropathy assessment survey, children who are found to be acutely malnourished based on WHO/UNICEF criteria (severely wasted \[weight for length Z-score \<−3\] and/or bipedal oedema) will be referred to the appropriate existing treatment programmes in each country. In Kenya, where promoters measure MUAC each month for all target children, children \>6 months with MUAC \<115 mm will be considered severely malnourished and will be referred to treatment.

### Intestinal parasites {#s2e3}

All children who provide a stool specimen in the 24-month survey will be offered deworming medication, which is consistent with national standards in both countries.

Randomisation and blinding {#s2f}
--------------------------

The trials will randomly allocate clusters to each intervention arm of the study in equal proportion along with a double-sized control arm. The randomisation is pair-matched by geography, with adjacent clusters randomised in blocks. The rationale for using geography to match the randomisation is that it is logistically feasible; it may add efficiency to our effect estimation if geography is strongly correlated with our outcomes, and it will help ensure that the different arms are balanced with respect to characteristics and events that are spatially clustered. In Bangladesh, the trial will randomise groups of eight geographically proximate clusters to one of the six intervention arms or the double-sized control arm with allocation probabilities of 2/8 for control and 1/8 for each intervention arm. In Kenya, the randomisation is identical but includes nine proximate clusters in each block with allocation probabilities of 2/9 for active control, 1/9 for each intervention arm and 1/9 for a potential passive control (not yet funded).
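As an illustration of the block allocation just described, the following Python sketch randomises one Bangladesh-style block (two control slots and one slot per intervention arm among eight adjacent clusters). The trials themselves generate assignments in Stata with a reproducible seed, so this fragment, its seed and its identifiers are purely illustrative:

```python
import random

# One geographic block of eight adjacent clusters receives two control
# slots and one slot for each of the six intervention arms (2/8 and 1/8).
BANGLADESH_ARMS = ["C", "C", "W", "S", "H", "WSH", "WSH+N", "N"]

def randomise_block(cluster_ids, rng):
    """Shuffle the arm labels within a block and pair them with clusters.
    The Kenya design is analogous but uses nine slots per block (2/9
    active control, 1/9 per arm, 1/9 potential passive control)."""
    assert len(cluster_ids) == len(BANGLADESH_ARMS)
    arms = BANGLADESH_ARMS.copy()
    rng.shuffle(arms)
    return dict(zip(cluster_ids, arms))

rng = random.Random(12345)  # reproducible seed; the value is illustrative
block = [f"cluster_{i}" for i in range(1, 9)]
print(randomise_block(block, rng))
```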
Clusters allocated to a passive control arm in Kenya will enable the study to measure the effect of regular visits to the study\'s active control arm, if any, pending future funding. The randomisation sequence generation and allocation for both trials will be conducted by the coordinating team at the University of California, Berkeley, using a random number generator in Stata V.12 (StataCorp, College Station, Texas, USA) with a reproducible seed. Owing to the nature of the interventions, participants are not blinded to their treatment assignment. Principal investigators and primary analysts for the trial will remain blinded to the randomised group assignments until the primary analysis is complete. Cluster-level assignments will be under control of each country\'s lead data manager in separate data files that are independent from the main datasets of the study. Access to the treatment assignment information (even if blinded) will be limited to the core analysis team in each country until the primary results are published.

Sample size {#s2g}
-----------

The sample size calculations were based on the two primary outcomes: LAZ and caregiver-reported diarrhoea. We calculated the minimum detectable effect for LAZ measured at 2 years using a standard equation[@R108] and for diarrhoea using a simulation-based approach to accommodate two levels of correlation in the outcome (within child and within cluster).[@R109] To inform our sample size calculations we used existing datasets from relevant populations. In Bangladesh, we used diarrhoea and anthropometric measurements from 982 children \<36 months, collected from 100 rural villages between 2007 and 2009.[@R110] In Kenya, we conducted the sample size calculations using diarrhoea data, collected from 1704 children in 95 control villages enrolled in a cluster-randomised trial of spring protection conducted in Western Province between 2005 and 2007[@R85]; we also conducted the sample size calculation with LAZ measurements from 310 children 4--30 months old in a pilot study in our study region. We selected final designs in each country to detect differences of +0.15 in LAZ and a relative risk of diarrhoea of 0.7 or smaller for a comparison of any intervention with the double-sized control arm. We chose the effect size for LAZ based on our team\'s expert opinion of the smallest effect that would be biologically meaningful and measurable given measurement error in field conditions (+0.15 Z equals 0.48 cm in a 24-month-old girl). We chose the effect size for diarrhoea based on earlier WASH efficacy studies.[@R111] The control arm is double sized because it will be used in multiple hypothesis tests and, given available information, a 2:1 allocation ratio is close to the optimal allocation that minimises the variance for the six tests planned under our first hypothesis, below.[@R112] [@R113] Online supplementary appendix 4 includes the detailed assumptions used in the calculations.

Analysis plan {#s2h}
-------------

### General analysis approach {#s2h1}

Each study team will develop its own analysis plan, but both teams will include in their analyses unadjusted means and SDs by randomised groups, along with unadjusted comparisons between groups for the primary hypotheses.[@R114] [@R115] We will also re-estimate our parameters of interest in adjusted analyses (details below). We will produce public replication files for our primary analyses in both countries. We will analyse participants according to their randomised assignment (intention to treat).
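For intuition about the minimum detectable effect calculation for LAZ referenced above, here is a hedged Python sketch using the textbook design-effect formula for a cluster-randomised comparison of means. The study's exact equations and assumptions are in online supplementary appendix 4; the formula below is a standard stand-in and every input value is an illustrative assumption, not a study parameter:

```python
from scipy.stats import norm

def mde_cluster_means(n_clusters_t, n_clusters_c, m_per_cluster, sd, icc,
                      alpha=0.05, power=0.80, one_sided=True):
    """Minimum detectable difference in means for a cluster-randomised
    comparison with unequal numbers of clusters per arm:
        MDE = (z_alpha + z_power) * sd * sqrt(DEFF * (1/n_t + 1/n_c))
    where n_t, n_c are child totals per arm and DEFF = 1 + (m - 1) * icc.
    A textbook formula, not the trial's exact calculation."""
    deff = 1 + (m_per_cluster - 1) * icc
    z_alpha = norm.ppf(1 - alpha) if one_sided else norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    n_t = n_clusters_t * m_per_cluster
    n_c = n_clusters_c * m_per_cluster
    return (z_alpha + z_power) * sd * (deff * (1 / n_t + 1 / n_c)) ** 0.5

# Illustrative inputs only: 90 intervention and 180 control clusters,
# 8 children per cluster, SD of LAZ = 1.1, ICC = 0.05.
print(round(mde_cluster_means(90, 180, 8, sd=1.1, icc=0.05), 3))
```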
### Parameters of interest {#s2h2}

This section discusses parameters of interest for the primary analyses. Let Y be an outcome of interest and let T index the randomised group assignment, where T∈ (C, W, S, H, WSH, N, NWSH). There are seven arms: C control; W water; S sanitation; H handwashing; WSH combined water, sanitation and handwashing; N nutrition supplement; and NWSH nutrition plus combined WSH. Let Z be a set of indicators for matched blocks used in the randomisation. Finally, let ψ denote parameters of interest. In each comparison below, we define ψ as a difference between various randomised groups. For dichotomous outcomes like diarrhoea, this implies a risk difference. We will additionally report risk ratios for dichotomous outcomes as recommended by CONSORT.[@R114]

H1: water, sanitation, handwashing, nutrition and their combination reduce child diarrhoea and improve linear growth. The mean outcomes in each active intervention arm will be compared to the mean outcomes in the control arm (6 comparisons per outcome). The null hypothesis is that there is no difference between intervention and control. The same control group (double sized) will be used in every comparison. The parameters of interest are the difference in means between the intervention groups and the control group. For t∈ (W, S, H, WSH, N, NWSH): ψ~t~ = E\[Y\|T = t\] − E\[Y\|T = C\].

H2: when delivered in combination, water, sanitation and handwashing interventions reduce child diarrhoea more than when delivered individually. The combined arm (WSH) treatment effect for diarrhoea will be compared to individual WASH treatment effects to determine whether the combined effect is larger than the individual effects. The parameters of interest are the difference in means between the combined group and the individual intervention groups. For t∈ (W, S, H): ψ~t~ = E\[Y\|T = WSH\] − E\[Y\|T = t\]. Note that this parameter and associated test differs from a test for interaction (departure from additive effects). We expect this study to have limited power to detect interactions between interventions, but describe tests in online supplementary appendix 5.

H3: combined nutrition and WASH interventions reduce diarrhoea and improve linear growth more than each component alone. We will compare the combined nutrition + WASH arm (NWSH) treatment effects for growth to the nutrition arm (N) and the combined WASH arm (WSH). The null hypothesis is that the treatment effect in the combined arm is equal to the single arms, and the parameter of interest is the difference in means between groups. For t∈ (WSH, N): ψ~t~ = E\[Y\|T = NWSH\] − E\[Y\|T = t\]. As with H2, this hypothesis is not a hypothesis of interaction or synergy. Rather, it is a test to determine whether one intervention is better than another (additive interaction would test whether the combined arm is greater than the sum of the independent intervention arms). If the interaction were of equal magnitude to the overall treatment effect, a roughly fourfold increase in the sample size would be required,[@R116] which would be logistically infeasible given the already large size of the trial.
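A short sketch of how these unadjusted parameters could be estimated from cluster-level summaries (unweighted cluster means, as described in the next subsection); the function names and example values below are hypothetical, not study data:

```python
import numpy as np

def psi_difference(cluster_means_t, cluster_means_c):
    """Estimate psi_t = E[Y|T=t] - E[Y|T=C] as the difference in
    unweighted means of cluster-level means. For a dichotomous outcome
    such as diarrhoea, the cluster means are prevalences, so this
    difference is a risk difference."""
    return float(np.mean(cluster_means_t) - np.mean(cluster_means_c))

def risk_ratio(cluster_means_t, cluster_means_c):
    """Companion risk ratio for dichotomous outcomes, reported alongside
    the risk difference as recommended by CONSORT."""
    return float(np.mean(cluster_means_t) / np.mean(cluster_means_c))

# Illustrative cluster-level diarrhoea prevalences (fictitious values):
w_arm = [0.08, 0.11, 0.06, 0.09]
control = [0.12, 0.14, 0.10, 0.13, 0.11, 0.15, 0.09, 0.12]
print(psi_difference(w_arm, control), risk_ratio(w_arm, control))
```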
### Testing and estimation {#s2h3}

One strength of a randomised trial is that it allows investigators to draw inference non-parametrically, relying only on randomisation.[@R117] One approach to test for statistical significance is a permutation test based on randomly permuting randomised assignments in the data (following the original randomisation strategy, ie, permuting T within strata Z) and re-estimating a test statistic.[@R117; @R118; @R119; @R120; @R121] We plan to use a rank-based test statistic, which has been shown to have good power against alternatives,[@R122] and estimate it on unweighted cluster means.[@R118] [@R119] We will use one-sided tests because we would only expect the interventions to be beneficial.[@R123] Owing to the relatively small number of tests involved, we do not plan to adjust the p values for multiple testing.[@R124]

The permutation test is a test for statistical independence with good power against alternatives, but it does not estimate a specific parameter of interest (and thus will not provide SEs and CIs for our parameters). Since the trials depart from an individually randomised design, we will bootstrap the dataset, resampling clusters in matched blocks with replacement, and re-estimate our parameters of interest. Resampling matched blocks preserves the correlation structure in the data and retains any efficiency gains from the matched randomisation. Since we will have a large number of units to resample, the asymptotic assumptions will be reasonable, the bootstrap distribution will be smooth and percentile-based CIs will be accurate for all parameters of interest. We will examine the bootstrap estimate of the sampling distribution to confirm these assumptions. The SDs of the bootstrap distributions will provide estimates of SE.

We will complement our unadjusted analyses with a second set of estimates that are conditional on baseline covariates, to potentially increase the efficiency of our analysis and reduce bias from any chance imbalances in prognostic covariates despite randomisation.[@R125] It is straightforward to extend permutation tests to include covariate adjustment while still taking advantage of the exact distribution theory provided by randomised inference.[@R118] [@R120] For example, let Y~ijk~ be the outcome of interest for individual i in village j and randomisation stratum k; let T~jk~ be the randomised intervention indicator and X~ijk~ be a vector of adjustment covariates. Models are fit of the form E\[Y~ijk~\|X~ijk~\]=m(X~ijk~), where m(.) is some function of the covariates X. For example, m(X~ijk~)=α~k~+β×X~ijk~+ɛ~ijk~ for a linear regression, but it could be a more sophisticated prediction function. The residuals r~ijk~=Y~ijk~−m(X~ijk~) are then calculated using predicted values of Y~ijk~ from the model, and the permutation test is conducted on the residuals. The test has nominal size for the null hypothesis even if the model m(.) is mis-specified and if the covariates are measured with error.[@R118] [@R120] There is no stochastic model for m(.), just a reduced algorithmic fit; the approach increases statistical efficiency because the residuals are less variable than the original outcomes, assuming the covariates are strongly associated with the outcome or heterogeneous within the strata.[@R118] Following CONSORT guidelines,[@R114] [@R115] we prespecify a repeatable, objective approach that we will use to identify adjustment covariates.
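As a rough sketch of this residuals-based permutation test (assuming a data frame with columns cluster, block, arm and outcome plus numeric covariate columns; the linear adjustment model stands in for whatever function m(.) is actually chosen, and all names are illustrative, not the study's analysis code):

```python
import numpy as np
import pandas as pd

def permutation_pvalue(df, arm_a, arm_b, covariates, n_perm=10000, seed=1):
    """Rank-based permutation test on covariate-adjusted cluster means."""
    rng = np.random.default_rng(seed)
    d = df[df["arm"].isin([arm_a, arm_b])].copy()
    # Fit the adjustment model m(.): here a plain linear regression.
    X = np.column_stack([np.ones(len(d))] +
                        [d[c].to_numpy(float) for c in covariates])
    beta, *_ = np.linalg.lstsq(X, d["outcome"].to_numpy(float), rcond=None)
    d["resid"] = d["outcome"].to_numpy(float) - X @ beta
    # Collapse residuals to unweighted cluster means, then rank them.
    cl = d.groupby(["block", "cluster", "arm"], as_index=False)["resid"].mean()
    cl["rank"] = cl["resid"].rank()
    observed = cl.loc[cl["arm"] == arm_a, "rank"].mean()
    null = []
    for _ in range(n_perm):
        # Permute arm labels within randomisation blocks (the strata Z).
        shuffled = cl.groupby("block")["arm"].transform(
            lambda s: rng.permutation(s.to_numpy()))
        null.append(cl.loc[shuffled == arm_a, "rank"].mean())
    # One-sided: assumes the intervention lowers the outcome (eg, diarrhoea);
    # reverse the inequality for outcomes expected to increase (eg, LAZ).
    return float(np.mean(np.array(null) <= observed))
```

Because the labels are re-permuted within the original randomisation strata, the test keeps its exact size even if the adjustment model is badly specified, which is the property the passage above relies on.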
We plan to consider the following covariates in adjusted models:

- administrative union (Bangladesh) or location (Kenya);
- field staff team member who recorded the measurement;
- time between intervention delivery and measurement;
- month of measurement, to account for seasonal variation;
- household food insecurity;
- child age;
- child sex;
- mother's age;
- mother's height;
- mother's education level and literacy;
- number of children <15 years in the household;
- number of individuals living in the compound;
- distance (in minutes) to the primary water source;
- housing materials (floor, walls and roof) and household assets.

We will use a repeatable, data-adaptive algorithm, chosen before the analysis, to control for the covariates flexibly and semiparametrically.[@R126] We will calculate adjusted p values using the permutation test described above based on predicted residuals from the algorithm. We will estimate SEs and CIs for our parameters of interest using the bootstrap described in the unadjusted analysis section. Online supplementary appendix 5 includes the details of additional, prespecified analyses, including tests of interactions between interventions, subgroup analyses and tests for between-cluster spillover effects.

### Differential attrition (loss to follow-up): detection and effect bounds calculation {#s2h4}

The study will track enrolled participants carefully to help minimise attrition. We will compare attrition rates across randomised arms, and also the characteristics of those lost to follow-up versus those who remain, to determine whether attrition is random. If we find systematic attrition that is not balanced across arms, then we will conduct sensitivity analyses using 'worst case' imputation bounds for our effect estimates (proposed by Horowitz and Manski,[@R127] and summarised by Duflo *et al*[@R108]); we will also calculate bounds proposed by Lee.[@R128] A minimal sketch of these worst-case bounds is given below, after the stopping rules. If overall levels of attrition approach 20%, we will attempt to locate individuals who left the study area to measure outcomes at the 2-year measurement and include them in our analyses; if attrition is high we will also consider the use of semiparametric weighting using baseline characteristics.[@R129]

### Interim analyses and stopping rules {#s2h5}

#### Interim analyses {#s2h5a}

Except for monitoring uptake of the interventions described above, the WASH Benefits study team does not plan to conduct interim outcome analyses that include information about randomised assignment until all of the data from the 2-year measurement are collected.[@R125] [@R130] [@R131]

#### Negative stopping rule {#s2h5b}

There is always a risk that interventions will have unintended consequences. Although we would not conduct the trial if we anticipated such harm, the interventions are complex and there is always the chance of unanticipated outcomes. If one of the countries' Data and Safety Monitoring Boards (DSMBs) were to find clear evidence of harm based on adverse events, then the study will halt the harmful intervention arm under international ethical guidelines for medical research.[@R132]

#### Positive stopping rule {#s2h5c}

Since this is an efficacy study designed to identify proof of principle, even if a marked early benefit is identified with one or more of the interventions, neither the study implementers nor the Governments of Bangladesh or Kenya will be in a position to immediately scale up effective interventions. Thus, the social benefit of early stoppage is limited.
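A minimal sketch of the Horowitz-Manski worst-case bounds referred to in the attrition subsection above, for a bounded outcome such as an indicator of diarrhoea; the toy inputs and the 0-1 outcome range are illustrative assumptions, not the study's implementation:

```python
import numpy as np

def horowitz_manski_bounds(y_treat, y_ctrl, n_lost_treat, n_lost_ctrl,
                           y_min=0.0, y_max=1.0):
    """Bounds on the difference in means under arbitrary (worst-case) attrition."""
    def arm_bounds(y, n_lost):
        n = len(y) + n_lost
        total = float(np.sum(y))
        # Impute every missing outcome at the extremes of the outcome range.
        return (total + n_lost * y_min) / n, (total + n_lost * y_max) / n
    t_lo, t_hi = arm_bounds(np.asarray(y_treat, float), n_lost_treat)
    c_lo, c_hi = arm_bounds(np.asarray(y_ctrl, float), n_lost_ctrl)
    # Effect = treated mean - control mean; worst cases pair opposite extremes.
    return t_lo - c_hi, t_hi - c_lo

# Example with one lost child per arm (toy numbers only).
print(horowitz_manski_bounds([0, 1, 0, 0, 1], [1, 0, 1, 0, 1], 1, 1))
```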
However, we will provide 1-year anthropometry measurements to each country's DSMB. If, at the 1-year measurement, child length-for-age Z-score in any of the intervention arms is more than 2 SDs above the control arm, we will look to the country's DSMB to decide on the appropriateness of continuing the trial.

### Additional analyses {#s2h6}

WASH Benefits is a large study with many collaborators, and the research will be able to answer scientific questions beyond those posed in this protocol. Indeed, the study team expects to conduct and publish analyses that extend beyond those specified here. For example, objective 5 of the study is to explore the association among multiple enteric infection measures collected in the study. Yet many promising multiplex antigen assays for parasitic infection are still in development, and so the study plans to archive samples for future analyses.

Ethics and dissemination {#s3}
========================

Each trial is overseen by an independent DSMB, which reviews the study protocols and monitors severe adverse events. All study communities, compounds and caregivers provide informed consent. The data collected in the study will be publicly distributed along with metadata and critical documents (ie, protocols and questionnaires) following the publication of the primary results from the trials, which is expected to be within 24 months of the final data collection date.

Supplementary Material
======================

###### Author's manuscript

The authors would like to thank Michael Kremer, Shaila Arman, Farzana Begum, Jade Benjamin-Chung, Colin Christensen, Ayse Ercumen, Fabian Esamai, Muhammad Faruqe Hussain, Kaniz Khatun-e-Jannat, Charles Mwandawiro, Md. Fosiul Alam Nizame, Carol Nekesa, Tadeo Muriuki, Victor Owino and Md. Mahbubur Rahman for additional substantive input to the study design, intervention development or study protocols.

**Contributors:** BFA, CN, SPL, LU, CPS, SA, GC, AEH, AL, AJP and JMC drafted the protocol. KGD, TA, TC, HND, LCHF, RH, PK, EL, SMN, PKR, FT and PJW reviewed and provided critical input to the protocol.

**Funding:** This study was funded by a grant from the Bill & Melinda Gates Foundation to the University of California, Berkeley, grant number OPPGD759.

**Competing interests:** None.

**Ethics approval:** University of California, Berkeley; Stanford University; the International Centre for Diarrhoeal Disease Research, Bangladesh; the Kenya Medical Research Institute; and Innovations for Poverty Action.

**Provenance and peer review:** Not commissioned; peer reviewed for ethical and funding approval prior to submission.

**Data sharing statement:** The data collected in the study will be publicly distributed along with metadata and critical documents (ie, protocols and questionnaires) following the publication of the primary results from the trials, which is expected to be within 24 months of the final data collection date.
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.

This application includes a transmittal under 37 C.F.R. Sec. 1.52(e) of a Computer Program Listing Appendix. The Appendix comprises text files that are IBM-PC machine and Microsoft Windows Operating System compatible. The files include the following:

Object Description: File 1; Object ID: qog_subplannode.txt; created: May 6, 2003, 10:47 am; size: 3.62 KB; Object Contents: Source Code.
Object Description: File 2; Object ID: qog_optgov.txt; created: May 6, 2003, 10:48 am; size: 6.07 KB; Object Contents: Source Code.

All of the material disclosed in the Computer Program Listing Appendix can be found at the U.S. Patent and Trademark Office archives and is hereby incorporated by reference into the present application.

1. Field of the Invention

The present invention relates generally to information processing environments and, more particularly, to a database management system (DBMS) having a methodology for distributing query optimization effort over large search spaces.

2. Description of the Background Art

Computers are very powerful tools for storing and providing access to vast amounts of information. Computer databases are a common mechanism for storing information on computer systems while providing easy access to users. A typical database is an organized collection of related information stored as "records" having "fields" of information. As an example, a database of employees may have a record for each employee where each record contains fields designating specifics about the employee, such as name, home address, salary, and the like.

Between the actual physical database itself (i.e., the data actually stored on a storage device) and the users of the system, a database management system or DBMS is typically provided as a software cushion or layer. In essence, the DBMS shields the database user from knowing or even caring about underlying hardware-level details. Typically, all requests from users for access to the data are processed by the DBMS. For example, information may be added or removed from data files, information may be retrieved from, or updated in, such files, and so forth, all without user knowledge of underlying system implementation. In this manner, the DBMS provides users with a conceptual view of the database that is removed from the hardware level. The general construction and operation of a database management system is known in the art. See e.g., Date, C., "An Introduction to Database Systems, Volume I and II", Addison Wesley, 1990; the disclosure of which is hereby incorporated by reference.

DBMS systems have long since moved from a centralized mainframe environment to a decentralized or distributed environment. One or more PC "client" systems, for instance, may be connected via a network to one or more server-based database systems (SQL database server). Commercial examples of these "client/server" systems include Powersoft® clients connected to one or more Sybase® SQL Anywhere® Studio (Adaptive Server® Anywhere) database servers.
Both Powersoft and Sybase SQL Anywhere Studio (Adaptive Server Anywhere) are available from Sybase, Inc. of Dublin, Calif. In today's computing environment, database technology can be found on virtually any device, from traditional mainframe computers to cellular phones. Sophisticated applications, whether human resources information systems or sales force automation systems, can "push" much of their complexity into the database itself. Indeed, this represents one of the main benefits of database technology. The challenge, however, is to support these applications, and the complex queries they generate, on small computing devices. At the same time, users expect the productivity and reliability advantages of using a relational DBMS.

One purpose of a database system is to answer decision support queries. A query may be defined as a logical expression over the data and the data relationships set forth in the database, and results in the identification of a subset of the database. Consider, for instance, the execution of a request for information from a relational DBMS. In operation, this request is typically issued by a client system as one or more Structured Query Language or "SQL" queries for retrieving particular data (e.g., a list of all employees earning $10,000 or more) from database tables on a server. In response to this request, the database system typically returns the names of those employees earning $10,000, where "employees" is a table defined to include information about employees of a particular organization. The syntax of SQL is well documented, see e.g., "Information Technology--Database languages--SQL", published by the American National Standards Institute as American National Standard ANSI/ISO/IEC 9075: 1992, the disclosure of which is hereby incorporated by reference.

SQL queries express what results are requested but do not state how the results should be obtained. In other words, the query itself does not tell how the query should be evaluated by the DBMS. Rather, a component called the optimizer determines the "plan", or the best method of accessing the data, to implement the SQL query. The query optimizer is responsible for transforming an SQL request into an access plan composed of specific implementations of the algebraic operators: selection, projection, join, and so forth. The role of a query optimizer in a relational DBMS system is to find an adequate execution plan from a search space of many semantically equivalent alternatives.

One component of this task is join enumeration. Since relational databases typically only provide physical operators that can join two tables at a time, an n-way join must be executed as a sequence of two-way joins, and there are many possible such sequences. The optimizer must enumerate some or all of these sequences and choose one based on estimates of their relative execution costs. In general, this problem is NP-complete. (See e.g., Ibaraki, T. et al., "On the Optimal Nesting Order for Computing N-Relational Joins", ACM Transactions on Database Systems, 9(3): 482-502, September 1984. See also e.g., Ono, K. et al., "Measuring the Complexity of Join Enumeration in Query Optimization", in Proceedings of the 16th International Conference on Very Large Data Bases, pp. 314-325, Brisbane, Australia, August 1990, Morgan Kaufmann; and Steinbrunn, M.
et al., "Heuristic and Randomized Optimization for the Join Ordering Problem", The VLDB Journal, 6(3): 191-208, August 1997.) An NP-complete problem is any one of a class of computational problems for which no efficient solution has been found. In practice, query optimizers restrict the sequences or plans that are considered so that an adequate plan can be found in a reasonable amount of time. Examples of such limitations include: restricting the search to left-deep trees, where the inner operand of each join is a single table (see e.g., Cluet, S. et al., "On the Complexity of Generating Optimal Left-deep Processing Trees with Cross Products", in Proceedings of the Fifth International Conference on Database Theory (ICDT '95), pp. 54-67, Prague, Czech Republic, January 1995, Springer-Verlag); requiring each join to have at least one equi-join predicate of the form (column1=column2); considering only a subset of the available physical join methods (e.g., only nested loop joins); considering only a subset of the possible table access methods (e.g., only index scans); and deferring Cartesian products as late in the plan as possible. (See e.g., Morishita, S., "Avoiding Cartesian Products for Multiple Joins", Journal of the ACM, 44(1): 57-85, January 1997. Also see e.g., Selinger, P. G. et al., "Access Path Selection in a Relational Database Management System", in ACM SIGMOD International Conference on Management of Data, pp. 23-34, Boston, Mass., May 1979.)

Choosing a set of restrictions for a given query defines a search space of possible plans that may be considered by a search operation. Deciding how to restrict a search space for a particular query is not straightforward. On one hand, a larger space improves the possibility of finding a better plan. On the other, it also guarantees an increase in the cost of performing the search. If a query is to be optimized once and executed repeatedly, a longer optimization time may be justified. For interactive queries, however, one should optimize the total time spent on execution plus the time spent on the optimization process itself. A difficulty with choosing search space restrictions is that there is not always a direct, linear relationship between the size of the search space and the optimization time. This is the case because most search operations prune (i.e., do not consider) parts of the space that provably cannot contain an optimal plan. The amount of such pruning that is possible can vary considerably depending on the cost distribution of the plans in the space and the order in which plans are visited.

Manual control of the parameters that restrict search space size may sometimes be useful. It is usually better, however, if a query optimizer makes such choices automatically. A technique where a series of (not necessarily disjoint) search spaces are defined and searched in sequence is described in U.S. Pat. No. 5,301,317 by Lohman, G. M. et al., entitled "System for Adapting Query Optimization Effort to Expected Execution Time". In the system described by Lohman, when the search of one space is finished, the cost of searching the next space is estimated and compared to the estimated execution cost of the best plan that has been found. The overall search is halted if the estimated cost of searching the next space exceeds the expected benefit.
It is difficult to predict the benefit, but a heuristic is to assume that it will be some fixed fraction (e.g., ten percent) of the estimated cost for the best plan that has been identified. It is also difficult to estimate the cost of searching a space. An upper bound can be obtained by multiplying the cost of enumerating a single plan by the total size of the space. However, this really is just an upper bound since, as noted above, the amount of pruning performed by a search operation may vary considerably. Overall, this technique can be seen as one possible way of automatically choosing search space parameters. However, an undesirable characteristic of this approach is that it may enumerate some plans twice if they appear in more than one search space. Another problem is that the decision to stop the search is only considered after each complete space is finished. For large join degrees, every one of these spaces may be very large. As such, the technique does not allow fine-grained control over how much total effort is spent upon enumeration.

A join enumeration operation based on depth-first search of a space of left-deep trees is described by Bowman, I. T. and Paulley, G. N. in "Join Enumeration in a Memory Constrained Environment", in Proceedings, Sixteenth IEEE International Conference on Data Engineering, pp. 645-654, San Diego, Calif., IEEE Computer Society Press, March 2000. This depth-first join enumeration search operation is also described in commonly-owned U.S. Pat. No. 6,516,310 titled "System and Methodology for Join Enumeration in a Memory-Constrained Environment", the disclosure of which is hereby incorporated by reference in its entirety, including any appendices or attachments thereof, for all purposes. One advantage of the approach described by Bowman and Paulley is that it uses very little memory relative to the widely used technique of dynamic programming. (See e.g., Selinger, P. G. et al., "Access Path Selection in a Relational Database Management System", above; Kabra, N. et al., "OPT++: An Object-Oriented Implementation for Extensible Database Query Optimization", The VLDB Journal, 8(1): 55-78, May 1999; Pellenkoft, A., "Probabilistic and Transformation-based Query Optimization", PhD thesis, Wiskunde en Informatica, CWI, Amsterdam, The Netherlands, November 1997; and Scheufele, W. et al., "Efficient Dynamic Programming Algorithms for Ordering Expensive Joins and Selections", in Advances in Database Technology--Proceedings of the 6th International Conference on Extending Database Technology, pp. 201-215, Valencia, Spain, Springer-Verlag, March 1998.)

Another advantage of the approach described by Bowman and Paulley is that complete plans are generated continuously during the search. This means that it is possible to interrupt the search at any time after the first plan is found and simply keep the best plan found at the time the search is halted. Therefore, fine-grained control over the amount of enumeration effort is possible. Early halting is a simple way of limiting the computational effort spent on join enumeration. However, a problem with simply stopping the search early is that the search effort is not very well distributed over the search space. If only a small fraction of the search space is visited, then most of the plans considered are typically very similar.
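To make the ideas concrete, here is a minimal sketch, in Python, of depth-first enumeration of left-deep join orders with cost-based pruning and a node budget for early halting. It illustrates the general approach discussed above, not the Bowman and Paulley implementation or the patented method; the cost model and all names are illustrative.

```python
# Sketch only: depth-first search over left-deep join orders.
# `join_cost(prefix, table)` is a caller-supplied stand-in cost model.
def enumerate_left_deep(tables, join_cost, budget):
    state = {"plan": None, "cost": float("inf"), "visited": 0}

    def dfs(prefix, remaining, cost_so_far):
        state["visited"] += 1
        if state["visited"] > budget:        # early halt: keep best-so-far
            return
        if cost_so_far >= state["cost"]:     # prune: cannot beat current best
            return
        if not remaining:                    # a complete left-deep plan
            state["plan"], state["cost"] = tuple(prefix), cost_so_far
            return
        for t in sorted(remaining):
            extra = join_cost(prefix, t) if prefix else 0.0
            dfs(prefix + [t], remaining - {t}, cost_so_far + extra)

    dfs([], frozenset(tables), 0.0)
    return state["plan"], state["cost"]

# Toy cost model: joins placed deeper in the tree cost slightly more.
cost = lambda prefix, table: 1.0 + 0.1 * len(prefix)
print(enumerate_left_deep(["A", "B", "C", "D"], cost, budget=50))
```

Because depth-first search reaches a complete plan after only a handful of node visits, halting when the budget runs out still leaves a usable best plan, which is exactly the property the passage above highlights.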
What is needed is an improved solution that limits the computational effort spent on join enumeration while distributing that effort more effectively over the search space. The present invention provides a solution for these and other needs.

In a database system, a method for optimization of a query is described. When a query requesting data from a database is received, a plurality of plans which can be used for obtaining the data requested by the query are enumerated. A search tree is created based upon these plans, with nodes of the search tree representing segments of the plans. A limited number of nodes of the search tree are selected for evaluation, to limit the effort spent on query optimization. A complete plan for execution of the query is generated by evaluating the selected nodes of the search tree and, if the evaluation determines that a given node is more favorable than comparable nodes previously evaluated, retaining the given node as part of the complete plan.
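As a rough illustration of the idea of spreading a fixed evaluation budget over a search tree of plan segments (a sketch of the concept only, not the claimed method; `expand` and `score` are hypothetical caller-supplied functions):

```python
# Sketch: spread a node budget evenly across subtrees so evaluation effort
# covers the whole search space instead of only its first corner.
def governed_search(node, budget, expand, score):
    """Evaluate at most `budget` nodes below `node`; return the best leaf found."""
    if budget <= 0:
        return None
    children = expand(node)                   # child nodes = next plan segments
    if not children:
        return node                           # a leaf is a complete plan
    best = None
    share, extra = divmod(budget - 1, len(children))
    for i, child in enumerate(sorted(children, key=score)):
        quota = share + (1 if i < extra else 0)   # per-subtree slice of budget
        leaf = governed_search(child, quota, expand, score)
        if leaf is not None and (best is None or score(leaf) < score(best)):
            best = leaf
    return best
```

Splitting the quota evenly is only one possible policy; the point of the sketch is that, unlike plain early halting, every major subtree contributes candidate plans whenever the budget allows.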
Colombian economy and politics 1929–58

In 1930 the Liberal Party period began. It lasted 16 years and had to contend with the global economic crisis. The period was also marked by intense bipartisan controversy, which created many internal conflicts. One of the major problems during the crisis was Colombia's dependence on the US as the buyer of its coffee, the backbone of the economy.

The economic crisis in Colombia from 1928 to 1933 was the devastating result of the previous years of prosperity, which had been built on large international loans and credits, high prices for exported coffee, and a confident country that attracted investment and cash flow. Just as Colombia had prospered alongside the US, it declined in parallel with it in its time of crisis. The New York stock market collapsed, confidence within the country turned low and protective, investment stopped, and so did the loans, and Colombia was directly affected by that situation. There was a constant decrease in Colombia's main export product, coffee, as well as a cut in international loans and investment. Eventually the crisis in the USA generated within Colombia a cut in urban employment, a diminished internal market, and other problematic social and economic situations.

From 1933 to 1939 Colombia began to see a big change in the country's industries, leaving behind the urbanization problems of the twenties. There was also large agricultural development, strengthening the growth of the economy and the expansion of agriculture and livestock. During this period coffee exports were very high. Coffee farmers managed to expand their crops, and with political support they also promoted agriculture through the "Consejo Nacional de Agricultura".

Economic Crisis

One of the main triggers of an economic crisis is a world power entering a crisis of its own: eventually all countries are affected. A crisis ultimately leads a country to debt and economic stagnation, and it is influenced by all kinds of factors such as culture, climate, previous development, political order, and internal and external social conflicts. In other words, many ingredients must combine for an economic crisis to come about.

First of all, to analyse what an economic crisis is, we need to know the symptoms of a healthy economy. There should be a progressive increase in sustainable growth on the part of the government, the economy and society as a whole. Something crucial for a country is the trade balance, which should keep imports in proportion to exports. The currency also has to hold its value in order to keep inflation controlled in all areas. The government has to be very careful about the amount of money circulating among people's pockets, because that is what determines product prices and inflation. Unemployment also has to stay in single digits for a country to be considered economically healthy with respect to the whole population, not just a favoured few.

Knowing the symptoms of a healthy economy, we can consider what an economic crisis is and how it becomes global. When problems arise in these different foundations, the economy turns fragile, and that is when a crisis occurs. Development then stagnates: exports fall, affecting the trade balance in a cycle.
If a country's exports are insufficient, it affects the rest of the countries it trades with, including their imports. There comes a point where the country fails to defend its currency, and when that happens its value falls and people only think about getting rid of it to buy other currencies. There is also the big problem of excess credit in the banking system, creating a situation in which banks cannot pay depositors, which increases poverty. An economic crisis is something that affects the whole world, although some countries are clearly more developed than others and the impact differs accordingly. In conclusion, a crisis occurs because panic forms, and at that point a country loses confidence. This may cause stagnation in production and investment, leading the country to slow development. Because of the stagnation, unemployment grows, increasing human poverty.

Economic Crisis 1928-1933

From 1922 to 1928 the two main factors that drove and increased Colombia's economy were the rise in external coffee prices (the main export product) and a huge increase in international credit to the public sector and the banking system. An incredible amount of money was flowing within the country, prosperity yet to be paid for. The economic success was therefore not internally based and sustained; rather, the economy was resting on United States credit and investment. That was the mistake, and that was the cause of the future crisis. 1928 was the year of a significant reduction in external credit, generating a constant decline in domestic bank credit and a hold-back in the stock markets of Bogota and Medellin. Throughout 1929 international coffee prices continued to decline abruptly, along with the New York stock market, and eventually the first manifestations of surging urban unemployment generated an immediate internal market crisis. Deflation from that moment until 1932 was the result of the international price reduction of Colombian export products. Only in 1932 did the rise in gold production compensate for the external debt and restore the commercial balance, ultimately preventing the continuous loss of reserves.

After Colombia was able to defend its currency, it went in search of a new economic policy. This was based on three pillars: fiscal policy, monetary policy, and customs or external commerce. The problem was that free trade was established and a lack of government intervention in the economy was induced. International imports were taxed lightly, supposedly to offset the rising cost of living; this did not generate any positive results, and by 1931 a protectionist economy was re-established with the purpose of reanimating public investment. In any case, Colombia found itself among the countries that had to surrender their exchange freedom and abandon the gold standard in order to devalue their currency against gold and the dollar. In the long run it was a reaction to the collapse of international reserves. Parallel to the causes of the economic crisis in Colombia, the presidents of the period (Abadia Mendez and Olaya Herrera) hastily reduced the public investment programmes, causing a decline from the previously privileged state of infrastructure. This ultimately affected public employment and finances. It was only in 1933 that the government gave a decisive turn to its economic affairs.
It focused on recovering and reanimating the local product market, returned to coffee exportation as its main economic sustenance, and emphasised improvements in domestic credit. The two main sectors favoured by the new economic measures were agriculture and industry: agriculture, because the masses of people who had moved to the cities in search of work went back to the rural areas to work the land, whose produce was being profitably exported again; and industry, because imported products were being substituted with Colombian goods. These were the actions Colombia took in order to regain control over its economy and recover after five years of crisis.

Liberal Republic

The Colombian government came out of a long conservative period in 1930, the year the Liberal Party took control in the elections. A period of social, political and economic controversy was about to begin. The growing economic crisis, and the way the conservative government was handling the country's problems, had cost it many followers, and therefore the next elections. The most noticeable and immediate change was the sudden deterioration of public order in most of the country. Colombia also suffered a period of violence, arising not only from internal conflicts but also from conflicts with other countries. The violence and the global economic crisis were what called for a new regime.

The United States was not only providing investment; it was also producing internal violence. One example is how the United Fruit Company could manipulate the government and get what it wanted no matter the cost. It started when the company complained to the government that the workers were refusing to do their jobs because they were on strike. The government ordered the military to the scene to threaten the workers with being shot if they did not work. The workers refused to work, and the military killed approximately 3,000 of them. Another violence problem in Colombia during the Liberal Party period was the formation of the first guerrilla groups, created as the liberals sought to protect themselves from the conservatives; this was a bipartisan problem of violence.

One of the major problems the liberals faced when they took over the government was that they had not been in power for almost half a century. Part of the problem was that the liberals still accused the conservatives of things that belonged to the past and were no longer relevant. This was a real issue because many ideas and situations had already changed or disappeared under the conservative government. A fact worth mentioning: loyalty to liberal ideas was not based on what each person believed but was more an obligation that came from many years back; in other words, inherited hatred. Even though these problems did affect the relationship between the two parties, to ease the transition the government included some conservative members in the cabinet. This kept the overall ideas of the government liberal, but with some conservative influence.

The first liberal president of this period was Enrique Olaya Herrera. He took measures to handle the economic crisis; one of them was devaluing the currency to make exports internationally competitive and stimulate industry.
Olaya Herrera decided that one important way to get past the crisis was to strengthen the relationship between Colombia and the USA, so that its vast resources might help Colombia resist the depression. Olaya also kept the external debt under control, not letting it run away. He also gave attention to workers and women. For the workers he passed a law limiting the working day to eight hours and granting the legal right to organise unions. For women, Olaya granted the legal right to own property, and he allowed girls' schools to offer secondary education, which had previously been forbidden.

The second liberal government was led by President Alfonso Lopez Pumarejo. Much of his presidency was influenced by the New Deal and the idealism of Franklin D. Roosevelt. He dedicated most of his time to solving the bipartisan social issues. Lopez called this the "Revolucion en Marcha", or revolution underway. Part of this process was helping the poor to participate in the system's benefits, which would also push the parties toward peaceful politics rather than violence as a solution. One of the measures he took was a new agrarian reform, which consisted of giving legal title to peasants working land they did not legally hold, so they could work it. The Lopez administration was recognised as the protector of the working class; he created something very important, the CTC or Colombian Workers' Confederation. Lopez also increased public spending on education and rural roads. He finished his presidency by making various changes to the constitution: the state would have more power over economic issues; the article requiring public education to be in accordance with the Catholic religion was eliminated; and the literacy requirement to vote was abolished. In conclusion, Lopez's main contribution was making Colombia face its social issues for the first time.

Economy 1930-1945

It was only from 1930 that Colombia's political government was steady and the country managed to grow economically. Since the beginning of the century new trade routes had developed, especially those for coffee. Crop production was so good that its consumption grew continually, bringing in money that was invested in new oil companies and in consumer products. In 1930, when the Liberal Party retook power, new reforms started to come up and the economy slowly declined. The Colombian economy around World War II divides into two large periods: 1930-1939, in which the country experienced not only outstanding growth but also a new social transformation; and 1939-1945, in which the economy was stagnant and the social transformation made no progress at all. Although some countries around the world had trouble coming back from the crisis, Colombia was one of the few with a swift and sustainable recovery, developing its agricultural sector, coffee production and oil exploitation. Looking at the economy from 1933 to 1939, industrial production roughly doubled, growing by almost 11% annually, which no other country was able to match. Although the increase was spectacular, one had to take into account what the CEPAL organisation was saying: "if we have a good growing rhythm, we need to think about the low initial level, which can compromise future plans". This very important point made by the organisation was ignored by investors, who kept creating new textile and shoe companies and expanding food companies.
By 1939 all of this had produced a huge increase in domestic demand and nearly 2,805 manufacturing companies, which led to low import demand. Using this as a method was not as good as expected, since other countries could read it as a move to close off trade.

Economy 1946-1958

Before 1946 there was a tremendous breakdown in the international and national economy due to World War II. Income per capita was lower than normal, and export and import activities were decidedly more difficult than before. Colombia reacted with an "emergency economy": intervening in the coffee industry, coordinating and controlling transportation, organising external commerce, and strengthening the systems regulating imports. From 1946 Colombia focused not on creating more reforms but on recovering from the war and boosting the economy. The fastest increase in economic activity came between 1946 and 1953, when not only was the country changing its economic structure but the international situation was also healthier. Investment grew noticeably as a result of better external payment conditions. The structure of industry was diversifying thanks to industrial acceleration and the fast process of urbanisation, which created great internal demand in combination with the protectionist substitution of imports. Even though all economic sectors were increasing their production, industry showed the most representative statistics of development and modernisation. In a period of eight years industry doubled its production, directly because a great share of the rural population was moving to the cities and joining these activities. With new companies being created and many factories developed, innovation was focused on intermediate and capital goods, directly creating the quickest job-absorption rates within industry. This phenomenon was present not only in Colombia but around the world, where the most dynamic and fastest-growing sector of the economy was industry. It is clear that Colombia's economy was completely linked to external financing plus the income from export products.

Although industry saw the most representative lift in Colombia from 1946 to 1958, the agricultural sector had a boost as well. Modernised production was the key to high productivity, and three main factors made it possible: the increasingly high demand for industrial raw materials, government stimulus, and the available international resources. To protect the agricultural sector from the external competition of imported products, the government established high tariffs, thereby motivating internal production. The government also generated positive stimulus by introducing machinery and proper equipment, fertilisers, insecticides, credit and infrastructure investment, among other things. As many industrial raw-material crops took over the fertile flat lands, cattle were displaced and their productivity eventually diminished. Despite the healthier economy Colombia showed from 1946 to 1958, socially the country was not doing well in its poorer sections. Living conditions in this period were far from good: more than half of the employed population lived barely above subsistence level as a result of the sudden increase of population in urban areas.
The problem was that not all of the relocated agricultural workers were properly absorbed by Colombia's main cities (Bogota, Barranquilla, Cali and Medellin), and many therefore ended up in activities of very low productivity, unemployed or in disguised unemployment. This period is probably the beginning of the escalating unemployment among the Colombian population that grew to a greater scale in the present-day cycle of unemployment.

"A Man of Principle"

"A Man of Principle" is a great movie that depicts the Violence in Colombia starting in 1948. During this time the liberals were in power and conservative followers were discriminated against. The movie starts with a man called Leon Maria, a conservative who lives in a small town called Tulua. Leon Maria is not well off economically; he has a cheese store and works at a library. Many people in the town are liberals who go around discriminating against the conservatives. The wealthiest, most powerful people there are liberals with a lot of influence in the government and in other political matters. The turning point of the movie is the death of Jorge Eliecer Gaitan, a very influential liberal leader killed by a conservative. When this happens, the liberals decide to show how much they care and start burning and destroying conservative symbols such as the Catholic church. This period of anger and violence was called 'el bogotazo'. Leon Maria feels he has to protect what he believes in and decides to organise a conservative group to defend their church from the liberals. The next day Leon Maria is seen as a hero by his party: he stood up for what he believed, no matter the cost. The conservatives now hold power, and the government sends money to the party, which uses it to pay people to get rid of the liberals. Leon Maria receives a telegram informing him of an urgent meeting with the conservative leaders. In this meeting the conservatives tell Leon Maria that he is the man chosen to protect the party, and they give him guns and money to start right away. The rest of the movie shows how Leon Maria kills, threatens and tortures liberals to make them leave their homes and go far away. This lasts only until the Liberal Party regains power and Leon Maria is killed.

Throughout the movie we can see how the political parties affected everyday life. It shows that the Violence in Colombia stemmed mainly from bipartisan conflicts that were not really about beliefs but about inheritance. Real power in the town lay not only in wealth but also in the amount of influence one had over the government and the amount of fear one provoked in the community. Leon Maria was nicknamed the Condor, because the men who killed liberals for the Conservative Party were called 'pajaros', or birds. A condor is a very powerful and fearsome bird, and that is what Leon Maria represented around the country: someone more powerful than a simple bird.

Frente Nacional

With the military junta (junta militar) of 1958, the two political parties, liberals and conservatives, started a new era of peace and social development. Although everything was getting better, new problems started to come about, economically and socially. The liberals and conservatives were now sharing power equally, but it never amounted to a true 50-50 split throughout the country.
The arrangement of alternating the presidency between the two parties was named the FRENTE NACIONAL, and it put an end to the period of violence. From 1958 to 1970 industrialisation became a support for new export methods, meaning the new government was more focused on developing new ideas to improve the economy. In this period the government was also able to enter more complex areas of importation, changing its policies to increase market development. A very big change also arose in the national product, expanding and developing a modernised economy and reducing imports by a large percentage. As we have seen throughout the class, the industrial sector ruled for a long period, leaving the agricultural sector behind. It was not until about 1955 that the government started to focus on developing and growing this sector; in order to also create new urban jobs, the violent movement had to be ended by the Frente Nacional. At the same time new opportunities were given to the peasants, meaning the government gave them the chance to work the territorial lands and see how big a change it would make for the economy. After spending and investing so much money in this new method, the government was undecided whether to continue with the idea or abandon it. The big problem in this sector was that the peasants did not have the money required to productively support the new economic development. It was also better to move on with new ideas, given the requirements President Lleras Restrepo proposed: 1) devalue the currency again, 2) liberalise imports, and 3) free the foreign exchange market.

Category:Economic history of Colombia
Category:Political history of Colombia
The center sign appears untouched, and on the exits only the shields and "TO" appear to have been replaced. The lowest line on the Exit 85 assembly was ugly then (and seemed to be fixed with a more proper yellow bar until, I guess, it was determined not to really be one mile?) and is differently ugly now. It's a shame the lights are gone. The signs in the pic are non-reflective background button copy and the signs on GSV are reflective button copy put up in the 1980s.

None of the three signs are the same. The old photo is phase II non-reflective. All the signage on that gantry on Google Maps now is phase III retroreflective button copy. Note that the phase II center sign has finishing strips on each side; the present sign does not, and the border goes right to the left and right edges of the extruded panels.

I'm still amazed at how short-lived some of the non-reflective button copy signs were. These were up for only 10 years or so. Judging by reflective button copy sign dates (on the back) and known openings of other highways, I've gathered some of the non-reflective ones were only up for about 5-10 years. CT-8 comes to mind. A section in Beacon Falls opened up around 1980 and the reflective signs NB say 1989 on them. So the original NB signs were only up for about 9 years. (SB still has them, as seen in this pic, but not for long.) The same with the NRBC signs on CT-40. If it opened up in 1976, the signage was replaced in 1990 or so. All that good non-reflective signage wasn't even up for 20 years. This one has escaped the wrath:

I forgot to mention that Route 184 has exit numbers too, also without new overhead signage. There are also new retroreflective traffic signals at the end of the divided highway, which I've seen a lot more of in this part of the state. (Not a fan of span wire, but at least ConnDOT is finally moving into the 21st century.)

Speaking of Gold Star Bridge signage, I was browsing the website for the construction project and came across this image, from when the NB bridge was widened in 1974. It's the same sign that exists presently, which will be replaced soon. Any other BGS in the state that have lasted this long?

Cool shot. I almost forgot that Bridge Street was used back then. The signs put up in the mid/late 1980s which still exist today say "Thames Street". Here's a shot I got last summer: 95NB-GoldStar-1 by Jay Hogan, on Flickr

I believe the 1 mile sign may have fallen off, rather than being removed on purpose due to incorrect mileage. The same gantry has been in place the whole time - you can still see some of the wiring for the lights (the loop of wire on the right post in each picture). The signs on the Exit 85 offramp, which splits into ramps for Thames St and US 1/Downtown Groton, are being changed as part of the current I-95 resigning contract. "Thames Street" will become "Groton Waterfront". I'd venture to guess that when the signs on the bridge itself are replaced, they'll say "US 1 North/Downtown Groton/Groton Waterfront". Not to be confused with Exit 87, which will become "Groton City". As far as what the SB signs on the bridge will say, this is from the contract plans, with no more pullthroughs:

Interesting.
So at some point between 1974 and today, Connecticut replaced BGSs that had white shields with BGSs that have less visible, outline button copy shields? Don't get me wrong, I love and will miss the outline button copy shields, but it's clear why they are being phased out.

The first of what I call "Phase III" signage (button copy, reflective backgrounds, outline US/state shields) was installed at some point during the mid 1980s. I remember a news story about the signs being a partnership with the company 3M. The signs on I-395 and on I-95 west of New Haven were installed in the 1985 timeframe. CT 2 and CT 9 got theirs in the late 1980s. The state became blanketed with Phase III by 1990, with only I-84 east of East Hartford and I-95 from Madison east to New London escaping the cut. I-95 in Branford and Guilford held onto its original turnpike signage (all text, blue) until the early 90s, when it became Phase III as well. On routes like CT 2, CT 8, and CT 9, the original signage lasted only about 10-15 years. Current signage on those routes is now over 30 years old, with no plans to replace signs on CT 2 or 9 in the near future.

The funny thing is the outline shields were a step backwards. Vermont was using that style on their BGSs in 1960, and other than California, whose shields have green backgrounds anyway, it suggests that CT's shields should as well, but they don't. I'm a fan of the older signage, and of the new signage being installed. Take a ride on Route 2 west late at night coming back from the casino, and half the signage is illegible because the reflectors have lost their reflectivity. I do remember seeing them on I-95 in 1985 coming back from NYC with my parents, and wondering if they forgot to finish painting the shields.

The most ridiculous signage has to be the Route 15 shields on the HOV lane signs on I-84 West by the I-384 interchange. The 15 looks like it's part of the sign. From alpsroads.net:

Close inspection of the 15 shield reveals it's a shield just slapped on. The only problem is that type of shield should be on a sign with a green background. Other shields on HOV signs in CT have a black border around them and are integrated into the sign itself.

This particular sign was installed in the early 2000s when this "ramp" (actually just a painted break in the HOV/main lane divider) was created. It dumps into the left lane of mainline westbound traffic in 1 mile, but is still 1 1/2 miles away from the actual exit from the 84 mainline. During the 2000s era, a lot of shields were slapped onto signs to replace those that were faded or worn out. That's why some have the state name... they were meant to be reassurance shields, but got slapped onto a BGS instead. There's a couple of small ones about 1 1/2 miles west of this particular location. There's also some on I-91, mostly southbound, that replaced worn-out button copy I-shields. And on parts of I-95 in SE CT, there are button copy I-shields with the state name that were put on when the sign was created. There's one or two on the 15 SB ramp to 91 SB a few miles west of this spot as well.

Speaking of transportation projects, I notice that District 1 has started blanketing the beginnings and ends of limited access highways' guardrails with red and green delineators, a la Massachusetts. I first noticed these in District 2 a couple of years back. Looks like they're going to catch on statewide. And in ConnDOT plow jockey fashion, several of them have already been twisted and bent around Tolland from the last storm.
I think of this as the Ryan Seacrest or Johnny Gilbert sign. It was still standing in 1980 (at least the one at the NY/CT line was) when my mom, stepdad, stepgrandmom and I went to Nova Scotia to visit relatives. When was it taken down? And was there one at the RI end as well?

There was an identical sign at the RI end of the turnpike. I remember seeing a photo of it someplace online, but I can't remember where.
@using System.Activities.Expressions
@using Kudu.Core.SourceControl
@using Kudu.Web.Models
@using Kudu.Web.Infrastructure
@model ApplicationViewModel
@{
    ViewBag.Title = Model.Name;
}

@Html.Partial("_GitUrlTextbox", Model.GitUrl)

<div class="well">
    <div class="form-group">
        <label class="control-label"><strong>Application URL</strong></label>
        <div>
            <a href="@Model.SiteUrl" target="_blank">@Model.SiteUrl</a>
            <p class="help-block">This is the link to your website.</p>
        </div>
    </div>
    <div class="form-group">
        <label class="control-label"><strong>Service URL</strong></label>
        <div>
            <a href="@Model.ServiceUrl" target="_blank">@Model.ServiceUrl</a>
            <p class="help-block">This is the link to the kudu service.</p>
        </div>
    </div>
</div>

<script type="text/javascript">
    // Note: would be so nice with a proper frontend framework like EmberJS, AngularJS or React!
    function removeBinding(binding, element) {
        if (confirm('Remove the following site binding: ' + binding)) {
            $(element).val(binding).closest('form').submit();
        }
    }

    var BindingForm = (function () {
        // Note: utility helpers for reading and writing the fields of a named binding form.
        function getFormField(form, field) {
            return $('#' + form + field);
        }

        function isDef(val) {
            return typeof val !== 'undefined';
        }

        // Builds a jQuery-style getter/setter for a plain text or select field.
        function propertyFn(formField) {
            return function (value) {
                if (isDef(value)) {
                    getFormField(this.name, formField).val(value);
                    return this;
                }
                return getFormField(this.name, formField).val();
            };
        }

        function BindingForm(name) {
            this.name = name;
        }

        BindingForm.prototype.schema = propertyFn('SiteSchema');
        BindingForm.prototype.port = propertyFn('SitePort');
        BindingForm.prototype.hostName = propertyFn('SiteHost');

        // Checkbox field, so it uses [0].checked rather than val().
        BindingForm.prototype.sniEnabled = function (value) {
            if (isDef(value)) {
                getFormField(this.name, 'SniEnabled')[0].checked = value;
                return this;
            }
            return getFormField(this.name, 'SniEnabled')[0].checked;
        };

        // Setting any value clears the selection; reading returns the selected option's text.
        BindingForm.prototype.certificate = function (value) {
            if (isDef(value)) {
                getFormField(this.name, 'SiteCertificate').val(null);
            }
            return getFormField(this.name, 'SiteCertificate').children('option:selected').text();
        };

        BindingForm.prototype.httpsFields = function () {
            var selector = '.' + this.name + 'HttpsFields';
            return $(selector);
        };

        BindingForm.prototype.hostNameField = function () {
            return getFormField(this.name, 'SiteHost');
        };

        return BindingForm;
    })();

    function getForm(form) {
        return new BindingForm(form);
    }

    function schemaChanged(form) {
        form = getForm(form);
        if (form.schema() == 'Https://') {
            form.httpsFields().show();
            form.port(443);
        } else {
            form.httpsFields().hide();
            form.port(80);
        }
    }

    function httpsChanged(form) {
        form = getForm(form);
        // Note: we know the form is HTTPS here.
        var cert = form.certificate();
        if (cert[0] === '*' || (supportsSni() && form.sniEnabled())) {
            form.hostNameField().prop('disabled', false);
        } else {
            form.hostName("");
            form.hostNameField().prop('disabled', true);
        }
    }

    function supportsSni() {
        return @Model.SupportsSni.ToString().ToLower();
    }
</script>

@helper AddBindingForm(string name, string action, string controller) {
    // Note: mimic the entire IIS dialog so it is more familiar to IIS administrators.
using (Html.BeginForm(action, controller, new { slug = Model.Name.GenerateSlug() }, FormMethod.Post)) { <label class="control-label"><strong>Add binding</strong></label> @Html.ValidationSummary() <div class="row"> <div class="span2"> <label class="control-label">Type:</label> @Html.DropDownList("siteSchema", Model.Schemas, new { onchange = "schemaChanged('" + name + "')", id = name + "SiteSchema", style = "width: 100%;", @class = "form-control" }) </div> <div class="span4"> <label class="control-label">IP address:</label> @Html.DropDownList("siteIp", Model.IpAddresses, new { style = "width: 100%", id = name + "SiteIp", @class = "form-control" }) </div> <div class="span2"> <label class="control-label">Port:</label> @Html.TextBox("sitePort", "80", new { style = "width: 50%", id = name + "SitePort", @class = "form-control" }) </div> </div> <div class="row"> <div class="span5"> <label>Host name:</label> @Html.TextBox("siteHost", "", new { placeholder = "example.org", style = "width: 100%", id = name + "SiteHost", @class = "form-control" }) </div> </div> if (Model.SupportsSni) { <div style="display: none;" class="row @(name + "HttpsFields")"> <div class="checkbox span5"> <label class="checkbox"> @Html.CheckBox("siteRequireSni", false, new { id = name + "SniEnabled", onchange = "httpsChanged('" + name + "')" }) Require Server Name Indication </label> </div> </div> } else { @Html.Hidden("siteRequireSni", false) } <div style="display: none;" class="row @(name + "HttpsFields")"> <div class="span5"> <label>SSL certificate:</label> @Html.DropDownList("siteCertificate", Model.Certificates, "Select certificate...", new { onchange = "httpsChanged('" + name + "')", style = "width: 100%", id = name + "SiteCertificate", @class = "form-control" }) </div> </div> <button id="add_sitebinding" type="submit" class="btn btn-primary">Add binding</button> } } @if (Model.CustomHostNames) { <div class="well"> <div class="form-group"> <label class="control-label"><strong>Custom Application Site Bindings</strong></label> <p class="help-block"> Specify additional site bindings for the service site. Can be of the format 'hostname', 'hostname:port', 'example.org' or 'example.org:port'. </p> <p class="help-block"> Protocol is limited to http only and all bindings entered will be set to http. </p> @if (Model.SiteUrls.Any()) { <table id="custom-site-bindings" class="table"> <tr> <th>Protocol</th> <th>Hostname</th> <th>Port</th> <th></th> </tr> @foreach (string siteBinding in Model.SiteUrls.Skip(1)) { var uri = new Uri(siteBinding); <tr> <td>@uri.Scheme</td> <td><a href="@uri.AbsoluteUri">@uri.Host</a></td> <td>@uri.Port</td> <td class="actions"> <button type="button" class="btn btn-danger" onclick=" removeBinding('@siteBinding', '#removesitebinding') ">Remove</button> </td> </tr> } </table> using (Html.BeginForm("remove-custom-site-binding", "Application", new { slug = Model.Name.GenerateSlug() }, FormMethod.Post, new { id = "remove-site-binding-form" })) { @Html.Hidden("siteBinding", "", new { id = "removesitebinding" }) } } @AddBindingForm("app", "add-custom-site-binding", "Application") </div> </div> <div class="well"> <div class="form-group"> <label class="control-label"><strong>Custom Service Site Bindings</strong></label> <p class="help-block"> Specify additional site bindings for the service site. Can be of the format 'hostname', 'hostname:port', 'example.org' or 'example.org:port'. </p> <p class="help-block"> Protocol is limited to http only and all bindings entered will be set to http. 
</p> @if (Model.ServiceUrls.Any()) { <table id="custom-site-bindings" class="table"> <tr> <th>Protocol</th> <th>Hostname</th> <th>Port</th> <th></th> </tr> @foreach (string siteBinding in Model.ServiceUrls.Skip(1)) { var uri = new Uri(siteBinding); <tr> <td>@uri.Scheme</td> <td><a href="@uri.AbsoluteUri">@uri.Host</a></td> <td>@uri.Port</td> <td class="actions"> <button type="button" class="btn btn-primary" onclick=" removeBinding('@siteBinding', '#removeservicebinding') ">Remove</button> </td> </tr> } </table> using (Html.BeginForm("remove-service-site-binding", "Application", new { slug = Model.Name.GenerateSlug() }, FormMethod.Post, new { id = "remove-site-binding-form" })) { @Html.Hidden("siteBinding", "", new { id = "removeservicebinding" }) } } @AddBindingForm("scm", "add-service-site-binding", "Application") </div> </div> } @using (Html.BeginForm("Delete", "Application", new { slug = Model.Name.GenerateSlug() })) { <input type="submit" class="btn btn-danger" name="name" value="Delete Application" /> }
Rorik of Dorestad

Rorik (Roricus, Rorichus; Old Norse HrœrekR, c. 810 – c. 880) was a Danish Viking who ruled over parts of Friesland between 841 and 873, conquering Dorestad and Utrecht in 850. Rorik swore allegiance to Louis the German in 873. He died at some point between 873 and 882. Since the 19th century, there have been attempts to identify him with Rurik, the founder of the Ruthenian royal dynasty.

Family

He had a brother named Harald. Harald Klak was probably their uncle, and Godfrid Haraldsson their cousin. The identity of his father remains uncertain. There are various interpretations of the primary sources on his family, particularly because names such as Harald are repeated in the texts with little effort to distinguish one holder of a name from another. But Harald Klak had at least three brothers: Anulo (d. 812), Ragnfrid (d. 814) and Hemming Halfdansson (d. 837). Any of them could be the father of the younger Harald and Rorik. Several writers have chosen Hemming for chronological reasons, estimating that Rorik was born in the 810s. This remains a plausible theory, not an unquestionable conclusion.

Early life

Harald the younger had been exiled from Denmark and had raided Frisia for several years. He had entered an alliance with Lothair I, who was in conflict with his father, Louis the Pious. Frisia was part of Louis' lands, and the raids were meant to weaken him. By 841 Louis was dead, and Lothair was able to grant Harald and Rorik several parts of Friesland. His goal at the time was to establish the military presence of his loyalists in Frisia, securing it against his siblings and political rivals Louis the German and Charles the Bald. The two Norsemen used islands as their main bases of operations: the seat of Rorik was the island of Wieringen, while Harald operated from the island of Walcheren, and they also ruled Dorestad at this time.

In the early 840s, Frisia seemed to attract fewer raids than in the previous decade. Viking raiders were turning their attention to West Francia and Anglo-Saxon England. In 843, Lothair, Louis and Charles signed the Treaty of Verdun, settling their territorial disputes. Lothair had previously needed Rorik and Harald to defend Frisia from external threats. With the seeming elimination of such threats, the two Vikings may have outlived their usefulness to their overlord. In about 844, both "fell into disgrace". They were accused of treason and imprisoned. The chronicles of the time cast doubt on the accusation. Rorik would later manage to escape; Harald probably died while a prisoner.

According to an 850 entry of the Annales Fuldenses, "Hrørek the Norseman held the vicus Dorestad as a benefice with his brother Haraldr in the time of the Emperor Louis the Pious. After the death of the emperor and his brother he was denounced as a traitor - falsely as it is said - to Lothair I, who had succeeded his father in the kingdom, and was captured and imprisoned. He escaped and became the faithful man of Louis the German. After he had stayed there for some years, living among the Saxons, who were neighbours of the Norsemen, he collected a not insubstantial force of Danes and began a career of piracy, devastating places near the northern coasts of Lothair's kingdom. And he came through the mouth of the river Rhine to Dorestad, seized and held it.
Because the emperor Lothar was unable to drive him out without danger to his own men, Hrørek was received back into fealty on the advice of his counsellors and through mediators, on condition that he would faithfully handle the taxes and other matters pertaining to the royal fisc, and would resist the piratical attacks of the Danes."

The Annales Bertiniani also record the event: "Hrørek, the nephew of Haraldr, who had recently defected from Lothar, raised whole armies of Norsemen with a vast number of ships and laid waste Frisia and the island of Betuwe and other places in that neighbourhood by sailing up the Rhine and the Waal. Lothar, since he could not crush him, received him into his allegiance and granted him Dorestad and other counties."

The Annales Xantenses briefly report: "Hrørek the Norseman, brother of the mentioned younger Haraldr, who was earlier dishonored by Lothar, fled, demanded Dorestad back, and deceitfully inflicted much evil on the Christians."

Ruler of Dorestad

After Rorik, together with Godfrid Haraldsson, conquered Dorestad and Utrecht in 850, emperor Lothair I had to acknowledge him as ruler of most of Friesland. Dorestad had been one of the most prosperous ports in Northern Europe for quite some time. By accepting Rorik as one of his subjects, Lothair managed to keep the city as a part of his realm, and his sovereignty was still recognized: for example, the coinage produced at the local mint would continue to bear the name of the Emperor. On the other hand, Dorestad was already in economic decline, so leaving it to its fate was not much of a risk to the welfare of his state. Bishop Hunger of Utrecht had to move to Deventer (to the east).

Later on, together with Godfrid, Rorik went to Denmark to try to gain power during the Danish civil war of 854, but this was not a success. The Annales Bertiniani report: "Lothar gave the whole of Frisia to his son Lothar, whereupon Hrørek and Gøtrik headed back to their native Denmark in the hope of gaining royal power. ... Hrørek and Gøtrik, on whom success had not smiled, remained based at Dorestad and held sway over most of Frisia." Godfrid is not mentioned again and could have died not long after his return.

The extent of Rorik's area of control at the time is uncertain. In "Carolingian Coinage and the Vikings" (2007), the historian Simon Coupland made an educated guess based on primary sources. Rorik's recorded control over the city of Gendt on the bank of the Waal River suggests that the river formed the southern border of the area. The Kennemerland is also mentioned as part of Rorik's area of control. Later negotiations with Louis the German would suggest that Rorik's area shared its eastern borders with East Francia. The western border is more obscure. Rorik and his brother controlled the islands of Zeeland in the 840s; there is no later mention of them in connection with Rorik, which could mean the ruler of Dorestad never regained control over them.

Expedition to Denmark

According to an 857 entry in the Annales Fuldenses: "Hrørek the Norseman, who ruled in Dorestad, took a fleet to the Danish boundaries with the agreement of his lord King Lothar, and with the agreement of Hørekr, king of the Danes, he and his comrades occupied the part of the kingdom which lies between the sea and the Eider." This means that Rorik, with Lothair's encouragement, went to Denmark and forced King Horik II (Erik Barn) to recognize his rule over a significant area. The Eider River formerly marked the border between Denmark and the Carolingian Empire.
Coupland estimates the region gained to have lain to the north or northeast of the river and to have stretched to the Schlei, a narrow inlet of the Baltic Sea. Though it is not mentioned by the chronicler, Rorik may have taken control over Hedeby, a significant trade center of the area; Coupland considers that Hedeby would have been a "valuable prize" for Rorik, and takes Lothair's motivation to have been using the new port to increase trade between his realm of Lotharingia and Scandinavia. However, raids in Rorik's own territory are reported by the Annales Bertiniani: "Other Danes stormed the emporium called Dorestad and ravaged the whole island of Betuwe and other neighbouring districts." Coupland considers this an indication that Lothair's plans had backfired: left unguarded, Dorestad and its surrounding area were easy prey for other Scandinavian raiders. Even Utrecht was sacked that year. The Frankish chroniclers are silent on the subject, but Rorik was presumably recalled in haste by Lothair to defend Frisia. His conquests across the Danish borders were apparently short-lived; they are next mentioned as administered by Danish monarchs in 873.

Questions on loyalty

An 863 entry of the Annales Bertiniani reports: "In January Danes sailed up the Rhine towards Cologne, after sacking the emporium called Dorestad and also a fairly large villa at which the Frisians had taken refuge, and after slaying many Frisian traders and taking captive large numbers of people. Then they reached a certain island near the fort of Neuss. Lothar came up and attacked them with his men along one bank of the Rhine and the Saxons along the other and they encamped there until about the beginning of April. The Danes therefore followed the advice of Hrørek and withdrew by the same way they had come."

The entry makes clear that another group of Danish raiders had attacked Dorestad before traveling upstream to Xanten. However, a rumour soon circulated that Rorik had encouraged the raiders in their expedition. Coupland dismisses the idea that Rorik could have invited a raid on his own area. He suggests the rumour was based on his method of getting rid of the invaders: Rorik could have protected his own territory by convincing the Danes to travel further up the river, effectively letting them become other rulers' problem. Coupland notes it would not be a unique case in the 9th century; the Siege of Paris from 885 to 886 under Sigfred and Rollo had not ended with mutual annihilation either, as Charles the Fat had simply allowed Rollo to go and plunder Burgundy.

The rumour of Rorik's apparent disloyalty induced Hincmar, Archbishop of Reims, to write two letters, one to Hunger and one to Rorik. Bishop Hunger was instructed to impose a suitable penance on Rorik if the rumour was found to be true. Hincmar also told Rorik not to shelter Baldwin I of Flanders, who had eloped with the king's daughter Judith. From these letters it becomes clear that Rorik had recently converted to Christianity and been baptized. Flodoard summarizes the content of the two letters, the first: "To Bishop Hunger about the excommunication of Baldwin, who stole the widowed Judith, the daughter of the king, to become his wife, whereupon he was excommunicated by the bishop. He also admonishes Hunger, to persuade Hrørek the Norseman, who recently was converted to the Christian faith, not to receive or protect Baldwin.
And also, if other Norsemen with his consent, as has been told, should have raided the kingdom after his conversion, he should be corrected with a proper punishment." The other: "To Hrørek the Norseman, who was converted to the Christian faith, so that he always might benefit [to do] the will of God and exercise his orders. As he had heard from many to do so, that nobody should persuade him acting against the Christians with advice or aid to benefit the heathens. Else it would not have been in his advantage that he had received the Christian baptism, as he himself or through others should have planned perverse or hostile affairs, and so on. As follows, it was made clear to him in an episcopal way how much danger was hidden in such a machination. He was also admonished not to receive Baldwin, who was excommunicated by the spirit of God, for which reason the holy canon was drawn up by means of episcopal authority, because he had stolen the daughter of the king to become his wife. And he should not be allowed consolation nor refuge on his part whatsoever, so that he and his men should not get involved in his sins and excommunication and get doomed themselves. But he should take care to present himself in a way that he could benefit from the prayers of the saints."

Coupland finds the contents of the letters particularly revealing. Rorik had apparently been granted control over Dorestad twice, well before his conversion to Christianity in the early 860s. That Hincmar and Hunger had to convince Rorik not to give refuge to a declared enemy of Charles the Bald would mean Rorik enjoyed a "measure of political independence" from the various courts of the Carolingian dynasty at the time. Coupland notes that his contemporary Sedulius Scottus calls Rorik a king (Latin: rex), though the reference has alternatively been interpreted as referring to another contemporary ruler, Rhodri the Great of the Kingdom of Gwynedd. A hagiography of Adalbert of Egmond, written in the late 10th century, mentions a miracle of the saint in the time of "Roric the barbarian king" (Latin: Roricus barbarorum rex).

Later rule

In 867 there was a local revolt by the Cokingi, and Rorik was driven out of Frisia. The Annales Bertiniani report that Lothair II "summoned up the host throughout his realm to the defense of the fatherland, as he explained, against the Norsemen, for he expected, that Hrørek, whom the local people, the new name for them is Cokings, had driven out of Frisia, would return bringing some Danes to help him." Coupland notes that the identity of the Cokingi is uncertain. Also uncertain is the nature of this loss of power: Rorik could have lost control of only part of his realm, or could have resumed control rather quickly, because he is next mentioned in 870, still in Frisia.

On 8 August 869, Lothair II died. Lotharingia was claimed by his uncles, Louis the German and Charles the Bald. In 870, the two came to an agreement with the Treaty of Meerssen, which divided Lotharingia among them. The Annales Bertiniani report that Charles the Bald "went to the palace of Nijmegen to hold discussions with the Norseman Hrørek, whom he bound to himself by a treaty." Coupland considers the talks to have been between a ruler and a "leading local figure" of a newly annexed area: Charles secured his loyalty and recognition of his sovereignty, and Rorik kept control of his region. This was the same type of agreement Lothair I and Lothair II had had with him.
Charles and Rorik seem to have restarted negotiations in 872, according to two separate entries of the Annales Bertiniani: "On 20 January he [Charles the Bald] left Compendio and went to the monastery of [name missing in surviving manuscripts] to hold talks with the Norsemen Hrørek and Hróðulfr." ... "In October he [Charles the Bald] came by boat down the Meuse to Maastricht and held talks with the Norsemen Hrørek and Hróðulfr who had come up the river to meet him. He gave a gracious reception to Hrørek who had proved loyal to him, but Hróðulfr he dismissed empty-handed, because he had been plotting acts of treachery and pitching his demands too high. Charles prepared his faithful men for defense against treacherous attacks of Hróðulfr. Then he rode back by way of Attigny to St. Medard's Abbey, where he [Charles] spent Christmas."

The "Hróðulfr" of the text was Rudolf Haraldsson, a presumed nephew of Rorik. The Annales Xantenses mention him as "nepos" of Rorik, which typically means "nephew"; however, as in the term "cardinal-nephew", it can also mean "relative" without specifying the relation. Coupland suggests the monastery mentioned was Moustier-sur-Sambre in the modern Namur province of Belgium, close to the former borders of Lotharingia. The reason and nature of these negotiations are obscure.

In 873, Rorik swore allegiance to Louis, and that is the last that is heard of him. The Annales Xantenses report: "Likewise came to him [Louis] Hrørek, the gall of Christianity; nevertheless many hostages were put back in the ships, and he became subject of the king and was bound by an oath to keep a firm loyalty." Coupland notes that Rorik held lands on both sides of the then border between the realms of Charles and Louis, which would mean he owed loyalty to both of them, leaving him in an "unenviable position".

Death

Rorik died before 882, when his lands were given to the sea-king Godfried. According to the Annales Bertiniani: "Charles, who had the title of emperor, marched against the Norsemen with a large army and advanced right up to their fortification. Once he got there, however, his courage failed him. Through the intervention of certain men, he managed to reach an agreement with Gøtrik and his men on the following terms: namely that Gøtrik would be baptized, and would then receive Frisia and the other regions that Hrørek had held."

Dorestad was in economic decline throughout his reign, with merchants migrating to cities less exposed to the constant fighting, such as Deventer and Tiel, both of which were developing into "merchant towns" at the time. Coupland considers Rorik "the most powerful and influential of all the Danes drawn into the Carolingian milieu" of the 9th century. He notes how four Carolingian monarchs (Lothair I, Lothair II, Charles the Bald and Louis the German) accepted his presence in Frisia and his continued service as their vassal. Little criticism of him was recorded in the Frankish chronicles of his time; even Hincmar did not outright accuse him and expected him to accept penance like a good Christian, which indicates the Franks had ceased thinking of him as a foreign element in their realm and regarded Rorik as one of their own. The historian also notes that there are only two recorded raids on his area in twenty-three known years of rule, a record of his effectiveness in defense in an era of turbulence.

Rorik and Rurik

Numerous scholars have identified Rorik with Rurik, the founder of the Russian royal dynasty.
The suggestion is based on the disappearance of Rorik from the Frankish chronicles during the 860s, which is consistent with the appearance of Rurik in Novgorod in 862, though harder to reconcile with Rurik remaining in power there until 879. The first identification to this effect was made by Hermann Hollmann in 1816, who stressed the importance of the locality of Rustringen, in Lower Saxony, as the possible origin of Rurik. In 1836, Friedrich Kruse also supported this view. The hypothesis was revived strongly by N. T. Belyaev in 1929. Such an identification is not conclusive and does not appear to have the support of the majority of scholars, yet a number of prominent Russian academics, such as Boris Rybakov, Dmitry Machinsky and Igor Dubov, have supported it to some extent.

See also: Scylding (dynasty); Rurik Dynasty; Shum Gora; Rikiwulf; Godfrid, Duke of Frisia

References: Encyclopedia: Grote Winkler Prins; Website about the Vikings in the Netherlands

External links: Chapter of "Carolingian Coinage and the Vikings" which covers the life of Rorik
Q: How to read from HBase using Spark

The code below reads from HBase, converts the rows to a JSON structure, and then converts that to a SchemaRDD. The problem is that I am using a List to store the JSON strings and then passing it to javaRDD; for data of about 100 GB, the master will be loaded with the data in memory. What is the right way to load the data from HBase, perform the manipulation, and convert it to a JavaRDD?

package hbase_reader;

import java.io.IOException;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.rdd.RDD;
import org.apache.spark.sql.api.java.JavaSQLContext;
import org.apache.spark.sql.api.java.JavaSchemaRDD;
import org.apache.commons.cli.ParseException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;
import org.apache.spark.SparkConf;

import scala.Function1;
import scala.Tuple2;
import scala.runtime.AbstractFunction1;

import com.google.common.collect.Lists;

public class hbase_reader {

    public static void main(String[] args) throws IOException, ParseException {

        List<String> jars = Lists.newArrayList("");

        SparkConf spconf = new SparkConf();
        spconf.setMaster("local[2]");
        spconf.setAppName("HBase");
        //spconf.setSparkHome("/opt/human/opt/spark-0.9.0-hdp1");
        spconf.setJars(jars.toArray(new String[jars.size()]));
        JavaSparkContext sc = new JavaSparkContext(spconf);
        //spconf.set("spark.executor.memory", "1g");

        JavaSQLContext jsql = new JavaSQLContext(sc);

        HBaseConfiguration conf = new HBaseConfiguration();
        String tableName = "HBase.CounData1_Raw_Min1";
        HTable table = new HTable(conf, tableName);
        try {
            ResultScanner scanner = table.getScanner(new Scan());
            List<String> jsonList = new ArrayList<String>();

            String json = null;

            for (Result rowResult : scanner) {
                json = "";
                String rowKey = Bytes.toString(rowResult.getRow());
                for (byte[] s1 : rowResult.getMap().keySet()) {
                    String s1_str = Bytes.toString(s1);

                    String jsonSame = "";
                    for (byte[] s2 : rowResult.getMap().get(s1).keySet()) {
                        String s2_str = Bytes.toString(s2);
                        for (long s3 : rowResult.getMap().get(s1).get(s2).keySet()) {
                            String s3_str = new String(rowResult.getMap().get(s1).get(s2).get(s3));
                            jsonSame += "\"" + s2_str + "\":" + s3_str + ",";
                        }
                    }
                    jsonSame = jsonSame.substring(0, jsonSame.length() - 1);
                    json += "\"" + s1_str + "\"" + ":{" + jsonSame + "}" + ",";
                }
                json = json.substring(0, json.length() - 1);
                json = "{\"RowKey\":\"" + rowKey + "\"," + json + "}";
                jsonList.add(json);
            }

            JavaRDD<String> jsonRDD = sc.parallelize(jsonList);
            JavaSchemaRDD schemaRDD = jsql.jsonRDD(jsonRDD);
            System.out.println(schemaRDD.take(2));
        } finally {
            table.close();
        }
    }
}

A: A basic example to read HBase data using Spark (Scala); you can also write this in Java:

import org.apache.hadoop.hbase.client.{HBaseAdmin, Result}
import org.apache.hadoop.hbase.{HBaseConfiguration, HTableDescriptor}
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.spark._

object HBaseRead {
  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setAppName("HBaseRead").setMaster("local[2]")
    val sc = new SparkContext(sparkConf)
    val conf = HBaseConfiguration.create()
    val tableName = "table1"

    System.setProperty("user.name", "hdfs")
    System.setProperty("HADOOP_USER_NAME", "hdfs")
    conf.set("hbase.master", "localhost:60000")
    conf.setInt("timeout", 120000)
    conf.set("hbase.zookeeper.quorum", "localhost")
    conf.set("zookeeper.znode.parent", "/hbase-unsecure")
    conf.set(TableInputFormat.INPUT_TABLE, tableName)

    val admin = new HBaseAdmin(conf)
    if (!admin.isTableAvailable(tableName)) {
      val tableDesc = new HTableDescriptor(tableName)
      admin.createTable(tableDesc)
    }

    val hBaseRDD = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat], classOf[ImmutableBytesWritable], classOf[Result])
    println("Number of Records found : " + hBaseRDD.count())
    sc.stop()
  }
}

UPDATED - 2016

As of Spark 1.0.x+, you can now use the Spark-HBase Connector as well. Maven dependency to include:

<dependency>
  <groupId>it.nerdammer.bigdata</groupId>
  <artifactId>spark-hbase-connector_2.10</artifactId>
  <version>1.0.3</version> <!-- Version can be changed as per your Spark version; I am using Spark 1.6.x -->
</dependency>

And find below a sample code for the same:

import org.apache.spark._
import it.nerdammer.spark.hbase._

object HBaseRead extends App {
  val sparkConf = new SparkConf().setAppName("Spark-HBase").setMaster("local[4]")
  sparkConf.set("spark.hbase.host", "<YourHostnameOnly>") // e.g. 192.168.1.1 or localhost or your hostname
  val sc = new SparkContext(sparkConf)

  // For example, if you have an HBase table 'Document' with column family 'SMPL' and qualifiers 'DocID, Title', then:
  val docRdd = sc.hbaseTable[(Option[String], Option[String])]("Document")
    .select("DocID", "Title")
    .inColumnFamily("SMPL")

  println("Number of Records found : " + docRdd.count())
}

UPDATED - 2017

As of Spark 1.6.x+, you can now use the SHC Connector as well (for Hortonworks or HDP users). Maven dependency to include:

<dependency>
  <groupId>com.hortonworks</groupId>
  <artifactId>shc</artifactId>
  <version>1.0.0-2.0-s_2.11</version> <!-- Version depends on the Spark version and is supported up to Spark 2.x -->
</dependency>

The main advantage of using this connector is that it has flexibility in the schema definition and doesn't need hardcoded params as in nerdammer/spark-hbase-connector. Also remember that it supports Spark 2.x, so this connector is pretty flexible and provides end-to-end support in issues and PRs. Find the repository below for the latest readme and samples: Hortonworks Spark HBase Connector. You can also convert these RDDs to DataFrames and run SQL over them, or you can map these Datasets or DataFrames to user-defined Java POJOs or case classes. It works brilliantly. Please comment below if you need anything else.

A: I prefer to read from HBase and do the JSON manipulation all in Spark. Spark provides the JavaSparkContext.newAPIHadoopRDD function to read data from Hadoop storage, including HBase. You will have to provide the HBase configuration, table name, and scan in the configuration parameter, along with the table input format and its key and value classes. You can use the table input format class and its job parameters to provide the table name and scan configuration. Example:

conf.set(TableInputFormat.INPUT_TABLE, "tablename");
JavaPairRDD<ImmutableBytesWritable, Result> data =
    jsc.newAPIHadoopRDD(conf, TableInputFormat.class, ImmutableBytesWritable.class, Result.class);

Then you can do the JSON manipulation in Spark. Since Spark can do recalculation when the memory is full, it will only load the data needed for the recalculation part (cmiiw), so you don't have to worry about the data size.

A: Just to add a comment on how to add a scan: TableInputFormat has the following attributes:

SCAN_ROW_START
SCAN_ROW_STOP

conf.set(TableInputFormat.SCAN_ROW_START, "startrowkey");
conf.set(TableInputFormat.SCAN_ROW_STOP, "stoprowkey");
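Tying the answers back to the original question, here is a minimal Java sketch of the fully distributed version, written against the same Spark 1.x / old HBase API assumed by the question. The class name HBaseJsonReader and the row-key bounds are illustrative only, and the question's per-column-family JSON loop is elided inside the map function.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import scala.Tuple2;

public class HBaseJsonReader {
    // Distributed variant of the question's approach: the scan runs on the
    // executors via TableInputFormat, and the JSON strings are built inside
    // a map(), so nothing is ever collected on the driver.
    public static JavaRDD<String> readAsJson(JavaSparkContext jsc) {
        Configuration conf = HBaseConfiguration.create();
        conf.set(TableInputFormat.INPUT_TABLE, "HBase.CounData1_Raw_Min1"); // table from the question
        conf.set(TableInputFormat.SCAN_ROW_START, "startrowkey"); // optional row-key range
        conf.set(TableInputFormat.SCAN_ROW_STOP, "stoprowkey");

        JavaPairRDD<ImmutableBytesWritable, Result> hbaseRdd = jsc.newAPIHadoopRDD(
            conf, TableInputFormat.class, ImmutableBytesWritable.class, Result.class);

        return hbaseRdd.map(new Function<Tuple2<ImmutableBytesWritable, Result>, String>() {
            public String call(Tuple2<ImmutableBytesWritable, Result> row) {
                // The question's per-Result column-family loop belongs here;
                // only the row key is emitted in this sketch.
                return "{\"RowKey\":\"" + Bytes.toString(row._2().getRow()) + "\"}";
            }
        });
    }
}

The resulting JavaRDD<String> can then be fed straight into jsql.jsonRDD(...) exactly as in the question, so the 100 GB never has to fit on the master.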
Tag: movingtoaustin

It's no secret Texans love their barbecue. It's also a verifiable truth that H-E-B is one of the most beloved grocery stores in the state, and maybe even America. Put the two together, and you've got a winning formula. RICARDO B. BRAZZIELL/AMERICAN-STATESMAN

That's right, Texas. H-E-B is about to introduce drive-thru barbecue stands at certain stores starting in August, the San Antonio Express-News reports. Customers will be able to enjoy meals from True Texas BBQ, the grocery chain's barbecue brand. The restaurant will also serve breakfast tacos, because of course it will. "Even if families don't need to necessarily do a full shop, the True Texas BBQ will be a spot where families can go and dine together and enjoy what is arguably some of the best barbecue in Texas," H-E-B spokesperson Dya Campos told the Express-News Wednesday. Sadly for Austinites, it looks like we're still stuck waiting in line at Franklin. So far, the only store to feature the True Texas BBQ restaurant will be in San Antonio, as part of a new 118,000-square-foot H-E-B in the southwest corner of Loop 1604 and Bulverde Road.

Sure, the Uvalde-born Matthew McConaughey is a Texan's Texan with an obvious affinity for the Lone Star State … EXHIBIT A: Yes, that's Matthew McConaughey back there cheering on a Longhorn touchdown during an October 1999 game. Photo by Sung Park / American-Statesman … but the Hollywood superstar could live anywhere he wants — and afford to bring plenty of Texas with him — so why does he live in Austin? The answer is simple: family. In an interview with ABC News film and TV critic Peter Travers, McConaughey said "my mother is there, the rest of my family is there, part of the reason for going back there was having kids." McConaughey lives here in Austin with his wife, Camila Alves, and their three children. His mom, Kay, lives in the Sun City retirement community near Georgetown. Matthew McConaughey and Camila Alves attend a party at the Highball before the Austin premiere of McConaughey's movie "Gold" on Jan. 12, 2017. Contributed by Rick Kern. Last week, McConaughey appeared on ABC's "Good Morning America" to promote his new film, "Gold" — wearing, fittingly, a shirt emblazoned with Texas yellow roses …

Personally, I'd rather take a whomping with a stout stick than spend much time looking back on 2016. But if you're up for it, I'll guide you through some of the Texas stuff we learned and loved in the last year … If you've ever woken up hungover in a tent in Terlingua, we talked about how that probably happened. When you crest that rise on that Farm-to-Market road at 75 mph and there's a buzzard congregation gathered 'round the remains of something that was easier to recognize before it wandered into traffic … what are you going to do? What are the buzzards going to do?

Today, the Austin360 cover story was a (long) list of things we love about Austin. We know there are millions to include, but we were limited by space so we stuck to 175. Did we miss yours? Let us know in the comments here or on the full story. Want a visual aid? We built a gallery of the 170 picks that could be depicted through photos. Here were the top picks from each staff member that participated:

Arianna Auber, beer, wine and spirits writer: Sipping on any one of the 24 thoughtfully curated craft beers on tap at Hi Hat Public House. This little eastside bar, with always friendly service and a menu of gourmet comfort food, helped develop my love of beer and discover the welcoming community surrounding it here.
Try Hi Hat on a Tuesday, when you can get two tacos and a pint for $10. (hihatpublichouse.com)

Michael Barnes, people, places, culture and history writer: Walking anywhere in Austin. Doesn't matter where. Mostly, however, in the central city, where, thanks to the Great Streets program, pedestrians are safe, shaded, comfortable and happy. (austintexas.gov/page/great-streets)

Peter Blackstock, music writer: Weekly residencies at the Continental Club and Continental Gallery. We tend to take them for granted, but faraway fans of established artists such as James McMurtry, Alejandro Escovedo, Dale Watson and Jon Dee Graham are rightly amazed to learn that Austinites can hear them play most every week at the anchor of SoCo. (continentalclub.com)

Addie Broyles, food writer: With more than a dozen farmers markets taking place on just about every day of the week, it's easy to find yourself sampling some of the most interesting locally produced food products Austin has to offer, from kimchi and kombucha to kolaches from a food truck and some of the best tamales in Central Texas.

Sharon Chapman, entertainment editor: Yappy hours, off-leash parks, day cares, parades and more rescue groups than you can name: dog culture is alive and thriving in our pet-friendly city (this year even saw the first-ever Austin Pittie Limits). My two wishes for my fellow Austin dog lovers: everyone obey leash laws, and everyone pick up after their beloved four-legged pals.

Nancy Flores, culture reporter: Standing on the top step of the St. Edward's University Main Building, which is perched on a hill with panoramic views of the downtown Austin skyline. When I first moved to Austin from the small town of Eagle Pass to attend St. Edward's, looking out at the impressive view meant a world of possibilities ahead. (stedwards.edu)

Omar L. Gallaga, technology culture writer: The incredible collection of cabinets and pinball machines at Pinballz Arcade makes this North Austin institution a great place for a nerd party. With a castle-themed Pinballz Kingdom opening in Buda, South Central Texas is getting arcade love as well. (pinballzarcade.com)

Joe Gross, culture writer: The fact that if you are a geek of any conceivable stripe, boy howdy, is this the town for you. Let's start with comics. Our very best comics shop, Austin Books and Comics, opens at 9 a.m. on new-comics Wednesdays. Pick up your titles, talk shop, then head to work or class. (austinbooks.com)

Pamela LeBlanc, fitness and travel writer: Water skiing beneath the Pennybacker Bridge on Lake Austin as the sun comes up. (During the week, before work, when the water is glass.)

Melissa Martinez, online content producer and entertainment blogger: Hanging out in the grass of the Capitol lawn enjoying a picnic, playing games and rolling down the hills. (tspb.state.tx.us)

Matthew Odam, restaurant and travel writer: He may not have been born here or started his career here, but Willie Nelson put Austin on the map musically. He is the godfather of Austin, embodying the city's spirit. His natural ease and Zen nature beautifully represent that to which many Austinites aspire. Some of my earliest memories are of hearing his music and going to his fun run. (willienelson.com)

Dale Roe, lifestyle writer: Watching a slow-moving Round Rock Express baseball game at the Dell Diamond while a cool breeze wafts through the third-base-side seats offers a welcome respite from the pressures of daily life.
A cold beverage, delicious ballpark food, the crack of a bat and the roar of the crowd can make the rest of the world disappear for a few precious hours.

Courtney Sebesta, online news and entertainment editor: Sailing a 30-foot sailboat on Lake Travis and spending long summer days swimming, grilling and watching the sun set with friends.

Deborah Sengupta Stith, music writer: The city's ethos is built on a spirit of individuality. I've walked around town with a shaved head and a giant nose ring and no one batted an eye. After growing up in a small town, I love being in a place where I feel free to follow my oddball muse wherever it might take me.

Jeanne Claire van Ryzin, arts critic: Wandering in and around the 1916 Italianate mansion and lakeside gardens known as Laguna Gloria. (thecontemporaryaustin.org)

Nicole Villalpando, family editor: The Thinkery, which opened last December, is finally the children's museum Austin deserves. Get your tickets online in advance to guarantee entry, and don't forget to step across the street for one of the coolest playgrounds in Austin. (thinkeryaustin.com)

Eric Webb, online content producer and culture blogger: Going where everybody knows your name at Cheer Up Charlie's. Calling the über-chill LGBT haven a "scene" would sound contrived. But the truth is that there's no better place to be young and breathing in Austin on any given night. Been sipping their kombucha cocktails since the bar was on East Sixth? Hankering for East Side King trailer grub and live outdoor music (or a drag show)? Ready to get repulsively sweaty to a DJ set of exclusively Beyoncé songs? The pink and blue neon sign beckons. (cheerupcharlies.com)
The present invention relates generally to the field of marking devices, and more particularly to a device capable of applying a marking material to a substrate by introducing the marking material into a high-velocity propellant stream.

Ink jet is currently a common printing technology. There are a variety of types of ink jet printing, including thermal ink jet (TIJ), piezo-electric ink jet, etc. In general, liquid ink droplets are ejected from an orifice located at one terminus of a channel. In a TIJ printer, for example, a droplet is ejected by the explosive formation of a vapor bubble within an ink-bearing channel. The vapor bubble is formed by means of a heater, in the form of a resistor, located on one surface of the channel.

We have identified several disadvantages with TIJ (and other ink jet) systems known in the art. For a 300 spot-per-inch (spi) TIJ system, the exit orifice from which an ink droplet is ejected is typically on the order of about 64 μm in width, with a channel-to-channel spacing (pitch) of about 84 μm (the 25,400 μm in an inch divided among 300 channels), and for a 600 dpi system a width of about 35 μm and a pitch of about 42 μm. A limit on the size of the exit orifice is imposed by the viscosity of the fluid ink used by these systems. It is possible to lower the viscosity of the ink by diluting it in increasing amounts of liquid (e.g., water) with an aim to reducing the exit orifice width. However, the increased liquid content of the ink results in increased wicking, paper wrinkle, and slower drying time of the ejected ink droplet, which negatively affects resolution, image quality (e.g., minimum spot size, inter-color mixing, spot shape), etc. The effect of this orifice width limitation is to limit the resolution of TIJ printing, for example to well below 900 spi, because spot size is a function of the width of the exit orifice, and resolution is a function of spot size.

Another disadvantage of known ink jet technologies is the difficulty of producing greyscale printing. That is, it is very difficult for an ink jet system to produce varying size spots on a printed substrate. If one lowers the propulsive force (heat in a TIJ system) so as to eject less ink in an attempt to produce a smaller dot, or likewise increases the propulsive force to eject more ink and thereby produce a larger dot, the trajectory of the ejected droplet is affected. This in turn renders precise dot placement difficult or impossible, and not only makes monochrome greyscale printing problematic, it makes multiple color greyscale ink jet printing impracticable. In addition, preferred greyscale printing is obtained not by varying the dot size, as is the case for TIJ, but by varying the dot density while keeping a constant dot size.

Still another disadvantage of common ink jet systems is the rate of marking obtained. Approximately 80% of the time required to print a spot is taken by waiting for the ink jet channel to refill with ink by capillary action. To a certain degree, a more dilute ink flows faster, but this raises the problems of wicking, substrate wrinkle, drying time, etc. discussed above.

One problem common to ejection printing systems is that the channels may become clogged. Systems such as TIJ which employ aqueous ink colorants are often sensitive to this problem, and routinely employ non-printing cycles for channel cleaning during operation. This is required since ink typically sits in an ejector waiting to be ejected during operation, and while sitting may begin to dry and lead to clogging.
Other technologies which may be relevant as background to the present invention include electrostatic grids, electrostatic ejection (so-called tone jet), acoustic ink printing, and certain aerosol and atomizing systems such as dye sublimation. The present invention is a novel system for applying a marking material to a substrate, directly or indirectly, which overcomes the disadvantages referred to above, as well as others discussed further herein. In particular, the present invention is a system of the type including a propellant which travels through a channel, and a marking material which is controllably (i.e., modifiable in use) introduced, or metered, into the channel such that energy from the propellant propels the marking material to the substrate. The propellant is usually a dry gas which may continuously flow through the channel while the marking apparatus is in an operative configuration (i.e., in a power-on or similar state ready to mark). The system is referred to as “ballistic aerosol marking” in the sense that marking is achieved by in essence launching a non-colloidal, solid or semi-solid particulate, or alternatively a liquid, marking material at a substrate. The shape of the channel may result in a collimated (or focused) flight of the propellant and marking material onto the substrate.

In our system, the propellant may be introduced at a propellant port into the channel to form a propellant stream. A marking material may then be introduced into the propellant stream from one or more marking material inlet ports. The propellant may enter the channel at a high velocity. Alternatively, the propellant may be introduced into the channel at a high pressure, and the channel may include a constriction (e.g., de Laval or similar converging/diverging type nozzle) for converting the high pressure of the propellant to high velocity. In such a case, the propellant is introduced at a port located at a proximal end of the channel (defined as the converging region), and the marking material ports are provided near the distal end of the channel (at or further down-stream of a region defined as the diverging region), allowing for introduction of marking material into the propellant stream.

In the case where multiple ports are provided, each port may provide for a different color (e.g., cyan, magenta, yellow, and black), pre-marking treatment material (such as a marking material adherent), post-marking treatment material (such as a substrate surface finish material, e.g., matte or gloss coating, etc.), marking material not otherwise visible to the unaided eye (e.g., magnetic particle-bearing material, ultra violet-fluorescent material, etc.) or other marking material to be applied to the substrate. The marking material is imparted with kinetic energy from the propellant stream, and ejected from the channel at an exit orifice located at the distal end of the channel in a direction toward a substrate. One or more such channels may be provided in a structure which, in one embodiment, is referred to herein as a print head. The width of the exit (or ejection) orifice of a channel is generally on the order of 250 μm or smaller, preferably in the range of 100 μm or smaller. Where more than one channel is provided, the pitch, or spacing from edge to edge (or center to center) between adjacent channels, may also be on the order of 250 μm or smaller, preferably in the range of 100 μm or smaller.
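For intuition on the pressure-to-velocity conversion the patent attributes to the de Laval constriction, the standard isentropic-nozzle relation can be sketched as follows. This is textbook gas dynamics, not a formula from the patent, and the numbers below are illustrative assumptions for air.

% Exit velocity of an ideal de Laval nozzle under isentropic expansion,
% where \gamma is the heat-capacity ratio, R the specific gas constant,
% T_0 the stagnation temperature, and p_e/p_0 the exit-to-chamber pressure ratio:
v_e = \sqrt{\frac{2\gamma}{\gamma - 1}\, R\, T_0 \left[ 1 - \left( \frac{p_e}{p_0} \right)^{(\gamma - 1)/\gamma} \right]}

For air ($\gamma \approx 1.4$, $R \approx 287\ \mathrm{J/(kg\,K)}$) expanding from a few atmospheres at room temperature down to ambient, $v_e$ works out to several hundred m/s (roughly 400 m/s for a 3:1 pressure ratio at $T_0 = 300\ \mathrm{K}$), which is consistent with the high-velocity stream the patent describes.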
Alternatively, the channels may be staggered, allowing reduced edge-to-edge spacing. The exit orifice and/or some or all of each channel may have a circular, semicircular, oval, square, rectangular, triangular or other cross sectional shape when viewed along the direction of flow of the propellant stream (the channel's longitudinal axis).

The material to be applied to the substrate may be transported to a port by one or more of a wide variety of ways, including simple gravity feed, hydrodynamic, electrostatic, or ultrasonic transport, etc. The material may be metered out of the port into the propellant stream also by one of a wide variety of ways, including control of the transport mechanism, or a separate system such as pressure balancing, electrostatics, acoustic energy, ink jet, etc.

The material to be applied to the substrate may be a solid or semi-solid particulate material such as a toner or variety of toners in different colors, a suspension of such a marking material in a carrier, a suspension of such a marking material in a carrier with a charge director, a phase change material, etc. One preferred embodiment employs a marking material which is particulate, solid or semi-solid, and dry or suspended in a liquid carrier. Such a marking material is referred to herein as a particulate marking material. This is to be distinguished from a liquid marking material, dissolved marking material, atomized marking material, or similar non-particulate material, which is generally referred to herein as a liquid marking material. However, the present invention is able to utilize such a liquid marking material in certain applications, as otherwise described herein.

In addition, the ability to use a wide variety of marking materials (e.g., not limited to aqueous marking material) allows the present invention to mark on a wide variety of substrates. For example, the present invention allows direct marking on non-porous substrates such as polymers, plastics, metals, glass, treated and finished surfaces, etc. The reduction in wicking and elimination of drying time also provides improved printing to porous substrates such as paper, textiles, ceramics, etc. In addition, the present invention may be configured for indirect marking, for example marking to an intermediate transfer roller or belt, marking to a viscous binder film and nip transfer system, etc.

The material to be deposited on a substrate may be subjected to post ejection modification, for example fusing or drying, overcoat, curing, etc. In the case of fusing, the kinetic energy of the material to be deposited may itself be sufficient to effectively either soften or melt (generically referred to herein as “melt”) the marking material upon impact with the substrate and fuse it to the substrate. The substrate may be heated to enhance this process. Pressure rollers may be used to cold-fuse the marking material to the substrate. In-flight phase change (solid-liquid-solid) may alternatively be employed. A heated wire in the particle path is one way to accomplish the initial phase change. Alternatively, propellant temperature may accomplish this result. In one embodiment, a laser may be employed to heat and melt the particulate material in-flight to accomplish the initial phase change. The melting and fusing may also be electrostatically assisted (i.e., retaining the particulate material in a desired position to allow ample time for melting and fusing into a final desired position).
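As a rough sanity check on the impact-fusing claim (my own back-of-envelope estimate, not figures from the patent), the kinetic energy available per unit mass of particle is:

% Kinetic energy per unit mass at impact velocity v, and the temperature rise
% if all of it were converted to heat in a toner-like polymer:
e_k = \tfrac{1}{2} v^2 \quad\Rightarrow\quad e_k\big|_{v = 300\ \mathrm{m/s}} = 45\ \mathrm{kJ/kg},
\qquad
\Delta T \approx \frac{e_k}{c_p} = \frac{45\ \mathrm{kJ/kg}}{1.5\ \mathrm{kJ/(kg\,K)}} = 30\ \mathrm{K}

assuming a particle decelerated from roughly the propellant velocity and a specific heat $c_p \approx 1.5\ \mathrm{kJ/(kg\,K)}$. A 30 K rise alone does not carry a typical toner from room temperature past its glass transition, which is consistent with the patent's remark that the substrate may be heated to enhance the fusing process.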
The type of particulate may also dictate the post ejection modification. For example, UV curable materials may be cured by application of UV radiation, either in flight or when located on the material-bearing substrate. Since propellant may continuously flow through a channel, channel clogging from the build-up of material is reduced or eliminated (the propellant effectively continuously cleans the channel). In addition, a closure may be provided which isolates the channels from the environment when the system is not in use. Alternatively, the print head and substrate support (e.g., platen) may be brought into physical contact to effect a closure of the channel. Initial and terminal cleaning cycles may be designed into operation of the printing system to optimize the cleaning of the channel(s). Waste material cleaned from the system may be deposited in a cleaning station. However, it is also possible to engage the closure against an orifice to redirect the propellant stream through the port and into the reservoir to thereby flush out the port. Thus, the present invention and its various embodiments provide numerous advantages discussed above, as well as additional advantages which will be described in further detail below.
/*---------------------------------------------------------------------------------------------
 * Copyright (c) Bentley Systems, Incorporated. All rights reserved.
 * See LICENSE.md in the project root for license terms and full copyright notice.
 *--------------------------------------------------------------------------------------------*/
import * as chai from "chai";
import { Config, Guid } from "@bentley/bentleyjs-core";
import { ContextType } from "@bentley/context-registry-client";
import {
  ChangeSetCreatedEvent, GetEventOperationType, GlobalEventSAS, GlobalEventSubscription, GlobalEventType,
  HardiModelDeleteEvent, HubIModel, IModelClient, IModelCreatedEvent, IModelHubGlobalEvent,
  NamedVersionCreatedEvent, SoftiModelDeleteEvent,
} from "@bentley/imodelhub-client";
import { AccessToken, AuthorizedClientRequestContext } from "@bentley/itwin-client";
import { TestUserCredentials } from "@bentley/oidc-signin-tool";
import { RequestType, ResponseBuilder, ScopeType } from "../ResponseBuilder";
import { TestConfig } from "../TestConfig";
import * as utils from "./TestUtils";

chai.should();

function mockGetGlobalEvent(subscriptionId: string, eventBody: object, eventType?: string, timeout?: number, responseCode?: number, delay?: number) {
  if (!TestConfig.enableMocks)
    return;

  const headers = eventType ? { "content-type": eventType! } : {};
  let query = subscriptionId + "/messages/head";
  if (timeout)
    query += `?timeout=${timeout}`;
  const requestPath = utils.createRequestUrl(ScopeType.Global, "", "Subscriptions", query);
  ResponseBuilder.mockResponse(utils.IModelHubUrlMock.getUrl(), RequestType.Delete, requestPath, eventBody, 1, {}, headers, responseCode, delay);
}

function mockPeekLockGlobalEvent(subscriptionId: string, eventBody: object, eventType?: string, timeout?: number, responseCode: number = 201, delay?: number) {
  if (!TestConfig.enableMocks)
    return;

  const headerLocationQuery = subscriptionId + "/messages/2/7da9cfd5-40d5-4bb1-8d64-ec5a52e1c547";
  const responseHeaderLocation = utils.IModelHubUrlMock.getUrl() + utils.createRequestUrl(ScopeType.Global, "", "Subscriptions", headerLocationQuery);
  const headers = eventType ? {
    "content-type": eventType!,
    "location": responseHeaderLocation,
  } : {};
  let query = subscriptionId + "/messages/head";
  if (timeout)
    query += `?timeout=${timeout}`;
  const requestPath = utils.createRequestUrl(ScopeType.Global, "", "Subscriptions", query);
  ResponseBuilder.mockResponse(utils.IModelHubUrlMock.getUrl(), RequestType.Post, requestPath, eventBody, 1, undefined, headers, responseCode, delay);
}

function mockDeleteLockedEvent(subscriptionId: string, responseCode: number = 200) {
  if (!TestConfig.enableMocks)
    return;

  const query = subscriptionId + "/messages/2/7da9cfd5-40d5-4bb1-8d64-ec5a52e1c547";
  const requestPath = utils.createRequestUrl(ScopeType.Global, "", "Subscriptions", query);
  ResponseBuilder.mockResponse(utils.IModelHubUrlMock.getUrl(), RequestType.Delete, requestPath, undefined, 1, undefined, undefined, responseCode);
}

function mockCreateGlobalEventsSubscription(subscriptionId: string, eventTypes: GlobalEventType[]) {
  if (!TestConfig.enableMocks)
    return;

  const requestPath = utils.createRequestUrl(ScopeType.Global, "", "GlobalEventSubscription");
  const requestResponse = ResponseBuilder.generatePostResponse<GlobalEventSubscription>(
    ResponseBuilder.generateObject<GlobalEventSubscription>(GlobalEventSubscription, new Map<string, any>([
      ["wsgId", Guid.createValue()],
      ["eventTypes", eventTypes],
      ["subscriptionId", subscriptionId],
    ])));
  const postBody = ResponseBuilder.generatePostBody<GlobalEventSubscription>(
    ResponseBuilder.generateObject<GlobalEventSubscription>(GlobalEventSubscription, new Map<string, any>([
      ["eventTypes", eventTypes],
      ["subscriptionId", subscriptionId],
    ])));
  ResponseBuilder.mockResponse(utils.IModelHubUrlMock.getUrl(), RequestType.Post, requestPath, requestResponse, 1, postBody);
}

function mockUpdateGlobalEventSubscription(wsgId: string, subscriptionId: string, eventTypes: GlobalEventType[]) {
  if (!TestConfig.enableMocks)
    return;

  const responseObject = ResponseBuilder.generateObject<GlobalEventSubscription>(GlobalEventSubscription, new Map<string, any>([
    ["wsgId", wsgId],
    ["eventTypes", eventTypes],
    ["subscriptionId", subscriptionId],
  ]));
  const requestPath = utils.createRequestUrl(ScopeType.Global, "", "GlobalEventSubscription", wsgId);
  const requestResponse = ResponseBuilder.generatePostResponse<GlobalEventSubscription>(responseObject);
  const postBody = ResponseBuilder.generatePostBody<GlobalEventSubscription>(responseObject);
  ResponseBuilder.mockResponse(utils.IModelHubUrlMock.getUrl(), RequestType.Post, requestPath, requestResponse, 1, postBody);
}

function mockDeleteGlobalEventsSubscription(wsgId: string) {
  if (!TestConfig.enableMocks)
    return;

  const requestPath = utils.createRequestUrl(ScopeType.Global, "", "GlobalEventSubscription", wsgId);
  ResponseBuilder.mockResponse(utils.IModelHubUrlMock.getUrl(), RequestType.Delete, requestPath);
}

function mockGetGlobalEventSASToken() {
  if (!TestConfig.enableMocks)
    return;

  const requestPath = utils.createRequestUrl(ScopeType.Global, "", "GlobalEventSAS");
  const responseObject = ResponseBuilder.generateObject<GlobalEventSAS>(GlobalEventSAS, new Map<string, any>([
    ["sasToken", "12345"],
    ["baseAddress", `${utils.IModelHubUrlMock.getUrl()}/sv1.1/Repositories/Global--Global/GlobalScope`]]));
  const requestResponse = ResponseBuilder.generatePostResponse<GlobalEventSAS>(responseObject);
  const postBody = ResponseBuilder.generatePostBody<HubIModel>(ResponseBuilder.generateObject<GlobalEventSAS>(GlobalEventSAS));
  ResponseBuilder.mockResponse(utils.IModelHubUrlMock.getUrl(), RequestType.Post, requestPath, requestResponse, 1, postBody);
}

describe("iModelHub GlobalEventHandler (#unit)", () => {
  let globalEventSubscription: GlobalEventSubscription;
  let globalEventSas: GlobalEventSAS;
  let projectId: string;
  const imodelName = "imodeljs-clients GlobalEvents test";
  const imodelHubClient: IModelClient = utils.getDefaultClient();
  let requestContext: AuthorizedClientRequestContext;
  let serviceAccountRequestContext: AuthorizedClientRequestContext;
  let serviceAccount1: TestUserCredentials;

  before(async () => {
    const accessToken: AccessToken = await utils.login();
    requestContext = new AuthorizedClientRequestContext(accessToken);
    projectId = await utils.getProjectId(requestContext);

    serviceAccount1 = {
      email: Config.App.getString("imjs_test_serviceAccount1_user_name"),
      password: Config.App.getString("imjs_test_serviceAccount1_user_password"),
    };
    const serviceAccountAccessToken = await utils.login(serviceAccount1);
    serviceAccountRequestContext = new AuthorizedClientRequestContext(serviceAccountAccessToken);

    await utils.deleteIModelByName(requestContext, projectId, imodelName);
  });

  after(async () => {
    if (!TestConfig.enableMocks)
      return;

    await utils.deleteIModelByName(requestContext, projectId, imodelName);

    if (!TestConfig.enableMocks) {
      utils.getRequestBehaviorOptionsHandler().resetDefaultBehaviorOptions();
      imodelHubClient.requestOptions.setCustomOptions(utils.getRequestBehaviorOptionsHandler().toCustomRequestOptions());
    }
  });

  afterEach(() => {
    ResponseBuilder.clearMocks();
  });

  it("should subscribe to Global Events", async () => {
    const eventTypesList: GlobalEventType[] = ["iModelCreatedEvent"];
    const id = Guid.createValue();
    mockCreateGlobalEventsSubscription(id, eventTypesList);

    globalEventSubscription = await imodelHubClient.globalEvents.subscriptions.create(serviceAccountRequestContext, id, eventTypesList);
    chai.assert(globalEventSubscription);
    chai.assert(globalEventSubscription.eventTypes);
    chai.expect(globalEventSubscription.eventTypes!).to.be.deep.equal(eventTypesList);
  });

  it("should retrieve Global Event SAS token", async () => {
    mockGetGlobalEventSASToken();
    globalEventSas = await imodelHubClient.globalEvents.getSASToken(serviceAccountRequestContext);
  });

  it("should receive Global Event iModelCreatedEvent", async () => {
    await utils.createIModel(requestContext, imodelName, projectId);

    const eventBody = `{"EventTopic":"iModelHubGlobalEvents","FromEventSubscriptionId":"${Guid.createValue()}","ToEventSubscriptionId":"","ProjectId":"${projectId}","ContextId":"${projectId}","ContextTypeId":${ContextType.Project},"iModelId":"${Guid.createValue()}"}`;
    mockGetGlobalEvent(globalEventSubscription.wsgId, JSON.parse(eventBody), "iModelCreatedEvent");

    const event = await imodelHubClient.globalEvents.getEvent(requestContext, globalEventSas.sasToken!, globalEventSas.baseAddress!, globalEventSubscription.wsgId);
    chai.expect(event).to.be.instanceof(IModelCreatedEvent);
    chai.assert(!!event!.iModelId);
    chai.expect(event!.contextId).to.be.eq(projectId);
    chai.expect(event!.contextTypeId).to.be.eq(ContextType.Project);
  });

  it("should update Global Event subscription", async () => {
    const newEventTypesList: GlobalEventType[] = ["iModelCreatedEvent", "SoftiModelDeleteEvent", "HardiModelDeleteEvent", "ChangeSetCreatedEvent", "NamedVersionCreatedEvent"];
    mockUpdateGlobalEventSubscription(globalEventSubscription.wsgId, globalEventSubscription.subscriptionId!, newEventTypesList);
    globalEventSubscription.eventTypes = newEventTypesList;

    globalEventSubscription = await imodelHubClient.globalEvents.subscriptions.update(serviceAccountRequestContext, globalEventSubscription);
    chai.assert(globalEventSubscription);
    chai.assert(globalEventSubscription.eventTypes);
    chai.expect(globalEventSubscription.eventTypes!).to.be.deep.equal(newEventTypesList);
  });

  it("should receive Global Event through listener", async () => {
    if (TestConfig.enableMocks) {
      mockGetGlobalEventSASToken();
      const requestResponse = JSON.parse(`{"EventTopic":"iModelHubGlobalEvents","FromEventSubscriptionId":"${Guid.createValue()}","ToEventSubscriptionId":"","ProjectId":"${Guid.createValue()}","iModelId":"${Guid.createValue()}"}`);
      mockGetGlobalEvent(globalEventSubscription.wsgId, requestResponse, "SoftiModelDeleteEvent", 60);
      mockGetGlobalEvent(globalEventSubscription.wsgId, {}, undefined, 60, 204, 2000);
    }

    let receivedEventsCount = 0;
    const deleteListener = imodelHubClient.globalEvents.createListener(requestContext, async () => {
      return utils.login(serviceAccount1);
    }, globalEventSubscription.wsgId, (receivedEvent: IModelHubGlobalEvent) => {
      if (receivedEvent instanceof SoftiModelDeleteEvent)
        receivedEventsCount++;
    });

    await utils.deleteIModelByName(requestContext, projectId, imodelName);

    let timeoutCounter = 0;
    for (; timeoutCounter < 100; ++timeoutCounter) {
      if (receivedEventsCount === 1)
        break;
      await new Promise((resolve) => setTimeout(resolve, TestConfig.enableMocks ? 1 : 100));
    }
    deleteListener();
    chai.expect(timeoutCounter).to.be.lessThan(100);
  });

  it("should receive Global Event with Peek-lock (#unit)", async () => {
    const eventBody = `{"EventTopic":"iModelHubGlobalEvents","FromEventSubscriptionId":"${Guid.createValue()}","ToEventSubscriptionId":"","ProjectId":"${projectId}","iModelId":"${Guid.createValue()}"}`;
    mockPeekLockGlobalEvent(globalEventSubscription.wsgId, JSON.parse(eventBody), "iModelCreatedEvent");
    const lockedEvent = await imodelHubClient.globalEvents.getEvent(requestContext, globalEventSas.sasToken!, globalEventSas.baseAddress!, globalEventSubscription.wsgId, undefined, GetEventOperationType.Peek);

    mockDeleteLockedEvent(globalEventSubscription.wsgId);
    const deleted = await lockedEvent!.delete(requestContext);
    chai.expect(deleted);
  });

  it("should receive Global Event SoftiModelDeleteEvent (#unit)", async () => {
    const eventBody = `{"EventTopic":"iModelHubGlobalEvents","FromEventSubscriptionId":"${Guid.createValue()}","ToEventSubscriptionId":"","ProjectId":"${Guid.createValue()}","iModelId":"${Guid.createValue()}"}`;
    mockGetGlobalEvent(globalEventSubscription.wsgId, JSON.parse(eventBody), "SoftiModelDeleteEvent");

    const event = await imodelHubClient.globalEvents.getEvent(requestContext, globalEventSas.sasToken!, globalEventSas.baseAddress!, globalEventSubscription.wsgId);
    chai.expect(event).to.be.instanceof(SoftiModelDeleteEvent);
    chai.assert(!!event!.iModelId);
  });

  it("should receive Global Event HardiModelDeleteEvent (#unit)", async () => {
    const eventBody = `{"EventTopic":"iModelHubGlobalEvents","FromEventSubscriptionId":"${Guid.createValue()}","ToEventSubscriptionId":"","ProjectId":"${Guid.createValue()}","iModelId":"${Guid.createValue()}"}`;
    mockGetGlobalEvent(globalEventSubscription.wsgId, JSON.parse(eventBody), "HardiModelDeleteEvent");

    const event = await imodelHubClient.globalEvents.getEvent(requestContext, globalEventSas.sasToken!, globalEventSas.baseAddress!, globalEventSubscription.wsgId);
    chai.expect(event).to.be.instanceof(HardiModelDeleteEvent);
    chai.assert(!!event!.iModelId);
  });

  it("should receive Global Event ChangeSetCreatedEvent (#unit)", async () => {
    const eventBody = `{"EventTopic":"iModelHubGlobalEvents","FromEventSubscriptionId":"${Guid.createValue()}","ToEventSubscriptionId":"","ProjectId":"${Guid.createValue()}","iModelId":"${Guid.createValue()}","BriefcaseId":2,"ChangeSetId":"369","ChangeSetIndex":"1"}`;
    mockGetGlobalEvent(globalEventSubscription.wsgId, JSON.parse(eventBody), "ChangeSetCreatedEvent");

    const event = await imodelHubClient.globalEvents.getEvent(requestContext, globalEventSas.sasToken!, globalEventSas.baseAddress!, globalEventSubscription.wsgId);
    chai.expect(event).to.be.instanceof(ChangeSetCreatedEvent);
    chai.assert(!!event!.iModelId);
  });

  it("should receive Global Event baseline NamedVersionCreatedEvent (#unit)", async () => {
    const eventBody = `{"EventTopic":"iModelHubGlobalEvents","FromEventSubscriptionId":"${Guid.createValue()}","ToEventSubscriptionId":"","ProjectId":"${Guid.createValue()}","iModelId":"${Guid.createValue()}","ChangeSetId":"","VersionId":"${Guid.createValue()}","VersionName":"357"}`;
    mockGetGlobalEvent(globalEventSubscription.wsgId, JSON.parse(eventBody), "NamedVersionCreatedEvent");

    const event = await imodelHubClient.globalEvents.getEvent(requestContext, globalEventSas.sasToken!, globalEventSas.baseAddress!, globalEventSubscription.wsgId);
    chai.expect(event).to.be.instanceof(NamedVersionCreatedEvent);
    chai.assert(!!event!.iModelId);
    const typedEvent = event as NamedVersionCreatedEvent;
    chai.assert(!!typedEvent);
    chai.assert(!!typedEvent!.versionId);
    chai.expect(typedEvent.changeSetId).to.be.eq("");
  });

  it("should receive Global Event NamedVersionCreatedEvent (#unit)", async () => {
    const eventBody = `{"EventTopic":"iModelHubGlobalEvents","FromEventSubscriptionId":"${Guid.createValue()}","ToEventSubscriptionId":"","ProjectId":"${Guid.createValue()}","iModelId":"${Guid.createValue()}","ChangeSetId":"369","VersionId":"${Guid.createValue()}","VersionName":"357"}`;
    mockGetGlobalEvent(globalEventSubscription.wsgId, JSON.parse(eventBody), "NamedVersionCreatedEvent");

    const event = await imodelHubClient.globalEvents.getEvent(requestContext, globalEventSas.sasToken!, globalEventSas.baseAddress!, globalEventSubscription.wsgId);
    chai.expect(event).to.be.instanceof(NamedVersionCreatedEvent);
    chai.assert(!!event!.iModelId);
    const typedEvent = event as NamedVersionCreatedEvent;
    chai.assert(!!typedEvent);
    chai.assert(!!typedEvent!.versionId);
  });

  it("should delete Global Event subscription by InstanceId", async () => {
    mockDeleteGlobalEventsSubscription(globalEventSubscription.wsgId);
    await imodelHubClient.globalEvents.subscriptions.delete(serviceAccountRequestContext, globalEventSubscription.wsgId);
  });

  it("should receive Global Event iModelCreatedEvent from Asset", async () => {
    const assetId = await utils.getAssetId(requestContext, undefined);
    await utils.createIModel(requestContext, imodelName, assetId);

    const eventBody = `{"EventTopic":"iModelHubGlobalEvents","FromEventSubscriptionId":"${Guid.createValue()}","ToEventSubscriptionId":"","ProjectId":"${assetId}","ContextId":"${assetId}","ContextTypeId":${ContextType.Asset},"iModelId":"${Guid.createValue()}"}`;
    mockGetGlobalEvent(globalEventSubscription.wsgId, JSON.parse(eventBody), "iModelCreatedEvent");

    const event = await imodelHubClient.globalEvents.getEvent(requestContext, globalEventSas.sasToken!, globalEventSas.baseAddress!, globalEventSubscription.wsgId);
    chai.expect(event).to.be.instanceof(IModelCreatedEvent);
    chai.assert(!!event!.iModelId);
    chai.expect(event!.contextId).to.be.eq(assetId);
    chai.expect(event!.contextTypeId).to.be.eq(ContextType.Asset);
  });
});
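The round trip the tests above exercise is: create a GlobalEventSubscription under a service-account context, fetch a short-lived GlobalEventSAS token, then poll the subscription with getEvent. The sketch below strings those same calls together outside the mocha harness. It only uses APIs that appear in the tests; the helper itself and the names `receiveOneGlobalEvent`, `serviceCtx`, and `userCtx` are hypothetical, and it assumes the same imports the test file already uses (IModelClient, AuthorizedClientRequestContext, IModelHubGlobalEvent, GlobalEventType, Guid).

// Illustrative sketch, not part of the test suite above: one full
// subscribe / poll / clean-up cycle against iModelHub global events.
// `serviceCtx` must belong to a service account (subscription management
// requires one, as in the tests); `userCtx` is an ordinary authorized
// context used for polling. Error handling is omitted for brevity.
async function receiveOneGlobalEvent(
  client: IModelClient,
  serviceCtx: AuthorizedClientRequestContext,
  userCtx: AuthorizedClientRequestContext,
): Promise<IModelHubGlobalEvent | undefined> {
  // 1. Register interest in a set of global event types.
  const eventTypes: GlobalEventType[] = ["iModelCreatedEvent", "ChangeSetCreatedEvent"];
  const subscription = await client.globalEvents.subscriptions.create(serviceCtx, Guid.createValue(), eventTypes);

  // 2. Obtain the SAS token and base address used to reach the event endpoint.
  const sas = await client.globalEvents.getSASToken(serviceCtx);

  // 3. Poll the subscription once; resolves with undefined when no event is queued.
  const event = await client.globalEvents.getEvent(userCtx, sas.sasToken!, sas.baseAddress!, subscription.wsgId);

  // 4. Remove the subscription once it is no longer needed.
  await client.globalEvents.subscriptions.delete(serviceCtx, subscription.wsgId);
  return event;
}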
/**
 * Copyright (c) Microsoft Corporation. All rights reserved.
 * Licensed under the MIT License. See License.txt in the project root for
 * license information.
 *
 * Code generated by Microsoft (R) AutoRest Code Generator.
 */

package com.microsoft.azure.cognitiveservices.vision.faceapi.implementation;

import com.microsoft.azure.cognitiveservices.vision.faceapi.models.CreatePersonGroupPersonsOptionalParameter;
import com.microsoft.azure.cognitiveservices.vision.faceapi.models.ListPersonGroupPersonsOptionalParameter;
import com.microsoft.azure.cognitiveservices.vision.faceapi.models.UpdatePersonGroupPersonsOptionalParameter;
import com.microsoft.azure.cognitiveservices.vision.faceapi.models.UpdateFaceOptionalParameter;
import com.microsoft.azure.cognitiveservices.vision.faceapi.models.AddPersonFaceFromUrlOptionalParameter;
import com.microsoft.azure.cognitiveservices.vision.faceapi.models.AddPersonFaceFromStreamOptionalParameter;
import retrofit2.Retrofit;
import com.microsoft.azure.cognitiveservices.vision.faceapi.PersonGroupPersons;
import com.google.common.base.Joiner;
import com.google.common.reflect.TypeToken;
import com.microsoft.azure.cognitiveservices.vision.faceapi.models.APIErrorException;
import com.microsoft.azure.cognitiveservices.vision.faceapi.models.ImageUrl;
import com.microsoft.azure.cognitiveservices.vision.faceapi.models.NameAndUserDataContract;
import com.microsoft.azure.cognitiveservices.vision.faceapi.models.PersistedFace;
import com.microsoft.azure.cognitiveservices.vision.faceapi.models.Person;
import com.microsoft.azure.cognitiveservices.vision.faceapi.models.UpdatePersonFaceRequest;
import com.microsoft.rest.CollectionFormat;
import com.microsoft.rest.ServiceCallback;
import com.microsoft.rest.ServiceFuture;
import com.microsoft.rest.ServiceResponse;
import com.microsoft.rest.Validator;
import java.io.IOException;
import java.util.List;
import java.util.UUID;
import okhttp3.MediaType;
import okhttp3.RequestBody;
import okhttp3.ResponseBody;
import retrofit2.http.Body;
import retrofit2.http.GET;
import retrofit2.http.Header;
import retrofit2.http.Headers;
import retrofit2.http.HTTP;
import retrofit2.http.PATCH;
import retrofit2.http.Path;
import retrofit2.http.POST;
import retrofit2.http.Query;
import retrofit2.Response;
import rx.functions.Func1;
import rx.Observable;

/**
 * An instance of this class provides access to all the operations defined
 * in PersonGroupPersons.
 */
public class PersonGroupPersonsImpl implements PersonGroupPersons {
    /** The Retrofit service to perform REST calls. */
    private PersonGroupPersonsService service;
    /** The service client containing this operation class. */
    private FaceAPIImpl client;

    /**
     * Initializes an instance of PersonGroupPersonsImpl.
     *
     * @param retrofit the Retrofit instance built from a Retrofit Builder.
     * @param client the instance of the service client containing this operation class.
     */
    public PersonGroupPersonsImpl(Retrofit retrofit, FaceAPIImpl client) {
        this.service = retrofit.create(PersonGroupPersonsService.class);
        this.client = client;
    }

    /**
     * The interface defining all the services for PersonGroupPersons to be
     * used by Retrofit to actually perform REST calls.
     */
    interface PersonGroupPersonsService {
        @Headers({ "Content-Type: application/json; charset=utf-8", "x-ms-logging-context: com.microsoft.azure.cognitiveservices.vision.faceapi.PersonGroupPersons create" })
        @POST("persongroups/{personGroupId}/persons")
        Observable<Response<ResponseBody>> create(@Path("personGroupId") String personGroupId, @Header("accept-language") String acceptLanguage, @Body NameAndUserDataContract bodyParameter, @Header("x-ms-parameterized-host") String parameterizedHost, @Header("User-Agent") String userAgent);

        @Headers({ "Content-Type: application/json; charset=utf-8", "x-ms-logging-context: com.microsoft.azure.cognitiveservices.vision.faceapi.PersonGroupPersons list" })
        @GET("persongroups/{personGroupId}/persons")
        Observable<Response<ResponseBody>> list(@Path("personGroupId") String personGroupId, @Query("start") String start, @Query("top") Integer top, @Header("accept-language") String acceptLanguage, @Header("x-ms-parameterized-host") String parameterizedHost, @Header("User-Agent") String userAgent);

        @Headers({ "Content-Type: application/json; charset=utf-8", "x-ms-logging-context: com.microsoft.azure.cognitiveservices.vision.faceapi.PersonGroupPersons delete" })
        @HTTP(path = "persongroups/{personGroupId}/persons/{personId}", method = "DELETE", hasBody = true)
        Observable<Response<ResponseBody>> delete(@Path("personGroupId") String personGroupId, @Path("personId") UUID personId, @Header("accept-language") String acceptLanguage, @Header("x-ms-parameterized-host") String parameterizedHost, @Header("User-Agent") String userAgent);

        @Headers({ "Content-Type: application/json; charset=utf-8", "x-ms-logging-context: com.microsoft.azure.cognitiveservices.vision.faceapi.PersonGroupPersons get" })
        @GET("persongroups/{personGroupId}/persons/{personId}")
        Observable<Response<ResponseBody>> get(@Path("personGroupId") String personGroupId, @Path("personId") UUID personId, @Header("accept-language") String acceptLanguage, @Header("x-ms-parameterized-host") String parameterizedHost, @Header("User-Agent") String userAgent);

        @Headers({ "Content-Type: application/json; charset=utf-8", "x-ms-logging-context: com.microsoft.azure.cognitiveservices.vision.faceapi.PersonGroupPersons update" })
        @PATCH("persongroups/{personGroupId}/persons/{personId}")
        Observable<Response<ResponseBody>> update(@Path("personGroupId") String personGroupId, @Path("personId") UUID personId, @Header("accept-language") String acceptLanguage, @Body NameAndUserDataContract bodyParameter, @Header("x-ms-parameterized-host") String parameterizedHost, @Header("User-Agent") String userAgent);

        @Headers({ "Content-Type: application/json; charset=utf-8", "x-ms-logging-context: com.microsoft.azure.cognitiveservices.vision.faceapi.PersonGroupPersons deleteFace" })
        @HTTP(path = "persongroups/{personGroupId}/persons/{personId}/persistedFaces/{persistedFaceId}", method = "DELETE", hasBody = true)
        Observable<Response<ResponseBody>> deleteFace(@Path("personGroupId") String personGroupId, @Path("personId") UUID personId, @Path("persistedFaceId") UUID persistedFaceId, @Header("accept-language") String acceptLanguage, @Header("x-ms-parameterized-host") String parameterizedHost, @Header("User-Agent") String userAgent);

        @Headers({ "Content-Type: application/json; charset=utf-8", "x-ms-logging-context: com.microsoft.azure.cognitiveservices.vision.faceapi.PersonGroupPersons getFace" })
        @GET("persongroups/{personGroupId}/persons/{personId}/persistedFaces/{persistedFaceId}")
        Observable<Response<ResponseBody>> getFace(@Path("personGroupId") String personGroupId, @Path("personId") UUID personId, @Path("persistedFaceId") UUID persistedFaceId, @Header("accept-language") String acceptLanguage, @Header("x-ms-parameterized-host") String parameterizedHost, @Header("User-Agent") String userAgent);

        @Headers({ "Content-Type: application/json; charset=utf-8", "x-ms-logging-context: com.microsoft.azure.cognitiveservices.vision.faceapi.PersonGroupPersons updateFace" })
        @PATCH("persongroups/{personGroupId}/persons/{personId}/persistedFaces/{persistedFaceId}")
        Observable<Response<ResponseBody>> updateFace(@Path("personGroupId") String personGroupId, @Path("personId") UUID personId, @Path("persistedFaceId") UUID persistedFaceId, @Header("accept-language") String acceptLanguage, @Body UpdatePersonFaceRequest bodyParameter, @Header("x-ms-parameterized-host") String parameterizedHost, @Header("User-Agent") String userAgent);

        @Headers({ "Content-Type: application/json; charset=utf-8", "x-ms-logging-context: com.microsoft.azure.cognitiveservices.vision.faceapi.PersonGroupPersons addPersonFaceFromUrl" })
        @POST("persongroups/{personGroupId}/persons/{personId}/persistedFaces")
        Observable<Response<ResponseBody>> addPersonFaceFromUrl(@Path("personGroupId") String personGroupId, @Path("personId") UUID personId, @Query("userData") String userData, @Query("targetFace") String targetFace, @Header("accept-language") String acceptLanguage, @Body ImageUrl imageUrl, @Header("x-ms-parameterized-host") String parameterizedHost, @Header("User-Agent") String userAgent);

        @Headers({ "Content-Type: application/octet-stream", "x-ms-logging-context: com.microsoft.azure.cognitiveservices.vision.faceapi.PersonGroupPersons addPersonFaceFromStream" })
        @POST("persongroups/{personGroupId}/persons/{personId}/persistedFaces")
        Observable<Response<ResponseBody>> addPersonFaceFromStream(@Path("personGroupId") String personGroupId, @Path("personId") UUID personId, @Query("userData") String userData, @Query("targetFace") String targetFace, @Body RequestBody image, @Header("accept-language") String acceptLanguage, @Header("x-ms-parameterized-host") String parameterizedHost, @Header("User-Agent") String userAgent);
    }

    /**
     * Create a new person in a specified person group.
     *
     * @param personGroupId Id referencing a particular person group.
     * @param createOptionalParameter the object representing the optional parameters to be set before calling this API
     * @throws IllegalArgumentException thrown if parameters fail the validation
     * @throws APIErrorException thrown if the request is rejected by server
     * @throws RuntimeException all other wrapped checked exceptions if the request fails to be sent
     * @return the Person object if successful.
     */
    public Person create(String personGroupId, CreatePersonGroupPersonsOptionalParameter createOptionalParameter) {
        return createWithServiceResponseAsync(personGroupId, createOptionalParameter).toBlocking().single().body();
    }

    /**
     * Create a new person in a specified person group.
     *
     * @param personGroupId Id referencing a particular person group.
     * @param createOptionalParameter the object representing the optional parameters to be set before calling this API
     * @param serviceCallback the async ServiceCallback to handle successful and failed responses.
* @throws IllegalArgumentException thrown if parameters fail the validation * @return the {@link ServiceFuture} object */ public ServiceFuture<Person> createAsync(String personGroupId, CreatePersonGroupPersonsOptionalParameter createOptionalParameter, final ServiceCallback<Person> serviceCallback) { return ServiceFuture.fromResponse(createWithServiceResponseAsync(personGroupId, createOptionalParameter), serviceCallback); } /** * Create a new person in a specified person group. * * @param personGroupId Id referencing a particular person group. * @param createOptionalParameter the object representing the optional parameters to be set before calling this API * @throws IllegalArgumentException thrown if parameters fail the validation * @return the observable to the Person object */ public Observable<Person> createAsync(String personGroupId, CreatePersonGroupPersonsOptionalParameter createOptionalParameter) { return createWithServiceResponseAsync(personGroupId, createOptionalParameter).map(new Func1<ServiceResponse<Person>, Person>() { @Override public Person call(ServiceResponse<Person> response) { return response.body(); } }); } /** * Create a new person in a specified person group. * * @param personGroupId Id referencing a particular person group. * @param createOptionalParameter the object representing the optional parameters to be set before calling this API * @throws IllegalArgumentException thrown if parameters fail the validation * @return the observable to the Person object */ public Observable<ServiceResponse<Person>> createWithServiceResponseAsync(String personGroupId, CreatePersonGroupPersonsOptionalParameter createOptionalParameter) { if (this.client.azureRegion() == null) { throw new IllegalArgumentException("Parameter this.client.azureRegion() is required and cannot be null."); } if (personGroupId == null) { throw new IllegalArgumentException("Parameter personGroupId is required and cannot be null."); } final String name = createOptionalParameter != null ? createOptionalParameter.name() : null; final String userData = createOptionalParameter != null ? createOptionalParameter.userData() : null; return createWithServiceResponseAsync(personGroupId, name, userData); } /** * Create a new person in a specified person group. * * @param personGroupId Id referencing a particular person group. * @param name User defined name, maximum length is 128. * @param userData User specified data. Length should not exceed 16KB. 
* @throws IllegalArgumentException thrown if parameters fail the validation * @return the observable to the Person object */ public Observable<ServiceResponse<Person>> createWithServiceResponseAsync(String personGroupId, String name, String userData) { if (this.client.azureRegion() == null) { throw new IllegalArgumentException("Parameter this.client.azureRegion() is required and cannot be null."); } if (personGroupId == null) { throw new IllegalArgumentException("Parameter personGroupId is required and cannot be null."); } NameAndUserDataContract bodyParameter = new NameAndUserDataContract(); bodyParameter.withName(name); bodyParameter.withUserData(userData); String parameterizedHost = Joiner.on(", ").join("{AzureRegion}", this.client.azureRegion()); return service.create(personGroupId, this.client.acceptLanguage(), bodyParameter, parameterizedHost, this.client.userAgent()) .flatMap(new Func1<Response<ResponseBody>, Observable<ServiceResponse<Person>>>() { @Override public Observable<ServiceResponse<Person>> call(Response<ResponseBody> response) { try { ServiceResponse<Person> clientResponse = createDelegate(response); return Observable.just(clientResponse); } catch (Throwable t) { return Observable.error(t); } } }); } private ServiceResponse<Person> createDelegate(Response<ResponseBody> response) throws APIErrorException, IOException, IllegalArgumentException { return this.client.restClient().responseBuilderFactory().<Person, APIErrorException>newInstance(this.client.serializerAdapter()) .register(200, new TypeToken<Person>() { }.getType()) .registerError(APIErrorException.class) .build(response); } @Override public PersonGroupPersonsCreateParameters create() { return new PersonGroupPersonsCreateParameters(this); } /** * Internal class implementing PersonGroupPersonsCreateDefinition. */ class PersonGroupPersonsCreateParameters implements PersonGroupPersonsCreateDefinition { private PersonGroupPersonsImpl parent; private String personGroupId; private String name; private String userData; /** * Constructor. * @param parent the parent object. */ PersonGroupPersonsCreateParameters(PersonGroupPersonsImpl parent) { this.parent = parent; } @Override public PersonGroupPersonsCreateParameters withPersonGroupId(String personGroupId) { this.personGroupId = personGroupId; return this; } @Override public PersonGroupPersonsCreateParameters withName(String name) { this.name = name; return this; } @Override public PersonGroupPersonsCreateParameters withUserData(String userData) { this.userData = userData; return this; } @Override public Person execute() { return createWithServiceResponseAsync(personGroupId, name, userData).toBlocking().single().body(); } @Override public Observable<Person> executeAsync() { return createWithServiceResponseAsync(personGroupId, name, userData).map(new Func1<ServiceResponse<Person>, Person>() { @Override public Person call(ServiceResponse<Person> response) { return response.body(); } }); } } /** * List all persons in a person group, and retrieve person information (including personId, name, userData and persistedFaceIds of registered faces of the person). * * @param personGroupId Id referencing a particular person group. 
* @param listOptionalParameter the object representing the optional parameters to be set before calling this API * @throws IllegalArgumentException thrown if parameters fail the validation * @throws APIErrorException thrown if the request is rejected by server * @throws RuntimeException all other wrapped checked exceptions if the request fails to be sent * @return the List&lt;Person&gt; object if successful. */ public List<Person> list(String personGroupId, ListPersonGroupPersonsOptionalParameter listOptionalParameter) { return listWithServiceResponseAsync(personGroupId, listOptionalParameter).toBlocking().single().body(); } /** * List all persons in a person group, and retrieve person information (including personId, name, userData and persistedFaceIds of registered faces of the person). * * @param personGroupId Id referencing a particular person group. * @param listOptionalParameter the object representing the optional parameters to be set before calling this API * @param serviceCallback the async ServiceCallback to handle successful and failed responses. * @throws IllegalArgumentException thrown if parameters fail the validation * @return the {@link ServiceFuture} object */ public ServiceFuture<List<Person>> listAsync(String personGroupId, ListPersonGroupPersonsOptionalParameter listOptionalParameter, final ServiceCallback<List<Person>> serviceCallback) { return ServiceFuture.fromResponse(listWithServiceResponseAsync(personGroupId, listOptionalParameter), serviceCallback); } /** * List all persons in a person group, and retrieve person information (including personId, name, userData and persistedFaceIds of registered faces of the person). * * @param personGroupId Id referencing a particular person group. * @param listOptionalParameter the object representing the optional parameters to be set before calling this API * @throws IllegalArgumentException thrown if parameters fail the validation * @return the observable to the List&lt;Person&gt; object */ public Observable<List<Person>> listAsync(String personGroupId, ListPersonGroupPersonsOptionalParameter listOptionalParameter) { return listWithServiceResponseAsync(personGroupId, listOptionalParameter).map(new Func1<ServiceResponse<List<Person>>, List<Person>>() { @Override public List<Person> call(ServiceResponse<List<Person>> response) { return response.body(); } }); } /** * List all persons in a person group, and retrieve person information (including personId, name, userData and persistedFaceIds of registered faces of the person). * * @param personGroupId Id referencing a particular person group. * @param listOptionalParameter the object representing the optional parameters to be set before calling this API * @throws IllegalArgumentException thrown if parameters fail the validation * @return the observable to the List&lt;Person&gt; object */ public Observable<ServiceResponse<List<Person>>> listWithServiceResponseAsync(String personGroupId, ListPersonGroupPersonsOptionalParameter listOptionalParameter) { if (this.client.azureRegion() == null) { throw new IllegalArgumentException("Parameter this.client.azureRegion() is required and cannot be null."); } if (personGroupId == null) { throw new IllegalArgumentException("Parameter personGroupId is required and cannot be null."); } final String start = listOptionalParameter != null ? listOptionalParameter.start() : null; final Integer top = listOptionalParameter != null ? 
listOptionalParameter.top() : null; return listWithServiceResponseAsync(personGroupId, start, top); } /** * List all persons in a person group, and retrieve person information (including personId, name, userData and persistedFaceIds of registered faces of the person). * * @param personGroupId Id referencing a particular person group. * @param start Starting person id to return (used to list a range of persons). * @param top Number of persons to return starting with the person id indicated by the 'start' parameter. * @throws IllegalArgumentException thrown if parameters fail the validation * @return the observable to the List&lt;Person&gt; object */ public Observable<ServiceResponse<List<Person>>> listWithServiceResponseAsync(String personGroupId, String start, Integer top) { if (this.client.azureRegion() == null) { throw new IllegalArgumentException("Parameter this.client.azureRegion() is required and cannot be null."); } if (personGroupId == null) { throw new IllegalArgumentException("Parameter personGroupId is required and cannot be null."); } String parameterizedHost = Joiner.on(", ").join("{AzureRegion}", this.client.azureRegion()); return service.list(personGroupId, start, top, this.client.acceptLanguage(), parameterizedHost, this.client.userAgent()) .flatMap(new Func1<Response<ResponseBody>, Observable<ServiceResponse<List<Person>>>>() { @Override public Observable<ServiceResponse<List<Person>>> call(Response<ResponseBody> response) { try { ServiceResponse<List<Person>> clientResponse = listDelegate(response); return Observable.just(clientResponse); } catch (Throwable t) { return Observable.error(t); } } }); } private ServiceResponse<List<Person>> listDelegate(Response<ResponseBody> response) throws APIErrorException, IOException, IllegalArgumentException { return this.client.restClient().responseBuilderFactory().<List<Person>, APIErrorException>newInstance(this.client.serializerAdapter()) .register(200, new TypeToken<List<Person>>() { }.getType()) .registerError(APIErrorException.class) .build(response); } @Override public PersonGroupPersonsListParameters list() { return new PersonGroupPersonsListParameters(this); } /** * Internal class implementing PersonGroupPersonsListDefinition. */ class PersonGroupPersonsListParameters implements PersonGroupPersonsListDefinition { private PersonGroupPersonsImpl parent; private String personGroupId; private String start; private Integer top; /** * Constructor. * @param parent the parent object. */ PersonGroupPersonsListParameters(PersonGroupPersonsImpl parent) { this.parent = parent; } @Override public PersonGroupPersonsListParameters withPersonGroupId(String personGroupId) { this.personGroupId = personGroupId; return this; } @Override public PersonGroupPersonsListParameters withStart(String start) { this.start = start; return this; } @Override public PersonGroupPersonsListParameters withTop(Integer top) { this.top = top; return this; } @Override public List<Person> execute() { return listWithServiceResponseAsync(personGroupId, start, top).toBlocking().single().body(); } @Override public Observable<List<Person>> executeAsync() { return listWithServiceResponseAsync(personGroupId, start, top).map(new Func1<ServiceResponse<List<Person>>, List<Person>>() { @Override public List<Person> call(ServiceResponse<List<Person>> response) { return response.body(); } }); } } /** * Delete an existing person from a person group. Persisted face images of the person will also be deleted. * * @param personGroupId Id referencing a particular person group. 
* @param personId Id referencing a particular person. * @throws IllegalArgumentException thrown if parameters fail the validation * @throws APIErrorException thrown if the request is rejected by server * @throws RuntimeException all other wrapped checked exceptions if the request fails to be sent */ public void delete(String personGroupId, UUID personId) { deleteWithServiceResponseAsync(personGroupId, personId).toBlocking().single().body(); } /** * Delete an existing person from a person group. Persisted face images of the person will also be deleted. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @param serviceCallback the async ServiceCallback to handle successful and failed responses. * @throws IllegalArgumentException thrown if parameters fail the validation * @return the {@link ServiceFuture} object */ public ServiceFuture<Void> deleteAsync(String personGroupId, UUID personId, final ServiceCallback<Void> serviceCallback) { return ServiceFuture.fromResponse(deleteWithServiceResponseAsync(personGroupId, personId), serviceCallback); } /** * Delete an existing person from a person group. Persisted face images of the person will also be deleted. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @throws IllegalArgumentException thrown if parameters fail the validation * @return the {@link ServiceResponse} object if successful. */ public Observable<Void> deleteAsync(String personGroupId, UUID personId) { return deleteWithServiceResponseAsync(personGroupId, personId).map(new Func1<ServiceResponse<Void>, Void>() { @Override public Void call(ServiceResponse<Void> response) { return response.body(); } }); } /** * Delete an existing person from a person group. Persisted face images of the person will also be deleted. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @throws IllegalArgumentException thrown if parameters fail the validation * @return the {@link ServiceResponse} object if successful. 
*/ public Observable<ServiceResponse<Void>> deleteWithServiceResponseAsync(String personGroupId, UUID personId) { if (this.client.azureRegion() == null) { throw new IllegalArgumentException("Parameter this.client.azureRegion() is required and cannot be null."); } if (personGroupId == null) { throw new IllegalArgumentException("Parameter personGroupId is required and cannot be null."); } if (personId == null) { throw new IllegalArgumentException("Parameter personId is required and cannot be null."); } String parameterizedHost = Joiner.on(", ").join("{AzureRegion}", this.client.azureRegion()); return service.delete(personGroupId, personId, this.client.acceptLanguage(), parameterizedHost, this.client.userAgent()) .flatMap(new Func1<Response<ResponseBody>, Observable<ServiceResponse<Void>>>() { @Override public Observable<ServiceResponse<Void>> call(Response<ResponseBody> response) { try { ServiceResponse<Void> clientResponse = deleteDelegate(response); return Observable.just(clientResponse); } catch (Throwable t) { return Observable.error(t); } } }); } private ServiceResponse<Void> deleteDelegate(Response<ResponseBody> response) throws APIErrorException, IOException, IllegalArgumentException { return this.client.restClient().responseBuilderFactory().<Void, APIErrorException>newInstance(this.client.serializerAdapter()) .register(200, new TypeToken<Void>() { }.getType()) .registerError(APIErrorException.class) .build(response); } /** * Retrieve a person's information, including registered persisted faces, name and userData. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @throws IllegalArgumentException thrown if parameters fail the validation * @throws APIErrorException thrown if the request is rejected by server * @throws RuntimeException all other wrapped checked exceptions if the request fails to be sent * @return the Person object if successful. */ public Person get(String personGroupId, UUID personId) { return getWithServiceResponseAsync(personGroupId, personId).toBlocking().single().body(); } /** * Retrieve a person's information, including registered persisted faces, name and userData. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @param serviceCallback the async ServiceCallback to handle successful and failed responses. * @throws IllegalArgumentException thrown if parameters fail the validation * @return the {@link ServiceFuture} object */ public ServiceFuture<Person> getAsync(String personGroupId, UUID personId, final ServiceCallback<Person> serviceCallback) { return ServiceFuture.fromResponse(getWithServiceResponseAsync(personGroupId, personId), serviceCallback); } /** * Retrieve a person's information, including registered persisted faces, name and userData. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @throws IllegalArgumentException thrown if parameters fail the validation * @return the observable to the Person object */ public Observable<Person> getAsync(String personGroupId, UUID personId) { return getWithServiceResponseAsync(personGroupId, personId).map(new Func1<ServiceResponse<Person>, Person>() { @Override public Person call(ServiceResponse<Person> response) { return response.body(); } }); } /** * Retrieve a person's information, including registered persisted faces, name and userData. 
* * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @throws IllegalArgumentException thrown if parameters fail the validation * @return the observable to the Person object */ public Observable<ServiceResponse<Person>> getWithServiceResponseAsync(String personGroupId, UUID personId) { if (this.client.azureRegion() == null) { throw new IllegalArgumentException("Parameter this.client.azureRegion() is required and cannot be null."); } if (personGroupId == null) { throw new IllegalArgumentException("Parameter personGroupId is required and cannot be null."); } if (personId == null) { throw new IllegalArgumentException("Parameter personId is required and cannot be null."); } String parameterizedHost = Joiner.on(", ").join("{AzureRegion}", this.client.azureRegion()); return service.get(personGroupId, personId, this.client.acceptLanguage(), parameterizedHost, this.client.userAgent()) .flatMap(new Func1<Response<ResponseBody>, Observable<ServiceResponse<Person>>>() { @Override public Observable<ServiceResponse<Person>> call(Response<ResponseBody> response) { try { ServiceResponse<Person> clientResponse = getDelegate(response); return Observable.just(clientResponse); } catch (Throwable t) { return Observable.error(t); } } }); } private ServiceResponse<Person> getDelegate(Response<ResponseBody> response) throws APIErrorException, IOException, IllegalArgumentException { return this.client.restClient().responseBuilderFactory().<Person, APIErrorException>newInstance(this.client.serializerAdapter()) .register(200, new TypeToken<Person>() { }.getType()) .registerError(APIErrorException.class) .build(response); } /** * Update name or userData of a person. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @param updateOptionalParameter the object representing the optional parameters to be set before calling this API * @throws IllegalArgumentException thrown if parameters fail the validation * @throws APIErrorException thrown if the request is rejected by server * @throws RuntimeException all other wrapped checked exceptions if the request fails to be sent */ public void update(String personGroupId, UUID personId, UpdatePersonGroupPersonsOptionalParameter updateOptionalParameter) { updateWithServiceResponseAsync(personGroupId, personId, updateOptionalParameter).toBlocking().single().body(); } /** * Update name or userData of a person. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @param updateOptionalParameter the object representing the optional parameters to be set before calling this API * @param serviceCallback the async ServiceCallback to handle successful and failed responses. * @throws IllegalArgumentException thrown if parameters fail the validation * @return the {@link ServiceFuture} object */ public ServiceFuture<Void> updateAsync(String personGroupId, UUID personId, UpdatePersonGroupPersonsOptionalParameter updateOptionalParameter, final ServiceCallback<Void> serviceCallback) { return ServiceFuture.fromResponse(updateWithServiceResponseAsync(personGroupId, personId, updateOptionalParameter), serviceCallback); } /** * Update name or userData of a person. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. 
* @param updateOptionalParameter the object representing the optional parameters to be set before calling this API * @throws IllegalArgumentException thrown if parameters fail the validation * @return the {@link ServiceResponse} object if successful. */ public Observable<Void> updateAsync(String personGroupId, UUID personId, UpdatePersonGroupPersonsOptionalParameter updateOptionalParameter) { return updateWithServiceResponseAsync(personGroupId, personId, updateOptionalParameter).map(new Func1<ServiceResponse<Void>, Void>() { @Override public Void call(ServiceResponse<Void> response) { return response.body(); } }); } /** * Update name or userData of a person. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @param updateOptionalParameter the object representing the optional parameters to be set before calling this API * @throws IllegalArgumentException thrown if parameters fail the validation * @return the {@link ServiceResponse} object if successful. */ public Observable<ServiceResponse<Void>> updateWithServiceResponseAsync(String personGroupId, UUID personId, UpdatePersonGroupPersonsOptionalParameter updateOptionalParameter) { if (this.client.azureRegion() == null) { throw new IllegalArgumentException("Parameter this.client.azureRegion() is required and cannot be null."); } if (personGroupId == null) { throw new IllegalArgumentException("Parameter personGroupId is required and cannot be null."); } if (personId == null) { throw new IllegalArgumentException("Parameter personId is required and cannot be null."); } final String name = updateOptionalParameter != null ? updateOptionalParameter.name() : null; final String userData = updateOptionalParameter != null ? updateOptionalParameter.userData() : null; return updateWithServiceResponseAsync(personGroupId, personId, name, userData); } /** * Update name or userData of a person. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @param name User defined name, maximum length is 128. * @param userData User specified data. Length should not exceed 16KB. * @throws IllegalArgumentException thrown if parameters fail the validation * @return the {@link ServiceResponse} object if successful. 
*/ public Observable<ServiceResponse<Void>> updateWithServiceResponseAsync(String personGroupId, UUID personId, String name, String userData) { if (this.client.azureRegion() == null) { throw new IllegalArgumentException("Parameter this.client.azureRegion() is required and cannot be null."); } if (personGroupId == null) { throw new IllegalArgumentException("Parameter personGroupId is required and cannot be null."); } if (personId == null) { throw new IllegalArgumentException("Parameter personId is required and cannot be null."); } NameAndUserDataContract bodyParameter = new NameAndUserDataContract(); bodyParameter.withName(name); bodyParameter.withUserData(userData); String parameterizedHost = Joiner.on(", ").join("{AzureRegion}", this.client.azureRegion()); return service.update(personGroupId, personId, this.client.acceptLanguage(), bodyParameter, parameterizedHost, this.client.userAgent()) .flatMap(new Func1<Response<ResponseBody>, Observable<ServiceResponse<Void>>>() { @Override public Observable<ServiceResponse<Void>> call(Response<ResponseBody> response) { try { ServiceResponse<Void> clientResponse = updateDelegate(response); return Observable.just(clientResponse); } catch (Throwable t) { return Observable.error(t); } } }); } private ServiceResponse<Void> updateDelegate(Response<ResponseBody> response) throws APIErrorException, IOException, IllegalArgumentException { return this.client.restClient().responseBuilderFactory().<Void, APIErrorException>newInstance(this.client.serializerAdapter()) .register(200, new TypeToken<Void>() { }.getType()) .registerError(APIErrorException.class) .build(response); } @Override public PersonGroupPersonsUpdateParameters update() { return new PersonGroupPersonsUpdateParameters(this); } /** * Internal class implementing PersonGroupPersonsUpdateDefinition. */ class PersonGroupPersonsUpdateParameters implements PersonGroupPersonsUpdateDefinition { private PersonGroupPersonsImpl parent; private String personGroupId; private UUID personId; private String name; private String userData; /** * Constructor. * @param parent the parent object. */ PersonGroupPersonsUpdateParameters(PersonGroupPersonsImpl parent) { this.parent = parent; } @Override public PersonGroupPersonsUpdateParameters withPersonGroupId(String personGroupId) { this.personGroupId = personGroupId; return this; } @Override public PersonGroupPersonsUpdateParameters withPersonId(UUID personId) { this.personId = personId; return this; } @Override public PersonGroupPersonsUpdateParameters withName(String name) { this.name = name; return this; } @Override public PersonGroupPersonsUpdateParameters withUserData(String userData) { this.userData = userData; return this; } @Override public void execute() { updateWithServiceResponseAsync(personGroupId, personId, name, userData).toBlocking().single().body(); } @Override public Observable<Void> executeAsync() { return updateWithServiceResponseAsync(personGroupId, personId, name, userData).map(new Func1<ServiceResponse<Void>, Void>() { @Override public Void call(ServiceResponse<Void> response) { return response.body(); } }); } } /** * Delete a face from a person. Relative image for the persisted face will also be deleted. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @param persistedFaceId Id referencing a particular persistedFaceId of an existing face. 
* @throws IllegalArgumentException thrown if parameters fail the validation * @throws APIErrorException thrown if the request is rejected by server * @throws RuntimeException all other wrapped checked exceptions if the request fails to be sent */ public void deleteFace(String personGroupId, UUID personId, UUID persistedFaceId) { deleteFaceWithServiceResponseAsync(personGroupId, personId, persistedFaceId).toBlocking().single().body(); } /** * Delete a face from a person. Relative image for the persisted face will also be deleted. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @param persistedFaceId Id referencing a particular persistedFaceId of an existing face. * @param serviceCallback the async ServiceCallback to handle successful and failed responses. * @throws IllegalArgumentException thrown if parameters fail the validation * @return the {@link ServiceFuture} object */ public ServiceFuture<Void> deleteFaceAsync(String personGroupId, UUID personId, UUID persistedFaceId, final ServiceCallback<Void> serviceCallback) { return ServiceFuture.fromResponse(deleteFaceWithServiceResponseAsync(personGroupId, personId, persistedFaceId), serviceCallback); } /** * Delete a face from a person. Relative image for the persisted face will also be deleted. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @param persistedFaceId Id referencing a particular persistedFaceId of an existing face. * @throws IllegalArgumentException thrown if parameters fail the validation * @return the {@link ServiceResponse} object if successful. */ public Observable<Void> deleteFaceAsync(String personGroupId, UUID personId, UUID persistedFaceId) { return deleteFaceWithServiceResponseAsync(personGroupId, personId, persistedFaceId).map(new Func1<ServiceResponse<Void>, Void>() { @Override public Void call(ServiceResponse<Void> response) { return response.body(); } }); } /** * Delete a face from a person. Relative image for the persisted face will also be deleted. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @param persistedFaceId Id referencing a particular persistedFaceId of an existing face. * @throws IllegalArgumentException thrown if parameters fail the validation * @return the {@link ServiceResponse} object if successful. 
*/ public Observable<ServiceResponse<Void>> deleteFaceWithServiceResponseAsync(String personGroupId, UUID personId, UUID persistedFaceId) { if (this.client.azureRegion() == null) { throw new IllegalArgumentException("Parameter this.client.azureRegion() is required and cannot be null."); } if (personGroupId == null) { throw new IllegalArgumentException("Parameter personGroupId is required and cannot be null."); } if (personId == null) { throw new IllegalArgumentException("Parameter personId is required and cannot be null."); } if (persistedFaceId == null) { throw new IllegalArgumentException("Parameter persistedFaceId is required and cannot be null."); } String parameterizedHost = Joiner.on(", ").join("{AzureRegion}", this.client.azureRegion()); return service.deleteFace(personGroupId, personId, persistedFaceId, this.client.acceptLanguage(), parameterizedHost, this.client.userAgent()) .flatMap(new Func1<Response<ResponseBody>, Observable<ServiceResponse<Void>>>() { @Override public Observable<ServiceResponse<Void>> call(Response<ResponseBody> response) { try { ServiceResponse<Void> clientResponse = deleteFaceDelegate(response); return Observable.just(clientResponse); } catch (Throwable t) { return Observable.error(t); } } }); } private ServiceResponse<Void> deleteFaceDelegate(Response<ResponseBody> response) throws APIErrorException, IOException, IllegalArgumentException { return this.client.restClient().responseBuilderFactory().<Void, APIErrorException>newInstance(this.client.serializerAdapter()) .register(200, new TypeToken<Void>() { }.getType()) .registerError(APIErrorException.class) .build(response); } /** * Retrieve information about a persisted face (specified by persistedFaceId, personId and its belonging personGroupId). * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @param persistedFaceId Id referencing a particular persistedFaceId of an existing face. * @throws IllegalArgumentException thrown if parameters fail the validation * @throws APIErrorException thrown if the request is rejected by server * @throws RuntimeException all other wrapped checked exceptions if the request fails to be sent * @return the PersistedFace object if successful. */ public PersistedFace getFace(String personGroupId, UUID personId, UUID persistedFaceId) { return getFaceWithServiceResponseAsync(personGroupId, personId, persistedFaceId).toBlocking().single().body(); } /** * Retrieve information about a persisted face (specified by persistedFaceId, personId and its belonging personGroupId). * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @param persistedFaceId Id referencing a particular persistedFaceId of an existing face. * @param serviceCallback the async ServiceCallback to handle successful and failed responses. * @throws IllegalArgumentException thrown if parameters fail the validation * @return the {@link ServiceFuture} object */ public ServiceFuture<PersistedFace> getFaceAsync(String personGroupId, UUID personId, UUID persistedFaceId, final ServiceCallback<PersistedFace> serviceCallback) { return ServiceFuture.fromResponse(getFaceWithServiceResponseAsync(personGroupId, personId, persistedFaceId), serviceCallback); } /** * Retrieve information about a persisted face (specified by persistedFaceId, personId and its belonging personGroupId). * * @param personGroupId Id referencing a particular person group. 
* @param personId Id referencing a particular person. * @param persistedFaceId Id referencing a particular persistedFaceId of an existing face. * @throws IllegalArgumentException thrown if parameters fail the validation * @return the observable to the PersistedFace object */ public Observable<PersistedFace> getFaceAsync(String personGroupId, UUID personId, UUID persistedFaceId) { return getFaceWithServiceResponseAsync(personGroupId, personId, persistedFaceId).map(new Func1<ServiceResponse<PersistedFace>, PersistedFace>() { @Override public PersistedFace call(ServiceResponse<PersistedFace> response) { return response.body(); } }); } /** * Retrieve information about a persisted face (specified by persistedFaceId, personId and its belonging personGroupId). * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @param persistedFaceId Id referencing a particular persistedFaceId of an existing face. * @throws IllegalArgumentException thrown if parameters fail the validation * @return the observable to the PersistedFace object */ public Observable<ServiceResponse<PersistedFace>> getFaceWithServiceResponseAsync(String personGroupId, UUID personId, UUID persistedFaceId) { if (this.client.azureRegion() == null) { throw new IllegalArgumentException("Parameter this.client.azureRegion() is required and cannot be null."); } if (personGroupId == null) { throw new IllegalArgumentException("Parameter personGroupId is required and cannot be null."); } if (personId == null) { throw new IllegalArgumentException("Parameter personId is required and cannot be null."); } if (persistedFaceId == null) { throw new IllegalArgumentException("Parameter persistedFaceId is required and cannot be null."); } String parameterizedHost = Joiner.on(", ").join("{AzureRegion}", this.client.azureRegion()); return service.getFace(personGroupId, personId, persistedFaceId, this.client.acceptLanguage(), parameterizedHost, this.client.userAgent()) .flatMap(new Func1<Response<ResponseBody>, Observable<ServiceResponse<PersistedFace>>>() { @Override public Observable<ServiceResponse<PersistedFace>> call(Response<ResponseBody> response) { try { ServiceResponse<PersistedFace> clientResponse = getFaceDelegate(response); return Observable.just(clientResponse); } catch (Throwable t) { return Observable.error(t); } } }); } private ServiceResponse<PersistedFace> getFaceDelegate(Response<ResponseBody> response) throws APIErrorException, IOException, IllegalArgumentException { return this.client.restClient().responseBuilderFactory().<PersistedFace, APIErrorException>newInstance(this.client.serializerAdapter()) .register(200, new TypeToken<PersistedFace>() { }.getType()) .registerError(APIErrorException.class) .build(response); } /** * Update a person persisted face's userData field. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @param persistedFaceId Id referencing a particular persistedFaceId of an existing face. 
* @param updateFaceOptionalParameter the object representing the optional parameters to be set before calling this API * @throws IllegalArgumentException thrown if parameters fail the validation * @throws APIErrorException thrown if the request is rejected by server * @throws RuntimeException all other wrapped checked exceptions if the request fails to be sent */ public void updateFace(String personGroupId, UUID personId, UUID persistedFaceId, UpdateFaceOptionalParameter updateFaceOptionalParameter) { updateFaceWithServiceResponseAsync(personGroupId, personId, persistedFaceId, updateFaceOptionalParameter).toBlocking().single().body(); } /** * Update a person persisted face's userData field. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @param persistedFaceId Id referencing a particular persistedFaceId of an existing face. * @param updateFaceOptionalParameter the object representing the optional parameters to be set before calling this API * @param serviceCallback the async ServiceCallback to handle successful and failed responses. * @throws IllegalArgumentException thrown if parameters fail the validation * @return the {@link ServiceFuture} object */ public ServiceFuture<Void> updateFaceAsync(String personGroupId, UUID personId, UUID persistedFaceId, UpdateFaceOptionalParameter updateFaceOptionalParameter, final ServiceCallback<Void> serviceCallback) { return ServiceFuture.fromResponse(updateFaceWithServiceResponseAsync(personGroupId, personId, persistedFaceId, updateFaceOptionalParameter), serviceCallback); } /** * Update a person persisted face's userData field. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @param persistedFaceId Id referencing a particular persistedFaceId of an existing face. * @param updateFaceOptionalParameter the object representing the optional parameters to be set before calling this API * @throws IllegalArgumentException thrown if parameters fail the validation * @return the {@link ServiceResponse} object if successful. */ public Observable<Void> updateFaceAsync(String personGroupId, UUID personId, UUID persistedFaceId, UpdateFaceOptionalParameter updateFaceOptionalParameter) { return updateFaceWithServiceResponseAsync(personGroupId, personId, persistedFaceId, updateFaceOptionalParameter).map(new Func1<ServiceResponse<Void>, Void>() { @Override public Void call(ServiceResponse<Void> response) { return response.body(); } }); } /** * Update a person persisted face's userData field. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @param persistedFaceId Id referencing a particular persistedFaceId of an existing face. * @param updateFaceOptionalParameter the object representing the optional parameters to be set before calling this API * @throws IllegalArgumentException thrown if parameters fail the validation * @return the {@link ServiceResponse} object if successful. 
*/ public Observable<ServiceResponse<Void>> updateFaceWithServiceResponseAsync(String personGroupId, UUID personId, UUID persistedFaceId, UpdateFaceOptionalParameter updateFaceOptionalParameter) { if (this.client.azureRegion() == null) { throw new IllegalArgumentException("Parameter this.client.azureRegion() is required and cannot be null."); } if (personGroupId == null) { throw new IllegalArgumentException("Parameter personGroupId is required and cannot be null."); } if (personId == null) { throw new IllegalArgumentException("Parameter personId is required and cannot be null."); } if (persistedFaceId == null) { throw new IllegalArgumentException("Parameter persistedFaceId is required and cannot be null."); } final String userData = updateFaceOptionalParameter != null ? updateFaceOptionalParameter.userData() : null; return updateFaceWithServiceResponseAsync(personGroupId, personId, persistedFaceId, userData); } /** * Update a person persisted face's userData field. * * @param personGroupId Id referencing a particular person group. * @param personId Id referencing a particular person. * @param persistedFaceId Id referencing a particular persistedFaceId of an existing face. * @param userData User-provided data attached to the face. The size limit is 1KB. * @throws IllegalArgumentException thrown if parameters fail the validation * @return the {@link ServiceResponse} object if successful. */ public Observable<ServiceResponse<Void>> updateFaceWithServiceResponseAsync(String personGroupId, UUID personId, UUID persistedFaceId, String userData) { if (this.client.azureRegion() == null) { throw new IllegalArgumentException("Parameter this.client.azureRegion() is required and cannot be null."); } if (personGroupId == null) { throw new IllegalArgumentException("Parameter personGroupId is required and cannot be null."); } if (personId == null) { throw new IllegalArgumentException("Parameter personId is required and cannot be null."); } if (persistedFaceId == null) { throw new IllegalArgumentException("Parameter persistedFaceId is required and cannot be null."); } UpdatePersonFaceRequest bodyParameter = new UpdatePersonFaceRequest(); bodyParameter.withUserData(userData); String parameterizedHost = Joiner.on(", ").join("{AzureRegion}", this.client.azureRegion()); return service.updateFace(personGroupId, personId, persistedFaceId, this.client.acceptLanguage(), bodyParameter, parameterizedHost, this.client.userAgent()) .flatMap(new Func1<Response<ResponseBody>, Observable<ServiceResponse<Void>>>() { @Override public Observable<ServiceResponse<Void>> call(Response<ResponseBody> response) { try { ServiceResponse<Void> clientResponse = updateFaceDelegate(response); return Observable.just(clientResponse); } catch (Throwable t) { return Observable.error(t); } } }); } private ServiceResponse<Void> updateFaceDelegate(Response<ResponseBody> response) throws APIErrorException, IOException, IllegalArgumentException { return this.client.restClient().responseBuilderFactory().<Void, APIErrorException>newInstance(this.client.serializerAdapter()) .register(200, new TypeToken<Void>() { }.getType()) .registerError(APIErrorException.class) .build(response); } @Override public PersonGroupPersonsUpdateFaceParameters updateFace() { return new PersonGroupPersonsUpdateFaceParameters(this); } /** * Internal class implementing PersonGroupPersonsUpdateFaceDefinition. 
     */
    class PersonGroupPersonsUpdateFaceParameters implements PersonGroupPersonsUpdateFaceDefinition {
        private PersonGroupPersonsImpl parent;
        private String personGroupId;
        private UUID personId;
        private UUID persistedFaceId;
        private String userData;

        /**
         * Constructor.
         * @param parent the parent object.
         */
        PersonGroupPersonsUpdateFaceParameters(PersonGroupPersonsImpl parent) {
            this.parent = parent;
        }

        @Override
        public PersonGroupPersonsUpdateFaceParameters withPersonGroupId(String personGroupId) {
            this.personGroupId = personGroupId;
            return this;
        }

        @Override
        public PersonGroupPersonsUpdateFaceParameters withPersonId(UUID personId) {
            this.personId = personId;
            return this;
        }

        @Override
        public PersonGroupPersonsUpdateFaceParameters withPersistedFaceId(UUID persistedFaceId) {
            this.persistedFaceId = persistedFaceId;
            return this;
        }

        @Override
        public PersonGroupPersonsUpdateFaceParameters withUserData(String userData) {
            this.userData = userData;
            return this;
        }

        @Override
        public void execute() {
            updateFaceWithServiceResponseAsync(personGroupId, personId, persistedFaceId, userData).toBlocking().single().body();
        }

        @Override
        public Observable<Void> executeAsync() {
            return updateFaceWithServiceResponseAsync(personGroupId, personId, persistedFaceId, userData).map(new Func1<ServiceResponse<Void>, Void>() {
                @Override
                public Void call(ServiceResponse<Void> response) {
                    return response.body();
                }
            });
        }
    }

    /**
     * Add a representative face to a person for identification. The input face is specified as an image with a targetFace rectangle.
     *
     * @param personGroupId Id referencing a particular person group.
     * @param personId Id referencing a particular person.
     * @param url Publicly reachable URL of an image
     * @param addPersonFaceFromUrlOptionalParameter the object representing the optional parameters to be set before calling this API
     * @throws IllegalArgumentException thrown if parameters fail the validation
     * @throws APIErrorException thrown if the request is rejected by server
     * @throws RuntimeException all other wrapped checked exceptions if the request fails to be sent
     * @return the PersistedFace object if successful.
     */
    public PersistedFace addPersonFaceFromUrl(String personGroupId, UUID personId, String url, AddPersonFaceFromUrlOptionalParameter addPersonFaceFromUrlOptionalParameter) {
        return addPersonFaceFromUrlWithServiceResponseAsync(personGroupId, personId, url, addPersonFaceFromUrlOptionalParameter).toBlocking().single().body();
    }

    /**
     * Add a representative face to a person for identification. The input face is specified as an image with a targetFace rectangle.
     *
     * @param personGroupId Id referencing a particular person group.
     * @param personId Id referencing a particular person.
     * @param url Publicly reachable URL of an image
     * @param addPersonFaceFromUrlOptionalParameter the object representing the optional parameters to be set before calling this API
     * @param serviceCallback the async ServiceCallback to handle successful and failed responses.
     * @throws IllegalArgumentException thrown if parameters fail the validation
     * @return the {@link ServiceFuture} object
     */
    public ServiceFuture<PersistedFace> addPersonFaceFromUrlAsync(String personGroupId, UUID personId, String url, AddPersonFaceFromUrlOptionalParameter addPersonFaceFromUrlOptionalParameter, final ServiceCallback<PersistedFace> serviceCallback) {
        return ServiceFuture.fromResponse(addPersonFaceFromUrlWithServiceResponseAsync(personGroupId, personId, url, addPersonFaceFromUrlOptionalParameter), serviceCallback);
    }

    /**
     * Add a representative face to a person for identification. The input face is specified as an image with a targetFace rectangle.
     *
     * @param personGroupId Id referencing a particular person group.
     * @param personId Id referencing a particular person.
     * @param url Publicly reachable URL of an image
     * @param addPersonFaceFromUrlOptionalParameter the object representing the optional parameters to be set before calling this API
     * @throws IllegalArgumentException thrown if parameters fail the validation
     * @return the observable to the PersistedFace object
     */
    public Observable<PersistedFace> addPersonFaceFromUrlAsync(String personGroupId, UUID personId, String url, AddPersonFaceFromUrlOptionalParameter addPersonFaceFromUrlOptionalParameter) {
        return addPersonFaceFromUrlWithServiceResponseAsync(personGroupId, personId, url, addPersonFaceFromUrlOptionalParameter).map(new Func1<ServiceResponse<PersistedFace>, PersistedFace>() {
            @Override
            public PersistedFace call(ServiceResponse<PersistedFace> response) {
                return response.body();
            }
        });
    }

    /**
     * Add a representative face to a person for identification. The input face is specified as an image with a targetFace rectangle.
     *
     * @param personGroupId Id referencing a particular person group.
     * @param personId Id referencing a particular person.
     * @param url Publicly reachable URL of an image
     * @param addPersonFaceFromUrlOptionalParameter the object representing the optional parameters to be set before calling this API
     * @throws IllegalArgumentException thrown if parameters fail the validation
     * @return the observable to the PersistedFace object
     */
    public Observable<ServiceResponse<PersistedFace>> addPersonFaceFromUrlWithServiceResponseAsync(String personGroupId, UUID personId, String url, AddPersonFaceFromUrlOptionalParameter addPersonFaceFromUrlOptionalParameter) {
        if (this.client.azureRegion() == null) {
            throw new IllegalArgumentException("Parameter this.client.azureRegion() is required and cannot be null.");
        }
        if (personGroupId == null) {
            throw new IllegalArgumentException("Parameter personGroupId is required and cannot be null.");
        }
        if (personId == null) {
            throw new IllegalArgumentException("Parameter personId is required and cannot be null.");
        }
        if (url == null) {
            throw new IllegalArgumentException("Parameter url is required and cannot be null.");
        }
        final String userData = addPersonFaceFromUrlOptionalParameter != null ? addPersonFaceFromUrlOptionalParameter.userData() : null;
        final List<Integer> targetFace = addPersonFaceFromUrlOptionalParameter != null ? addPersonFaceFromUrlOptionalParameter.targetFace() : null;
        return addPersonFaceFromUrlWithServiceResponseAsync(personGroupId, personId, url, userData, targetFace);
    }

    /**
     * Add a representative face to a person for identification. The input face is specified as an image with a targetFace rectangle.
     *
     * @param personGroupId Id referencing a particular person group.
     * @param personId Id referencing a particular person.
     * @param url Publicly reachable URL of an image
     * @param userData User-specified data about the face for any purpose. The maximum length is 1KB.
     * @param targetFace A face rectangle to specify the target face to be added to a person in the format of "targetFace=left,top,width,height". E.g. "targetFace=10,10,100,100". If there is more than one face in the image, targetFace is required to specify which face to add. No targetFace means there is only one face detected in the entire image.
     * @throws IllegalArgumentException thrown if parameters fail the validation
     * @return the observable to the PersistedFace object
     */
    public Observable<ServiceResponse<PersistedFace>> addPersonFaceFromUrlWithServiceResponseAsync(String personGroupId, UUID personId, String url, String userData, List<Integer> targetFace) {
        if (this.client.azureRegion() == null) {
            throw new IllegalArgumentException("Parameter this.client.azureRegion() is required and cannot be null.");
        }
        if (personGroupId == null) {
            throw new IllegalArgumentException("Parameter personGroupId is required and cannot be null.");
        }
        if (personId == null) {
            throw new IllegalArgumentException("Parameter personId is required and cannot be null.");
        }
        if (url == null) {
            throw new IllegalArgumentException("Parameter url is required and cannot be null.");
        }
        Validator.validate(targetFace);
        ImageUrl imageUrl = new ImageUrl();
        imageUrl.withUrl(url);
        String parameterizedHost = Joiner.on(", ").join("{AzureRegion}", this.client.azureRegion());
        String targetFaceConverted = this.client.serializerAdapter().serializeList(targetFace, CollectionFormat.CSV);
        return service.addPersonFaceFromUrl(personGroupId, personId, userData, targetFaceConverted, this.client.acceptLanguage(), imageUrl, parameterizedHost, this.client.userAgent())
            .flatMap(new Func1<Response<ResponseBody>, Observable<ServiceResponse<PersistedFace>>>() {
                @Override
                public Observable<ServiceResponse<PersistedFace>> call(Response<ResponseBody> response) {
                    try {
                        ServiceResponse<PersistedFace> clientResponse = addPersonFaceFromUrlDelegate(response);
                        return Observable.just(clientResponse);
                    } catch (Throwable t) {
                        return Observable.error(t);
                    }
                }
            });
    }

    private ServiceResponse<PersistedFace> addPersonFaceFromUrlDelegate(Response<ResponseBody> response) throws APIErrorException, IOException, IllegalArgumentException {
        return this.client.restClient().responseBuilderFactory().<PersistedFace, APIErrorException>newInstance(this.client.serializerAdapter())
            .register(200, new TypeToken<PersistedFace>() { }.getType())
            .registerError(APIErrorException.class)
            .build(response);
    }

    @Override
    public PersonGroupPersonsAddPersonFaceFromUrlParameters addPersonFaceFromUrl() {
        return new PersonGroupPersonsAddPersonFaceFromUrlParameters(this);
    }

    /**
     * Internal class implementing PersonGroupPersonsAddPersonFaceFromUrlDefinition.
     */
    class PersonGroupPersonsAddPersonFaceFromUrlParameters implements PersonGroupPersonsAddPersonFaceFromUrlDefinition {
        private PersonGroupPersonsImpl parent;
        private String personGroupId;
        private UUID personId;
        private String url;
        private String userData;
        private List<Integer> targetFace;

        /**
         * Constructor.
         * @param parent the parent object.
         */
        PersonGroupPersonsAddPersonFaceFromUrlParameters(PersonGroupPersonsImpl parent) {
            this.parent = parent;
        }

        @Override
        public PersonGroupPersonsAddPersonFaceFromUrlParameters withPersonGroupId(String personGroupId) {
            this.personGroupId = personGroupId;
            return this;
        }

        @Override
        public PersonGroupPersonsAddPersonFaceFromUrlParameters withPersonId(UUID personId) {
            this.personId = personId;
            return this;
        }

        @Override
        public PersonGroupPersonsAddPersonFaceFromUrlParameters withUrl(String url) {
            this.url = url;
            return this;
        }

        @Override
        public PersonGroupPersonsAddPersonFaceFromUrlParameters withUserData(String userData) {
            this.userData = userData;
            return this;
        }

        @Override
        public PersonGroupPersonsAddPersonFaceFromUrlParameters withTargetFace(List<Integer> targetFace) {
            this.targetFace = targetFace;
            return this;
        }

        @Override
        public PersistedFace execute() {
            return addPersonFaceFromUrlWithServiceResponseAsync(personGroupId, personId, url, userData, targetFace).toBlocking().single().body();
        }

        @Override
        public Observable<PersistedFace> executeAsync() {
            return addPersonFaceFromUrlWithServiceResponseAsync(personGroupId, personId, url, userData, targetFace).map(new Func1<ServiceResponse<PersistedFace>, PersistedFace>() {
                @Override
                public PersistedFace call(ServiceResponse<PersistedFace> response) {
                    return response.body();
                }
            });
        }
    }

    /**
     * Add a representative face to a person for identification. The input face is specified as an image with a targetFace rectangle.
     *
     * @param personGroupId Id referencing a particular person group.
     * @param personId Id referencing a particular person.
     * @param image An image stream.
     * @param addPersonFaceFromStreamOptionalParameter the object representing the optional parameters to be set before calling this API
     * @throws IllegalArgumentException thrown if parameters fail the validation
     * @throws APIErrorException thrown if the request is rejected by server
     * @throws RuntimeException all other wrapped checked exceptions if the request fails to be sent
     * @return the PersistedFace object if successful.
     */
    public PersistedFace addPersonFaceFromStream(String personGroupId, UUID personId, byte[] image, AddPersonFaceFromStreamOptionalParameter addPersonFaceFromStreamOptionalParameter) {
        return addPersonFaceFromStreamWithServiceResponseAsync(personGroupId, personId, image, addPersonFaceFromStreamOptionalParameter).toBlocking().single().body();
    }

    /**
     * Add a representative face to a person for identification. The input face is specified as an image with a targetFace rectangle.
     *
     * @param personGroupId Id referencing a particular person group.
     * @param personId Id referencing a particular person.
     * @param image An image stream.
     * @param addPersonFaceFromStreamOptionalParameter the object representing the optional parameters to be set before calling this API
     * @param serviceCallback the async ServiceCallback to handle successful and failed responses.
     * @throws IllegalArgumentException thrown if parameters fail the validation
     * @return the {@link ServiceFuture} object
     */
    public ServiceFuture<PersistedFace> addPersonFaceFromStreamAsync(String personGroupId, UUID personId, byte[] image, AddPersonFaceFromStreamOptionalParameter addPersonFaceFromStreamOptionalParameter, final ServiceCallback<PersistedFace> serviceCallback) {
        return ServiceFuture.fromResponse(addPersonFaceFromStreamWithServiceResponseAsync(personGroupId, personId, image, addPersonFaceFromStreamOptionalParameter), serviceCallback);
    }

    /**
     * Add a representative face to a person for identification.
     * The input face is specified as an image with a targetFace rectangle.
     *
     * @param personGroupId Id referencing a particular person group.
     * @param personId Id referencing a particular person.
     * @param image An image stream.
     * @param addPersonFaceFromStreamOptionalParameter the object representing the optional parameters to be set before calling this API
     * @throws IllegalArgumentException thrown if parameters fail the validation
     * @return the observable to the PersistedFace object
     */
    public Observable<PersistedFace> addPersonFaceFromStreamAsync(String personGroupId, UUID personId, byte[] image, AddPersonFaceFromStreamOptionalParameter addPersonFaceFromStreamOptionalParameter) {
        return addPersonFaceFromStreamWithServiceResponseAsync(personGroupId, personId, image, addPersonFaceFromStreamOptionalParameter).map(new Func1<ServiceResponse<PersistedFace>, PersistedFace>() {
            @Override
            public PersistedFace call(ServiceResponse<PersistedFace> response) {
                return response.body();
            }
        });
    }

    /**
     * Add a representative face to a person for identification. The input face is specified as an image with a targetFace rectangle.
     *
     * @param personGroupId Id referencing a particular person group.
     * @param personId Id referencing a particular person.
     * @param image An image stream.
     * @param addPersonFaceFromStreamOptionalParameter the object representing the optional parameters to be set before calling this API
     * @throws IllegalArgumentException thrown if parameters fail the validation
     * @return the observable to the PersistedFace object
     */
    public Observable<ServiceResponse<PersistedFace>> addPersonFaceFromStreamWithServiceResponseAsync(String personGroupId, UUID personId, byte[] image, AddPersonFaceFromStreamOptionalParameter addPersonFaceFromStreamOptionalParameter) {
        if (this.client.azureRegion() == null) {
            throw new IllegalArgumentException("Parameter this.client.azureRegion() is required and cannot be null.");
        }
        if (personGroupId == null) {
            throw new IllegalArgumentException("Parameter personGroupId is required and cannot be null.");
        }
        if (personId == null) {
            throw new IllegalArgumentException("Parameter personId is required and cannot be null.");
        }
        if (image == null) {
            throw new IllegalArgumentException("Parameter image is required and cannot be null.");
        }
        final String userData = addPersonFaceFromStreamOptionalParameter != null ? addPersonFaceFromStreamOptionalParameter.userData() : null;
        final List<Integer> targetFace = addPersonFaceFromStreamOptionalParameter != null ? addPersonFaceFromStreamOptionalParameter.targetFace() : null;
        return addPersonFaceFromStreamWithServiceResponseAsync(personGroupId, personId, image, userData, targetFace);
    }

    /**
     * Add a representative face to a person for identification. The input face is specified as an image with a targetFace rectangle.
     *
     * @param personGroupId Id referencing a particular person group.
     * @param personId Id referencing a particular person.
     * @param image An image stream.
     * @param userData User-specified data about the face for any purpose. The maximum length is 1KB.
     * @param targetFace A face rectangle to specify the target face to be added to a person in the format of "targetFace=left,top,width,height". E.g. "targetFace=10,10,100,100". If there is more than one face in the image, targetFace is required to specify which face to add. No targetFace means there is only one face detected in the entire image.
     * @throws IllegalArgumentException thrown if parameters fail the validation
     * @return the observable to the PersistedFace object
     */
    public Observable<ServiceResponse<PersistedFace>> addPersonFaceFromStreamWithServiceResponseAsync(String personGroupId, UUID personId, byte[] image, String userData, List<Integer> targetFace) {
        if (this.client.azureRegion() == null) {
            throw new IllegalArgumentException("Parameter this.client.azureRegion() is required and cannot be null.");
        }
        if (personGroupId == null) {
            throw new IllegalArgumentException("Parameter personGroupId is required and cannot be null.");
        }
        if (personId == null) {
            throw new IllegalArgumentException("Parameter personId is required and cannot be null.");
        }
        if (image == null) {
            throw new IllegalArgumentException("Parameter image is required and cannot be null.");
        }
        Validator.validate(targetFace);
        String parameterizedHost = Joiner.on(", ").join("{AzureRegion}", this.client.azureRegion());
        String targetFaceConverted = this.client.serializerAdapter().serializeList(targetFace, CollectionFormat.CSV);
        RequestBody imageConverted = RequestBody.create(MediaType.parse("application/octet-stream"), image);
        return service.addPersonFaceFromStream(personGroupId, personId, userData, targetFaceConverted, imageConverted, this.client.acceptLanguage(), parameterizedHost, this.client.userAgent())
            .flatMap(new Func1<Response<ResponseBody>, Observable<ServiceResponse<PersistedFace>>>() {
                @Override
                public Observable<ServiceResponse<PersistedFace>> call(Response<ResponseBody> response) {
                    try {
                        ServiceResponse<PersistedFace> clientResponse = addPersonFaceFromStreamDelegate(response);
                        return Observable.just(clientResponse);
                    } catch (Throwable t) {
                        return Observable.error(t);
                    }
                }
            });
    }

    private ServiceResponse<PersistedFace> addPersonFaceFromStreamDelegate(Response<ResponseBody> response) throws APIErrorException, IOException, IllegalArgumentException {
        return this.client.restClient().responseBuilderFactory().<PersistedFace, APIErrorException>newInstance(this.client.serializerAdapter())
            .register(200, new TypeToken<PersistedFace>() { }.getType())
            .registerError(APIErrorException.class)
            .build(response);
    }

    @Override
    public PersonGroupPersonsAddPersonFaceFromStreamParameters addPersonFaceFromStream() {
        return new PersonGroupPersonsAddPersonFaceFromStreamParameters(this);
    }

    /**
     * Internal class implementing PersonGroupPersonsAddPersonFaceFromStreamDefinition.
     */
    class PersonGroupPersonsAddPersonFaceFromStreamParameters implements PersonGroupPersonsAddPersonFaceFromStreamDefinition {
        private PersonGroupPersonsImpl parent;
        private String personGroupId;
        private UUID personId;
        private byte[] image;
        private String userData;
        private List<Integer> targetFace;

        /**
         * Constructor.
         * @param parent the parent object.
         */
        PersonGroupPersonsAddPersonFaceFromStreamParameters(PersonGroupPersonsImpl parent) {
            this.parent = parent;
        }

        @Override
        public PersonGroupPersonsAddPersonFaceFromStreamParameters withPersonGroupId(String personGroupId) {
            this.personGroupId = personGroupId;
            return this;
        }

        @Override
        public PersonGroupPersonsAddPersonFaceFromStreamParameters withPersonId(UUID personId) {
            this.personId = personId;
            return this;
        }

        @Override
        public PersonGroupPersonsAddPersonFaceFromStreamParameters withImage(byte[] image) {
            this.image = image;
            return this;
        }

        @Override
        public PersonGroupPersonsAddPersonFaceFromStreamParameters withUserData(String userData) {
            this.userData = userData;
            return this;
        }

        @Override
        public PersonGroupPersonsAddPersonFaceFromStreamParameters withTargetFace(List<Integer> targetFace) {
            this.targetFace = targetFace;
            return this;
        }

        @Override
        public PersistedFace execute() {
            return addPersonFaceFromStreamWithServiceResponseAsync(personGroupId, personId, image, userData, targetFace).toBlocking().single().body();
        }

        @Override
        public Observable<PersistedFace> executeAsync() {
            return addPersonFaceFromStreamWithServiceResponseAsync(personGroupId, personId, image, userData, targetFace).map(new Func1<ServiceResponse<PersistedFace>, PersistedFace>() {
                @Override
                public PersistedFace call(ServiceResponse<PersistedFace> response) {
                    return response.body();
                }
            });
        }
    }
}
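As a quick orientation to the surface implemented above, the following is a minimal, hypothetical usage sketch showing both the plain overloads and the fluent Definition builders. How a PersonGroupPersonsImpl instance is obtained, the withUserData setter on UpdateFaceOptionalParameter, the persistedFaceId() getter on PersistedFace, and all identifiers and URLs are assumptions for illustration, not part of the file above.

import java.util.Arrays;
import java.util.UUID;

// Assumed to live alongside the classes above (same package), so no model imports are shown.
public class PersonGroupPersonsUsageSketch {
    // `persons` would normally come from the service client rather than be constructed directly.
    public static void run(PersonGroupPersonsImpl persons) {
        // Placeholder identifiers; real values would come from earlier createPerson/addFace calls.
        UUID personId = UUID.randomUUID();
        UUID persistedFaceId = UUID.randomUUID();

        // Blocking overload with the optional-parameter object; the fluent
        // withUserData setter is assumed to follow the SDK's usual convention.
        persons.updateFace("sample-group", personId, persistedFaceId,
                new UpdateFaceOptionalParameter().withUserData("frontal shot"));

        // Fluent Definition builder returned by the no-argument addPersonFaceFromUrl()
        // overload above; execute() blocks and returns the PersistedFace body.
        PersistedFace face = persons.addPersonFaceFromUrl()
                .withPersonGroupId("sample-group")
                .withPersonId(personId)
                .withUrl("https://example.com/face.jpg")
                .withTargetFace(Arrays.asList(10, 10, 100, 100)) // left, top, width, height
                .execute();
        // Getter name assumed from the AutoRest model convention.
        System.out.println("Persisted face id: " + face.persistedFaceId());
    }
}

Either style ends up on the same WithServiceResponseAsync code path; the builders simply accumulate the parameters before dispatching, which is convenient when most arguments are optional.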
When reading a published journal the reader always hopes to be privy to the inner life of the writer. Most often, though, writing is done to be read, and so there will be a certain amount of shaping, self-censorship, even. In Father Alexander Schmemann’s Journals there is quite a lot of shaping, and it seems clear from the selection and manner of presentation of details that he expected the journals to be read by others who would not be familiar with his life. Indeed, the very first entry of these journals seems to be a ‘preface’ to the whole. "What is there to ‘explain’?" Fr Schmemann asks of his desire to begin these journals, and goes on to respond:

The surprising combination in me of a deep and ever-growing revulsion at endless discussions and debates about religion, at superficial affirmations, pious emotionalism and certainly against pseudo-churchly interests, petty and trifling, and, at the same time, an ever-growing sense of reality … Always the same feeling of time filled with eternity, with full and sacred joy. I have the feeling that church is needed so that this experience of reality would exist. Where the church ceases to be a symbol, a sacrament, it becomes a horrible caricature of itself. (p 1)

In addition to his own personal shaping for the eyes of others, these Journals are edited by Fr Alexander’s widow, Juliana Schmemann. Editing by someone so close to the writer raises the possibility that the text might be edited to protect the author’s memory, to present him at his best. It is also published by the press of St Vladimir’s Seminary, which, as Fr Alexander’s home and workplace for many years, would naturally also be interested in the author’s reputation. And so there are (at least) three layers between the reader and the inner life of the author.

This having been said, the publication of the Journals by Matushka Schmemann and St Vladimir’s is an act of courage. There is no shying away from the darkness that often seems to have been strongly present in Fr Alexander. There is much that, in isolation, could be used against him by his detractors. In one place Fr Alexander is very honest about his prayer life, not ‘traditional’ by any means, and in many others he is critical of monasticism, of bishops. He is most often critical of a certain type who play-acts Orthodoxy, prizing the Orthodox costume and a shallow maximalism above the substance of the life in Christ. There are many instances in the journals of what seems to be depression, sometimes almost despair, in regard to the situation of Christianity, the Orthodox Church in particular, and even more, the Orthodox situation in America. But O, how wondrous, how luminescent, is the joy, the light, the hope that shines against the dark background of these journals. It is this contrast, and it is a multi-toned contrast rather than simply black and white, that is the tonality of these journals.

The source of false religion is the inability to rejoice, or, rather, the refusal of joy, whereas joy is absolutely essential because it is without any doubt the fruit of God’s presence. One cannot know that God exists and not rejoice. Only in relation to joy are the fear of God and humility correct, genuine, fruitful. Outside of joy they become demonic, the deepest distortion of religious experience … Somehow ‘religious’ people often look on joy with suspicion. The first, the main source of everything is ‘my soul rejoices in the Lord …’ The fear of sin does not save from sin.
Joy in the Lord saves … Joy is the foundation of freedom, where we are called to stand. Where, how, when has this tonality of Christianity become distorted, dull — or, rather, where, how, why have Christians become deaf to joy? How, when and why, instead of freeing suffering people, did the Church come to sadistically intimidate and frighten them? (p. 129)

The reader has the sense that joy was not easily come by for Fr Alexander, but was a constant struggle, and when it appears it is always seen against the darker background, the forces that would rob Christianity of the one thing necessary, especially the forces present within the Church itself. This criticism of those forces, including their Orthodox manifestation, is repeated again and again in these pages. There is a wonderful set of entries from Holy Week in 1981:

Monday, April 20. Lazarus Saturday and Palm Sunday services were especially joyful … the Epistle of all Epistles: "Rejoice … and again I say, Rejoice!" Truly the Kingdom of God is among us, within us. But why, except for a momentary joy, does all of it not have more effect? How much anger, mutual torture, offense. How much — without exaggeration — hidden violence.

Tuesday, April 21. What has Christianity lost so that the world, nurtured by Christianity, has recoiled from it and started to pass judgement over the Christian faith? Christianity has lost joy — not natural joy, not joy-optimism, not joy from earthly happiness, but the Divine joy about which Christ told us that "no one will take your joy from you" (John 16:22). Only this joy knows that God’s love to man and to the world is not cruel; knows it because that love is part of the absolute happiness for which we are all created … The world is created by happiness and for happiness and everything in the world prophesies that happiness; everything calls to it, witnesses it by its very fragility. To the fallen world that has lost that happiness, but yearns for it and — in spite of everything — lives by it, Christianity has opened up and given back happiness; has fulfilled it in Christ as joy. And then dismissed it. So that the world began to hate Christianity (the Christian world) and went back to its earthly happiness. But having been poisoned by the incredible promise of an absolute happiness, the world started to build it, to progress toward it, to submit the present to this future happiness … Some say, "How can one rejoice when millions are suffering? One must serve the world." Others say, "How can one rejoice in a world lying in evil?" They do not understand that if for just one minute (that lasts secretly and hidden in the saints) the Church has overcome the world, the victory was won through Joy and Happiness.

Holy Thursday, April 23, 1981. Christianity is beautiful. But precisely because it is wonderful, perfect, full, true, its acceptance is before anything else the acceptance of its beauty, i.e., its fullness, divine perfection; whereas in history, Christians themselves have fragmented Christianity, have started to perceive it and offer it to others "in parts" — quite often in parts not connected to the whole.

Holy Saturday, April 25, 1981. I am writing before leaving for my most beloved of all loved services: the Baptismal, Paschal Liturgy of St Basil the Great, when "Life sleeps and Hades shudders …" I write just to say it again. It is the day of my conversion — not of unbelief to belief, not of "out of the Church" to "Church."
No; an internal conversion of faith, within the Church, to what constitutes the treasure of the heart — in spite of my sins: laziness, indifference; in spite of a continuous, almost conscious falling away from that treasure; in spite of negligence, in the literal sense of the word. I don’t know how, I don’t know why — truly only by God’s mercy — but Holy Saturday remains the center, the light, sign, symbol, and gift of everything. "Christ — the new Pascha …" And to that New Pascha, something in me says with joy and faith: "Amen." (pp. 289-293, passim)

The note of joy is always present, as in this series. Elsewhere he insists that joy is the only possible attitude of a Christian. And in almost every year there is a comment on the words of St Paul in the epistle for Palm Sunday: "Rejoice, and again I say, rejoice …" And what is the source of this unfailing joy? It is the eschatological dimension of Christ’s saving act, the Kingdom of God now present in the Church. It is the presence of the Kingdom, here and now; the ‘last things’ — judgement and coming in glory — present here and now. We stand with a foot in either world, this one and the Kingdom, and it is our duty to keep each foot planted in its own place. We do not escape by the liberal fantasy of a possible utopia, nor by the reactionary otherworldly grasping on to a disembodied, ascetic and romantic view of a church that never was. There are many examples of this presence of the Kingdom in this world in the Journals. In one entry Fr Alexander contends with the failures of his beloved Church, and then bursts into one of the many small epiphanies of his daily life:

My perpetual conclusion: If theology, spirituality, etc. do not return to a genuine Christian eschatology (and I don’t see any signs of one) then we are fated not only to remain a ghetto, but to transform ourselves, the Church and all that is within it, into a spiritual ghetto. The return — and this is my other perpetual conclusion — starts from a genuine understanding of the Eucharist, the mystery of the Church, the mystery of the New Creature, the mystery of the Kingdom of God. These are the Alpha and Omega of Christianity …. What is real? All that I mentioned earlier, or this moment: An empty house flooded with sunshine; trees in full bloom behind the window; far away little white clouds floating in the sky; the peace of my office; the silent presence — friendly, joyful — of the books on my shelf. (p 330)

These small epiphanies are bright stars in deep night. Along with what often seem to be mini-essays on the many serious subjects dear to his heart — the liturgy, Solzhenitsyn, the émigré community, modern culture, Russian literature, and many others — there are the delights in the presence of his family, his wife Juliana especially; the liturgies and services of the Church year, especially the Vesperal Liturgy of Holy Saturday, the double feast of Lazarus and the Palms, the Annunciation, the Akathist to the Theotokos; his early life and teachers in Paris; summers in Labelle and each day’s bringing of the Divine presence in the natural world. There are many other aspects of Fr Alexander’s Journals that are of great beauty, joy and wonder: music, literature, the love of teaching, the appreciation of positive response to his books (especially by those whose faith was strengthened by them), Scripture, the Eucharist. There are many others not so bright as well — the struggle to write, difficulty hearing confessions, disappointment with students.
In this world we can only approach beauty. But when we do, it is an approach to wonder, perfection, fullness. To read these Journals of Fr Alexander’s is to make such an approach, by virtue of seeing the fullness of a human life in ‘this world’, struggling to realize the presence of the Kingdom of God and to keep the connection of Christ’s church to it strong, vibrant, and meaningful.

From Jacob's Well, Diocese of New York and New Jersey, Orthodox Church in America, Spring-Summer 2000
Bucs Fall to Jets, 18-17

EAST RUTHERFORD, NJ - The Bucs fell to the Jets today at MetLife Stadium, 18-17, in a wild finish that saw Tampa Bay take the lead on a 37-yard field goal by Rian Lindell with :38 remaining, only to have it taken away after Jets kicker Nick Folk nailed the game-winner from 48 yards out with just :15 left on the clock.

Of the Bucs’ “Big Three” on offense, only WR Vincent Jackson produced at the level that the team maintained for much of 2012 and expected to see more of on Sunday. Jackson caught seven passes for a team-high 154 yards and repeatedly moved the chains on third down. However, QB Josh Freeman (15/31, 210 yds, 1 TD and 1 INT) and RB Doug Martin (24 carries for 65 yards and 1 TD) failed to match their performance from last year as the Bucs struggled on offense throughout the game. Martin did run for a touchdown in the second quarter to give the Bucs an early 14-5 lead, but he and Freeman had a particularly hard time hooking up in the passing game.

Tampa Bay’s defense made things equally difficult for rookie QB Geno Smith and the Jets’ offense, with LB Mason Foster and LB Lavonte David making key contributions. However, it was David’s costly 15-yard personal foul penalty that set up Folk’s game-winner. David was flagged for pushing Smith after the quarterback ran out of bounds at the Tampa Bay 45-yard line with :15 left in the game. The penalty gave the Jets the ball at the Bucs’ 30-yard line, and Folk connected from 48 yards out on the next play to hand the Bucs their first defeat of the season.

Tampa Bay’s first two offensive possessions didn’t go as scripted, despite a sharp 20-yard slant from Freeman to Jackson on the game’s first third down. Apparent communication problems with the helmet radio led to consecutive delay-of-game penalties, and a false start contributed to pushing the Bucs almost back to the original line of scrimmage before a punt. The Jets’ first drive got across midfield before ending in a punt, and that indirectly led to the game’s first score. The Bucs had to start at their own three after the kick, and three plays later an aborted snap led to a safety for the Jets, with Freeman kicking the ball out of the grasp of a Jets defender to prevent a touchdown and save five points.

Special teams helped set up both of the Buccaneers’ first two touchdowns as the visitors surged to a 14-5 lead. Michael Koenen’s coffin-corner punt near the end of the first quarter pushed the Jets back inside their own 10 and eventually allowed the Bucs’ offense to regain the ball inside New York territory. Freeman then led the Bucs on a five-play, 44-yard drive that ended in his 17-yard strike to WR Mike Williams at the left side of the end zone. In the second quarter, Koenen dropped another punt down at the Jets’ 11, and that was followed in rapid succession by a Geno Smith fumble, forced by blitzing LB Mason Foster, and a five-yard touchdown run by Martin.

Tampa Bay appeared to be on the verge of taking over the game minutes later when Lavonte David intercepted a Smith pass in the right flat. However, Jets S Dawan Landry picked Freeman right back three plays later, and his long return to the Bucs’ 31 set up a short touchdown drive that reduced the visitors’ lead to two points at halftime. The Bucs appeared to have stopped the drive at the edge of field goal territory after Foster’s second sack of the game, but an unnecessary roughness call drawn by S Mark Barron extended the drive and it ended in Kellen Winslow’s seven-yard TD catch.
While the Buccaneers may have disagreed with the unnecessary roughness calls drawn by Barron and, just moments earlier, fellow S Dashon Goldson, flags were a persistent problem for the team throughout the afternoon.

A game-opening touchback set the Bucs up at their own 20 to start the season, and Freeman’s first pass of the season was a quick-hitter to Williams for seven yards. After Martin’s first run of the season left the Bucs in a third-and-one, they elected to throw, and Freeman hit Jackson on a quick slant that gained 20 yards to midfield. After an incompletion, the Bucs had trouble getting the next play in and eventually had to burn a timeout before the next snap. Apparently experiencing problems with his helmet radio, Freeman let the clock run out for another delay penalty, and then, after switching helmets, was sacked by LB Antwan Barnes for a loss of 10. A false start penalty moved the Bucs almost all the way back to the drive’s original line of scrimmage, and the drive ended on a short bubble-screen pass to Martin.

Michael Koenen helped flip field position with a 54-yard punt, and the Jets’ opening drive started at their own 29. Smith’s first professional pass was a good one, a 26-yarder to Kerley to the Bucs’ 45. Tampa Bay’s defense forced him to scramble up the middle for three on the next play, and a direct-snap run by Powell picked up four to make it third-and-three. The Bucs’ defense got the stop it needed by chasing Smith into a hurried incompletion, but the resulting punt trapped the Bucs at their own three. That proved disastrous, because three plays later the ball was snapped before Freeman was ready and it bounced into the end zone. Freeman was able to kick it out of the back of the end zone (drawing an illegal kicking penalty) to avoid a touchdown recovery, but that gave the Jets the game’s first two points and the next possession.

After the free kick, the Jets started up again at their own 32. Smith tested Revis on a quick slant on the second play but Revis knocked it away, setting up third-and-nine. Smith got his third-down pass off while being buried by McCoy, but the completion to TE Kellen Winslow came up short and led to another punt. WR Eric Page found a seam to exploit on his return and got back 28 yards to the Bucs’ 42. Unfortunately, the run game still struggled to find any openings, and Freeman was quickly in a third-and-nine hole, which became third-and-14 after a false start. Freeman stepped up in the pocket to buy time but was eventually sacked before he could get off a pass, somehow managing to avoid a fumble despite Wilkerson’s blind-side hit.

Koenen’s punt was fair caught all the way back at the eight. McCoy stopped Ivory on a run up the middle for just one yard, and instant pressure on second down blew up an attempted screen to Holmes. Kerley nearly made a diving grab down the middle but couldn’t hold on, and the resulting punt plus Page’s seven-yard return put the Bucs into Jets territory. A five-yard run by Martin and a seven-yard out to Jackson moved the chains, and a great come-backer to Jackson got 12 more to the 20-yard line. The penalty let the Bucs kick off from midfield, which led to an easy touchback. Revis broke up another attempted slant in front of him, and a false start before the next play put the Jets into a third-and-11 from their 19. However, on the final play of the third quarter, Holmes threaded his way through the middle on a shallow post and caught a 13-yard pass to move the chains.
After switching sides, the Jets soon faced a third-and-seven, but FB Tommy Bohanon snuck out into the right flat and was wide open for a gain of 21 to the Bucs’ 44. Smith converted the next third down, too, with a 13-yard scramble that had five yards tacked on the end for defensive holding. On first down from the 24, DE Adrian Clayborn buried Ivory for a loss of one and, two plays later, Smith’s errant downfield pass was nearly intercepted by Revis. The Jets settled for Nick Folk’s 43-yard field goal and a 7-5 deficit.

A five-yard run on second-and-10 put the Bucs in a quick third-and-five hole, but Jackson broke away from coverage on a quick slant and dashed up the middle for 39 yards to the Jets’ 36. A holding penalty on the next snap hurt, however, and the offense couldn’t recover, eventually punting from the Jets’ 45. Koenen’s kick bounced sideways and was downed at the 11. Two plays later, the Buccaneers were back in the end zone. First, Foster blitzed around the back of the pocket and caught Smith for a six-yard sack, knocking the ball out of his hand for Spence to recover at the Jets’ five-yard line. Martin’s most decisive run of the day to that point followed, a five-yard burst straight up the middle to give the Bucs a 14-5 lead with 6:46 left in the first half.

On the ensuing drive, Smith answered with quick strikes down the middle for 11 yards to Kerley and nine to Powell. Powell’s second-down run was stymied by Spence, but Ivory was able to pound it over right guard on third-and-one to move the chains. On the next play, Smith threw deep down the middle to TE Jeff Cumberland, who was leveled by S Dashon Goldson. The resulting deflection was nearly intercepted by Mark Barron, but that was moot as Goldson was flagged for unnecessary roughness. Two plays later, however, David picked off a short pass intended for WR Stephen Hill and returned it seven yards to the Bucs’ 41. Three plays later, the Jets’ defense took the ball back, as Freeman’s pass to Jackson was overthrown and hauled in deep in centerfield by Landry, who returned it to the Bucs’ 31.

Foster’s second sack, on a jailbreak blitz up the middle, was huge, pushing the Jets back 18 yards to midfield. After the two-minute warning, the Bucs appeared to get a stop, but another unnecessary roughness call, this one on Barron, led to a new first down at the Bucs’ 19. Clayborn drew a holding call on T Austin Howard, but two plays later Smith bought time with a rollout right and eventually found Winslow at the Bucs’ seven. On the next snap, Smith had a long time in the pocket to survey the field, and Winslow eventually slid into the open for the seven-yard touchdown. The Bucs dodged a bullet before halftime when, on the only offensive play they ran before time ran out, Martin followed a nice eight-yard run up the middle with a fumble that C Jeremy Zuttah alertly covered.

New York’s offense got off to a sharp start after another Koenen touchback, with a 10-yard catch by Hill and a seven-yard run by Ivory. Another short pass to Hill moved the chains, as did a third-and-one direct snap to Powell three plays later, which moved the ball across midfield to the Bucs’ 46. Barron drew a holding call with a blitz off the left edge on the next play, but it was declined after Daniel Te’o-Nesheim dropped Smith for a six-yard sack. Smith’s pass to Winslow on third down came up five yards short and the Jets punted, with Page fair-catching at the six. An incompletion and a short Martin run into the teeth of the defense made it third-and-eight in a hurry.
Williams got the Bucs out of trouble with a nice run after a short catch over the middle that picked up 13 yards and a new set of downs. Williams was injured on the play, but he returned after missing just one snap. Three plays later, the Bucs faced another long third down and, for the third time in the game, converted it with a slant to Jackson that picked up extra yards. This one went for 17, and a slant on the other side to Williams moments later got 15 more to the Jets’ 45. The drive stalled there, however, and the Bucs punted away, with Koenen this time dropping it down at the four.

The Jets faced a third-and-seven moments later after McCoy beat his man off the line and buried Smith on a short incompletion. Unfortunately, great coverage downfield didn’t help on third down when Smith saw an opening and ran straight up the middle for 12 yards. The next third down was only a two-yarder, but the Bucs stopped this one when LB Dekoda Watson’s perfectly timed blitz led to an 11-yard sack. Jets punter Robert Malone then blasted an incredible 84-yard punt that went into the end zone for a touchback. Tampa Bay’s offense did nothing with the new possession and was punting it back within a minute. Koenen helped out once again with a 57-yard blast, but the Jets had the ball back at their own 25 with 1:22 to play in the third. The Bucs’ defense countered with a three-and-out keyed by David’s big stop for a loss of one on Jeremy Kerley on third-and-one.

The Bucs fell into their own third-down hole after two plays, but Freeman moved the sticks with a scrambling 22-yard completion to Jackson on the right sideline. Two pitches to Martin picked up six yards and made it third-and-four at the Bucs’ 43, but Freeman’s third-down attempt down the right sideline to Martin was overthrown. The Jets got a new start at their own 23 and would have been off the field in three plays if not for a defensive holding call on CB Leonard Johnson downfield. Powell fumbled on the next play, but the Jets were fortunate again when it was recovered by Hill. On the next play, Smith rolled right and found WR Clyde Gates on the sideline for a gain of 17 to midfield. A 14-yard catch-and-run by Powell got the ball down to the 20-yard line. The Bucs held their ground, but New York took the lead on a 30-yard field goal with 5:05 to play.

Game Notes:
- The Bucs are 15-22 on opening day and 8-12 in road openers; Schiano had a chance to become the first Bucs coach to win his first two openers.
- The Bucs are 1-9 vs. the Jets, with no wins since ’84 and none in New York.
- WR Mike Williams’ 17-yard touchdown catch in the first quarter was the 24th receiving score of his Buccaneer career. That ties him with former TE Dave Moore, currently an analyst on the Buccaneers Radio Network, for fifth place in team history.
All relevant data are within the manuscript and its Supporting Information files.

Introduction {#sec001}
============

Meloxicam is a non-steroidal anti-inflammatory drug (NSAID) that inhibits cyclooxygenase-2 (COX-2) enzymes, which convert arachidonic acid into pro-inflammatory prostaglandins \[[@pone.0217518.ref001]\]. Meloxicam is approved for use in cattle in the European Union and Canada, and it is an attractive analgesic option as it is effective following a single-dose administration due to its long half-life in calves (subcutaneous: SC = 16.4 h; oral: PO = 27.5 h) \[[@pone.0217518.ref002],[@pone.0217518.ref003]\]. In Canada, meloxicam is available for use in cattle in two presentations: meloxicam PO suspension (1.0 mg/kg), labelled for reducing pain and inflammation associated with band and knife castration, and injectable meloxicam (0.5 mg/kg), labelled as an adjuvant for diarrhea, mastitis, de-budding and abdominal surgery. The Canadian Beef Codes of Practice \[[@pone.0217518.ref004]\] have made the use of pain mitigation a requirement when performing painful husbandry procedures such as castration, spaying and dehorning. Castration is a routine practice which improves cattle management, avoids unwanted reproduction and increases meat quality \[[@pone.0217518.ref005]\]. Injectable meloxicam is not labelled for pain mitigation associated with castration; however, previous studies have reported a reduction in physiological and behavioural indicators of pain in calves receiving SC meloxicam compared to un-medicated 1-week- and 2-month-old castrated calves \[[@pone.0217518.ref006],[@pone.0217518.ref007]\]. Meloxicam tablets have been reported to decrease the inflammatory response in weaned calves after surgical castration \[[@pone.0217518.ref008],[@pone.0217518.ref009]\], but no effects were observed in weaned calves after band castration \[[@pone.0217518.ref010]\]. The presentation of oral meloxicam used in the previous studies differs from the liquid formulation approved for use in cattle in Canada. Therefore, the aim of this study was to compare the pharmacokinetics (PK) of SC and PO meloxicam and to assess the effect of different routes of meloxicam administration on indicators of pain and inflammation in 7--8 month old calves during and after knife castration. We hypothesize that indicators of pain and inflammation will be mitigated after PO and SC administration, but that the effect will be observed at different time points after castration due to differences in PK.

Materials and methods {#sec002}
=====================

This protocol was approved by the Animal Care Committee of Lethbridge Research and Development Centre (ACC number 1718). Animals were cared for in accordance with the Canadian Council on Animal Care guidelines \[[@pone.0217518.ref011]\].

Animal housing and management {#sec003}
-----------------------------

Twenty-three crossbred Angus beef calves of 328 ± 4.4 kg body weight (BW) and 7--8 months of age were used in a 28 day (d) experiment. Upon weaning, calves were vaccinated with Pyramid FP 5 (Pyramid FP 5, Boehringer Ingelheim (Canada) Ltd., Burlington, Ontario, Canada) and TASVAX (TASVAX, Merck Animal Health, Kirkland, Quebec, Canada) and housed in 4 experimental pens (5--6 calves/pen) for a 3 week adaptation period prior to the start of the trial.
Pens (40.2 m × 27.4 m) contained straw bedding, *ad libitum* water provided through a centrally located water system and *ad libitum* feed consisting of a total mixed ration of 80% barley silage, 17% dry-rolled barley and 3% supplement with vitamins and minerals to meet beef cattle nutrition requirements \[[@pone.0217518.ref012]\]. Calves were equally distributed by weight into pens and randomly assigned to treatments. On the day of castration, calves were restrained in a hydraulic squeeze chute (Cattlelac Cattle, Reg Cox Feedmixers Ltd, Lethbridge, Alberta, Canada) where they were sampled and castrated. The experiment consisted of two treatment groups: **PO** (n = 12), meloxicam oral suspension (Solvet, Alberta Veterinary Laboratories, Calgary, Alberta, Canada; 1 mg/kg BW), and **SC** (n = 11), injectable meloxicam (Metacam 20 mg/mL, Boehringer Ingelheim, Burlington, Ontario, Canada; 0.5 mg/kg BW), administered immediately prior to knife castration. The same veterinarian performed the knife castration on all the calves by making a latero-lateral incision on the scrotum with a Newberry castration knife (Syrvet Inc., Waukee, IA), and an emasculator was used to crush and cut the spermatic cords.

Sample collection {#sec004}
-----------------

Sampling time points included 24 and 48 h prior to castration (d -1 and -2), immediately before castration (T0), as well as 30, 60, 90, 120, 150, 240 min and on d 1, 2, 3, 5, 7, 10, 14, 21 and 28 after castration.

### Meloxicam {#sec005}

Meloxicam samples were collected on d -2, T0, 30, 60, 90, 120, 150, 240 min and on d 1, 2, 3, 5 and 7 after castration to determine plasma concentrations of meloxicam for all calves. Samples were collected into 6-mL lithium heparin tubes (BD Vacutainer; Becton Dickinson Co., Franklin Lakes, NJ), centrifuged for 15 min at 1.5 × *g* at 4°C, and the plasma was stored at -80°C \[[@pone.0217518.ref013]\]. Samples were analyzed using high-pressure liquid chromatography (Agilent 1100 Pump, Column Compartment, and Autosampler, Santa Clara, CA, USA) with mass spectrometry detection (LTQ, Thermo Scientific, San Jose, CA, USA) at Iowa State University, College of Veterinary Medicine (Ames, IA). The plasma concentration vs. time data following SC and PO meloxicam administration were analyzed to determine the PK profile using Phoenix WinNonlin 7.0 (Certara, Inc., Princeton, NJ, USA) as described in a previous study \[[@pone.0217518.ref002]\]. A non-compartmental PK approach was applied to the data using a pre-structured model (Model: Plasma 200--202 with uniform weighting) in the software. The slope of the terminal phase (λ~z~) of the log plasma concentration vs. time curve was estimated by means of linear regression, while the half-life of the terminal phase (λ~z-HL~) was calculated using the following equation: λ~z-HL~ = $\frac{0.693}{\lambda_{z}}$. The area under the plasma concentration vs. time curve (AUC) and the area under the first moment of the plasma concentration vs. time curve (AUMC) were calculated by use of the log-linear trapezoidal method \[[@pone.0217518.ref014]\]. The time range from the first measurement (T0) to the last measurement (d 7) of drug concentration was used for the calculation of AUC~0-last~ and AUMC~0-last~. The AUC and AUMC were extrapolated to infinity to determine AUC~0-∞~ and AUMC~0-∞~ to account for the total meloxicam exposure of calves \[[@pone.0217518.ref014]\].
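To make these non-compartmental calculations concrete, here is a minimal sketch (in Java, matching the code earlier in this document) of the two core quantities just described: λ~z~ estimated by linear regression on the terminal log-concentrations (with λ~z-HL~ = 0.693/λ~z~), and AUC~0-last~ by the log-linear trapezoidal rule. The example data and the choice of three terminal points are illustrative assumptions, not values from this study; Phoenix WinNonlin performs the actual analysis reported here.

public class NcaSketch {
    /** Terminal rate constant lambda_z from the last n observations, by least-squares fit of ln(C) vs t. */
    static double lambdaZ(double[] t, double[] c, int n) {
        int start = t.length - n;
        double st = 0, sy = 0, stt = 0, sty = 0;
        for (int i = start; i < t.length; i++) {
            double y = Math.log(c[i]);
            st += t[i]; sy += y; stt += t[i] * t[i]; sty += t[i] * y;
        }
        double slope = (n * sty - st * sy) / (n * stt - st * st);
        return -slope; // lambda_z is the negative of the terminal slope
    }

    /** AUC from the first to the last observation by the log-linear trapezoidal rule. */
    static double aucLast(double[] t, double[] c) {
        double auc = 0;
        for (int i = 1; i < t.length; i++) {
            double dt = t[i] - t[i - 1];
            if (c[i] < c[i - 1] && c[i] > 0) {
                // log rule on declining segments
                auc += (c[i - 1] - c[i]) * dt / Math.log(c[i - 1] / c[i]);
            } else {
                // linear rule on rising or flat segments
                auc += 0.5 * (c[i - 1] + c[i]) * dt;
            }
        }
        return auc; // AUC0-inf would add c[last] / lambda_z on top of this
    }

    public static void main(String[] args) {
        // Illustrative concentration-time data (h, ug/mL); not study values.
        double[] t = {0.5, 1, 2, 4, 8, 24, 48, 72};
        double[] c = {0.4, 0.9, 1.6, 2.0, 1.8, 1.0, 0.45, 0.20};
        double lz = lambdaZ(t, c, 3);
        System.out.printf("lambda_z = %.4f 1/h, t1/2 = %.1f h, AUC0-last = %.1f%n",
                lz, 0.693 / lz, aucLast(t, c));
    }
}

The remaining parameters described next derive from these same quantities (for example, CL/F = Dose/AUC and MRT = AUMC/AUC).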
Apparent volume of distribution during the terminal phase (V~z~/F), total systemic clearance scaled by bioavailability (CL/F) and mean residence time (MRT) of the drug were also determined. Peak plasma concentration (C~max~) and time to achieve peak concentration (T~max~) were determined directly from the observed data.

### Salivary cortisol {#sec006}

Saliva samples were collected on d -1, -2, T0, 30, 60, 90, 120, 150, 240 min and on d 1, 2, 3, 5, 7, 10, 14, 21 and 28 after castration. A cotton swab used to collect saliva from the oral cavity was stored in a plastic tube and frozen at -20°C for further cortisol analysis \[[@pone.0217518.ref015]\]. Salivary cortisol concentrations were quantified using an enzyme immunoassay kit (Salimetrics, State College, PA). Inter-assay and intra-assay CV were 32.1% and 8.8%, respectively.

### Hair cortisol {#sec007}

Hair from the forehead of the calves was clipped on d -2, 14 and 28 after castration. Samples were stored in plastic bags at room temperature and handled and analyzed as described by Moya et al. \[[@pone.0217518.ref016]\]. Cortisol was quantified using an enzyme-immunosorbent assay (Salimetrics, State College, PA). The intra-assay and inter-assay CV were 8.8% and 11.0%, respectively.

### Substance P {#sec008}

Samples were collected from all calves through jugular venipuncture at d -1, -2, T0, 30, 60, 90, 120, 150 and 240 min, and on d 1, 2, 3, 5, 7, 10, 14, 21 and 28 after castration, and were collected and analyzed as previously described by Meléndez et al. \[[@pone.0217518.ref017]\]. Samples were collected into 6-mL tubes containing EDTA (BD Vacutainer; Becton Dickinson Co., Franklin Lakes, NJ), to which benzamidine hydrochloride was added to reduce substance P degradation, centrifuged for 15 min at 1.5 × *g* at 4°C, and the plasma was stored at -80°C. Samples were analyzed at Iowa State University, College of Veterinary Medicine (Ames, IA) with some modifications from the procedure previously described by Van Engen et al. \[[@pone.0217518.ref018]\]. The intra-assay CV was 8.8% and the inter-assay CV was calculated at 11.5%.

### Haptoglobin and serum amyloid-A {#sec009}

Samples were collected from all calves through jugular venipuncture at d -1, T0, 90 and 240 min, and on d 1, 2, 3, 5, 7, 10, 14, 21 and 28 after castration. Blood samples were collected into 10-mL non-additive tubes (BD Vacutainer; Becton Dickinson Co., Franklin Lakes, NJ), left at room temperature for 1 hour before being centrifuged for 15 min at 1.5 × *g* at 4°C, and the serum was decanted and frozen at -80°C for further analysis \[[@pone.0217518.ref013]\]. The inter-assay CV for haptoglobin was 8.2%, while serum amyloid-A (SAA) intra-assay and inter-assay CV were 3.9% and 11.6%, respectively.

### Complete blood cell count {#sec010}

Blood samples were collected through jugular venipuncture at d -2, -1, T0, 30, 60, 90, 120, 150, and 240 min, and on d 1, 2, 3, 5, 7, 10, 14, 21 and 28 after castration. Blood samples were collected into 6-mL EDTA tubes (BD Vacutainer; Becton Dickinson Co., Franklin Lakes, NJ), and red blood cells and white blood cells were measured using a HemaTrue Hematology Analyzer (Heska, Loveland, CO).

### Scrotal temperature {#sec011}

Images of the scrotum and its surrounding area were collected on d -2, -1, T0, 30, 60, 90, 120, 150, 240 min and on d 1, 2, 3, 5, 7, 10, 14, 21 and 28 after castration.
A FLIR i60 infrared camera (FLIR Systems Ltd., Burlington, ON, Canada) was used to capture infrared images of the scrotal area at a distance of 1 m from the scrotum, and FLIR Tools version 5.1 (FLIR Systems Ltd.) was used to delineate the scrotal area and to record the maximum temperature \[[@pone.0217518.ref019]\]. An emissivity coefficient of 0.98 was used to analyze the images.

### Scrotal circumference {#sec012}

The scrotum was evaluated on d -2, 90 and 240 min, and on d 1, 2, 3, 5, 7, 10, 14, 21 and 28 after castration using scrotal tape (Reliabull, Lane Manufacturing, Denver, CO) on the widest part of the scrotum \[[@pone.0217518.ref020]\].

### Rectal temperature {#sec013}

A digital thermometer (M750 Livestock Thermometer, GLA Agricultural Electronics, San Luis Obispo, CA) was used to collect rectal temperature on d -2, -1, 0, 1, 2, 3, 5, 7, 10, 14, 21 and 28 after castration.

### Weight {#sec014}

Calves were weighed in a hydraulic squeeze chute (Cattlelac Cattle, Reg Cox Feedmixers Ltd, Lethbridge, Alberta, Canada) on d -2, -1, T0, 1, 2, 3, 5, 7, 10, 14, 21 and 28 after castration.

### Visual analog scale {#sec015}

Two experienced observers placed a mark along a 10 cm line (far left indicating no pain and far right extreme pain) as an indicator of their perception of the amount of pain calves were experiencing during castration \[[@pone.0217518.ref019]\]. Due to the experimental conditions, observers were not blinded to treatments.

### Head movement {#sec016}

A video camera was placed in front of the head gate during castration to record head movement. An observer blinded to treatment used the middle of the hairline of the muzzle as a reference point to track the distance (cm) of head movement during castration using Kinovea (General Public License) version 2 \[[@pone.0217518.ref013]\].

### Chute movement {#sec017}

The movement of the animals in the chute during castration was quantified using strain gauges and accelerometers as previously described by Melendez et al. \[[@pone.0217518.ref013]\]. Briefly, the right and left head gate were equipped with strain gauges to measure the force cattle exerted on the head gate by pushing or pulling, while the chute was equipped with three 1-axis accelerometers (CXL-GP Series, Aceinna, Andover, MA) measuring lateral, vertical and horizontal movement. Analog signals (V) from the accelerometers and strain gauges were sent to a computer at a rate of 100 samples/s. Data from the accelerometers were summed for each animal to obtain an overall acceleration force, and the data from the left and right head gate were summed by animal to obtain an overall head gate force. Data from d -1 and d -2 were used as a baseline for each calf; these data were collected over a 20 second period after the animal entered the chute and prior to sampling. Variables included the number of head gate and accelerometer peaks between 1 and 2 SD, between 2 and 3 SD, and beyond 3 SD above or below the mean, as well as the total area between the mean ± 1 SD, mean ± 2 SD, and mean ± 3 SD. These variables were divided by the time required to castrate each calf.

### Pain sensitivity {#sec018}

Pain sensitivity was assessed as previously described by Marti et al. \[[@pone.0217518.ref020]\] using a Von Frey anesthesiometer (electronic von Frey anesthesiometer with rigid tip; 0 to 1,000 g; IITC-Life Science Instruments, Woodland Hills, California, USA) on the wound and on the skin adjacent to the wound.
### Pain sensitivity {#sec018}

Pain sensitivity was assessed as previously described by Marti et al. \[[@pone.0217518.ref020]\] using a Von Frey anesthesiometer (electronic von Frey anesthesiometer with rigid tip; 0 to 1000 g; IITC-Life Science Instruments, Woodland Hills, California, USA) on the wound and on the skin adjacent to the wound. Animals were tested on d -2 and -1, at T0, 30, 90 and 240 min, and on d 1, 2, 3, 5, 7, 10, 14, 21 and 28 after castration while standing in the chute with their head restrained. The maximum pressure exerted on the wound before a behavioural reaction (steps, kicks or tail flicks) was recorded.

### Stride length {#sec019}

Video recordings of calves walking through a 1 x 3 m alley were collected on d -2 and -1, immediately after castration, at 30, 60, 90, 120, 150 and 240 min, and on d 1, 2, 3, 5, 7, 10, 14, 21 and 28 after castration. Stride length was measured as described by Currah et al. \[[@pone.0217518.ref021]\]; however, a grid background was not used and the image analysis software differed between studies. Observers blind to treatments captured still frames of the hind legs using GOM Player (GOM Lab, Gretech Corporation, Seoul, South Korea), and ImageJ (National Institutes of Health, Bethesda, MD) was used to measure the stride distance (cm). Data from d 5 and 14 were removed from the analysis due to incomplete data for the majority of animals.

### Pen behavior {#sec020}

An experienced observer blind to the treatments scored behaviour for a 4-hour period, between 5 and 9 hours relative to castration, on d 0 when calves returned to their home pen, and at the same time of day on d 1, 2, 3 and 7 after castration. Focal animal sampling from continuous recordings \[[@pone.0217518.ref022]\] was used to score the frequency of tail flicks, foot stamping, head turning and lesion licking, and the duration of standing, lying, walking and eating. Behaviours were modified from the ethogram described by Molony et al. \[[@pone.0217518.ref023]\]. Behaviours were defined as: a) eating: ingesting hay or straw from the ground or the feeder; b) lying: either lateral (lying with hip and shoulder on the ground with at least 3 limbs extended) or ventral (lying in sternal recumbency with legs folded under the body or one hind or front leg extended); c) walking: walking forward more than 2 steps; d) standing: standing on all four legs; e) foot stamping: hind legs are lifted and forcefully placed on the ground or kicked outwards while standing; f) head turning: head is turned and touches the side of the calf's body when standing, including head turning to groom; g) tail flicking: forceful tail movement beyond the widest part of the rump when standing, with movement to one side counted as one action; h) lesion licking: head turning to lick the lesion caused by castration while standing \[[@pone.0217518.ref017]\]. Intra-rater reliability was 0.98.

### Standing and lying behavior {#sec021}

Accelerometers (Hobo Pendant G, Onset Computer Corporation, Bourne, MA) were placed on the calves using Vet Wrap (Professional Preference, Calgary, Canada) to determine the daily standing and lying percentages and the daily average standing and lying bout durations \[[@pone.0217518.ref024]\]. Accelerometers were wrapped in plastic to protect the device from moisture and in foam to avoid discomfort when placed above the hock \[[@pone.0217518.ref017]\]. Accelerometers were placed on d -1 and changed weekly to avoid inflammation of the area. Information from the days when accelerometers were changed (d 7, 14, 21 and 28) was excluded from the analysis due to incomplete data collection.

### Feeding behavior {#sec022}

The GrowSafe feed bunk monitoring system (GrowSafe Systems, Airdrie, Alberta, Canada) was used to record feeding behaviour. Each calf was fitted with a radio frequency ear tag and each pen was equipped with 5 feeding tubs, which recorded feeding behaviour for each individual calf 24 hours a day over the 28-d period. Feeding duration (min/d), dry matter intake (kg/d), feeding rate (g/min), meal frequency (number/d), meal duration (min/meal) and meal size (kg/meal) were calculated from the feeding behaviour data \[[@pone.0217518.ref015]\]. As in the previous study, a meal criterion of 300 s was selected as it has been previously used in cattle \[[@pone.0217518.ref025], [@pone.0217518.ref026]\]; the sketch below illustrates how this criterion groups feeding events into meals.
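This is an illustrative Python sketch with hypothetical event times and intakes, not the GrowSafe system's implementation: any feeding event separated from the previous one by less than the 300-s criterion is merged into the same meal.

```python
# Illustrative grouping of feeding events into meals with a 300-s criterion
# (hypothetical values; not the GrowSafe software's implementation).
events = [(0, 60, 0.10), (200, 360, 0.25), (900, 1000, 0.15), (5000, 5300, 0.40)]
MEAL_CRITERION_S = 300  # gaps shorter than this merge adjacent events

meals = []
for start, end, kg in events:  # events sorted by start time (s)
    if meals and start - meals[-1]["end"] < MEAL_CRITERION_S:
        meals[-1]["end"] = end   # same meal: extend it
        meals[-1]["kg"] += kg
    else:
        meals.append({"start": start, "end": end, "kg": kg})

meal_frequency = len(meals)                            # meals/d
meal_size = sum(m["kg"] for m in meals) / len(meals)   # kg/meal
print(meal_frequency, round(meal_size, 2))             # -> 3 0.3
```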
Statistical analysis {#sec023}
--------------------

A normal distribution of the residuals was not assumed; therefore, the models were "generalized" (SAS PROC GLIMMIX). For each model, a distribution was selected from the exponential family of distributions based on the model fit statistics, i.e., the Bayesian information criterion. The models were "mixed" due to the inclusion of fixed (treatment, the experimental covariate, and the linear, quadratic and cubic effects of time) and random (pen and animal) factors. Each model included a covariate, whose values were the averages of the measurements taken on d -2 and -1 before castration (or the single value from d -1 or d -2 for variables with only one baseline measurement). Where the quadratic and cubic effects of time were not statistically significant, those polynomial terms were not included in the models.

Results and discussion {#sec024}
======================

The PK of meloxicam following intravenous (IV) or PO administration have been previously reported for cattle, sheep, goats, llamas and horses \[[@pone.0217518.ref003],[@pone.0217518.ref027]--[@pone.0217518.ref032]\]. In the European Union and Canada, meloxicam has been approved for intramuscular (IM) and SC (0.5 mg/kg) use in cattle as an adjunct therapy during the treatment of acute mastitis, diarrhea, respiratory disease and dehorning. In Canada, PO meloxicam (1 mg/kg) has been approved for use in cattle to mitigate pain associated with band and knife castration. The PK data are clinically useful, as the terminal half-life of PO meloxicam at a dose of 1.0 mg/kg suggested that once-a-day administration provides analgesic efficacy in calves \[[@pone.0217518.ref003]\]. The PO route of drug administration is convenient, non-invasive and typically painless, and formulations are generally cheaper. Limitations of PO administration include a prolonged time to onset of analgesia after administration and unpredictable absorption due to varying gastric conditions and first-pass hepatic biotransformation \[[@pone.0217518.ref033]\]. In contrast, SC administration offers the advantages of faster absorption and ease of administration. To our knowledge, there is only one previous study assessing PK following SC administration of meloxicam in cattle \[[@pone.0217518.ref002]\]. Therefore, the goals of this study were to describe the PK characteristics of meloxicam following SC administration and to compare the pharmacokinetics of meloxicam after SC (0.5 mg/kg) and PO (1 mg/kg) administration. These data are important to optimize drug administration relative to the timing of the procedure and to design effective analgesic protocols for use in calves at the time of castration.
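For context on how the parameters reported below are typically derived, the following sketch implements the standard non-compartmental formulas (C~max~, T~max~, λ~z~ from the terminal log-linear slope, AUC by the linear trapezoidal rule, MRT, CL/F and V~z~/F). It is an illustrative Python sketch with made-up concentrations, not the validated NCA software or the data used in this study.

```python
# Standard non-compartmental (NCA) formulas behind the reported PK parameters.
# Illustrative only: the profile below is made up, not study data.
import numpy as np

def trapz(y, x):
    """Linear trapezoidal integration of y over x."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(((y[1:] + y[:-1]) / 2.0 * np.diff(x)).sum())

def nca(t_h, c_ng_ml, dose_mg_per_kg, n_terminal=4):
    t, c = np.asarray(t_h, float), np.asarray(c_ng_ml, float)
    cmax, tmax = float(c.max()), float(t[c.argmax()])
    # lambda_z: slope of the log-linear regression over the terminal points.
    slope = np.polyfit(t[-n_terminal:], np.log(c[-n_terminal:]), 1)[0]
    lz = -slope                               # 1/h
    half_life = np.log(2.0) / lz              # terminal half-life, h
    auc_last = trapz(c, t)                    # AUC0-last, h*ng/mL
    auc_inf = auc_last + c[-1] / lz           # extrapolated to infinity
    aumc_inf = trapz(c * t, t) + c[-1] * t[-1] / lz + c[-1] / lz**2
    mrt = aumc_inf / auc_inf                  # mean residence time, h
    dose_ng_per_kg = dose_mg_per_kg * 1e6
    cl_f = dose_ng_per_kg / auc_inf           # CL/F, mL/h/kg
    vz_f = cl_f / lz                          # Vz/F, mL/kg
    return dict(Cmax=cmax, Tmax=tmax, lambda_z=lz, t_half=half_life,
                AUC_last=auc_last, AUC_inf=auc_inf, MRT=mrt,
                CL_F=cl_f, Vz_F=vz_f)

# Hypothetical profile after a 0.5 mg/kg SC dose, sampled out to 168 h:
t = [0.5, 1, 2, 4, 8, 12, 24, 48, 72, 96, 120, 168]
c = [800, 1400, 2000, 2300, 2150, 1900, 1300, 500, 180, 70, 25, 3]
print(nca(t, c, dose_mg_per_kg=0.5))
```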
The time to reach peak plasma drug concentration (T~max~ = 24.0 h, PO; T~max~ = 3.7 h, SC) differed (*P* ≤ 0.05) between treatments, while no differences (*P* ≥ 0.10) were observed for peak plasma drug concentration (C~max~ = 2.32 μg/mL, PO; C~max~ = 2.37 μg/mL, SC) ([Table 1](#pone.0217518.t001){ref-type="table"}). These findings were expected due to the difference in route of drug administration, which affects drug absorption. Similar T~max~ and C~max~ values were observed in calves receiving SC meloxicam with or without a lidocaine ring block prior to knife castration \[[@pone.0217518.ref002]\]. Similar findings were also reported in goats, where SC meloxicam administration had a significantly shorter T~max~ (3.20 h) compared to PO meloxicam administration (14.3 h) \[[@pone.0217518.ref031]\]. In contrast, the mean C~max~ following SC meloxicam administration in the present study was higher than the value (C~max~ = 1.91 μg/mL) obtained for goats \[[@pone.0217518.ref031]\], while a lower C~max~ was observed following PO administration in comparison to the C~max~ (3.10 μg/mL) previously reported in calves \[[@pone.0217518.ref003]\]. Differences in the age and breed of the animals, in addition to the timing of drug administration relative to the feeding regimen, may explain the discrepancies observed between studies.

###### Mean ± SD PK parameters of meloxicam following PO (1 mg/kg) and SC (0.5 mg/kg) administration in calves (n = 12). {#pone.0217518.t001}

| Item                     | PO                  | SC                  | *P*-value |
|--------------------------|---------------------|---------------------|-----------|
| λ~z~, 1/h\*              | 0.045 ± 0.006       | 0.043 ± 0.007       | 0.39      |
| λ~z~-HL, h\*             | 15.6 ± 2.33         | 16.2 ± 2.48         | 0.39      |
| T~max~, h                | 24.0^a^ ± 0.00      | 3.7^b^ ± 0.72       | \<0.01    |
| C~max~, ng/mL            | 2325 ± 431.4        | 2374 ± 384.0        | 0.90      |
| CL/F, mL/h/kg            | 11.11^a^ ± 3.10     | 7.98^b^ ± 1.436     | \<0.01    |
| AUC~0-24h~, h × ng/mL    | 34195 ± 5493.9      | 39285 ± 6083.1      | 0.20      |
| AUC~0-last~, h × ng/mL   | 94992^a^ ± 20718.7  | 64320^b^ ± 11275.5  | \<0.01    |
| AUC~0-∞~, h × ng/mL      | 95160^a^ ± 20755.3  | 64455^b^ ± 11331.9  | \<0.01    |
| AUC extrapolated, %      | 0.18^b^ ± 0.144     | 0.21^a^ ± 0.156     | \<0.01    |
| AUMC~0-∞~, h^2^ × ng/mL  | 3294678^a^ ± 956551 | 1483639^b^ ± 480681 | \<0.01    |
| MRT~0-∞~, h              | 34.1^a^ ± 3.32      | 22.6^b^ ± 4.26      | \<0.01    |
| V~z~/F, mL/kg            | 244^a^ ± 43.2       | 183.5^b^ ± 21.85    | \<0.01    |

PK parameters were determined using non-compartmental modeling. \* Harmonic means; all other values are geometric means ± SD. ^a-b^ Values within a row with differing superscripts differ, *P* \<0.05.

The area under the curve (AUC~0-∞~ = 95.16 μg × h/mL), V~z~/F (244 mL/kg) and CL/F (11.11 mL/h/kg) were greater (*P* ≤ 0.05) in the calves receiving PO compared with SC meloxicam administration. The AUC is an indicator of total drug exposure and depends on the dose and the rate of elimination.
Calves given PO meloxicam received a higher dose (1 mg/kg) than calves given SC meloxicam (0.5 mg/kg). In general, the oral dose of a drug is higher than the dose of the injectable formulation because of the metabolism that occurs in the gastrointestinal wall and the liver, commonly known as the *first-pass effect*. The higher dose given to the PO calves appears to be the major contributing factor for their greater AUC, as the elimination rate (λ~z~ = 0.043–0.045 1/h) was approximately the same for both treatment groups. The SC calves had a lower (*P* ≤ 0.05) clearance of meloxicam (CL/F = 7.98 mL/h/kg) than the PO calves, which is consistent with the slightly longer elimination half-life of the SC treatment (λ~z~-HL = 16.2 h) compared with PO administration (15.6 h). The λ~z~-HL in calves (16.2 h) was slightly higher than that reported for goats (15.1 h) after SC meloxicam administration at the same dose of 0.5 mg/kg \[[@pone.0217518.ref031]\], while higher values for λ~z~-HL (27.5 h) and AUC (164.4 μg × h/mL) have been reported following PO administration of meloxicam in calves \[[@pone.0217518.ref003]\]. In that trial, PK analysis showed an AUC extrapolation range of 23.0–39.4% in four calves and 4.14–5.85% in two calves; in contrast, the PK analysis in the current study was done with an AUC extrapolation of 0.18%. In addition, there was a difference in the sampling schedule between the two studies: in the present study, blood samples for meloxicam determination were collected for 168 h after drug administration, whereas in the previous study blood samples were collected up to 96 h post administration. Insufficient sampling times in the descending part of the curve may lead to overestimation of the AUC \[[@pone.0217518.ref034]\], which could explain the greater AUC in the previous study compared with the AUC obtained in the current study. The AUC in calves was greater than the AUC reported in sheep (75.09 μg × h/mL) \[[@pone.0217518.ref030]\] and goats (23.24 μg × h/mL) \[[@pone.0217518.ref029]\], indicating that meloxicam is eliminated at a slower rate in calves than in small ruminant species. The limited V~z~/F (244 mL/kg) observed in the present study after PO meloxicam administration is similar to values previously reported for calves (242 mL/kg) \[[@pone.0217518.ref003]\] and sheep (293 mL/kg) \[[@pone.0217518.ref030]\]. A low V~z~/F indicates that the drug remains mainly in the vascular space rather than distributing into the extravascular space. Meloxicam is highly bound to plasma proteins and its molecules are ionized at physiological pH in ruminants; it is therefore mainly found in the vascular space. This finding is in accordance with a previous study reporting limited volumes of distribution in ruminants receiving NSAIDs \[[@pone.0217518.ref035]\]. Although differences were observed between the PO and SC treatments, further PK studies are needed to evaluate whether these differences are biologically relevant. However, the low clearance and the terminal half-life of 16.2 h following SC administration suggest that once-a-day dosing might be effective in maintaining an analgesic effect in calves. Pharmacodynamic studies demonstrating the efficacy of SC meloxicam at this dose are required before SC meloxicam can be indicated for use in calves.
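As a back-of-the-envelope reading of [Table 1](#pone.0217518.t001){ref-type="table"} (our arithmetic on the reported geometric means, not a value the authors report), dose-normalizing the exposures makes the first-pass argument concrete:

$$F_{\text{rel}} = \frac{\mathrm{AUC}_{0\text{--}\infty,\,\mathrm{SC}} / \mathrm{Dose}_{\mathrm{SC}}}{\mathrm{AUC}_{0\text{--}\infty,\,\mathrm{PO}} / \mathrm{Dose}_{\mathrm{PO}}} = \frac{64455 / 0.5}{95160 / 1.0} \approx 1.35$$

That is, per unit dose, SC exposure exceeded PO exposure by roughly a third, consistent with first-pass loss reducing the bioavailability of the oral formulation.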
Substance P is a neuropeptide associated with the modulation of pain, stress and anxiety \[[@pone.0217518.ref036]\]. During tissue injury and inflammation, the pain threshold can be reduced by pro-inflammatory substances such as prostaglandin E~2~, which stimulates the release of substance P from sensory neurons and thereby increases the sensitivity of sensory neurons to physical or chemical stimuli \[[@pone.0217518.ref037]\]. A previous study identified substance P as a potentially useful biomarker of pain when greater substance P concentrations were reported in surgically castrated calves (506.4 ± 38.11 pg/mL) compared to un-castrated (386.4 ± 40.09 pg/mL) 4- to 6-month-old calves, while no differences were observed in plasma cortisol concentrations \[[@pone.0217518.ref038]\]. Contrary to those findings, several studies have reported a lack of differences in substance P concentrations after different methods of castration in calves of different ages \[[@pone.0217518.ref006],[@pone.0217518.ref010],[@pone.0217518.ref017],[@pone.0217518.ref039]\]. In the present study, substance P concentrations were greater (*P* ≤ 0.05) in PO than in SC calves ([Table 2](#pone.0217518.t002){ref-type="table"}). Based on the 95% confidence limits, PO calves are expected to have 1 to 9% higher substance P concentrations than SC calves. Meloxicam has been previously reported to decrease substance P concentrations in the case of acute synovitis in the horse, as well as after dehorning and castration in cattle \[[@pone.0217518.ref007],[@pone.0217518.ref040],[@pone.0217518.ref041]\]. The difference observed between treatments could be attributed to the faster absorption of SC meloxicam, which may inhibit the release of inflammatory substances earlier in the inflammatory cascade and thereby lower overall substance P concentrations. Although statistically significant, the difference observed between treatments was very small (4.24 pg/mL) in comparison to the difference observed in the previous study (120 pg/mL) \[[@pone.0217518.ref038]\]. Caution should be taken when comparing results, as the ages and sampling time points differ between studies. To our knowledge, no other studies have assessed the effect of the route of drug administration on substance P concentrations.

###### Least square means (± SEM) of physiological parameters minutes and days after castration of surgically castrated weaned Angus crossbred calves receiving PO or SC meloxicam^1^. {#pone.0217518.t002}
| Item                      | PO      | SC      | SEM     | *P* (T) | *P* (T × time) |
|---------------------------|---------|---------|---------|---------|----------------|
| Salivary cortisol, nmol/L | 3.0     | 3.1     | 0.07    | 0.64    | \<0.01         |
| Hair cortisol, nmol/L     | 2.1     | 2.3     | 0.10    | 0.44    | 0.79           |
| Substance P, pg/mL        | 83.0^a^ | 78.7^b^ | 0.02    | 0.01    | 0.01           |
| Haptoglobin, g/L          | 0.6     | 0.7     | 0.08    | 0.24    | \<0.01         |
| Serum amyloid-A, μg/mL    | 64.5    | 75.4    | 0.10    | 0.10    | \<0.01         |
| CBC                       |         |         |         |         |                |
|     WBC, 10^9^/L          | 10.7^a^ | 10.1^b^ | 0.01    | \<0.01  | \<0.01         |
|     RBC, 10^12^/L         | 7.2     | 7.3     | 0.002   | 0.06    | \<0.01         |
| Scrotal temperature, °C   | 33.9    | 33.8    | 0.01    | 0.33    | 0.09           |
| Rectal temperature, °C    | 39.4    | 39.4    | 0.001   | 0.66    | 0.01           |
| Scrotal circumference, cm | 24.8^b^ | 26.1^a^ | 0.00004 | 0.01    | \<0.01         |
| Weight, kg                | 326.6^b^| 327.2^a^| 0.001   | \<0.01  | 0.16           |

^1^ Values in the table represent the mean of T0, 30, 60, 90, 120, 150 and 240 min and d 1, 2, 3, 5, 10, 14, 21 and 28 after castration for salivary cortisol, substance P, scrotal temperature and CBC; the mean of d 14 and 28 after castration for hair cortisol; the mean of T0, 90 and 240 min and d 1, 2, 3, 5, 7, 10, 14, 21 and 28 after castration for SAA and haptoglobin; the mean of T0 and d 1, 2, 3, 5, 7, 10, 14 and 28 after castration for rectal temperature and weight; and the mean of 90 and 240 min and d 1, 2, 3, 5, 7, 10, 14, 21 and 28 after castration for scrotal circumference. ^2^ Treatments administered immediately prior to castration: PO, oral meloxicam; SC, subcutaneous meloxicam. ^3^ The values correspond to non-transformed means; however, the SEM and *P*-values correspond to the GLIMMIX analysis using napierian log transformation. ^a-b^ Values with differing superscripts differ, *P* \<0.05.

WBC counts were greater (*P* ≤ 0.05) in PO than in SC calves. Based on the 95% confidence limits, PO calves are expected to have 2 to 9% higher WBC counts than SC calves. Castrated calves have previously been reported to have greater WBC counts than sham-castrated calves \[[@pone.0217518.ref008],[@pone.0217518.ref042],[@pone.0217518.ref043]\], and meloxicam has previously been reported to decrease WBC counts after castration in 1-week-old, 2-month-old and weaned calves \[[@pone.0217518.ref002],[@pone.0217518.ref006]--[@pone.0217518.ref008]\]. Similar to the results observed for substance P, it is likely that SC calves had lower WBC counts due to a faster onset of action. Although differences were observed between treatments, the WBC counts were within the normal range (WBC: 4–12 × 10^3^/μL) \[[@pone.0217518.ref044]\]. Weight was lower (*P* ≤ 0.05) in PO than in SC calves. Based on the 95% confidence limits, PO calves are expected to have 0.08 to 0.25% lower weight than SC calves. Weight was assessed in the present study as an indicator of welfare, as animals in pain generally reduce feed consumption, which could potentially affect their average daily gain (ADG). Previous studies assessing the effect of castration in beef cattle have reported a decrease in ADG after knife and band castration, but performance parameters were not affected by medication \[[@pone.0217518.ref010],[@pone.0217518.ref015],[@pone.0217518.ref019],[@pone.0217518.ref045]\]. These findings are similar to the results observed for substance P and WBC counts.
If an analgesic and anti-inflammatory effect is achieved sooner, it is more likely that calves will be willing to walk to the feed bunk and eat sooner, which could potentially affect weight gain. Scrotal circumference was lower (*P* ≤ 0.05) in PO than in SC calves. In the present study, scrotal circumference was assessed as an indicator of inflammation. Previous studies have reported an increase in scrotal circumference after band \[[@pone.0217518.ref042]\], knife \[[@pone.0217518.ref046]\] and burdizzo \[[@pone.0217518.ref047]\] castration in cattle. A previous study reported that the combination of lidocaine and meloxicam was more effective at reducing scrotal circumference than meloxicam alone \[[@pone.0217518.ref002]\]. The result for scrotal circumference is contrary to the results observed for substance P, WBC and weight. A possible explanation for the reduced scrotal inflammation observed in PO calves is their greater exposure to meloxicam, as PO calves had a greater AUC than SC calves. No differences were observed in behaviour during castration ([Table 3](#pone.0217518.t003){ref-type="table"}), and no differences were observed in behaviour after castration with the exception of lying and standing ([Table 4](#pone.0217518.t004){ref-type="table"}). Lying percentage was greater (*P* ≤ 0.05) in PO than in SC calves, while SC calves had a greater (*P* ≤ 0.05) standing duration than PO calves. Previous studies have reported an increase in standing duration after castration in comparison to before castration \[[@pone.0217518.ref048]\]. Similarly, a greater standing duration has been reported in knife-castrated calves compared with band-castrated and control calves at 2 and 4 months of age \[[@pone.0217518.ref017]\], suggesting that lying behaviour could be associated with comfort. Meloxicam-treated calves had a greater lying duration than non-medicated calves after knife castration and after the combination of knife castration and branding \[[@pone.0217518.ref006]\]. Similar studies assessing the effect of meloxicam after a painful procedure reported a greater lying duration in cattle after a C-section \[[@pone.0217518.ref049]\] and after dehorning \[[@pone.0217518.ref050]\] when compared to a placebo group. The effect of painful procedures and medication on lying behaviour supports the notion of lying as a comfort indicator.

###### Least square means (± SEM) of behavioural parameters during castration of surgically castrated weaned Angus crossbred calves receiving PO or SC meloxicam^1^. {#pone.0217518.t003}
| Item                           | PO    | SC    | SEM   | *P*-value |
|--------------------------------|-------|-------|-------|-----------|
| VAS, cm                        | 3.8   | 3.6   | 0.18  | 0.75      |
| Leg movements, n               | 13.6  | 14.7  | 0.08  | 0.42      |
| Head movement, cm              | 2734  | 2391  | 0.11  | 0.35      |
| *Accelerometers*               |       |       |       |           |
| Peaks between ± 1–2 SD, n      | 174   | 186   | 38.0  | 0.83      |
| Peaks between ± 2–3 SD, n      | 72    | 49    | 24.9  | 0.43      |
| Peaks above and below 3 SD, n  | 57    | 44    | 13.7  | 0.53      |
| TA above and below 1 SD, V × s | 7.0   | 5.3   | 0.20  | 0.08      |
| TA above and below 2 SD, V × s | 4.3   | 2.9   | 0.03  | 0.13      |
| TA above and below 3 SD, V × s | 3.2   | 2.0   | 0.07  | 0.11      |
| *Strain gauges*                |       |       |       |           |
| Peaks between ± 1–2 SD, n      | 138   | 302   | 79.2  | 0.12      |
| Peaks between ± 2–3 SD, n      | 177   | 65    | 24.9  | 0.10      |
| Peaks above and below 3 SD, n  | 413.0 | 370.1 | 117.6 | 0.76      |
| TA above and below 1 SD, V × s | 210.9 | 260.9 | 0.17  | 0.83      |
| TA above and below 2 SD, V × s | 132.2 | 165.9 | 0.47  | 0.24      |
| TA above and below 3 SD, V × s | 89.2  | 122.3 | 0.42  | 0.79      |

^1^ Values in the table represent the means of VAS, leg movement, head movement and chute behaviour assessed at the time of castration. ^2^ Treatments administered immediately prior to castration: PO, oral meloxicam; SC, subcutaneous meloxicam. ^3^ Values in the table correspond to non-transformed means; however, the SEM and *P*-values correspond to the scale of inference (distribution of SAS PROC GLIMMIX) of the analysis using square root + 1 transformed data for VAS, leg movement, head movement and chute behaviour.

###### Least square means (± SEM) of behavioural parameters minutes and days after castration of surgically castrated weaned Angus crossbred calves receiving PO or SC meloxicam^1^. {#pone.0217518.t004}

| Item                         | PO       | SC       | SEM  | *P* (T) | *P* (T × time) |
|------------------------------|----------|----------|------|---------|----------------|
| Von Frey, g                  | 324.2    | 342.0    | 0.00 | 0.65    | \<0.01         |
| Stride length, cm            | 52.9     | 53.2     | 0.02 | 0.64    | 0.02           |
| *Pen behaviour*              |          |          |      |         |                |
| Lying, min                   | 65.2     | 53.1     | 0.76 | 0.29    | 0.02           |
| Standing, min                | 104.4^b^ | 114.6^a^ | 0.00 | 0.05    | 0.05           |
| Walking, min                 | 10.4     | 12.3     | 0.16 | 0.31    | \<0.01         |
| Eating, min                  | 35.4     | 29.6     | 0.21 | 0.72    | 0.20           |
| Tail flick, n                | 1064.9   | 1013.0   | 0.38 | 0.45    | 0.60           |
| Foot stamp, n                | 4.1      | 5.6      | 0.34 | 0.92    | 0.36           |
| Head turning, n              | 5.9      | 10.1     | 0.00 | 0.10    | 0.02           |
| Lesion licking, n            | 1.4      | 2.3      | 0.06 | 0.08    | 0.02           |
| *Standing/lying behaviour*   |          |          |      |         |                |
| Standing duration, min       | 64.1     | 70.2     | 0.06 | 0.06    | 0.00           |
| Lying duration, min          | 67.6     | 69.8     | 0.00 | 0.39    | \<0.01         |
| Standing time, %             | 41.7     | 42.9     | 0.04 | 0.13    | 0.01           |
| Lying time, %                | 55.3^a^  | 52.9^b^  | 0.02 | 0.04    | \<0.01         |
| *Feeding behaviour*          |          |          |      |         |                |
| Dry matter feed intake, kg/d | 5.5      | 5.6      | 0.10 | 0.88    | \<0.01         |
| Feeding time, min/d          | 134      | 134      | 0.00 | 0.97    | \<0.01         |
| Feeding rate, g/min          | 43.0     | 43.7     | 0.02 | 0.45    | 0.13           |
| Meal frequency, meal/d       | 11.9     | 11.8     | 0.02 | 0.68    | 0.88           |
| Meal duration, min/meal      | 11.3     | 11.3     | 0.00 | 0.76    | \<0.01         |
| Meal size, kg/meal           | 0.6      | 0.6      | 0.02 | 0.86    | 0.09           |

^1^ Values in the table represent the mean of immediately after castration, 30, 60, 90, 120, 150 and 240 min and d 1, 2, 3, 7, 10, 21 and 28 after castration for stride length; the mean of d 0, 1, 2, 3 and 7 for pen behaviour; the means of d 0 to 28 after castration, excluding sampling days, for standing and lying behaviour; and the means of d 0 to 28 for feeding behaviour. ^2^ Treatments administered immediately prior to castration: PO, oral meloxicam; SC, subcutaneous meloxicam.
^3^ The values correspond to non-transformed means; however, the SEM and *P*-values correspond to the GLIMMIX analysis. ^a-b^ Least square means within a row with differing superscripts differ (*P* ≤ 0.05).

If the differences observed in the present study are biologically relevant, both PO and SC meloxicam administration reduced indicators of pain and/or inflammation. The greater exposure to meloxicam (AUC) in PO calves can explain their lower standing duration, reduced scrotal inflammation and greater lying percentage. On the other hand, the shorter T~max~ of SC administration could explain the lower substance P concentrations and WBC counts, as a faster onset of action could inhibit the production of pro-inflammatory substances earlier in the inflammatory process and consequently reduce the magnitude of inflammation. In addition, SC calves were likely to reach therapeutic meloxicam concentrations sooner than PO calves; a faster analgesic and anti-inflammatory effect could therefore motivate calves to eat sooner after the procedure, which could explain the difference observed in weight. The differences observed between treatments for substance P (4.24 pg/mL), WBC (0.05 × 10^9^/L), weight (0.54 kg), scrotal circumference (1.24 cm), lying (2.85%) and standing (10.2 min) are relatively small and, although statistically significant, may lack biological relevance. The lack of differences in the remaining parameters could be due to the small sample size, a lack of sensitivity of the parameters assessed, or a true absence of differences between treatments. Previous studies assessing SC meloxicam have reported a reduction in indicators of pain and/or inflammation in medicated compared with un-medicated castrated beef calves \[[@pone.0217518.ref002],[@pone.0217518.ref006],[@pone.0217518.ref007]\]; however, a limitation of the current study is its lack of internal sensitivity due to the absence of a control group that did not receive pain control. The purpose of this study was to assess the PK of PO and SC meloxicam and the effect of the route of drug administration on physiological and behavioural indicators of pain. Although statistical differences were observed in PK, physiological and behavioural parameters, the differences observed may lack biological relevance. Based on these results, few differences were observed in physiological and behavioural indicators of pain during and after castration following PO or SC meloxicam in 7- to 8-month-old beef calves. Further studies are needed to determine whether the differences observed are biologically relevant.

Supporting information {#sec025}
======================

###### (XLSX)

###### (DOCX)

The authors appreciate the invaluable help of the Agriculture and Agri-Food Canada research feedlot staff and beef welfare technicians Randy Wilde and Fiona Brown. We are very thankful for the funding provided by Agriculture and Agri-Food Canada and the Beef Cattle Research Council through the Canadian Beef Cattle Industry Science Cluster. We would also like to thank all the students who helped with data collection and behavioural scoring: Nicholas Wong and Charis Lau. The co-author Sonia Marti was partly supported by the CERCA program from the Generalitat de Catalunya. This is Lethbridge Research Centre contribution \# 38719006.

[^1]: **Competing Interests:** The authors have declared that no competing interests exist.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

import dgl
from dgl.model_zoo.chem.gnn import GATLayer
from dgl.nn.pytorch import NNConv, Set2Set
from dgl.nn.pytorch.conv import GINConv
from dgl.nn.pytorch.glob import AvgPooling, MaxPooling, SumPooling


class SELayer(nn.Module):
    """Squeeze-and-excitation networks"""

    def __init__(self, in_channels, se_channels):
        super(SELayer, self).__init__()
        self.in_channels = in_channels
        self.se_channels = se_channels
        self.encoder_decoder = nn.Sequential(
            nn.Linear(in_channels, se_channels),
            nn.ELU(),
            nn.Linear(se_channels, in_channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Aggregate input representation
        x_global = torch.mean(x, dim=0)
        # Compute reweighting vector s
        s = self.encoder_decoder(x_global)
        return x * s


class ApplyNodeFunc(nn.Module):
    """Update the node feature hv with MLP, BN and ReLU."""

    def __init__(self, mlp, use_selayer):
        super(ApplyNodeFunc, self).__init__()
        self.mlp = mlp
        self.bn = (
            SELayer(self.mlp.output_dim, int(np.sqrt(self.mlp.output_dim)))
            if use_selayer
            else nn.BatchNorm1d(self.mlp.output_dim)
        )

    def forward(self, h):
        h = self.mlp(h)
        h = self.bn(h)
        h = F.relu(h)
        return h


class MLP(nn.Module):
    """MLP with linear output"""

    def __init__(self, num_layers, input_dim, hidden_dim, output_dim, use_selayer):
        """MLP layers construction

        Parameters
        ----------
        num_layers: int
            The number of linear layers
        input_dim: int
            The dimensionality of input features
        hidden_dim: int
            The dimensionality of hidden units at ALL layers
        output_dim: int
            The number of classes for prediction
        """
        super(MLP, self).__init__()
        self.linear_or_not = True  # default is linear model
        self.num_layers = num_layers
        self.output_dim = output_dim

        if num_layers < 1:
            raise ValueError("number of layers should be positive!")
        elif num_layers == 1:
            # Linear model
            self.linear = nn.Linear(input_dim, output_dim)
        else:
            # Multi-layer model
            self.linear_or_not = False
            self.linears = torch.nn.ModuleList()
            self.batch_norms = torch.nn.ModuleList()

            self.linears.append(nn.Linear(input_dim, hidden_dim))
            for layer in range(num_layers - 2):
                self.linears.append(nn.Linear(hidden_dim, hidden_dim))
            self.linears.append(nn.Linear(hidden_dim, output_dim))

            for layer in range(num_layers - 1):
                self.batch_norms.append(
                    SELayer(hidden_dim, int(np.sqrt(hidden_dim)))
                    if use_selayer
                    else nn.BatchNorm1d(hidden_dim)
                )

    def forward(self, x):
        if self.linear_or_not:
            # If linear model
            return self.linear(x)
        else:
            # If MLP
            h = x
            for i in range(self.num_layers - 1):
                h = F.relu(self.batch_norms[i](self.linears[i](h)))
            return self.linears[-1](h)


class UnsupervisedGAT(nn.Module):
    def __init__(
        self, node_input_dim, node_hidden_dim, edge_input_dim, num_layers, num_heads
    ):
        super(UnsupervisedGAT, self).__init__()
        assert node_hidden_dim % num_heads == 0
        self.layers = nn.ModuleList(
            [
                GATLayer(
                    in_feats=node_input_dim if i == 0 else node_hidden_dim,
                    out_feats=node_hidden_dim // num_heads,
                    num_heads=num_heads,
                    feat_drop=0.0,
                    attn_drop=0.0,
                    alpha=0.2,
                    residual=False,
                    agg_mode="flatten",
                    activation=F.leaky_relu if i + 1 < num_layers else None,
                )
                for i in range(num_layers)
            ]
        )

    def forward(self, g, n_feat, e_feat):
        for i, layer in enumerate(self.layers):
            n_feat = layer(g, n_feat)
        return n_feat


class UnsupervisedMPNN(nn.Module):
    """
    MPNN from
    `Neural Message Passing for Quantum Chemistry <https://arxiv.org/abs/1704.01212>`__

    Parameters
    ----------
    node_input_dim : int
        Dimension of input node feature, default to be 15.
    edge_input_dim : int
        Dimension of input edge feature, default to be 15.
    output_dim : int
        Dimension of prediction, default to be 12.
    node_hidden_dim : int
        Dimension of node feature in hidden layers, default to be 64.
    edge_hidden_dim : int
        Dimension of edge feature in hidden layers, default to be 128.
    num_step_message_passing : int
        Number of message passing steps, default to be 6.
    num_step_set2set : int
        Number of set2set steps
    num_layer_set2set : int
        Number of set2set layers
    """

    def __init__(
        self,
        output_dim=32,
        node_input_dim=32,
        node_hidden_dim=32,
        edge_input_dim=32,
        edge_hidden_dim=32,
        num_step_message_passing=6,
        lstm_as_gate=False,
    ):
        super(UnsupervisedMPNN, self).__init__()

        self.num_step_message_passing = num_step_message_passing
        self.lin0 = nn.Linear(node_input_dim, node_hidden_dim)
        edge_network = nn.Sequential(
            nn.Linear(edge_input_dim, edge_hidden_dim),
            nn.ReLU(),
            nn.Linear(edge_hidden_dim, node_hidden_dim * node_hidden_dim),
        )
        self.conv = NNConv(
            in_feats=node_hidden_dim,
            out_feats=node_hidden_dim,
            edge_func=edge_network,
            aggregator_type="sum",
        )
        self.lstm_as_gate = lstm_as_gate
        if lstm_as_gate:
            self.lstm = nn.LSTM(node_hidden_dim, node_hidden_dim)
        else:
            self.gru = nn.GRU(node_hidden_dim, node_hidden_dim)

    def forward(self, g, n_feat, e_feat):
        """Predict molecule labels

        Parameters
        ----------
        g : DGLGraph
            Input DGLGraph for molecule(s)
        n_feat : tensor of dtype float32 and shape (B1, D1)
            Node features. B1 for number of nodes and D1 for the node feature size.
        e_feat : tensor of dtype float32 and shape (B2, D2)
            Edge features. B2 for number of edges and D2 for the edge feature size.

        Returns
        -------
        res : Predicted labels
        """
        out = F.relu(self.lin0(n_feat))  # (B1, H1)
        h = out.unsqueeze(0)  # (1, B1, H1)
        c = torch.zeros_like(h)

        for i in range(self.num_step_message_passing):
            m = F.relu(self.conv(g, out, e_feat))  # (B1, H1)
            if self.lstm_as_gate:
                out, (h, c) = self.lstm(m.unsqueeze(0), (h, c))
            else:
                out, h = self.gru(m.unsqueeze(0), h)
            out = out.squeeze(0)

        return out


class UnsupervisedGIN(nn.Module):
    """GIN model"""

    def __init__(
        self,
        num_layers,
        num_mlp_layers,
        input_dim,
        hidden_dim,
        output_dim,
        final_dropout,
        learn_eps,
        graph_pooling_type,
        neighbor_pooling_type,
        use_selayer,
    ):
        """model parameters setting

        Parameters
        ----------
        num_layers: int
            The number of linear layers in the neural network
        num_mlp_layers: int
            The number of linear layers in mlps
        input_dim: int
            The dimensionality of input features
        hidden_dim: int
            The dimensionality of hidden units at ALL layers
        output_dim: int
            The number of classes for prediction
        final_dropout: float
            dropout ratio on the final linear layer
        learn_eps: boolean
            If True, learn epsilon to distinguish center nodes from neighbors
            If False, aggregate neighbors and center nodes altogether.
        neighbor_pooling_type: str
            how to aggregate neighbors (sum, mean, or max)
        graph_pooling_type: str
            how to aggregate entire nodes in a graph (sum, mean or max)
        """
        super(UnsupervisedGIN, self).__init__()
        self.num_layers = num_layers
        self.learn_eps = learn_eps

        # List of MLPs
        self.ginlayers = torch.nn.ModuleList()
        self.batch_norms = torch.nn.ModuleList()

        for layer in range(self.num_layers - 1):
            if layer == 0:
                mlp = MLP(num_mlp_layers, input_dim, hidden_dim, hidden_dim, use_selayer)
            else:
                mlp = MLP(num_mlp_layers, hidden_dim, hidden_dim, hidden_dim, use_selayer)

            self.ginlayers.append(
                GINConv(
                    ApplyNodeFunc(mlp, use_selayer),
                    neighbor_pooling_type,
                    0,
                    self.learn_eps,
                )
            )
            self.batch_norms.append(
                SELayer(hidden_dim, int(np.sqrt(hidden_dim)))
                if use_selayer
                else nn.BatchNorm1d(hidden_dim)
            )

        # Linear function for graph poolings of output of each layer
        # which maps the output of different layers into a prediction score
        self.linears_prediction = torch.nn.ModuleList()

        for layer in range(num_layers):
            if layer == 0:
                self.linears_prediction.append(nn.Linear(input_dim, output_dim))
            else:
                self.linears_prediction.append(nn.Linear(hidden_dim, output_dim))

        self.drop = nn.Dropout(final_dropout)

        if graph_pooling_type == "sum":
            self.pool = SumPooling()
        elif graph_pooling_type == "mean":
            self.pool = AvgPooling()
        elif graph_pooling_type == "max":
            self.pool = MaxPooling()
        else:
            raise NotImplementedError

    def forward(self, g, h, efeat):
        # list of hidden representation at each layer (including input)
        hidden_rep = [h]
        for i in range(self.num_layers - 1):
            h = self.ginlayers[i](g, h)
            h = self.batch_norms[i](h)
            h = F.relu(h)
            hidden_rep.append(h)

        score_over_layer = 0

        # perform pooling over all nodes in each graph in every layer
        all_outputs = []
        for i, h in list(enumerate(hidden_rep)):
            pooled_h = self.pool(g, h)
            all_outputs.append(pooled_h)
            score_over_layer += self.drop(self.linears_prediction[i](pooled_h))

        return score_over_layer, all_outputs[1:]


class GraphEncoder(nn.Module):
    """
    MPNN from
    `Neural Message Passing for Quantum Chemistry <https://arxiv.org/abs/1704.01212>`__

    Parameters
    ----------
    node_input_dim : int
        Dimension of input node feature, default to be 15.
    edge_input_dim : int
        Dimension of input edge feature, default to be 15.
    output_dim : int
        Dimension of prediction, default to be 12.
    node_hidden_dim : int
        Dimension of node feature in hidden layers, default to be 64.
    edge_hidden_dim : int
        Dimension of edge feature in hidden layers, default to be 128.
    num_step_message_passing : int
        Number of message passing steps, default to be 6.
    num_step_set2set : int
        Number of set2set steps
    num_layer_set2set : int
        Number of set2set layers
    """

    def __init__(
        self,
        positional_embedding_size=32,
        max_node_freq=8,
        max_edge_freq=8,
        max_degree=128,
        freq_embedding_size=32,
        degree_embedding_size=32,
        output_dim=32,
        node_hidden_dim=32,
        edge_hidden_dim=32,
        num_layers=6,
        num_heads=4,
        num_step_set2set=6,
        num_layer_set2set=3,
        norm=False,
        gnn_model="mpnn",
        degree_input=False,
        lstm_as_gate=False,
    ):
        super(GraphEncoder, self).__init__()

        if degree_input:
            node_input_dim = positional_embedding_size + degree_embedding_size + 1
        else:
            node_input_dim = positional_embedding_size + 1
        edge_input_dim = freq_embedding_size + 1

        if gnn_model == "mpnn":
            self.gnn = UnsupervisedMPNN(
                output_dim=output_dim,
                node_input_dim=node_input_dim,
                node_hidden_dim=node_hidden_dim,
                edge_input_dim=edge_input_dim,
                edge_hidden_dim=edge_hidden_dim,
                num_step_message_passing=num_layers,
                lstm_as_gate=lstm_as_gate,
            )
        elif gnn_model == "gat":
            self.gnn = UnsupervisedGAT(
                node_input_dim=node_input_dim,
                node_hidden_dim=node_hidden_dim,
                edge_input_dim=edge_input_dim,
                num_layers=num_layers,
                num_heads=num_heads,
            )
        elif gnn_model == "gin":
            self.gnn = UnsupervisedGIN(
                num_layers=num_layers,
                num_mlp_layers=2,
                input_dim=node_input_dim,
                hidden_dim=node_hidden_dim,
                output_dim=output_dim,
                final_dropout=0.5,
                learn_eps=False,
                graph_pooling_type="sum",
                neighbor_pooling_type="sum",
                use_selayer=False,
            )
        self.gnn_model = gnn_model

        self.max_node_freq = max_node_freq
        self.max_edge_freq = max_edge_freq
        self.max_degree = max_degree
        self.degree_input = degree_input

        if degree_input:
            self.degree_embedding = nn.Embedding(
                num_embeddings=max_degree + 1, embedding_dim=degree_embedding_size
            )

        self.set2set = Set2Set(node_hidden_dim, num_step_set2set, num_layer_set2set)
        self.lin_readout = nn.Sequential(
            nn.Linear(2 * node_hidden_dim, node_hidden_dim),
            nn.ReLU(),
            nn.Linear(node_hidden_dim, output_dim),
        )
        self.norm = norm

    def forward(self, g, return_all_outputs=False):
        """Compute graph-level embeddings.

        Parameters
        ----------
        g : DGLGraph
            Input DGLGraph (possibly batched), carrying "pos_undirected" and
            "seed" node features.
        return_all_outputs : bool
            If True, also return the per-layer pooled outputs (GIN only).

        Returns
        -------
        res : Graph-level embeddings
        """
        if self.degree_input:
            device = g.ndata["seed"].device
            degrees = g.in_degrees()
            if device != torch.device("cpu"):
                degrees = degrees.cuda(device)

            n_feat = torch.cat(
                (
                    g.ndata["pos_undirected"],
                    self.degree_embedding(degrees.clamp(0, self.max_degree)),
                    g.ndata["seed"].unsqueeze(1).float(),
                ),
                dim=-1,
            )
        else:
            n_feat = torch.cat(
                (g.ndata["pos_undirected"], g.ndata["seed"].unsqueeze(1).float()),
                dim=-1,
            )

        e_feat = None
        if self.gnn_model == "gin":
            x, all_outputs = self.gnn(g, n_feat, e_feat)
        else:
            x, all_outputs = self.gnn(g, n_feat, e_feat), None
            x = self.set2set(g, x)
            x = self.lin_readout(x)
        if self.norm:
            x = F.normalize(x, p=2, dim=-1, eps=1e-5)
        if return_all_outputs:
            return x, all_outputs
        else:
            return x
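# ---------------------------------------------------------------------------
# Minimal usage sketch (our illustrative addition, not part of the original
# module). It assumes a DGL release contemporary with the dgl.model_zoo
# import above (~0.4.x), where graphs are built via dgl.DGLGraph(). The
# encoder's forward pass expects each graph to carry a "pos_undirected"
# positional feature and a binary "seed" marker on its nodes.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    emb_size = 32
    encoder = GraphEncoder(
        positional_embedding_size=emb_size,
        degree_embedding_size=emb_size,
        node_hidden_dim=64,
        output_dim=64,
        num_layers=3,
        gnn_model="gin",
        degree_input=True,
    )

    # A 6-node undirected path graph (edges added in both directions).
    g = dgl.DGLGraph()
    g.add_nodes(6)
    src, dst = [0, 1, 2, 3, 4], [1, 2, 3, 4, 5]
    g.add_edges(src + dst, dst + src)

    g.ndata["pos_undirected"] = torch.randn(6, emb_size)
    seed = torch.zeros(6, dtype=torch.long)
    seed[0] = 1  # mark the query node of this sampled subgraph
    g.ndata["seed"] = seed

    # Batch two copies to show graph-level readout: one embedding per graph.
    emb = encoder(dgl.batch([g, g]))
    print(emb.shape)  # expected: torch.Size([2, 64])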
911 F.Supp. 1228 (1995) Victor DAY, Plaintiff, v. BOARD OF REGENTS OF the UNIVERSITY OF NEBRASKA and Pill Soon Song, Defendants. No. 4:CV94-3193. United States District Court, D. Nebraska. October 24, 1995. *1229 *1230 *1231 *1232 James C. Zalewski, Demars, Gordon Law Firm, Lincoln, NE, for Victor Day. David R. Buntain, Cline, Williams Law Firm, Lincoln, NE, John C. Wiltse, University of Nebraska, Lincoln, NE, for Board of Regents, of University of Nebraska, Pill Soon Song. MEMORANDUM AND ORDER PIESTER, United States Magistrate Judge. Pending before the court is the defendants' motion for summary judgment. (Filing 39). For the reasons set forth below, I shall grant the motion with respect to plaintiff's constitutional and age discrimination claims and dismiss his state based contract claim for want of jurisdiction.[1] BACKGROUND Plaintiff, Dr. Victor Day, is employed as a professor in the Chemistry Department at the University of Nebraska-Lincoln and is a resident of Nebraska. (Amended Complaint at ¶ 4; Answer to Amended Complaint at ¶ 1). The defendant Board of Regents of the University of Nebraska (Regents) is the governing body of the University of Nebraska-Lincoln (UNL) and defendant Dr. Pill Soon Song has been the chairperson of the UNL Chemistry Department since 1987. The defendants are also residents of Nebraska. (Amended Complaint at ¶ 4-5; Answer to Amended Complaint at ¶ 1). UNL hired Dr. Day as a chemistry professor in 1972 and he has been a tenured professor since 1979. (Day Depo. 3:10-12; Defendant's Brief at 3). He became a full professor in August 1985. (Defendant's Brief at 3; Plaintiff's Brief at 2). Day is a member of the inorganic chemistry section of the department and specializes in crystallography — the study of crystal structures using diffraction equipment. (Day Depo. 46:8-47:12). Since 1984 Dr. Day has not taught any courses other than freshman chemistry and in the same year his teaching load was raised from nine to twelve credit hours per semester. (Day Depo. 137:4-14). Prior to 1980 Dr. Day conducted his research in Hamilton Hall, a building on UNL's city campus which houses the Chemistry Department. Dr. Day had his own assigned laboratory space where he conducted research and worked with graduate students. In 1980 Dr. Day decided to remove his research activities from campus and now conducts his research in a laboratory in his home. Dr. Day's home is located outside the Lincoln city limits and is five miles from the UNL campus. (Day Depo. 2:19-3:5). Dr. Day continues to have an assigned laboratory space of approximately 600 square feet in Hamilton Hall, however, the equipment has not been functional for six or seven years. (Day Depo. 44:16-45:19; 50:15-16). Day asserts that the laboratory in his home is more complete and has better equipment than his office at UNL. (Day Affidavit at ¶ 8-9). Dr. Day admits that there is probably "no other situation like this in the country," where a faculty member's laboratory is physically located somewhere other than in the chemistry department. (Day Depo. 121:8-13). Dr. Day's laboratory space at his home is operated through a corporation named Crystalytics. The corporation was formed in 1979 by Dr. Day and his wife. Dr. Day owns 49% of Crystalytics while his wife owns the remaining 51% and is the president of the organization. The company analyzes crystal structures on a contract basis for other private companies. (Day Depo. 45:20-50:14). *1233 During his early years as a UNL faculty member, Dr. 
Day worked with interested undergraduate and graduate students before he moved his research to his home. After he moved his laboratory to his home, four students, at most, have researched with him. (Day Depo. 50:1-51:14, 59:17-22; Day Affidavit at ¶ 3). UNL has not assisted Day in obtaining graduate students or assigned him any in the past. (Plaintiff's Index of Evidence Exhibit 1 — Song Depo. 181:3-193:16). Each year a five-person Chemistry Department Executive Committee and the Department Chair, Dr. Song, evaluate the performance of faculty members during the preceding year. The results of the merit evaluation, market conditions, and increases in the cost-of-living are used to determine individual faculty member salaries. (Song Affidavit ¶ 3 and 7). Individual faculty members are evaluated with the use of a grid referred to as a "merit matrix." The matrix system allows the Committee to assign faculty scores in several subcategories in order to measure an individual's performance in research, teaching, and service at UNL. The Department adopted the merit matrix system in the 1970's and it has been used to make salary determinations since that time. (Song Affidavit at ¶ 4-5). Merit evaluation categories are given unequal weight by the Department. From 1987 to 1994 research accounted for 60%, teaching 30%, and service 10% of a faculty member's score. In 1995 research accounted for 55%, teaching 35%, and service 10% of an individual's score. (Day Depo. 35:1-40:11; Song Affidavit at ¶ 6). The research component of the merit score is composed of the following factors: quantity of research publications (15%); quality of publications (15%); and amount of external funding obtained to support the faculty member's research (30%). (Day Depo. 39:10-40:11). Dr. Day understood that he was evaluated under the merit matrix system and "found no problem with those percentages for the department as a whole." (Day Depo. 31:1-36:20). Under this system, the Dean of the College of Arts and Sciences has the discretion to accept or reject any salary recommendations from the Department. Ultimately, the salary recommendations must be approved by the UNL Chancellor, the President of the University, and the UNL Board of Regents. (Song Affidavit at ¶ 8). The Executive Committee and the Department Chair operate under a generally accepted definition of "external funding." The Committee defines external funding as financial support for research projects received from sources outside UNL, such as federal agencies, the National Science Foundation, the National Institute of Health, or the American Cancer Society. (Song Affidavit at ¶ 10; Day Depo. 39:10-40:11). Faculty members generally receive external funding by submitting detailed research proposals to the funding entities. The proposals are subjected to peer review prior to approval. The Committee maintains that peer review serves as an external control which assures that the proposed research is significant and will contribute to the advancement of knowledge in a particular field. (Song Affidavit at ¶ 10). Once approved, externally funded grants are paid directly to UNL to support research described in the proposals. Funds obtained from grant proposals may be used to acquire laboratory equipment and computers, support graduate, undergraduate, and post-doctoral programs, and help fund other staff and faculty members, as well as meet the Department's overhead costs. (Day Depo. 40:12-42:1; Song Affidavit at ¶ 10). Dr. 
Day contends that he has received inadequate yearly salary increases and that his level of compensation is below that of other full professors at UNL. It is undisputed that some young faculty members receive higher salaries than Dr. Day. (Song Affidavit Exhibit 2; Plaintiff's Brief at 34-35). Day alleges that he was not given credit in his merit evaluations for research conducted at the Crystalytics laboratory in his home. However, Dr. Day admits that he has been advised since 1980 that his research at Crystalytics does not meet the definition of "external funding" which is used by the Department in evaluating faculty. (Day Depo. 42:19-44:13). In addition, Dr. Day acknowledges that he began receiving low raises in about 1979 or 1980, around the time when he established his research laboratory out of the *1234 Chemistry Department. (Day Depo. 82:10-83:25). He also acknowledges that the highest paid faculty members are older than he. (Day Depo. 391:14-16). The defendants claim that Dr. Day's relatively low salary is the result of his low level of contribution to the Chemistry Department as reflected by several years of low merit evaluation scores. (Song Affidavit at ¶ 13). In addition, the defendants contend that Dr. Day has never applied for or obtained any grants which meet the Committee's definition of external funding and Day presents no evidence to the contrary. (Song Affidavit at ¶ 13; Day Depo. 42:19-44:13). Dr. Day claims that defendants Song and the Board of Regents violated his: (1) Freedom of Speech; (2) Freedom of Association; (3) Due Process rights; (4) Equal Protection rights; (5) the Age Discrimination Employment Act, 29 U.S.C. § 621 et seq.; and (6) state based contract law. This court has jurisdiction over plaintiff's constitutional claims under 28 U.S.C. § 1983 and the state based contract claim under the court's supplemental jurisdiction, 28 U.S.C. § 1367. DISCUSSION Federal Rule of Civil Procedure 56(c) mandates entry of summary judgment "if the pleadings, depositions, answers to interrogatories, and admissions on file, together with the affidavits, if any, show that there is no genuine issue as to any material fact and that the moving party is entitled to judgment as a matter of law." The purpose of a motion for summary judgment is to determine whether a "genuine issue of material fact" exists. Anderson v. Liberty Lobby, Inc., 477 U.S. 242, 247-48, 106 S.Ct. 2505, 2509-10, 91 L.Ed.2d 202 (1986). A "material fact" is a fact "that might affect the outcome of the suit under the governing law." Id. at 248, 106 S.Ct. at 2510. A "genuine issue" regarding a material fact exists "if the evidence is such that a reasonable jury could return a verdict for a nonmoving party." Id. Summary judgment is properly granted when, viewing the facts and reasonable inferences in the light most favorable to the nonmoving party, it is clear no genuine issue of material fact remains and the case may be decided as a matter of law. Greeno v. Little Blue Valley Sewer Dist., 995 F.2d 861, 863 (8th Cir.1993). 
If the moving party meets the initial burden of establishing the nonexistence of a genuine issue, the burden then shifts to the nonmoving party to produce evidence of the existence of a genuine issue for trial: [T]he plain language of Rule 56(c) mandates the entry of summary judgment, after adequate time for discovery and upon motion, against a party who fails to make a showing sufficient to establish the existence of an element essential to that party's case, and on which that party will bear the burden of proof at trial. In such a situation, there can be "no genuine issue as to any material fact," since a complete failure of proof concerning an essential element of the non-moving party's case necessarily renders all other facts immaterial. The moving party is "entitled to judgment as a matter of law" because the non-moving party has failed to make a sufficient showing on an essential element of [its] case with respect to which [it] has the burden of proof. Celotex Corp. v. Catrett, 477 U.S. 317, 322-23, 106 S.Ct. 2548, 2552, 91 L.Ed.2d 265 (1986). Defendants seek summary judgment on each of the claims raised by plaintiff. Each claim is addressed separately below. (1)(a) Freedom of Speech: Plaintiff's Research Publications Public employees are not, by virtue of becoming public employees, shorn of First Amendment protection. Mt. Healthy City Dist. Board of Educ. v. Doyle, 429 U.S. 274, 283, 97 S.Ct. 568, 574, 50 L.Ed.2d 471 (1977). However, the state, as an employer, has a legitimate interest in regulating the speech of its employees. Pickering v. Board of Educ., 391 U.S. 563, 88 S.Ct. 1731, 20 L.Ed.2d 811 (1968). The Eighth Circuit has outlined a two-step approach for determining whether a public employee's speech is protected by the First Amendment. See e.g. Kincade v. City of Blue Springs, 64 F.3d 389, 395 (8th Cir.1995); Tindle v. Caudell, 56 F.3d 966, 970 (8th Cir.1995); Shands v. City of Kennett, 993 F.2d 1337, 1342 (8th Cir. *1235 1993). The first step is to determine whether "the employee's speech can be `fairly characterized as constituting speech on a matter of public concern.'" Shands, 993 F.2d at 1342 (quoting Connick v. Myers, 461 U.S. 138, 146, 103 S.Ct. 1684, 1690, 75 L.Ed.2d 708 (1983)). If the speech addresses a matter of public concern, the second step requires the court to balance the "interests of the [employee], as a citizen, in commenting upon matters of public concern and the interests of the State, as an employer, in promoting the efficiency of the public services it performs through its employees." Pickering v. Board of Educ., 391 U.S. 563, 88 S.Ct. 1731, 20 L.Ed.2d 811 (1968); Kincade, 64 F.3d at 395. Both inquiries are questions of law for the court to resolve.[2]Kincade, 64 F.3d at 395. Dr. Day argues, however, that the Connick-Pickering public concern analysis does not apply to his claim that the defendants violated his First Amendment rights with respect to his research and scholarly publications. (Plaintiff's Brief at 16-18). He relies on Eberhardt v. O'Malley, 17 F.3d 1023, 1026 (7th Cir.1994), as authority for his contention that the public concern requirement applies only in cases where "the public employee was merely complaining privately about matters personal to himself, such as whether he was being paid enough or given deserved promotions ... or ... he was whistleblowing or otherwise "going public" with matters in which the public might be expected to take interest." Id. 
In Eberhardt, a prosecutor's office discharged an assistant district attorney after he began working on a novel which involved "fictitious prosecutors and other persons in the criminal justice system." Id. at 1024. Eberhardt did not work on the manuscript during office hours and the novel apparently did not focus on internal matters of the actual office in which he worked. Id. at 1025. The Seventh Circuit concluded that the Connick public concern requirement does not apply where "the protected expression has nothing to do with the employee's job or with the public interest in the operation of his office." Id. at 1027. "[T]he purpose of the `public concern' requirement is to distinguish grievances of an entirely personal character from statements of broader interest concerning one's job, rather than to fix the boundaries of the First Amendment." Id. at 1026 (quoting Swank v. Smart, 898 F.2d 1247, 1251 (7th Cir.1990)). While the Seventh Circuit's reasoning in Eberhardt is persuasive, I conclude that Dr. Day's allegations with regard to his research publications do not invoke First Amendment concerns.[3] First, Dr. Day has stated that he *1236 feels free to conduct any research which he wants. (Day Depo. 108:13-18). In addition, he admits that the defendants have not interfered with or prohibited him from publishing articles or conducting any research which he wants. (Day Depo. 106:4-107:9, 108:16-18). Rather, he claims that the defendants "punished him" by refusing to give him credit for research he conducted at his home when salary increases were determined. (Amended Complaint at ¶ 16-23). The First Amendment, however, protects freedom of expression. Barnes v. Glen Theatre, Inc., 501 U.S. 560, 565-572, 111 S.Ct. 2456, 2460-2464, 115 L.Ed.2d 504 (1991); Texas v. Johnson, 491 U.S. 397, 403, 109 S.Ct. 2533, 2538-39, 105 L.Ed.2d 342 (1989); Clark v. Community for Creative Non-Violence, 468 U.S. 288, 293, 104 S.Ct. 3065, 3068-3069, 82 L.Ed.2d 221 (1984); United States v. O'Brien, 391 U.S. 367, 88 S.Ct. 1673, 20 L.Ed.2d 672 (1968). There is no evidence which suggests that the defendants have done anything to interfere with the content of the speech contained in Dr. Day's published articles.[4] If Dr. Day had conducted the same research on the UNL campus and published the same articles and papers using that information, there is no evidence that he would not have been given some credit for that research when salary levels were determined. (Day Depo. 39:7-40:7). Day claims only that he is not able to conduct his research where he wants, not that the defendants have prohibited him from saying anything that he wants. (Day Depo. 106:4-107:9, 108:13-18). As such, I conclude that the denial of credit in salary level determinations for research conducted at plaintiff's home does not invoke the First Amendment.[5] The denial of credit was based on Dr. Day's absence from the chemistry department and not on the contents of the research which he published. See Weinstein v. University of Illinois, 628 F.Supp. 862, 866 (N.D.Ill.1986) (assistant professor not entitled to First Amendment protection where he failed to establish that the "defendant's decision to terminate him was in any way prompted by the content of plaintiff's various projects."). (1)(b) Freedom of Speech: Plaintiff's Statements Dr. Day argues that he has made statements challenging the fairness of UNL chemistry department policies. Specifically, Dr.
Day asserts that he has stated, "it is irresponsible of academicians to train students for careers in chemistry unless there are a sufficient number of jobs to place them after they have completed their studies." (Day Affidavit at ¶ 22). In addition, he alleges that he complained to defendant Song, UNL Chemistry Professor Robert Rieke, UNL Arts and Sciences Dean John Peters, and the UNL Senior Vice-Chancellor for Academic Affairs about his salary and tenure status.[6] (Day Affidavit at ¶ 13 and 24). Finally, Day alleges that he complained to other UNL administrators about listing his UNL and Crystalytics affiliation on his published articles.[7] (Day Affidavit at ¶ 16). *1237 Dr. Day concedes that the Connick-Pickering test applies to his criticisms of UNL. (Plaintiff's Brief at 19). Therefore, the first step is to determine if his speech can be "fairly characterized as constituting speech on a matter of public concern." Connick v. Myers, 461 U.S. 138, 146, 103 S.Ct. 1684, 1690, 75 L.Ed.2d 708 (1983). The Eighth Circuit recently addressed a situation where an assistant architecture professor claimed that Iowa State University (ISU) officials had violated his freedom of speech when he allegedly was denied tenure for being "openly critical of what he considered to be unsound teaching and administrative practices within the department." Mumford v. Godfried, 52 F.3d 756, 758 (8th Cir.1995). Mumford criticized the department for letting the local architectural business community exert undue influence over the curriculum and education at ISU. Mumford had stated that the university's relationship with the business community was "potentially unethical because the business community was motivated by financial pursuits rather than the academic interests of the students and ISU." Id. The Eighth Circuit held that Mumford's speech was a matter of public concern even though he expressed his views only to fellow faculty members at ISU.[8]Id. at 761-62. However, Dr. Day's complaints about his salary, his tenure status, and listing his UNL affiliation on his research publications are unlike the statements made in Mumford. Dr. Day alleges that he was denied status as a full professor in 1984 and challenged UNL Professor Rieke to explain his reasons for voting against plaintiff's promotion. (Day Affidavit at ¶ 11-13). In addition, he alleges that in 1991 he complained to several UNL officials about defendant Song's application of the guidelines used to determine his salary. (Day Affidavit at ¶ 24). Dr. Day also asserts that he complained to UNL officials about a requirement that he list his affiliation with UNL on his research publications. (Day Affidavit at ¶ 16). All of these statements, however, involve complaints about UNL's treatment of plaintiff as an employee. These statements relate to Dr. Day's own salary level, his own tenure status, and his own desire not to include his UNL affiliation on his publications. "Where a public employee speaks out in public or in private on matters that relate solely to the employee's parochial concerns as an employee, no first amendment interests are at stake...." Mumford v. Godfried, 52 F.3d 756, 760 (8th Cir.1995) (quoting Cox v. Dardanelle Public School District, 790 F.2d 668, 672 (8th Cir.1986)). Unlike Mumford, Dr. Day was not criticizing the institution for failing to discharge its duties to the public. At best, his statements were "concerned only with internal policies or practices which are of relevance only to the employees of that institution." Mumford v.
Godfried, 52 F.3d 756, 760 (8th Cir.1995) (quoting Cox v. Dardanelle Public School District, 790 F.2d 668, 672 (8th Cir.1986)). Thus, I conclude that Day's complaints to UNL officials regarding his salary and the listing of his UNL affiliation on his publications are not matters of public concern. See Kurtz v. Vickrey, 855 F.2d 723 (11th Cir.1988) (professor's complaints about his salary level and expressions of his personal contempt made to a departmental dean were not a matter of public concern). Finally, Dr. Day asserts: I have also always felt that it is irresponsible of academicians to train students for careers in chemistry unless there are a sufficient number of jobs to place them after they complete their studies. I have often stated this. (Day Affidavit at ¶ 22). While Dr. Day's statement is a matter of public concern, see Mumford v. Godfried, 52 F.3d 756, 760 (8th Cir.1995); Honore v. Douglas, 833 F.2d 565 (5th Cir.1987), his statement fails to constitute a triable issue of fact for two reasons. First, Day has provided no evidence that this *1238 statement was "a substantial and motivating factor in the [denial of an employment benefit]." Hamer v. Brown, 831 F.2d 1398, 1403 (8th Cir.1987) (citing Mt. Healthy City Dist. Board of Educ. v. Doyle, 429 U.S. 274, 287, 97 S.Ct. 568, 576, 50 L.Ed.2d 471 (1977)). As such, he fails to establish a causal relationship between his protected speech and lower salary increases. Hamer, 831 F.2d at 1403. Second, material submitted in opposition to a summary judgment motion must consist of admissible evidence. Fed.R.Civ.P. 56(e) (affidavits "shall set forth such facts as would be admissible in evidence"); Miller v. Solem, 728 F.2d 1020, 1026 (8th Cir.1984) (same). Dr. Day's statement lacks foundation as to the date and location when such statements were allegedly made, as well as to whom he made the statements and who was present at the time. Thus, Day's statement fails to meet the admissibility requirement of Rule 56(e). As such, plaintiff has failed to produce evidence which supports his contention that a genuine issue of material fact exists with regard to his freedom of speech claims and I conclude that defendants' motion for summary judgment should be granted as a matter of law. (2) Freedom of Association Dr. Day claims that his constitutional right to freedom of association has been infringed by the defendants' denial of credit for research done at Crystalytics in determining the level of his salary increases. (Amended Complaint at ¶ 23; Plaintiff's Brief at 23). While the First Amendment on its face does not specifically mention the right to freedom of association, the United States Supreme Court has declared that the First Amendment encompasses a right to "associate with others in pursuit of a wide variety of political, social, economic, educational, religious, and cultural ends." Roberts v. United States Jaycees, 468 U.S. 609, 622, 104 S.Ct. 3244, 3252, 82 L.Ed.2d 462 (1984). The Supreme Court has afforded constitutional protection to freedom of association in two distinct senses. First, the Court has held that the Constitution protects against unjustified government interference with an individual's choice to enter into and maintain certain intimate or private relationships. Second, the Court has upheld the freedom of individuals to associate for the purpose of engaging in protected speech or religious activities. Board of Dirs. of Rotary Int'l v. Rotary Club, 481 U.S. 537, 544, 107 S.Ct. 1940, 1945, 95 L.Ed.2d 474 (1987).
However, freedom of association "while protecting the rights of citizens to engage in `expressive' or `intimate' association, does not protect every form of association." United States v. Frame, 885 F.2d 1119, 1131 (3rd Cir.1989) (citing City of Dallas v. Stanglin, 490 U.S. 19, 25, 109 S.Ct. 1591, 1595, 104 L.Ed.2d 18 (1989)). Under the first branch, freedom of association protects "certain kinds of highly personal relationships ... from unjustified interference by the State." Roberts v. United States Jaycees, 468 U.S. 609, 618, 104 S.Ct. 3244, 3250, 82 L.Ed.2d 462 (1984). Protected associations "are distinguished by such attributes as relative smallness, a high degree of selectivity in decisions to begin and maintain affiliation, and seclusion from others in critical aspects of the relationship."[9]Id. at 620, 104 S.Ct. at 3250. Freedom of association under the second branch of Supreme Court cases involves "a right to join with others to pursue goals independently protected by the [F]irst [A]mendment." Walker v. City of Kansas City, Mo., 911 F.2d 80, 89 (8th Cir.1990) (quoting L. Tribe, American Constitutional Law 702-03 (1978)). "Assuming an association is found, therefore, any activity that would merit First Amendment protection if engaged in outside the context of the association will suffice to constitute a right of association." Walker, 911 F.2d at 89. *1239 Dr. Day claims that the defendants have violated his right of association to conduct research and publish his findings with his corporation Crystalytics.[10] (Plaintiff's Brief at 23). However, he provides no evidence that the defendants prohibited him from exercising his constitutional rights. Day admits that the defendants did not interfere with or prohibit him from publishing articles or conducting any research which he wanted. (Day Depo. 106:4-107:9, 108:16-18). Although defendant Song warned Dr. Day that he would not be given credit for research conducted off the UNL campus, (Plaintiff's Brief at 7; Day Affidavit at ¶ 17), he was free to continue doing so with the understanding that his salary would be set accordingly. (Day Depo. 106:4-107:9, 108:16-18; Day Affidavit at ¶ 17). Dr. Day wrote UNL officials on several occasions stating that he had and would continue to publish articles without listing his UNL affiliation,[11] (Day Depo. Exhibits 11, 14, and 18), and expressed his satisfaction with that arrangement. (Day Depo. Exhibit 10). While UNL, as an employer, did not credit Day with research conducted at his home laboratory in salary level determinations, (Plaintiff's Index of Evidence — Song Depo. 42:14-47:23), Dr. Day was free to research and publish as much as he wished with Crystalytics on his own time. (Day Depo. 106:4-107:9, 108:16-18, Exhibit 14). Therefore, I conclude that there is no evidence that the defendants' actions violated Day's freedom to associate with Crystalytics during his free hours. Celotex Corp. v. Catrett, 477 U.S. 317, 322-23, 106 S.Ct. 2548, 2552-53, 91 L.Ed.2d 265 (1986). Even assuming Dr. Day had associational rights which were affected by the defendants' conduct during the hours he was to devote to his employment with UNL,[12] the *1240 Eighth Circuit recently held that courts should apply a modified Pickering analysis to cases involving alleged First Amendment rights by public employees beyond freedom of speech. Brown v. Polk County, 37 F.3d 404, 409 (8th Cir.1994).
In that case, the Eighth Circuit noted that "Pickering's rationale — that the government as an employer has a special interest in regulating its employee's behavior to avoid the disruption of public functions — applies to free exercise rights as well as free speech rights." Id. The court noted with approval that other circuits have "applied a Pickering analysis outside the speech context in cases involving expressive association rights...." Id. See e.g. McCabe v. Sharrett, 12 F.3d 1558 (11th Cir.1994); Marshall v. Allen, 984 F.2d 787 (7th Cir.1993); Gressley v. Deutsch, 890 F.Supp. 1474 (D.Wyo.1994); Lowenstein v. Wolff, 1994 WL 411389 (N.D.Ill.1994). There is no reason that the Eighth Circuit's rationale in Brown should not apply to the case at hand. I therefore conclude that the court is required to balance the interests of the employee in the association against "the interests of the State, as an employer, in promoting the efficiency of the public services it performs through its employees." Pickering v. Board of Educ., 391 U.S. 563, 88 S.Ct. 1731, 20 L.Ed.2d 811 (1968). See Lowenstein v. Wolff, 1994 WL 411389 (N.D.Ill.1994) ("an employer may discharge the public employee for that association `if the government's interest in the `effective and efficient fulfillment of its responsibilities to the public' outweighs the employee's interest' in association") (quoting Marshall v. Allen, 984 F.2d 787, 797 (7th Cir.1993)). In this case it is apparent that the defendants have a legitimate interest in requiring Dr. Day to conduct his research and publishing activities at his place of work — the UNL campus.[13] UNL's interest lies in increasing student access to Dr. Day, facilitating supervision of a greater number of students in graduate programs by Dr. Day, increasing UNL's reputation in the academic community for research conducted on its campus, and increasing the sources of external funding to support the Chemistry Department.[14] Dr. Day admits that he has supervised, at most, four graduate students in the fifteen years since establishing his laboratory in his home in 1980. (Day Depo. 59:17-22). He also admits that he retains 600 square feet of unused laboratory space at UNL. (Day Depo. 45:3-19). Moreover, there is no evidence that Day has obtained any external funding grants, such as grants from federal agencies, while working out of his home laboratory. (Day Depo. 40:12-44:13). Dr. Day stated the following justification for moving his laboratory to his home: [T]o be perfectly honest with you, it's much nicer to walk down the hall when you got something to do, do it and go do something else than it is to work in there with people in an environment that is very non-convenient with dust. You don't have any interruptions. And basically as long as I didn't have any graduate students or undergraduates doing research with me, it's a hell of a lot easier to do it at home. So I did, I have. And the University was fully aware of it. (Day Depo. 50:4-14). The First Amendment does not require an employer to yield to an employee's notions of convenience in allowing the employee to engage in his private business pursuits. As the defendants state, "If Dr. Day were to succeed here, he could elect to do his research at a laboratory in Omaha (or anywhere else), come into the Department just long enough to teach his classes, and still expect to be given large raises based *1241 solely on the quality of his off-site research." (Defendants' Brief at 13).
The evidence establishes that, contrary to plaintiff's assertions, he was restricted in salary increases not for what he did off campus, but rather for what he failed to do on campus. The University's interests in this arena are legitimate and its actions reasonable. Day has been and remains free to conduct research for publication during his own time with Crystalytics. (Day Depo. 106:4-107:9, 108:16-18, Exhibit 14). I conclude that "the interests of the State, as an employer, in promoting the efficiency of the public services it performs through its employees" outweigh Dr. Day's interest in his association with Crystalytics during the hours he is fulfilling his obligations to UNL. Pickering v. Board of Educ., 391 U.S. 563, 88 S.Ct. 1731, 20 L.Ed.2d 811 (1968). Viewing all of the submitted evidence in a light most favorable to Dr. Day, I conclude that he has not shown the existence of a material factual dispute regarding defendants' liability on this issue. I therefore shall grant defendants' motion for summary judgment as a matter of law with respect to Dr. Day's First Amendment claims. (3) Due Process Dr. Day claims that the defendants have "intentionally deprived [him] of a property interest in his employment which resulted in significant lost income" to him without due process of law. (Amended Complaint at ¶ 20 and 23). He contends that the defendants have failed to grant him annual raises in amounts which he believes are merited by his academic record. (Amended Complaint at ¶ 8-20). The Eighth Circuit has held that "a government employee is entitled to procedural due process only when he has been deprived of a constitutionally protected property or liberty interest." Winegar v. Des Moines Indep. Com. School Dist., 20 F.3d 895, 899 (8th Cir.1994). A person must have a legitimate claim of entitlement to his or her employment to have a property interest in it. The existence of a property interest must be determined with reference to state law. Typically, this interest arises from contractual or statutory limitations on the employer's ability to terminate an employee.... Id. at 899 (citations omitted). See also Blankenbaker v. McCook Public Power Dist., 940 F.2d 384 (8th Cir.1991) (a property interest normally arises from regulatory or contractual provisions which constrain the employer or confer a benefit on the employee). While Dr. Day claims that he has not received salary increases which he believes his academic record merits, he has offered no evidence of any statute, regulation, rule, or contractual provision which would entitle him to receive any raise, let alone one of a specific amount.[15] In addition, the Nebraska Court of Appeals has stated that a public employee does not have a right to a particular salary increase under Nebraska state law without a contractual or legislative entitlement. Sinn v. City of Seward, 3 Neb.App. 59, 66-67, 523 N.W.2d 39, 45-46 (1994). Day's claim to larger salary raises is highly speculative at best and there is no evidence that he has lost any "pay, benefits, job status, or tenure[]" to which he had an entitlement. *1242 Miller v. Lovell, 14 F.3d 20, 21 (8th Cir.1994). Thus, I conclude that Day has failed to establish that he has been deprived of a constitutionally protected property interest by the defendants which would entitle him to procedural due process protections.
"The Fourteenth Amendment's Due Process Clause does not convert the federal courts into arbitral forums for review of commonplace personnel decisions that public agencies routinely make." Miller v. Lovell, 14 F.3d 20 (8th Cir.1994). I shall therefore grant the defendants' motion for summary judgment on this issue. (4) Equal Protection The final constitutional deprivation alleged by Day is that he has been denied equal protection of law under the Fourteenth Amendment. (Amended Complaint at ¶ 23). "The Equal Protection Clause directs that `all persons similarly circumstanced shall be treated alike.'" Plyler v. Doe, 457 U.S. 202, 216, 102 S.Ct. 2382, 2394, 72 L.Ed.2d 786 (1982) (quoting F.S. Royster Guano Co. v. Virginia, 253 U.S. 412, 415, 40 S.Ct. 560, 562, 64 L.Ed. 989 (1920)). While Dr. Day is exceptionally vague about the nature of the classification which he alleges violated his constitutional rights, he apparently challenges the use of the salary matrix by the defendants in establishing UNL Chemistry Department salaries. (Plaintiff's Brief at 26). However, Dr. Day concedes that "the classification for salary purposes does not proceed along suspect lines." (Plaintiff's Brief at 24). "Equal protection is not a license for courts to judge the wisdom, fairness, or logic of [governmental decisions]." FCC v. Beach Comm., Inc., 508 U.S. 307, 313, 113 S.Ct. 2096, 2101, 124 L.Ed.2d 211 (1993). "Purely economic classifications will be upheld if they are rationally related to a legitimate governmental interest." Massey v. McGrath, 965 F.2d 678, 681 (8th Cir.1992) (citing United States Railroad Ret. Bd. v. Fritz, 449 U.S. 166, 101 S.Ct. 453, 66 L.Ed.2d 368 (1980)). See also Kadrmas v. Dickinson Public Schools, 487 U.S. 450, 108 S.Ct. 2481, 101 L.Ed.2d 399 (1988); Bannum, Inc. v. City of St. Charles, Mo. 2 F.3d 267 (8th Cir.1993). Because the treatment which Day challenges does not involve a suspect classification or impinge upon fundamental rights, a presumption of rationality attaches to the classification at issue. Massey, 965 F.2d 678 (8th Cir.1992) (citing Hodel v. Indiana, 452 U.S. 314, 331, 101 S.Ct. 2376, 2386-87, 69 L.Ed.2d 40 (1981)). Dr. Day argues that he has been penalized in salary level determinations by application of the salary matrix system and asserts that the system should be administered in a different way. (Day Depo. 67:21-68:23, 71:18-73). However, he also objects to the weighting of the percentages for teaching, research, and service, as applied to him, even though, he "found no problem with those percentages for the department as a whole." (Day Depo. 36:19-20). Day argues that "the question of whether or not the classification has a rational basis aside, the application of this salary matrix was such that similarly situated people did not get treated alike, and the result of the application would have no rational basis." (Plaintiff's Brief at 24-25). However, Dr. Day misframes the issue as a factual inquiry to avoid summary judgment on this claim.[16] The evidence is clear that Dr. Day was not treated alike because he did not obtain external *1243 funding grants to support his research. (Song Affidavit at 5; Plaintiff's Brief at 26; Day Depo. 40:12-42:1, 84:20-85:11). As Dr. Day admits, he was "penalized not because of a failure to do research, but because his research is not supported by an external grant." (Plaintiff's Brief at 26). 
It was neither arbitrary nor irrational for the defendants to deny credit to Day for research which was not supported by external funding in making salary determinations under the merit matrix system.[17] External funding is paid directly to UNL and is used to support graduate, post-doctoral, and undergraduate students and staff, as well as salaries for faculty members. (Day Depo. 40:12-42:1; Song Affidavit at ¶ 10). Research funds are also used to acquire equipment, computers, and other research capital for the Department and for the University. (Song Affidavit at 4). It was neither arbitrary nor irrational as plaintiff suggests for the defendants to determine that research conducted at Day's home laboratory through Crystalytics did not meet this requirement, (Day Depo. 42:19-44:13), because UNL did not receive money to support its programs. Although Dr. Day contends that external grants are often "break-even propositions[] or money losers for UNL," (Plaintiff's Brief at 9), the defendants' classification decision must be upheld if there is "any reasonably conceivable state of facts that could provide a rational basis for the classification." FCC v. Beach Comm., Inc., 508 U.S. 307, 313, 113 S.Ct. 2096, 2101, 124 L.Ed.2d 211 (1993). As such, I conclude that the defendants are entitled to summary judgment on this issue. (5) Age Discrimination Dr. Day alleges that the defendants have discriminated against him on the basis of his age by giving younger faculty members more favorable pay increases than those given to him. (Amended Complaint at ¶ 26-27). The Age Discrimination in Employment Act of 1967 (ADEA), 29 U.S.C. §§ 621-634, forbids employment discrimination against workers forty years of age or older. Section 623(a)(1) of the act provides that it is unlawful to "discriminate against any individual with respect to his compensation, terms, conditions, or privileges of employment, because of such individual's age...." 29 U.S.C. § 623(a)(1). The allocation of the burden of proof in ADEA cases "is the same as in cases arising under Title VII of the Civil Rights Act of 1964, 42 U.S.C. §§ 2000e-17 (1988)[.]" Radabaugh v. Zip Feed Mills, Inc., 997 F.2d 444 (8th Cir.1993). Dr. Day argues that he is entitled to relief under disparate treatment and disparate impact theories. (a) Disparate Treatment The issue under a disparate treatment theory is whether "[t]he employer ... treat[ed] some people less favorably than others because of their race, color, religion, sex, or [other protected status]." International Brotherhood of Teamsters v. United States, 431 U.S. 324, 335 n. 15, 97 S.Ct. 1843, 1854 n. 15, 52 L.Ed.2d 396 (1977). "[L]iability depends on whether the protected trait (under the ADEA, age) actually motivated the employer's decision." Hazen Paper Co. v. Biggins, 507 U.S. 604, ___, 113 S.Ct. 1701, 1706, 123 L.Ed.2d 338 (1993). A plaintiff may prove discrimination under a disparate treatment theory with direct or indirect evidence. Beshears v. Asbill, 930 F.2d 1348, 1353 (8th Cir.1991); Blake v. J.C. Penney Co., 894 F.2d 274, 278 (8th Cir.1990). If an employee produces direct evidence that his age "played a motivating part in [the] employment decision," then the defendant may "avoid a finding of liability only by proving by a preponderance of the evidence that it *1244 would have made the same decision even if it had not taken the [illegitimate criterion] into account." Price Waterhouse v. Hopkins, 490 U.S. 228, 258, 109 S.Ct. 1775, 1795, 104 L.Ed.2d 268 (1989). In this case Dr. 
Day concedes that he has no direct evidence of discrimination, such as incriminatory statements made by the defendants, which would support his claim. (Day Depo. 394:18-395:3; Plaintiff's Brief at 36). Therefore, the court must apply the analytical framework of shifting burdens developed in McDonnell Douglas Corp. v. Green, 411 U.S. 792, 93 S.Ct. 1817, 36 L.Ed.2d 668 (1973) and its progeny. Gaworski v. ITT Comm. Fin. Corp., 17 F.3d 1104 (8th Cir.1994). Under this framework, the plaintiff has the burden of establishing a prima facie case of discrimination. Once established, the prima facie case raises a legal presumption of discrimination in the plaintiff's favor, requiring the defendant to produce legitimate, non-discriminatory reasons for its actions. If such reasons are put forth, the plaintiff, who at all times retains the burden of proving discrimination, may attempt to demonstrate that the proffered reasons are pretextual. This framework is designed as a "sensible, orderly way to evaluate the evidence in light of common experience as it bears on the critical question of discrimination." Id. at 1108 (quoting Furnco Constr. Corp. v. Waters, 438 U.S. 567, 577, 98 S.Ct. 2943, 2949, 57 L.Ed.2d 957 (1978) (citations omitted)). If the defendants produce a non-pretextual reason for the conduct "the factual inquiry proceeds to a new level of specificity." Texas Dept. of Community Affairs v. Burdine, 450 U.S. 248, 255, 101 S.Ct. 1089, 1095, 67 L.Ed.2d 207 (1981). The Eighth Circuit has held that in order to "survive summary judgment at the third stage of the McDonnell Douglas analysis, a plaintiff must demonstrate the existence of evidence of some additional facts that would allow a jury to find that the defendant[s'] proffered reason is pretext and that the real reason for its action was intentional discrimination." Krenik v. County of Le Sueur, 47 F.3d 953, 958 (8th Cir.1995). See also Lidge-Myrtil v. Deere & Co., 49 F.3d 1308 (8th Cir.1995). "[T]here must be some additional evidence beyond the elements of the prima facie case to support a finding of pretext." Krenik, 47 F.3d at 959. The large majority of age discrimination and Title VII cases involve employers who have discharged or failed to hire plaintiffs. The Eighth Circuit has followed the framework of McDonnell Douglas in analyzing those cases. The prima facie case under a typical McDonnell Douglas scenario requires the plaintiff to prove (1) that he was within the protected class, (2) that he was performing his job at a level that met his employer's legitimate expectations, (3) that he was discharged, and (4) his employer attempted to replace him. See e.g. Radabaugh v. Zip Feed Mills, Inc., 997 F.2d 444 (8th Cir.1993). I also note that the parties have proposed similar requirements for a prima facie case.[18] Therefore, I conclude that Day must demonstrate that he (1) is a member of the protected class, (2) has performed his job *1245 at a level that met his employer's legitimate expectations, and (3) has not received a salary comparable to that paid to younger tenured faculty members of the UNL Chemistry Department. See Marshall v. Pyramid Life Ins. Co., 52 F.E.P. 1398, 1400, 1990 WL 58714 (D.Kan.1990). Upon application of this standard, I conclude that Day has not provided sufficient evidence to prove a prima facie case. There is no dispute that Day is over 40 years old and a member of the protected class. (Defendant's Brief at 37). 
In addition, there is sufficient evidence to create a genuine issue as to whether he has received a salary comparable to that paid to younger faculty members. (Song Affidavit, Exhibit 2). However, Day has failed to produce sufficient evidence to demonstrate that he has met his employer's legitimate expectations. Among other things, UNL expected Day to mentor graduate students, teach classes, and apply for and receive external funding to support his research and that of the department. (Song Affidavit at ¶ 13). Even assuming that Day has presented a genuine issue as to whether his classroom teaching and mentoring of graduate students met those expectations, (Day Depo. 25:19-27:6, 50:10-53:25), Day admits that since 1980 he has been told that he was expected to apply for and receive external funding,[19] that this criterion was used in determining merit salary increases, and that research conducted at his home laboratory was not considered to be satisfactory in meeting this criterion. (Day Depo. 42:19-44:13). Dr. Day offers no evidence that he sought or obtained external funding for his research during that time which met his employer's expectations. To the contrary, the evidence that he failed to do so is uncontradicted. (Day Depo. 42:19-44:13; Song Affidavit at ¶ 13). An employer may not establish unrealistic expectations for an employee. O'Bryan v. KTIV Television, 868 F.Supp. 1146 (N.D.Iowa 1994). See also Meiri v. Dacon, 759 F.2d 989, 995 (2nd Cir.1985) (an employee may show that the employer's demands were illegitimate or arbitrary). Dr. Day argues that a factual dispute exists as to whether the defendants' expectations were reasonable by asserting that he has published more papers, done more research, and taught for more years than many younger faculty members. (Plaintiff's Brief at 40). While these facts may very well be true, they do not create an inference that the defendants' expectation that Dr. Day seek and obtain external funding for his research is unreasonable. Dr. Day offers no other evidence which would support an inference that it was unreasonable to expect him to meet that expectation.[20] "Although an employer *1246 may not make unreasonable expectations, and must make the employee aware of just what his expectations are, beyond that the court will not inquire into the defendant's method of conducting its business. If [plaintiff] was not doing what his employer wanted him to do, he was not doing his job." Kephart v. Institute of Gas Tech., 630 F.2d 1217, 1223 (7th Cir.1980). Dr. Day has failed to offer evidence sufficient to prove that he has met the legitimate expectations of his employer or that his employer's expectations were arbitrary or unreasonable. "It is the non-moving party's burden to demonstrate that there is evidence to support each essential element of his claim." Leidig v. Honeywell, Inc., 850 F.Supp. 796, 801 (D.Minn. 1994) (citing Celotex Corp v. Catrett, 477 U.S. 317, 323-24, 106 S.Ct. 2548, 2553, 91 L.Ed.2d 265 (1986)). Day has failed to establish a prima facie case of age discrimination. Even assuming for purposes of argument that Dr. Day had presented a prima facie case, the defendants have provided legitimate, non-discriminatory reasons for their actions. Specifically, defendants assert that Dr. Day receives a lower salary in comparison to other younger faculty members, because the evaluations of his contribution to the department have been consistently lower than those of the younger faculty members under the merit matrix system.
(Song Affidavit at ¶ 13; Defendant's Brief at 39). Therefore, in order to survive summary judgment Day is also required to "demonstrate the existence of evidence of some additional facts that would allow a jury to find that the defendant[s'] proffered reason is pretext and that the real reason for its action was intentional discrimination." Krenik v. County of Le Sueur, 47 F.3d 953, 958 (8th Cir.1995). See also Lidge-Myrtil v. Deere & Co., 49 F.3d 1308 (8th Cir.1995). Dr. Day must "produce `some additional evidence beyond the elements of the prima facie case' that would allow a rational jury to reject [the employer's] proffered reasons as a mere pretext for discrimination." Lidge-Myrtil v. Deere & Co., 49 F.3d 1308, 1310-11 (8th Cir.1995) (quoting Krenik, 47 F.3d at 959). Dr. Day concedes that he has no direct evidence of age discrimination. (Day Depo. 394:18-395:3). Instead, he argues that the defendants' emphasis on research and external funding leads them to favor some faculty members over others. However, Day concedes that many faculty members over 40 years of age do research and are not victims of discrimination. (Day Depo. 384:9-19). While it is undisputed that some younger faculty members receive higher salaries than plaintiff, this fact bears no inference of age discrimination because most of Dr. Day's older colleagues also receive higher salaries than him. During the last eight years, Day has received annual raises resulting in a total percentage increase in salary of 38.8%, the lowest increase among the eleven tenured faculty members who have been in the Department for the entire period. These other faculty members have received annual increases ranging from 45.5% to 100%. All of these faculty members are older than Dr. Day. (Song Affidavit ¶ 13 & Ex. 2). The only evidence which potentially provides "some additional evidence" of pretext is a 1988 memorandum to all faculty of the UNL College of Arts and Sciences, concerning factors used in determining salary increases for the 1988-89 academic year. *1247 (Plaintiff's Index of Evidence, Ex. 8). One factor is stated in the memorandum as follows: Inversions. Untenured faculty and more experienced faculty fulfilling departmental performance expectations sometimes earn less than the beginning salary in their departments for fall 1988. The salaries of untenured faculty will be based on beginning salaries, not on the performance record since it is still early to assess their performance. (Plaintiff's Index of Evidence, Ex. 8). There is nothing in its contents which indicates that younger faculty members were to be given salary increases merely because of their age.[21] The stated purpose of the "inversion" system was to "establish marketplace equity at the beginning level" of employment, so that new employees were not paid more than other recently hired faculty members. (Plaintiff's Index of Evidence, Ex. 8). Dr. Day has been employed by UNL since 1972 and provides no evidence that he was adversely impacted by such a policy.[22] A cause of action for a diminution in Day's salary during the 1988-1989 academic year is well outside the applicable statute of limitations, and he provides no evidence that this program continued into later years.[23] Moreover, "liability depends on whether the protected trait (under the ADEA, age) actually motivated the employer's decision." Hazen Paper Co. v. Biggins, 507 U.S. 604, 610, 113 S.Ct. 1701, 1706, 123 L.Ed.2d 338 (1993).
The memorandum describes a facially neutral policy of the College of Arts and Sciences and does not provide an inference that age was a motivating factor in the merit based salary level determinations made by members of the Chemistry Department regarding the plaintiff such as would allow a jury to infer that the defendants' legitimate, nondiscriminatory reasons for giving Day low merit increases were pretextual.[24] It is the plaintiff's burden to present evidence which would allow a reasonable jury to return a verdict for him.[25]Anderson v. Liberty Lobby, Inc., 477 U.S. 242, 252, 106 S.Ct. 2505, 2512, 91 L.Ed.2d 202 (1986). Although courts have cautioned that summary judgment should be used sparingly in cases involving issues of motive or intent, this "cannot and should not be construed to exempt employment discrimination cases involving motive and intent from summary judgment procedures." Krenik, 47 F.3d 953, 959 (8th Cir.1995). As Dr. Day has failed to present evidence sufficient to establish a prima facie case of discrimination and has failed to present "some additional evidence" sufficient to discredit the defendants' legitimate nondiscriminatory reasons for their conduct, I conclude that summary judgment for the defendants is appropriate on this claim. (b) Disparate Impact Disparate impact claims "involve employment practices that are facially neutral in their treatment of different groups but that in fact fall more harshly on one group than another and cannot be justified by business necessity. Proof of discriminatory motive ... is not required under a disparate-impact theory." International Brotherhood of Teamsters v. United States, 431 U.S. 324, 97 S.Ct. 1843, 52 L.Ed.2d 396 (1977). The underlying premise of the theory is that "some employment practices, adopted without a deliberately discriminatory motive, may in operation be functionally equivalent to intentional *1248 discrimination." Watson v. Fort Worth Bank & Trust, 487 U.S. 977, 987, 108 S.Ct. 2777, 2785, 101 L.Ed.2d 827 (1988). "To establish a prima facie case of disparate impact discrimination, a plaintiff must demonstrate that a specific employment practice or policy has a significant discriminatory impact on a protected group." Leidig v. Honeywell, Inc., 850 F.Supp. 796, 802 (D.Minn. 1994) (citing Wards Cove Packing Co. v. Atonio, 490 U.S. 642, 656, 109 S.Ct. 2115, 2124, 104 L.Ed.2d 733 (1989)). This showing is ordinarily made by use of statistical evidence. Wards Cove, 490 U.S. at 650, 109 S.Ct. at 2121.[26] In this case Day argues that elements of UNL's salary administration "appear to impact on him based upon his age...." (Plaintiff's Brief at 38). He suggests that the salary matrix and "salary inversion factors" demonstrate that he has been impacted by UNL's policies. Day admits, however, that he can provide no statistical analysis which establishes an adverse impact on a department wide basis. (Plaintiff's Brief at 39). In addition, he provides no admissible evidence to support his assertions that UNL policies have affected anyone else in the department.[27] As such, Dr. Day has failed to establish a prima facie case of disparate impact discrimination, and defendants are entitled to summary judgment on this claim. See Leidig v. Honeywell, Inc., 850 F.Supp. 796, 802 (D.Minn.1994) (summary judgment is appropriate where plaintiff fails to present sufficient evidence to establish a prima facie case of the disparate impact of an employment policy or practice). (6) State Law Contract Claim Finally, Dr.
Day claims that the defendants have violated his employment contract under Nebraska state law. (Amended Complaint at ¶ 2). However, the "threshold requirement in every federal case is jurisdiction." Barclay Square Prop. v. Midwest Fed. Sav. & Loan Ass'n, 893 F.2d 968, 969 (8th Cir.1990) (quoting Sanders v. Clemco Indus., 823 F.2d 214, 216 (8th Cir.1987)). Federal courts are courts of limited jurisdiction, Owen Equip. and Erection Co. v. Kroger, 437 U.S. 365, 374, 98 S.Ct. 2396, 2403, 57 L.Ed.2d 274 (1978), and the jurisdiction of lower federal courts is created entirely by statute. See Continental Cablevision v. United States Postal Service, 945 F.2d 1434, 1435 (8th Cir.1991). Congress has provided that a civil plaintiff may bring suit in federal court only if his or her claim "arises under" federal law[28] or diversity jurisdiction exists. See 28 U.S.C. §§ 1331, 1332. In the present case all parties are residents or agents of the state of Nebraska. (Amended Complaint at ¶ 4-6; Answer at ¶ 1). Therefore, diversity jurisdiction does not exist in this court. Owen Equipment & Erection Co. v. Kroger, *1249 437 U.S. 365, 98 S.Ct. 2396, 57 L.Ed.2d 274 (1978). Additionally, I have concluded that summary judgment is appropriate with respect to each claim raised by Dr. Day which arises under the United States Constitution or federal statute. I shall therefore decline to exercise supplemental jurisdiction over Day's state law contract claim, 28 U.S.C. § 1367(c)(3), and I shall dismiss it without prejudice so that he may refile in an appropriate forum. IT THEREFORE HEREBY IS ORDERED: 1. Defendants' motion for summary judgment (filing 39) is granted with respect to plaintiff's constitutional and age discrimination claims. 2. Plaintiff's state law claim is dismissed without prejudice. NOTES [1] The parties consented to have me preside at the trial and enter judgment pursuant to 28 U.S.C. § 636(c). (See filing 12.) [2] The Eighth Circuit provided in Shands: Any underlying factual disputes concerning whether the plaintiff's speech is protected, however, should be submitted to the jury through special interrogatories or special verdict forms.... The trial court should then combine the jury's factual findings with its legal conclusions in determining whether the plaintiff's speech is protected. If any speech is found to be protected under the above analysis, the plaintiff must show that the protected speech was a substantial, or motivating, factor in the defendant's decision to discharge him. If the plaintiff meets this burden, the burden then shifts to the defendant to show by a preponderance of the evidence that the plaintiff would have been discharged regardless of the protected speech activity. These two causation questions are questions of fact for the jury. 993 F.2d at 1342-43 (citations omitted). [3] Plaintiff also argues that he "has been penalized for his research and publication done in his home and outside the university...." (Plaintiff's Brief at 22). Therefore, he asserts that he cannot be punished for speech which was "done at home and not at work...." (Plaintiff's Brief at 22). To support his argument plaintiff cites Flanagan v. Munger, 890 F.2d 1557 (10th Cir. 1989). In that case three police officers who owned a video rental store which contained legal "adult" films were reprimanded for their conduct. The Tenth Circuit determined that Flanagan involved "`speech' which [was] off the job and unrelated to any internal functioning of the department." Id. at 1562.
Here, however, plaintiff's employment is connected to the speech at issue — the research which he has published. (Song Affidavit at ¶ 19; Plaintiff's Brief at 9-10). Plaintiff's publication of research articles is highly related to his employment as an academic scholar at UNL. See Tindle v. Caudell, 56 F.3d 966, 970-71 (8th Cir.1995). Moreover, plaintiff's argument is contrary to his contention that he should have been given credit by the defendants for research conducted through Crystalytics. (See Amended Complaint at ¶ 16-23). If the "speech" conducted at his home is private as he suggests, then it is disingenuous for the plaintiff to argue that he should have been given credit for this "speech" when his salary level was determined. [4] For example, there are no allegations that plaintiff did not receive credit for his research because he expressed ideological or political views in his publications which were unpopular with the administration. [5] Cf. Greer v. Spock, 424 U.S. 828, 839, 96 S.Ct. 1211, 1218, 47 L.Ed.2d 505 (1976) (rejecting alleged First Amendment right to have a political candidate speak at a military base when members of the Armed Forces stationed at the base were free to attend political rallies off base); Lloyd Corp., Ltd. v. Tanner, 407 U.S. 551, 566-67 & n. 12, 92 S.Ct. 2219, 2227-2228 & n. 12, 33 L.Ed.2d 131 (1972) (rejecting alleged First Amendment right to distribute handbills in privately-owned shopping center, partly on the basis that surrounding public roads and sidewalks provided adequate alternative public forums for disseminating a message); L. Tribe, American Constitutional Law, § 12-23, at 982 ("Unless the inhibition resulting from such a content-neutral abridgement is significant, government need show no more than a rational justification for its choice; and if equally effective alternatives are readily available to the speaker or listener, the inhibition is not deemed significant."). [6] Plaintiff also argues in his brief that he filed grievances with UNL regarding his complaints. (Plaintiff's Brief, at 19). However, he does not cite any evidence of filing such grievances and none appears in his affidavit. [7] I note that none of these allegations is contained in plaintiff's complaint or amended complaint. However, I shall consider the merits of plaintiff's allegations for purposes of this summary judgment motion, because plaintiff could potentially amend his complaint to conform with this evidence. See generally J. Moore, Moore's Federal Practice, § 56.10, at 56-92 through 56-97. [8] See also Honore v. Douglas, 833 F.2d 565 (5th Cir.1987) (law school professor stated a claim for a first amendment violation where he had been denied tenure after criticizing the law school's admissions policy, size of the student population, administration of the school budget, and failure to certify graduates for the Texas bar examination in a timely fashion). [9] "[T]he Constitution undoubtedly imposes constraints on the State's power to control the selection of one's spouse that would not apply to regulations affecting the choice of one's fellow employees." Roberts, 468 U.S. at 620, 104 S.Ct. at 3251. Although plaintiff's wife is a partner in Crystalytics, plaintiff does not contend that the defendants have interfered with his "marriage relationship." (Day Depo. 99:1-100:12; Plaintiff's Brief at 23; Amended Complaint at ¶ 16-23).
[10] Plaintiff argues in his brief and his attorney suggests in his deposition that some research collaborations between the plaintiff and other academics may be involved in his freedom of association claim. (Day Depo. 99:22-25). However, no such facts have been pleaded in plaintiff's amended complaint. (See Amended Complaint, at ¶ 16-23). Moreover, following his attorney's suggestion, Dr. Day states: A. I suspect — okay. To be honest with you, I suspect that the department people in the department have resented the fact that I collaborate with people elsewhere. I've not got any hard evidence of that but that might be an issue as well. I don't know. Q. (By Mr. Buntain) You say you suspect that that might be the case? But do you have anything to base that suspicion on? A. No. Maybe that's what the problem is. I've been trying for three years to find out what the problem is. (Day Deposition 100:1-12) (emphasis added). Furthermore, Day admits that his concerns have not prevented him from collaborating with other researchers around the country. (Day Deposition 105:1-23). Because he admits that he has no evidence of a violation of his associational rights with respect to his collaborations with other academics outside the University, plaintiff has not met his burden of establishing that his lower salary increases resulted from an exercise of his constitutional rights. See Hamer v. Brown, 831 F.2d 1398, 1403 (8th Cir.1987) (citing Mt. Healthy City Dist. Board of Educ. v. Doyle, 429 U.S. 274, 287, 97 S.Ct. 568, 576, 50 L.Ed.2d 471 (1977)) (plaintiff bears the burden of establishing that the exercise of a constitutional right was a substantial and motivating factor in the denial of a benefit). [11] For example, Dr. Day sent a copy of one such article to UNL Arts and Sciences Dean Gerhard Meisels stating, "It is the first of many non-UNL publications by me; I hope to have at least ten per year in the future." (Day Depo. Exhibit 18). In another letter to Meisels, Day stated that he would continue to publish articles without listing his UNL affiliation "on any publications resulting from research conducted by me on my own time at Crystalytics Company." (Day Deposition: Exhibit 14). [12] Plaintiff has not shown whether his wife contributed to the materials published by plaintiff in academic journals. (Day Depo. 91:25-94:4). Thus, he has not demonstrated that he engaged in protected speech activities "with others to pursue goals independently protected by the [F]irst [A]mendment." Walker v. City of Kansas City, Mo., 911 F.2d 80, 89 (8th Cir.1990) (quoting L. Tribe, American Constitutional Law 702-03 (1978)) (emphasis added). In addition, Crystalytics is unlikely to qualify as an intimate or private relationship worthy of associational protection under the first branch of the Supreme Court's cases. Watson v. Fraternal Order of Eagles, 915 F.2d 235 (6th Cir.1990) (finding no freedom of association violation partly because of group's "quasi-business" activities); Oklahoma Educ. Ass'n v. Alcoholic Bev. Laws Enf. Comm'n, 889 F.2d 929 (10th Cir.1989) (no violation of right to association for state employees working in the alcoholic beverage business); Copp v. Unified School Dist. No. 501, 882 F.2d 1547 (10th Cir.1989) (no right of association between two school employees); Rivers v. Campbell, 791 F.2d 837, 840 (11th Cir.1986) ("the more commercial the associational interest involved the less likely first amendment protection attaches"); Trade Waste Management Ass'n, Inc. v.
Hughey, 780 F.2d 221, 238 (3rd Cir.1985) (economic associations receive less protection than political or social associations); Mass v. McClenahan, 893 F.Supp. 225 (S.D.N.Y.1995) (business relationship formed between an attorney and a client did not warrant associational protection). [13] Cf. FCC v. Beach Comm., Inc., 508 U.S. 307, ___, 113 S.Ct. 2096, 2102, 124 L.Ed.2d 211 (1993) ("a legislative choice is not subject to courtroom factfinding and may be based on rational speculation unsupported by evidence or empirical data."). [14] The external funding grants support UNL's overhead as well as the work of graduate, undergraduate, and post-doctoral students. (Day Depo. 40:12-44:13). [15] Plaintiff cites a number of cases for the proposition that he has a property interest in his job as a tenured public employee. See Williams v. Texas Tech University Health Sciences Center, 6 F.3d 290 (5th Cir.1993) (medical school faculty member's salary reduced from $68,000 to $46,449 a year); Post v. Harper, 980 F.2d 491 (8th Cir.1992) (discharged tenured employee had property interest in his employment); Eguia v. Tompkins, 756 F.2d 1130 (5th Cir.1985) (employee had due process interest in salary and expense reimbursement withheld by employer); Ginaitt v. Haronian, 806 F.Supp. 311 (D.R.I.1992) (termination of pension and medical benefits was termination of benefit in which employee had property interest). However, in each of those cases, the court specifically found that the public employee had a right to continuation of the salary or benefits previously granted by the employer. Although Day may have a property interest in his tenured position, none of the cases cited supports his assertion that he has a property interest in future wage increases. As the Supreme Court stated in Board of Regents v. Roth, in order to have a property interest in a benefit, plaintiff must have more than an abstract need or desire for it. He must have more than a unilateral expectation of it. He must, instead, have a legitimate claim of entitlement to it. 408 U.S. 564, 577, 92 S.Ct. 2701, 2709, 33 L.Ed.2d 548 (1972). [16] Day argues that his claim is fact-dependent so that summary judgment on the issue would be improper. (Plaintiff's Brief at 24). However, plaintiff has provided no evidence which suggests that the defendants have done anything other than follow a policy of denying him credit for research which is not supported by external grants. Moreover, plaintiff admits that he has "nothing that suggests that a different formula was used for faculty member[s] within the Chemistry Department." (Day Depo. 71:7-22). Day also argues that the court is not able to analyze whether the defendants' conduct met the rational basis test until the facts are weighed at trial and that the defendants have put forth no evidence "why the salary matrix is rational[] and why then the result is not equal." However, policy choices by government officials are "not subject to courtroom fact finding and may be based on rational speculation unsupported by evidence or empirical data." FCC v. Beach Comm., Inc., 508 U.S. 307, 315, 113 S.Ct. 2096, 2102, 124 L.Ed.2d 211 (1993). In addition, as stated above, a presumption of rationality attaches to the defendants' classification, because there is no evidence that the classification was based on suspect criteria or impinged on fundamental rights. Massey, 965 F.2d at 681. [17] This is true regardless of whether externally funded research was a written requirement of plaintiff's job as a tenured faculty member.
(Plaintiff's Index of Evidence, Exhibit 5). There is no dispute that plaintiff and other UNL faculty were encouraged to seek out external grants. (Day Depo. 40:12-44:3). Plaintiff understood that the components of the merit matrix system weighted the quality of his publications at 15 percent, the quantity of his publications at 15 percent, and the external funding he received at 30 percent. (Day Depo. 39:10-40:11). He also understood that in externally funded research, the proposals are subject to peer review, the money goes directly to the Chemistry Department to support its overhead, and that the money is often used to support the work of graduate, undergraduate, and post-doctoral students. (Day Depo. 40:12-42:1). Thus, he understood that his research at Crystalytics did not meet those criteria. (Day Depo. 42:19-44:13). [18] Plaintiff would require that a younger similarly situated person has received a higher salary than plaintiff as the final element of the prima facie case. (Plaintiff's Brief at 37). On the other hand, defendants would require that other similarly situated younger employees received a higher salary than plaintiff. (Defendant's Brief at 36-37). Plaintiff's suggestion that he be required to prove that only one employee received a higher salary is clearly at odds with the Marshall decision and other district courts considering the issue in similar contexts. See Glass v. Dep't of Energy, 46 F.E.P. 1890, 1988 WL 57269 (1988); Fong v. Beggs, 620 F.Supp. 847, 872 (D.D.C.1985) (establishing prima facie elements as (1) plaintiff is a member of the protected class, (2) was qualified and eligible for substantial pay increases, (3) the pay adjustments received were minimal or nonexistent, and (4) that some younger employees received the pay increases that plaintiff expected to receive). I note that the other aspects of the Fong approach are inapplicable in this instance because that case dealt only with wage increases. 620 F.Supp. at 872. Here, plaintiff apparently is maintaining that his salary was already lower than other employees when the applicable statute of limitations on this claim began to run. I also note that to the extent the parties interpret the "similarly situated" requirement as pertaining to the performance record of the other employees, I decline to adopt that approach because that issue is more appropriately addressed as defendants' non-pretextual reason for their actions. [19] Plaintiff argues that the defendants have admitted that obtaining external funding or grants is not a requirement of his job, based on Plaintiff's Exhibit 5, which is a response to a request for production of documents: [Documents Requested:] Any documents that state chemistry department faculty have to write grant proposals. RESPONSE: There are no documents that "state chemistry department faculty have to write grant proposals." Plaintiff has previously been provided with copies of documents which encourage University of Nebraska-Lincoln Chemistry Department faculty to write grant proposals. (Plaintiff's Index of Evidence Exhibit 5). This statement does not contradict defendant Song's statements concerning the Chemistry Department's strong emphasis on research activities, the importance of external funding to the department, and the importance of external funding in evaluating faculty members. (Song Affidavit, ¶ 18-20). There is no dispute that plaintiff and other UNL faculty were encouraged to seek out external grants. (Day Depo. 40:12-44:3).
Plaintiff understood that the components of the merit matrix system weighted the quality of his publications at 15 percent, the quantity of his publications at 15 percent, and the external funding he received at 30 percent. (Day Depo. 39:10-40:11). He understood that in externally funded research, the proposals are subject to peer review, the money goes directly to the Chemistry Department to support its overhead, and that the money is often used to support the work of graduate, undergraduate, and post-doctoral students. (Day Depo. 40:12-42:1). That plaintiff was expected to seek and obtain grants is true regardless of whether the expectation was written in terms of a "requirement." Moreover, defendants have not argued that this is a requirement for continued employment. [20] Although Day does not argue the point here, in other portions of his brief he cites to a deposition of defendant Song for the proposition that some externally funded research grants do not cover costs of UNL's overhead. (Plaintiff's Brief at 9). First, defendant Song's testimony was couched in terms of "It's hard to say ... I haven't done the accounting." (Song Depo. 196:17-22). Moreover, Day provides only one page of the relevant testimony and fails to include two pages of the deposition transcript before and two pages after the testimony to which he cites. (Song Depo. 196:1-25). It is apparent that much of the relevant testimony of defendant Song on this issue was excluded from consideration by the court. Moreover, the context of defendant Song's statements is quite ambiguous from the single page included from his deposition. I am unable to discern whether defendant Song testified that UNL may not break even on some particular external grants, on a wide range of external grants, on certain parts of research expenses, such as graduate students aiding in the research supported by the grants, or on certain portions of the programs funded jointly by the grants and other UNL funding, such as the graduate student program as a whole. (Song Depo. 196:1-25). Finally, I note that use of this testimony to prove that the defendants' job expectations were unreasonable is suspect, because this evidence shows at most that this expectation was not necessarily beneficial to UNL in all cases and not that it was unreasonable to expect plaintiff to be able to meet this requirement. See O'Bryan v. KTIV Television, 868 F.Supp. 1146 (N.D.Iowa 1994). [21] There is no evidence that untenured and less experienced faculty members are necessarily younger than their tenured counterparts. As discussed below, plaintiff fails to provide statistical evidence suggesting that the "inversion" system had a disparate impact on the salaries of older persons. [22] The memorandum specifically states that "some junior faculty will have below average increases.... This year the senior faculty will not be, in effect, subsidizing increases for beginning faculty." (Plaintiff's Index of Evidence, Ex. 8). I note that this language undercuts plaintiff's claim of discrimination against older faculty members. [23] The general limitations period for filing a lawsuit under the ADEA is two years. In cases of willful violations, however, the limitations period is three years. 29 U.S.C. § 255. [24] See supra, at note 22. [25] The mere existence of a "scintilla of evidence in support of the plaintiff's position" is insufficient to avoid summary judgment. Anderson, 477 U.S. at 252, 106 S.Ct. at 2512.
[26] I note that there is "some doubt about the viability of an ADEA disparate impact claim." Leidig v. Honeywell, Inc., 850 F.Supp. 796 (D.Minn.1994). The Supreme Court has "never decided whether a disparate impact theory of liability is available under the ADEA." Hazen Paper Co. v. Biggins, 507 U.S. 604, 610, 113 S.Ct. 1701, 1706, 123 L.Ed.2d 338 (1993). Several Justices cautioned lower federal courts against interpreting Biggins as authority for recognizing such claims. Id. at 610, 617, 113 S.Ct. at 1706, 1710. However, because the Eighth Circuit recognized ADEA disparate impact claims before the Supreme Court's Biggins decision, I "assume for purposes of this motion that such a claim is cognizable[]" as the District Court of Minnesota did in Leidig, 850 F.Supp. at 801. See, e.g., Nolting v. Yellow Freight Sys., Inc., 799 F.2d 1192 (8th Cir.1986); Leftwich v. Harris-Stowe State College, 702 F.2d 686, 690 (8th Cir.1983). [27] While plaintiff suggested in his deposition that other members of his department have suffered from age discrimination, he does not indicate how he has any personal knowledge of such discrimination beyond mere speculation, does not provide any testimony or affidavits from those he claims might also be victims, and phrases his responses to questions about particular persons in terms of "he may be," "it would appear," and "I really don't know. That's why I'm asking the question." (Day Depo. 381:9-391:5). He also admits that all of the highest paid faculty members in the department are older than he, (Day Depo. 391:14-16), and that the department does not discriminate against all older faculty members. (Day Depo. 381:9-11). [28] Normally, a case "arises under" federal law if federal law creates plaintiff's cause of action. See American Well Works Co. v. Layne & Bowler Co., 241 U.S. 257, 260, 36 S.Ct. 585, 586, 60 L.Ed. 987 (1916); see also The Fair v. Kohler Die & Specialty Co., 228 U.S. 22, 25, 33 S.Ct. 410, 411-12, 57 L.Ed. 716 (1913).
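To make the merit-matrix arithmetic in notes [18] and [19] concrete, the following is a minimal sketch of a weighted evaluation score. Only the 15/15/30 percent weights come from plaintiff's deposition testimony as recited above; the residual 40 percent category and every sample rating are hypothetical, invented purely for illustration.

    # Minimal sketch of the merit-matrix weighting described in Day's deposition:
    # publication quality 15%, publication quantity 15%, external funding 30%.
    # The remaining 40% bucket and all sample ratings below are hypothetical.
    WEIGHTS = {
        "publication_quality": 0.15,
        "publication_quantity": 0.15,
        "external_funding": 0.30,
        "other_factors": 0.40,  # e.g., teaching/service; not itemized in the opinion
    }

    def merit_score(ratings):
        """Weighted average of 0-100 component ratings."""
        assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
        return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

    # A hypothetical faculty member who publishes well but has no external grants:
    print(merit_score({"publication_quality": 90, "publication_quantity": 85,
                       "external_funding": 0, "other_factors": 75}))  # -> 56.25

The sketch only illustrates why the parties fought over the funding component: at a 30 percent weight, a zero in that single category pulls an otherwise strong record down sharply.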
General workout breakdown: This has already been described well in previous reviews, so I'll just add a few more details here and there. The workout runs 40 min. total (without the power-up) and 46.5 min. total (with the power-ups), with 5 min. for the warm-up, 21.5 min. for the main cardio portion, and 3.5 min. for the cool-down / stretch; the power-up section runs 6.5 min. Amy builds up the combo on the right, then on the left, and then does 1, maybe 2, run-throughs back to back alternating sides. She doesn't really layer, with the exception of an added turn here or there; pretty much what you see is what you get, although she'll cut out filler moves and cut down on repetitions in the final product. I would have liked a little more repetition, if not breakdown, of a few tricky parts, like one big chunk of combo #1 that gets thrown out at once before you repeat somewhat easier moves a number of times. I guess the flip side is the moves that get a lot of repetition are a little more intense, so they keep your heart rate up more. Amy doesn't always run through everything evenly; it feels like the second side gets shortchanged a few times. For example, I think it's in the second combo where she doesn't run through the full left side a few times; she just goes straight into alternating the right and left of the full combos. There is no TIFTing (taking it from the top). Once you learn a combo, you're done with it. There's no running through all four combos together. Another thing that makes this workout tricky to learn is that you'll spend a lot of time in front of your step and thus with your back to your TV quite a bit. This has a lot of quick pivots and turns, including a number of them on the step itself. I was able to take out some, for example by doing crazy shuffle as fast feet (that is, staying behind the step for all four toe taps on the step), but some are needed to transition from one move to another, so if your knees don't like torque it might be best to pass on this one. There's also a good deal of impact, although none of it felt unreasonable to me (there's something about Cathe's high impact that bothers me, but I don't have problems with Amy's). You can take out some, for example by doing a basic instead of a run. It'll take more creativity to modify the plyo jumps in the Power Up, but if you're creative it's not too hard, plus you could always do this segment without a step. The cool-down consists of a minute or two of basic steps to get the heart rate down (see, Cathe, Mindy, Petra, and others who haven't bothered to include cool-downs on recent releases, it's not that hard!) followed by a few quick (and I mean quick) stretches for the back of the leg. I definitely needed more, especially for the quads and hip flexors, although I definitely appreciated having the two different calf stretches. Level: I'd recommend this to experienced steppers at least at a high intermediate through mid-advanced level who are comfortable with moderate to fairly complex choreography and some impact. I consider myself an intermediate / advanced exerciser who's better than average at picking up choreography, provided it's taught and cued decently enough. I watched the preview segment and got most of this the first time through, although I took some modifications, as I noted above, to take out some of the quick spins and some of the impact moves. This is plenty of work for me on even 4"; when I do it on 6" it's a real advanced step challenge! Class: 3 women join Amy, who instructs live.
She's the main whooper; you can't really hear much from the others. Music: upbeat, mostly instrumentals with a beat. Amy is known for her great music, but in this, one of her first releases, she doesn't seem to have the money or connections yet. Still, this is better than your average generic workout music soundtrack. Set: neutral-colored "room" with two windows and potted plants in the corner. It's bright and airy. Personally I rather like it: I'd rather have a bright, clean, plain set than the wacky ones out there; the fewer the distractions from the workout, the better for me. Production: clear picture and sound. Although Amy's voice is a little echo-y, it is clearly audible over the music. The camera angles are helpful. You know, there are people who've put out many more videos who do a poorer job with production issues than this, Amy's second series and done on her own. Equipment: step (Amy and company use a full-sized club step with 1 pair of risers), sneakers that'll turn easily on your platform and flooring, and a supportive jog bra. Space Requirements: You'll need approximately equal amounts of space to the back, front, and sides of your step. I put mine smack dab in the middle of my workout space, which is about 6' deep by 8' wide, and that gave me enough room as long as I stayed in place rather than doing the step out pivot turn to the side in combo 2 or 3. For the main step workout the step is horizontal, but it's vertical during the power up segment. (I kept the step in place and changed my position.) DVD Notes: I have a pressed DVD, but I believe the very first edition appeared on DVD-R. This has also been rereleased with a different cover, but I don't know of any notable differences besides the artwork. Comments: How does the original ASC compare to ASC 2 and ASC 3? It's the shortest of the three, that's for sure (although ASC 4 might only be a little bit longer than ASC 1). I feel this original one is the most intense, perhaps because no modifications are shown, so the impact and pivots are nearly unavoidable unless you preview it and devise your own. Amy seems to have kept a similar format for her later ASCs, with the Power Ups in a separate chapter, although at least ASC 3 has a premix throwing the Power Ups into the mix. Here you'll have to be pretty handy with your remote if you want to do the Power-Up within the main workout. Oh, and ASC 1 is the only one with whooping, as Amy listened to feedback and has worked to remove that from her later releases. Instructor Comments: Amy is upbeat, positive, and cheery, without being too much so; it's just the right level of enthusiasm for me. She has almost a goofy or quirky kind of sense of humor, which I like, as it adds to the sense that she's really being herself. Amy has great camera presence; it's hard to believe this is just her second filmed series. Amy cues for someone already familiar with the routine; she is not cuing to teach the routine. While this means her cuing will stand up better with repeat viewings, it leaves the burden of learning the routine on the user. Oh, and don't listen to her when she says "X more" or "last one," because she invariably ends up doing more than that. I just finished trying this one out for the first time and WOW... it may be shorter than I'd like, but the intensity is WAY up there. I think this one is just as intense all the way through as one of Cathe's challenges. The whooping can be a bit much if that bothers you, but I found myself getting into it just as much as Amy was.
I thought it would bug me, but it didn't. This one's a keeper, and I look forward to trying more of Amy's workouts. The first time I did this video, I was very frustrated by the complexity of the moves. I would definitely advise studying the instructional portion of the tape before starting. The 1st combo was the hardest one for me to get. I still haven't mastered the "crazy shuffle" yet. I usually get choreography fairly fast, but with this one, I had to preview it with a 4 inch step just to get the moves. Even then I was sweating! I suspect for most people, it will take a little work to get the combos down... but once you do, the workout in itself is very good. My legs were shaking just as much as if I had done Cathe's IMAX2. (Haven't done IMAX3 yet...) The Power Up section is a good one. I would suggest doing the Power Up between the first two combos and the last two combos and then again at the end if you want to boost up the cardio of the workout, although it is not necessary. The music was nothing special, but the moves were fun. So far, I have enjoyed all of Amy Bento's DVDs. Instructor Comments: Amy has an energetic gusto that really inspires while you do her workouts. The first time I did this video I swore and said some very bad things to Amy on the TV screen. But that's because I hadn't previewed the four most complex moves! I found that the preview chapter was essential for doing the workout, because she does not teach them or break them down at that point. Once I did that and tried this video for the second time, I fell in love! This is the best advanced step workout to come out in a long time. My only complaint at the end is that it was only about 30 minutes, minus the 10-minute power bonus section (which was fabulous in and of itself!). She does teach the first part of the first combo a little fast, but I was able to pick it up, armed with the previewed knowledge. Crazy shuffle is sort of like a jump around the world. You also do hamstring curls around the step, a fast shuffle on both sides of the step, and a jump about-face which isn't really hard to learn. A lot of the moves are done in front of the step, but if you get used to that it's no big deal. The first combo of the bonus power section gives me great hope for her future kickboxing workouts! Knee-kick mambo into two jump kicks--so much fun it hardly hurt! She goes on to do vintage Cathe-style plyo leaps, jumps over the step, and other more traditional power moves. The only problem with this workout for me right now is that I have a grouchy downstairs neighbor! It's high-impact, sweat-producing, and tons of fun! Instructor Comments: Amy DOES remind me of Cathe in an abstract, very positive way--fit, no-nonsense yet friendly, talkative but not chatty. The whoo factor is not too bad in this workout--probably because who has extra breath? Preview: In the preview section, Amy breaks down the 4 main combinations that make up the "meat" of the workout. She demonstrates these at a slow tempo, showing modifications that can be used as well. It's just her and you, no music, just straightforward instruction. 2nd: Scoop Around Step--starts from the top of the step, scoop side to side, then two scoops at front of the step, cross back over, two back, then repeater. 3rd: Flip Flop--this also starts from the top of the step. 2 shuffles, knee & pivot; this turns from front to back, then front again, ending with a jack on the floor.
4th: About Face--this move starts from a straddle on the floor and you literally jump up onto the bench facing the opposite direction from where you started. Warm Up: Marches, step touch, 2 knees, hamstring curls to the top, scoop around (not starting from the top) to two repeaters on the floor. Then you grapevine, turn out and in, arabesque straddle, v-step, then go into stretches. She then does 8 jacks and repeats the stretches to the other side. Combo 4: Knee off the side, squat, jog, knee up with a hold onto the top of the step, lunge back off the step, 3 lunges, then jacks on top of the step. FLIP FLOP is introduced in this combo after the jacks on top of the step. Jack on the floor. Then run on the left to repeat to that side. Next 2 kick spin to end, across the tops, knee to bring it home. Cool Down: Very basic, just a few step touches and deep breaths. Amy does stretches for the back, shoulders, hamstrings, and calves. Wardrobe: Amy and the A-Team are all wearing matching tops and pants. The background girls followed along beautifully! For you complex choreography step hounds, this may be the ticket!! Lots of twists and turns at non-stop stepping speed! For those that may want to try to master it, the review section is very helpful. She breaks down the choreography slowly and deliberately, then she shows what the move looks like at tempo. www.nrgfitness.net is Amy's website. Instructor Comments: Amy has lots of energy and loves to whoop. (In case you don't like that.) I'd compare her to Chalene Johnson; that's who she reminded me of. I did a walk-thru of the preview section where Amy breaks down the harder moves. She said she does this rather than break the moves down in the workout - it keeps the pace more advanced. The moves are nicely broken down at a slow pace so you can get them. One is like Cathe's fast feet, but you are moving in a 360 over the step. There is a straddle with a 180 jump up onto the step - I would recommend practicing this first ;) There were a few others, but those 2 stood out. Intro flows well - you get nicely warmed up with just a little bit of stretching on warmed muscles. Side steps, knee ups, a few jumping jacks which can be modified to low-jacks, grapevine - nothing too cardio-challenging. Combo #1: Wow - talk about energy :) This one will need to be previewed and walked thru for sure. There is really no breakdown, she just goes into it, but I have a feeling it will be worth the work. Music isn't anything special - just kind of beat music. This combo has the crazy shuffle. Combo #2: There is a 6 count mambo in there - love those. There are also forward mambos - love those, too! There's a reverse step similar to Cathe's, where you reverse over your shoulder around the step (I totally can't explain it, but I love it.) Amy calls it a scissor rock. Hmmm... I'm thinking this combo looks fantastic! Again - it will take some work. There are also half mambos, then the about-face Amy broke down in the preview section. Power-up section: step is perpendicular. Lots of high impact jumping on and off the step. Looks tough. Overall, my initial impressions are positive. I walked thru some of the moves tonight (getting over bronchitis, so didn't want to do too much.) It seems fun, but I'll try it Tues or so, hoping I can breathe enough. Amy sweats it out - there are none of the phantom edits to non-sweaty clothes and fixed makeup that bug me to no end. Come on - sweat with me!
If I'm gonna be drenched at the end of the workout, I want the people on the video to look as bad as I do ;) Chapters: Intro; Preview (very short instruction on 4 of the hardest moves in the workout); W/U; Combo 1 - looked tough to learn because it is done off the front of the step; Combo 2; Combo 3 - probably the least intense of all the combos; Combo 4; CD - very short, with a couple of standing and hamstring stretches; Powerups - 3 longish intervals with short breaks, very plyo-ish and very tough looking. Just to give you some background, for stepping I use mostly Cathe, a few Christi's, a Season tape (RSS), and Kristen Kagen's 2 tapes. I have a couple of CIA's as well, but I rarely use them. This workout is completely different from any workout I own. There is almost NO breakdown of combos, which makes it seem to me that once you know the moves, the intensity almost never lets up. I'll know for sure when I do it tomorrow, but the intensity throughout the entire workout looks to be on par with the challenge part of Cathe's Step Blast. What really makes it look different is the amount of complexity COMBINED with the level of intensity. There are lots of turns, combined with jumps in new and interesting ways. The combos are short, but are repeated a bunch of times, and she puts some plyo moves right in the middle of combos to get your HR up even more. The music is loud and good (IMO... funky). She has a TON of energy and does whoop it up quite a bit, but as I was watching I was thinking that I would appreciate that kind of encouragement to get through the workout. This is seriously fast stepping with plenty of intensity. The set is light and bright, and I think it looks great for a first effort. Impressions after doing workout: The feel is so different from other workouts because the combos are so short and repeated so often, but the complexity makes the combos so fun. As soon as you have had enough of one combo, it's time to move on to the next. Time flew by. I think if there were 5 combos instead of 4 I would have voted this the all-time perfect step workout. Without the Powerup section, which is an add-on of about 5 minutes, the body of the workout is shorter (40 min or so?) than what I'm used to doing when I step. Next time I do this, I'm going to do the Powerup section after each combo. I think that would make it about an hour and probably give someone all the intensity they would ever want from a step workout. Her terminology is different from other instructors', but once you understand what she is asking for, her cues are enough to get you through, and the repetition is enough to give you plenty of chances to learn it. I have to give this workout a solid A. It would be an A+ if it was just a little longer. BTW - The whooping didn't bother me at all, although my 8-yr-old daughter, who has watched me do lots of workouts, was watching and said, "I think she talks too much." I didn't notice it at all when I was doing the workout itself. Instructor Comments: Full of energy, good cueing, whoops more than other instructors
[Cite as Great W. Cas. Co. v. Ohio Bur. of Workers' Comp., 2016-Ohio-2876.] GREAT WEST CASUALTY COMPANY Case No. 2013-00205 Plaintiff Judge Patrick M. McGrath Magistrate Holly True Shaver v. DECISION OHIO BUREAU OF WORKERS’ COMPENSATION, et al. Defendants {¶1} On April 23, 2015, the Tenth District Court of Appeals reversed and remanded this case, finding that this court had jurisdiction over plaintiff’s complaint. After conferences with the court, the parties agreed to conduct additional discovery, and ultimately, a non-oral hearing on the previously filed cross-motions for summary judgment was set for December 7, 2015. On December 4, 2015, defendants, Ohio Bureau of Workers’ Compensation (BWC) and the Industrial Commission of Ohio, filed a supplement to their original motion. The motions are now before the court for a non-oral hearing pursuant to L.C.C.R. 4(D). {¶2} Civ.R. 56(C) states, in part, as follows: {¶3} “Summary judgment shall be rendered forthwith if the pleadings, depositions, answers to interrogatories, written admissions, affidavits, transcripts of evidence, and written stipulations of fact, if any, timely filed in the action, show that there is no genuine issue as to any material fact and that the moving party is entitled to judgment as a matter of law. No evidence or stipulation may be considered except as stated in this rule. A summary judgment shall not be rendered unless it appears from the evidence or stipulation, and only from the evidence or stipulation, that reasonable minds can come to but one conclusion and that conclusion is adverse to the party against whom the motion for summary judgment is made, that party being entitled to have the evidence or stipulation construed most strongly in the party’s favor.” See also Gilbert v. Summit Cty., 104 Ohio St. 3d 660, 2004-Ohio-7108, citing Temple v. Wean United, Inc., 50 Ohio St.2d 317 (1977). {¶4} As stated in the decision of the Tenth District Court of Appeals, the relevant facts are as follows: {¶5} “On March 31, 2011, Great West issued a workers’ compensation and employer’s liability insurance policy to Roeder Cartage Company, Inc. (“Roeder”), a trucking and delivery company. The Great West policy insured Roeder for workers’ compensation claims filed in Alabama. {¶6} “On June 22, 2011, James McElroy, a truck driver employed by Roeder, fell from his truck and injured himself. McElroy’s accident occurred in Alabama, but McElroy is an Ohio resident. McElroy elected to apply for workers’ compensation benefits in Ohio, rather than Alabama. On June 24, 2011, McElroy submitted a completed first-report-of-injury form to the BWC. The BWC allowed claims for lumbosacral sprain/strain and sprain of the lumbar region, and it granted payment of temporary total disability compensation and benefits. {¶7} “Roeder appealed the allowance of McElroy’s claims, arguing that McElroy was not eligible for Ohio workers’ compensation benefits because his injury had occurred in Alabama. In response, the BWC vacated its prior orders and halted payment on McElroy’s claims pending an investigation of the interstate jurisdictional issue.1 {¶8} “About the same time Roeder appealed the BWC’s allowance of McElroy’s claims, Roeder reported McElroy’s injury to Great West pursuant to the terms of its insurance policy.
Upon review of the situation, Great West learned that McElroy had not yet received any workers’ compensation benefits, even though his accident had occurred a month prior. Great West began paying benefits to McElroy. {¶9} “On January 24, 2012, the Commission issued an order finding that McElroy was entitled to Ohio workers’ compensation benefits. The Commission ordered the BWC to pay McElroy temporary total disability compensation and benefits, and required those payments to be offset against the payments received by McElroy from Great West. {¶10} “Upon receiving notification that Ohio would pay McElroy workers’ compensation benefits, Great West discontinued its payments. Great West then sent the BWC a written demand for reimbursement of the $22,758.80 that it had paid McElroy. The BWC did not respond to the demand.” Great West Cas. Co. v. Ohio Bureau of Workers’ Comp., 10th Dist. Franklin No. 14AP-524, 2015-Ohio-1555, ¶ 2-7. 1 The order from BWC states: “This order replaces the BWC order dated 07-18-2011, which has been vacated for the following reason: TT [temporary total disability] is not being addressed yet until Interstate Jurisdiction is fully investigated.” (Defendant’s Exhibit A-4.) {¶11} In its complaint, Great West (plaintiff) asserts claims for unjust enrichment, “quasi-contract,” indemnity, and “statutory credit/reimbursement” based upon the fact that even though the Industrial Commission ordered BWC to make payments to McElroy from his initial date of injury, BWC retained the benefit of Great West’s payments to McElroy by taking an offset in the amount of $22,758.80 and refusing to reimburse Great West. Plaintiff asserts that it would be unjust for BWC to retain the benefit conferred on it from plaintiff’s payments to McElroy while interstate jurisdiction was being decided. In support of its motion, plaintiff cites the decisions of the Supreme Court of Ohio in State ex rel. Liberty Mutual Ins. Co. v. Industrial Com. of Ohio, 18 Ohio St.3d 290 (1985) (“Liberty Mutual I”), and Liberty Mutual Ins. Co. v. Industrial Com. of Ohio, 40 Ohio St.3d 109 (1988) (“Liberty Mutual II”). {¶12} Defendants assert that Liberty Mutual I and II are not dispositive of this case, and argue that they are entitled to summary judgment based upon a more recent decision by this court in Lumberman’s Underwriting Alliance v. Indus. Commn., Ct. of Cl. No. 2006-01408, 2007-Ohio-4154. Defendants assert that Alabama allows an injured worker to file a claim in another state without waiving his rights under the Alabama Workers’ Compensation laws. In addition, defendants argue that the equities do not lie with plaintiff, because Roeder created a jurisdictional question and caused the delay in payment when it took an appeal from the BWC order that had initially allowed McElroy’s claims. Defendants also assert that BWC must make an offset of any collateral payment by insurance pursuant to R.C. 4123.54. {¶13} In response, plaintiff argues that Lumberman’s does not apply to the facts of this case. Plaintiff argues that this case is similar to Liberty Mutual I and II, where payments were made to an injured worker pursuant to an insurance policy in another state on an interim basis until such time as it was determined that BWC was responsible for the claim.
Plaintiff argues that the equities lie in its favor because BWC would have been required to pay the full amount of benefits to McElroy from the beginning of his claim if defendants had timely determined jurisdiction. Plaintiff further argues that it paid McElroy benefits in good faith until it was definitively determined who was responsible for McElroy’s claims. Plaintiff filed the affidavit of Joseph A. Rayzor, III, a subrogation attorney for plaintiff, who avers, in relevant part: {¶14} “10. On or about July 18, 2011, because the proper situs and jurisdiction for Workers’ Compensation coverage was in dispute, Roeder Cartage Company, Inc. reported Mr. McElroy’s injury and potential claim to Plaintiff under the policy referenced in ¶ 5 above.” {¶15} “11. On July 21, 2011, the Ohio Bureau of Workers’ Compensation issued a fourth Order vacating the prior Orders and allowing this claim for sprain of the lumbosacral and lumbar regions and not addressing Temporary Total Disability benefits ‘until interstate jurisdiction is fully investigated.’ {¶16} “12. On July 21, 2011, the Employer, Roeder Cartage Company, Inc. filed a timely appeal to the original claim allowance. {¶17} “13. On or about July 21, 2011, Plaintiff began review/assessment of Mr. McElroy’s claim, and in so doing, on or about July 22, 2011 determined that Mr. McElroy had received no benefits, medical payments or indemnity, even though the accident/injuries had occurred thirty (30) days previously. As a result, even though jurisdiction was not clear, and because Claimant’s request for Ohio BWC coverage was on appeal, Plaintiff, in good faith, began paying benefits to Mr. McElroy.” (Emphasis added.) (Rayzor Affidavit, paragraphs 10-13.) {¶18} Unjust enrichment occurs “when a party retains money or benefits that in justice and equity belong to another.” Liberty Mutual II, supra, at 111. To prove a claim for unjust enrichment, a party must establish that it conferred a benefit upon another, the other party knew of the benefit, and the other party’s retention of the benefit would be unjust without payment. Hambleton v. R.G. Barry Corp., 12 Ohio St.3d 179, 183 (1984). It is undisputed that Great West conferred a benefit upon BWC, and that BWC knew about the benefit. The dispositive issue is whether it would be unjust to permit BWC to retain the benefit of Great West’s payments to McElroy without payment. {¶19} In Liberty Mutual I, an insurance company in Mississippi paid benefits to an Ohio worker who was injured on the job in Mississippi. Once the BWC determined that the injured worker’s claim was proper in Ohio, the Mississippi insurance company sought a writ of mandamus to order the Industrial Commission to reimburse it for the moneys that it had paid to the injured worker. The Supreme Court of Ohio held that a writ was not the proper mechanism to seek payment, but, rather, that the insurance company could pursue an action in this court for unjust enrichment. {¶20} Once the case was before this court, summary judgment was granted in favor of the state on the basis that there was no statutory authority for reimbursement. On appeal, the Tenth District Court of Appeals analyzed the language in R.C. 4123.54, which states that the Industrial Commission must deduct from an Ohio award any financial benefit paid to the injured worker under the law of another state. The Tenth District noted that “R.C.
4123.54 has no application and makes no provision as to who shall bear the ultimate responsibility for payment. * * * The issue in this case is who should bear the cost of such payment, plaintiff or the Ohio fund. The Supreme Court in Lange, Louisiana Pacific Corp, and Liberty Mutual I, determined that the person who made the actual payment has a right to reimbursement from the state fund where the payment was the obligation of the state fund, rather than that of the person who made the payment.” (Internal citations omitted.) Liberty Mut. Ins. Co. v. Industrial Comm’n of Ohio, 10th Dist. Franklin No. 86AP-656, 1987 Ohio App. LEXIS 8771, 7-8. {¶21} An appeal was taken from the Tenth District’s decision, and in Liberty Mutual II, the Supreme Court of Ohio stated: “We believe the [industrial] commission is unjustly enriched when an employer or its insurer pays benefits under the laws of another state where such benefits are later determined to be the responsibility of the commission.” Liberty Mutual II, at 111. The court further stated that even though Mississippi law required appellee to provide interim benefits to the injured worker while the facts of the case developed, “appellee should not be forced to pay a portion of the commission’s now acknowledged debt to [the injured worker] merely because it was unclear immediately following the injury who would be responsible for compensating [the injured worker.]” Id. Although the Supreme Court acknowledged that pursuant to R.C. 4123.54, the Industrial Commission is obligated to credit payments made under the law of another state, the Supreme Court further stated: “We simply cannot read these provisions as denying reimbursement to an employer or insurer who in good faith pays benefits in another state while the proper situs for workers’ compensation coverage is being determined.” Id. at 112. {¶22} In contrast, defendants argue that plaintiff is not owed any compensation for the benefits that it paid to McElroy, because Alabama allows for an injured worker to file claims in multiple states, and because of the setoff rule in R.C. 4123.54.2 Essentially, defendants assert that plaintiff created the jurisdictional issue itself when it challenged the decision of the Industrial Commission that had initially granted McElroy’s claim, and that the equities do not lie in plaintiff’s favor. {¶23} In Lumberman’s, supra, plaintiff was the Tennessee workers’ compensation insurer for a trucking company. The injured worker was a truck driver who was hired in Tennessee but injured in Ohio. The injured worker applied for Ohio workers’ compensation but was denied benefits. The injured worker then filed her claim in Tennessee, and her claim was allowed. Lumberman’s paid the injured worker benefits pursuant to its contract with the trucking company. {¶24} The injured worker then filed a notice of appeal from the denial of her Ohio claim. Lumberman’s argued that Tennessee had sole jurisdiction of the injured worker’s claim for benefits; however, a district hearing officer found that the injured worker’s Ohio claim was proper. After that determination was made, Lumberman’s filed a claim in the Court of Claims seeking reimbursement from BWC and the Industrial Commission for unjust enrichment.
The Court of Claims held that since the injured worker was entitled to benefits in both Tennessee and Ohio, and had elected to receive benefits in both states, Lumberman’s claim of unjust enrichment failed because: “both avenues of relief are appropriate and equity requires only that the second state prevent double recovery by crediting the benefits received in the first state against those awarded in the second.” Lumberman’s, supra, quoting Aetna Casualty & Surety Company v. Minnesota Assigned Risk Plan, (July 16, 1996), Minn. Ct. App. No. C7-96-446, 1996 Minn. App. LEXIS 834; Restatement of Conflict of Laws Section 182 & cmt. b (recognizing that compensation may be allowed under the laws of two states, but providing for an offset in the event of recovery under both). 2 ALA Code Section 25-5-35(e) states: “The payment or award of benefits under the workers’ compensation law of another state * * * to an employee or his dependents otherwise entitled on account of such injury or death to the benefits of this article and Article 3 of this chapter shall not be a bar to a claim for benefits under this article and Article 3 of this chapter; provided that claim under this article is filed within the time limits set forth in Section 25-5-80.” {¶25} Based upon the evidence allowed under Civ.R. 56, the court finds that there is no genuine issue as to any material fact. The initial order from the BWC was vacated until such time that interstate jurisdiction was fully investigated. The affidavit from Rayzor shows that because there was a jurisdictional issue, Roeder reported McElroy’s potential claim to Great West, who began payment in good faith to McElroy until jurisdiction was decided. McElroy did not elect to file a claim in Alabama. Once the Commission determined that jurisdiction in Ohio was proper, Great West was no longer obligated to pay for McElroy’s claims. Therefore, the court finds that the facts in this case are more similar to Liberty Mutual I and II than Lumberman’s. The only reasonable conclusion in this case is that Great West in good faith paid benefits to McElroy while the proper situs for workers’ compensation coverage was being determined, and that Great West should not be forced to pay a portion of the commission’s now acknowledged debt to McElroy merely because it was unclear immediately following the injury who would be responsible for compensating him. Therefore, plaintiff is entitled to judgment as a matter of law on its claim of unjust enrichment in the amount of $22,758.80. {¶26} Accordingly, plaintiff’s motion for summary judgment shall be granted, and defendants’ motion for summary judgment shall be denied. {¶27} Although plaintiff seeks interest and attorney fees, a claim of unjust enrichment does not support an award of prejudgment interest under R.C. 1343.03(A). Cantwell Mach. Co. v. Chi. Mach. Co., 184 Ohio App. 3d 287, 2009-Ohio-4548, ¶ 38 (10th Dist.). {¶28} Moreover, in the absence of statutory authority, attorney fees cannot be awarded. Mechanical Contrs. Assn. of Cincinnati, Inc. v. Univ. of Cincinnati, 152 Ohio App. 3d 466, 2003-Ohio-1837, ¶ 34 (10th Dist.). Counsel for plaintiff cites no statutory authorization for an award of attorney fees, and the request for the same is DENIED. PATRICK M. MCGRATH Judge [Cite as Great W. Cas. Co. v. Ohio Bur. of Workers' Comp., 2016-Ohio-2876.] GREAT WEST CASUALTY COMPANY Case No. 2013-00205 Plaintiff Judge Patrick M. McGrath Magistrate Holly True Shaver v.
JUDGMENT ENTRY OHIO BUREAU OF WORKERS’ COMPENSATION, et al. Defendants {¶29} A non-oral hearing was conducted in this case upon the parties’ motions for summary judgment. For the reasons set forth in the decision filed concurrently herewith, plaintiff’s motion for summary judgment is GRANTED, and defendants’ motion for summary judgment is DENIED. Judgment is rendered in favor of plaintiff in the amount of $22,758.80. All previously scheduled events are VACATED. Court costs are assessed against defendants. The clerk shall serve upon all parties notice of this judgment and its date of entry upon the journal. PATRICK M. MCGRATH Judge cc: John C. Albert, 500 South Front Street, Suite 1200, Columbus, Ohio 43215; Lindsey M. Grant and Peter E. DeMarco, Assistant Attorneys General, Court of Claims Defense Section, 150 East Gay Street, 18th Floor, Columbus, Ohio 43215-3130. Filed March 21, 2016. Sent To S.C. Reporter 5/9/16.
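Stripped of procedure, the financial mechanics the decision describes reduce to a single subtraction under R.C. 4123.54: the state fund's award to the injured worker is offset by the collateral amount the insurer already paid, which is why the court treats the $22,758.80 as a benefit retained by BWC. A minimal sketch follows; only the offset figure comes from the record, and the gross award is a hypothetical placeholder.

    # R.C. 4123.54 offset mechanics as described in the decision.
    # Only the $22,758.80 insurer payment is from the record; the gross
    # award below is a hypothetical placeholder for illustration.
    insurer_paid = 22_758.80    # Great West's good-faith interim payments
    gross_award = 60_000.00     # hypothetical total Ohio award to McElroy

    bwc_outlay = gross_award - insurer_paid  # award reduced by the collateral payment
    print(f"BWC pays McElroy: ${bwc_outlay:,.2f}")          # $37,241.20
    print(f"Offset retained by BWC: ${insurer_paid:,.2f}")  # $22,758.80

    # The unjust-enrichment holding in one line: McElroy is made whole either
    # way, but the offset reduced the state fund's obligation with money that
    # was never its own, so equity requires reimbursing the insurer that amount.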
986 P.2d 765 (1999) 1999 UT App 232 AMERICAN ESTATE MANAGEMENT CORPORATION, a Utah corporation, Plaintiff and Appellant, v. INTERNATIONAL INVESTMENT AND DEVELOPMENT CORPORATION, a Utah corporation; and John Does I-X, Defendants and Appellees. No. 980264-CA. Court of Appeals of Utah. July 29, 1999. Ronald G. Russell, Parr Waddoups Brown Gee Loveless, Salt Lake City, for Appellant. Merrill F. Nelson and David M. Wahlquist, Kirton & McConkie, Salt Lake City, for Appellees. Before Judges BENCH, DAVIS, and ORME. OPINION ORME, Judge: ¶ 1 American Estate Management Corporation (AEM) appeals the trial court's grant of summary judgment in favor of International Investment and Development Corporation (IID), arguing the trial court incorrectly determined that AEM's adverse possession claim is barred by the claim preclusion branch of res judicata.[1] AEM claims title by *766 adverse possession to a parcel of land used as a parking lot adjacent to the Highland Terrace Apartment Complex. AEM acquired the apartment complex by warranty deed from IID in 1982 and claims the description of the parking lot parcel was inadvertently omitted from the deed. We conclude that the trial court's ruling was correct, and we affirm its judgment. BACKGROUND ¶ 2 In 1982, business partners Po and Beatrice Chang and Tony and Sandra Lin agreed to disentangle some of their joint business enterprises and, to that end, executed a Separation Agreement. Prior to the separation, AEM and IID had been jointly owned by the Changs and the Lins. Pursuant to the agreement, the Changs became the exclusive owners of AEM and the Lins acquired exclusive ownership of IID. ¶ 3 The Separation Agreement further provided that AEM would receive IID's interest in the Highland Terrace Apartment Complex. IID executed a special warranty deed conveying the apartment complex parcel to AEM, but the adjacent parking lot parcel was not described in the deed. Allegedly unaware that the parking lot had not been deeded, AEM took possession of the complex and the parking lot parcel and began paying taxes on both. Later the same year, the parties executed a document entitled "Satisfaction of Debt," agreeing that all debts owed by IID to AEM were satisfied unless specifically identified in other documents. ¶ 4 Several years later, the parties' business relationship deteriorated, and, in 1990, AEM filed a complaint against the Lins, owners of IID, raising numerous allegations of wrongdoing. In 1995, AEM amended its complaint to name IID as a party and to add and amend claims. One of AEM's claims sought damages for breach of the 1982 Separation Agreement and another requested specific performance thereof. AEM alleged in its complaint that IID had breached the Separation Agreement when it failed to deed certain property to AEM. Answers to interrogatories referred to the parking lot parcel as one of the properties AEM alleged should have been deeded. The trial court ultimately granted summary judgment in favor of the Lins and IID on all claims related to the Separation Agreement, ruling that the 1982 Satisfaction of Debt "specifically disposed of claims arising from the Separation Agreement." ¶ 5 In 1997, AEM instituted this second action against IID claiming ownership of the parking lot parcel by adverse possession. The trial court granted summary judgment to IID, concluding that AEM's adverse possession claim was precluded by the trial court's judgment in the earlier action. 
ISSUES AND STANDARD OF REVIEW ¶ 6 AEM argues on appeal that claim preclusion does not bar its adverse possession claim because (1) the breach of contract claim in the prior action arose out of a different, earlier transaction or occurrence than the adverse possession claim in the pending action and (2) the breach of contract action did not result in a final judgment on the merits.[2] We review the trial court's grant of summary judgment for correctness, determining whether the court correctly concluded that no genuine issue of material fact existed and whether the court correctly applied the governing law. See Harline v. Barker, 912 P.2d 433, 438 (Utah 1996). ANALYSIS Claim preclusion bars a cause of action only if the suit in which that cause of action is being asserted and the prior suit satisfy three requirements. First, both cases must involve the same parties or their privies. Second, the claim that is alleged to be barred must have been presented in the first suit or must be one that could and should have been raised in the first action. Third, the first suit must have resulted in a final judgment on the merits. Madsen v. Borthick, 769 P.2d 245, 247 (Utah 1988). Accord Estate of Covington v. Josephson, *767 888 P.2d 675, 677 (Utah Ct.App. 1994), cert. denied, 910 P.2d 425 (Utah 1995). If these three requirements are met, "the result in the prior action constitutes the full relief available to the parties on the same claim or cause of action." Ringwood v. Foreign Auto Works, Inc., 786 P.2d 1350, 1357 (Utah Ct.App.), cert. denied, 795 P.2d 1138 (Utah 1990). Claim preclusion serves "vital public interests[,] includ[ing] (1) fostering reliance on prior adjudications; (2) preventing inconsistent decisions; (3) relieving parties of the cost and vexation of multiple lawsuits; and (4) conserving judicial resources." Office of Recovery Servs. v. V.G.P., 845 P.2d 944, 946 (Utah Ct.App.1992). ¶ 7 AEM does not dispute that it brought both suits against the same parties, the Lins, and their privy, IID. Nevertheless, it argues its adverse possession claim is not barred because the second and third requirements of claim preclusion are not met. Specifically, AEM argues its adverse possession claim was not brought in the prior action, nor could or should it have been, and that the first action did not result in a final judgment on the merits. A. Adverse Possession Could and Should Have Been Raised ¶ 8 AEM's adverse possession claim is barred by the judgment in the prior action if both suits raised the same claim or cause of action, or if AEM could and should have raised its adverse possession claim in the prior action. See Madsen, 769 P.2d at 247. While AEM concedes that its entitlement to the parking lot parcel was at issue in both actions, it argues its prior claim to title based on the Separation Agreement did not raise the same claim or cause of action raised in the present action, i.e., to quiet title to the parking lot parcel on the ground of adverse possession. AEM asserts that the adverse possession claim did not arise out of the Separation Agreement, the transaction out of which the prior breach of contract claim arose, and that proof of the adverse possession claim requires presentation of different facts and evidence. 
Further, AEM argues its adverse possession claim was not one that could and should have been brought in the prior action because AEM was unaware when it filed its complaint in the prior action that title to the parking lot parcel remained with IID and because AEM had no duty to amend its complaint to add the adverse possession claim. ¶ 9 The Utah Supreme Court has defined claim or cause of action as "the aggregate of operative facts which give rise to a right enforceable in the courts." A claim is the "situation or state of facts which entitles a party to sustain an action and gives him the right to seek judicial interference in his behalf." A claim petitions the court to award a remedy for injury suffered by the plaintiff. A cause of action is necessarily comprised of specific elements which must be proven before relief is granted. A claim or cause of action is resolved by a judicial pronouncement providing or denying the requested remedy. Swainston v. Intermountain Health Care, Inc., 766 P.2d 1059, 1061 (Utah 1988) (citations omitted). ¶ 10 Defining the scope of a claim or cause of action is not an exact science and, in fact, is at times driven by the relative importance of the finality of judgment. Compare In re J.J.T., 877 P.2d 161, 163-64 (Utah Ct.App.1994) ("[I]t cannot be persuasively argued that judicial economy or the convenience afforded by finality of legal controversies must override the concern for a child's welfare.") with Office of Recovery Servs., 845 P.2d at 947 ("[P]olicies advanced by the doctrine of res judicata have particular importance in this case because the child's right not to be bastardized far outweighs defendant's interest in asserting nonpaternity more than six years after having acknowledged paternity."). When, as in this case, title to real property is at issue, the need for finality is at its apex. See Farrell v. Brown, 111 Idaho 1027, 729 P.2d 1090, 1093 (Ct.App.1986); 18 Charles Alan Wright, et al., Federal Practice and Procedure § 4408, at 65 (1981). ¶ 11 Contrary to AEM's characterization, both its prior and present actions assert one claim — a claim of title to the parking lot parcel — albeit under two different legal theories. *768 Other jurisdictions have so ruled, and have held subsequent suits barred. See, e.g., Blance v. Alley, 697 A.2d 828, 830-31 (Me. 1997) (holding claim of adverse possession barred by judgments in two prior actions to establish title to same property via other legal theories); Hyman v. Hillelson, 79 A.D.2d 725, 434 N.Y.S.2d 742, 745 (N.Y.App. Div.1980) (ruling subsequent adverse possession action and prior suit for reformation of deed not separate and distinct where both involved dispute over conveyance of adjoining lots), aff'd, 55 N.Y.2d 624, 446 N.Y.S.2d 251, 430 N.E.2d 1304 (1981); Myers v. Thomas, No. 01A01-9111-CH-00412, 1992 WL 56993, at *4, 1992 Tenn.App. LEXIS 260, at *9-10 (Tenn.Ct.App. Mar. 25, 1992) (holding addition of adverse possession claim insufficient to distinguish later suit from prior suit involving same property); Green v. Parrack, 974 S.W.2d 200, 203 (Tex.Ct.App.1998) (holding prior judgment establishing ownership to strip of land precluded subsequent competing claims to same strip by same parties under different legal theories). ¶ 12 Nevertheless, we need not definitively determine whether AEM has raised one claim or two because we readily conclude that AEM could and should have brought its adverse possession claim in the prior suit. 
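Before turning to the authorities the court marshals for this conclusion, the Madsen test it applies can be summarized as a simple conjunctive checklist. The sketch below is illustrative only; it encodes the three requirements quoted earlier in the opinion and applies them to the facts as the court recites them, and is not a statement of Utah law beyond what the opinion says.

    # The Madsen v. Borthick claim-preclusion test as a conjunctive checklist,
    # applied to the facts recited in this opinion. Illustrative only.
    def claim_precluded(same_parties_or_privies: bool,
                        was_or_should_have_been_raised: bool,
                        final_judgment_on_merits: bool) -> bool:
        """The second suit is barred only if all three elements hold."""
        return (same_parties_or_privies
                and was_or_should_have_been_raised
                and final_judgment_on_merits)

    print(claim_precluded(
        same_parties_or_privies=True,         # undisputed: the Lins and their privy, IID
        was_or_should_have_been_raised=True,  # ripe by 1990; not raised then or in the 1995 amendment
        final_judgment_on_merits=True,        # summary judgment on the Separation Agreement claims
    ))  # -> True: the adverse possession claim is barred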
Claim preclusion "'reflects the expectation that parties who are given the capacity to present their "entire controversies" shall in fact do so.'" Ringwood, 786 P.2d at 1357 (quoting Restatement (Second) of Judgments § 24 cmt. a (1982)). If a party fails, purposely or negligently, to "'make good his cause of action ... "by all proper means within his control, ... he will not afterward be permitted to deny the correctness of that determination, nor to relitigate the same matters between the same parties."'" Horner v. Whitta, No. 13-93-33, 1994 WL 114881, at *2, 1994 Ohio App. LEXIS 1248, at *6-7 (Ohio Ct. App. Mar. 16, 1994) (citations omitted in original), appeal denied, 70 Ohio St.3d 1416, 637 N.E.2d 12 (1994).

¶ 13 In Ringwood v. Foreign Auto Works, Inc., Ringwood filed two separate complaints against individuals to whom he had sold stock in Foreign Auto Works, Inc. See 786 P.2d at 1352-53. Ringwood's first suit was dismissed because it was based on a promissory note the trial court found had merged into a later agreement. See id. at 1357-58. Ringwood then brought suit for breach of the later agreement. See id. at 1353. This court reversed the trial court's ruling that Ringwood's second action was not barred by res judicata, concluding that any "claim by Ringwood under the November agreement could have been decided in the prior action, as the agreement was extant and was in default. The only reason it was not decided was because Ringwood failed to raise the claim.... Therefore, we find that res judicata bars Ringwood's claims[.]" Id.

¶ 14 AEM's situation is similar. When it filed its complaint in the prior action in 1990, it had possessed the parking lot parcel for the requisite seven years. See Utah Code Ann. § 78-12-12 (1996). Hence, its adverse possession claim was then ripe. AEM had a second chance to raise a claim of adverse possession when it amended its complaint in 1995, but did not. As in Ringwood, the only reason AEM's claim of adverse possession was not decided in the prior action is because AEM failed to raise it. And, as in Ringwood, the claim preclusion branch of res judicata bars AEM from doing so now. See Wheadon v. Pearson, 14 Utah 2d 45, 47, 376 P.2d 946, 947-48 (1962) ("Here, we have the same parties litigating the same subject matter — an asserted right of way over defendants' property.... [T]he issue or theory of implied easement, now urged in this second action, could have been urged and adjudicated in the first action."). Accord Irving Pulp & Paper Ltd. v. Kelly, 654 A.2d 416, 418 (Me. 1995) (Adverse possession was "an issue that might have been tried in the 1951 action. Under the doctrine of res judicata, [appellee] and his privies are therefore precluded from having or claiming any right or title adverse to [appellant] for any period prior to November 1951."); Bagley v. Moxley, 407 Mass. 633, 555 N.E.2d 229, 232 (1990) ("[P]laintiffs were not entitled to pursue their claim of ownership through piecemeal litigation, offering one legal theory to the court while holding others in reserve for future litigation should the first prove unsuccessful.").[3]

B. The Prior Action Resulted In a Final Judgment on the Merits

¶ 15 Having determined that AEM could and should have raised its adverse possession claim in the prior action, we now consider AEM's argument that res judicata does not bar its current suit for title to the parking lot parcel by adverse possession because the prior action did not result in a final judgment on the merits.[4] We also reject this argument.

¶ 16 First, the trial court's Memorandum Decision unequivocally granted summary judgment to the defendants on AEM's claims of breach of the 1982 Separation Agreement. AEM's fifth claim for relief in its amended complaint alleged, at paragraph 44(g), that "[t]he Lins have breached the March 1982 Separation Agreement ... [b]y failing to deed certain properties to Plaintiffs as contemplated by the agreement." In an interrogatory, AEM was asked to "[p]rovide the legal description of all properties you reference in paragraph 44(g)." AEM responded: "The legal description of these properties will be produced in connection with the production of documents, but includes a one-foot strip along the boundary of the Draper property and a parcel of property associated with the Highland Terrace Apartments." The trial court's Memorandum Decision, specifically incorporated into its Final Order, stated:

Defendants claim that they are entitled to dismissal of claim 5 (Breach of Separation Agreement) under a theory of accord, satisfaction, and release. They contend that any problems regarding the separation agreement were worked out by the parties when they signed a March 1, 1982 "Satisfaction of Debt." ... Defendants['] argument appears to be well taken. The release specifically disposed of claims arising from the Separation Agreement. Thus the Court concludes that the "Satisfaction of Debt" releases this claim and defendants' motion [for summary judgment] is granted as to this claim.

Summary judgment on the Separation Agreement claims constituted a judgment on the merits which became final upon entry of the Final Order.[5]

¶ 17 Moreover, AEM's claims for breach of the Separation Agreement were not among those claims voluntarily dismissed by stipulation, as AEM argues. The trial court's Final Order indicates specifically which claims were dismissed by stipulation. Claims relating to the Separation Agreement were not among them. Thus, dismissal of the breach of Separation Agreement claims was not a voluntary dismissal without prejudice. See Utah R. Civ. P. 41. The third requirement of claim preclusion, that the prior action must have resulted in a final judgment on the merits, is therefore met.

CONCLUSION

¶ 18 AEM's claim of title to the parking lot parcel is barred under the claim preclusion branch of res judicata. AEM could and should have raised its adverse possession claim in the prior action alleging breach of the 1982 Separation Agreement. Further, the prior action resulted in a final judgment on the merits. Accordingly, the trial court correctly granted IID's motion for summary judgment on res judicata grounds.

¶ 19 Affirmed.

¶ 20 WE CONCUR: RUSSELL W. BENCH, Judge, and JAMES Z. DAVIS, Judge.

NOTES

[1] Although IID styled its motion as a motion to dismiss under Rule 12(b)(6) of the Utah Rules of Civil Procedure, it was properly treated as a motion for summary judgment by the trial court because IID supported its motion with sources outside the pleadings. See Utah R. Civ. P. 12(b); DOIT, Inc. v. Touche, Ross & Co., 926 P.2d 835, 838 n.3 (Utah 1996).

[2] Because our ruling on the claim preclusion issue is dispositive, we have no occasion to address the parties' alternative arguments concerning issue preclusion.

[3] Many other courts have come to the same conclusion when a second action alleging adverse possession has been brought by the party who failed to prove its entitlement to real property in a prior action premised on some other theory. See, e.g., West Mich. Park Ass'n v. Fogg, 158 Mich. App. 160, 404 N.W.2d 644, 648 (1987) ("While it is true that the plaintiffs did not claim the property by adverse possession in [the prior action], that claim could have been made in [the prior action]. It is therefore barred[.]"), appeal denied, No. 80701 (Mich. Aug. 28, 1987); Hangman v. Bruening, 247 Neb. 769, 530 N.W.2d 247, 249 (1995) ("The theory of adverse possession could have been raised in the earlier quiet title litigation. All matters which could have been litigated in the earlier proceedings are barred by the doctrine of res judicata."); Hyman, 434 N.Y.S.2d at 745 ("At the time the first action for reformation was commenced, the cause of action for adverse possession was also viable and could also have been pleaded in the prior complaint and determined in the prior action.").

[4] It is inarguable that a final judgment was entered in the prior action. AEM's contention in this appeal is really that that judgment did not encompass various claims in issue between the parties, including ownership of the parking lot parcel.

[5] Because the trial court specifically addressed the breach of Separation Agreement claims and granted summary judgment thereon in favor of the defendants, those claims are not implicated by the trial court's statement in the Final Order that "[a]ll claims of the parties set forth in their pleadings not reduced to summary judgment herein or otherwise dealt with by this Order are hereby dismissed." We therefore have no occasion to consider AEM's argument that the trial court's language concerning these stray claims effected a dismissal without prejudice under Rule 41 of the Utah Rules of Civil Procedure.
/**
 * Copyright © Magento, Inc. All rights reserved.
 * See COPYING.txt for license details.
 */

/**
 * @api
 */
define([
    'underscore',
    'mageUtils',
    'uiRegistry',
    './abstract',
    'uiLayout'
], function (_, utils, registry, Abstract, layout) {
    'use strict';

    var inputNode = {
        parent: '${ $.$data.parentName }',
        component: 'Magento_Ui/js/form/element/abstract',
        template: '${ $.$data.template }',
        provider: '${ $.$data.provider }',
        name: '${ $.$data.index }_input',
        dataScope: '${ $.$data.customEntry }',
        customScope: '${ $.$data.customScope }',
        sortOrder: {
            after: '${ $.$data.name }'
        },
        displayArea: 'body',
        label: '${ $.$data.label }'
    };

    /**
     * Parses incoming options; options whose value is null or equal to
     * the caption value are treated as the caption.
     *
     * @param {Array} nodes
     * @param {*} captionValue
     * @return {Object}
     */
    function parseOptions(nodes, captionValue) {
        var caption,
            value;

        nodes = _.map(nodes, function (node) {
            value = node.value;

            if (value === null || value === captionValue) {
                if (_.isUndefined(caption)) {
                    caption = node.label;
                }
            } else {
                return node;
            }
        });

        return {
            options: _.compact(nodes),
            caption: _.isString(caption) ? caption : false
        };
    }

    /**
     * Recursively loops over data to find non-undefined, non-array value
     *
     * @param {Array} data
     * @return {*} - first non-undefined value in array
     */
    function findFirst(data) {
        var value;

        data.some(function (node) {
            value = node.value;

            if (Array.isArray(value)) {
                value = findFirst(value);
            }

            return !_.isUndefined(value);
        });

        return value;
    }

    /**
     * Recursively indexes options into the result object,
     * using each item's 'value' property as the key.
     *
     * @param {Array} data
     * @param {Object} result
     * @returns {Object}
     */
    function indexOptions(data, result) {
        var value;

        result = result || {};

        data.forEach(function (item) {
            value = item.value;

            if (Array.isArray(value)) {
                indexOptions(value, result);
            } else {
                result[value] = item;
            }
        });

        return result;
    }

    return Abstract.extend({
        defaults: {
            customName: '${ $.parentName }.${ $.index }_input',
            elementTmpl: 'ui/form/element/select',
            caption: '',
            options: []
        },

        /**
         * Extends instance with defaults, extends config with formatted values
         * and options, and invokes initialize method of AbstractElement class.
         * If instance's 'customEntry' property is set to true, calls 'initInput'
         */
        initialize: function () {
            this._super();

            if (this.customEntry) {
                registry.get(this.name, this.initInput.bind(this));
            }

            if (this.filterBy) {
                this.initFilter();
            }

            return this;
        },

        /**
         * Calls 'initObservable' of parent, initializes 'options' and 'initialOptions'
         * properties, calls 'setOptions' passing options to it
         *
         * @returns {Object} Chainable.
         */
        initObservable: function () {
            this._super();

            this.initialOptions = this.options;

            this.observe('options caption')
                .setOptions(this.options());

            return this;
        },

        /**
         * Set link for filter.
         *
         * @returns {Object} Chainable
         */
        initFilter: function () {
            var filter = this.filterBy;

            this.filter(this.default, filter.field);
            this.setLinks({
                filter: filter.target
            }, 'imports');

            return this;
        },

        /**
         * Creates input from template, renders it via renderer.
         *
         * @returns {Object} Chainable.
         */
        initInput: function () {
            layout([utils.template(inputNode, this)]);

            return this;
        },

        /**
         * Matches specified value with existing options
         * or, if value is not specified, returns value of the first option.
         *
         * @returns {*}
         */
        normalizeData: function () {
            var value = this._super(),
                option;

            if (value !== '') {
                option = this.getOption(value);

                return option && option.value;
            }

            if (!this.caption()) {
                return findFirst(this.options);
            }
        },

        /**
         * Filters 'initialOptions' property by 'field' and 'value' passed,
         * calls 'setOptions' passing the result to it
         *
         * @param {*} value
         * @param {String} field
         */
        filter: function (value, field) {
            var source = this.initialOptions,
                result;

            field = field || this.filterBy.field;

            result = _.filter(source, function (item) {
                return item[field] === value || item.value === '';
            });

            this.setOptions(result);
        },

        /**
         * Change visibility for input.
         *
         * @param {Boolean} isVisible
         */
        toggleInput: function (isVisible) {
            registry.get(this.customName, function (input) {
                input.setVisible(isVisible);
            });
        },

        /**
         * Sets 'data' to 'options' observable array; if instance has
         * 'customEntry' property set to true, toggles visibility of the
         * select and its custom input based on whether options remain.
         *
         * @param {Array} data
         * @returns {Object} Chainable
         */
        setOptions: function (data) {
            var captionValue = this.captionValue || '',
                result = parseOptions(data, captionValue),
                isVisible;

            this.indexedOptions = indexOptions(result.options);

            this.options(result.options);

            if (!this.caption()) {
                this.caption(result.caption);
            }

            if (this.customEntry) {
                isVisible = !!result.options.length;

                this.setVisible(isVisible);
                this.toggleInput(!isVisible);
            }

            return this;
        },

        /**
         * Processes preview for option by its value, and sets the result
         * to 'preview' observable
         *
         * @returns {String} Preview label.
         */
        getPreview: function () {
            var value = this.value(),
                option = this.indexedOptions[value],
                preview = option ? option.label : '';

            this.preview(preview);

            return preview;
        },

        /**
         * Get option from indexedOptions list.
         *
         * @param {Number} value
         * @returns {Object} Option object.
         */
        getOption: function (value) {
            return this.indexedOptions[value];
        },

        /**
         * Select first available option
         *
         * @returns {Object} Chainable.
         */
        clear: function () {
            var value = this.caption() ? '' : findFirst(this.options);

            this.value(value);

            return this;
        },

        /**
         * Sets the initial value: selects the first available option
         * if neither a value nor a default is set.
         *
         * @returns {Object} Chainable.
         */
        setInitialValue: function () {
            if (_.isUndefined(this.value()) && !this.default) {
                this.clear();
            }

            return this._super();
        }
    });
});
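For reference, here is a plain-JavaScript restatement of the caption-folding behavior implemented by parseOptions above. parseOptionsSketch and the sample option list are illustrative only and not part of the Magento module (the real function additionally relies on underscore's _.map/_.compact):

// Standalone sketch (illustrative, not part of the module): options whose
// value is null, or equal to captionValue, are folded into the caption;
// all other options are kept as selectable entries.
function parseOptionsSketch(nodes, captionValue) {
    var caption;

    var options = nodes.filter(function (node) {
        if (node.value === null || node.value === captionValue) {
            if (caption === undefined) {
                caption = node.label; // first caption-like option wins
            }

            return false; // folded into the caption
        }

        return true; // kept as a selectable option
    });

    return {
        options: options,
        caption: typeof caption === 'string' ? caption : false
    };
}

// Example run (hypothetical data):
console.log(parseOptionsSketch([
    { value: null, label: 'Please select' },
    { value: '1', label: 'Fixed' },
    { value: '2', label: 'Percent' }
], ''));
// -> { options: [{ value: '1', ... }, { value: '2', ... }], caption: 'Please select' }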
8.935.
Let o = n + s. Sort 3/8, -1/3, o in descending order. 3/8, o, -1/3
Let g be 2*(425/(-50) - (-4)/(4/7)). Put g, -174, 5, -4 in descending order. 5, g, -4, -174
Let d = -27989 - -251899/9. Let n = 0.32 + -0.02. Put -0.07, d, 4, n in ascending order. d, -0.07, n, 4
Let n = -48 - -43. Let t be (-8)/32*3*(-8)/6. Sort t, n, 3, -4 in increasing order. n, -4, t, 3
Let j = 12 + -11.8. Let u = 0.58 - -3.42. Put u, -1, -2/5, j in decreasing order. u, j, -2/5, -1
Suppose 3*l = -2*a - 15, 4*l = -202*a + 206*a - 20. Put -44, l, 23 in increasing order. -44, l, 23
Let x = -8 - -5. Suppose 4*n + 34 = -z, 0*n + 2*n = -z - 32. Let l be ((-3)/z*8)/((-2)/10). Put 5, l, x in decreasing order. 5, x, l
Let a = -4 - -8. Suppose i - 23 = 4*t + t, -a*t = 5*i + 30. Let x(g) = 55*g - 711. Let f be x(13). Sort f, 11, i in descending order. 11, f, i
Suppose 15*g = -6 + 6. Let x be g + 4/(-2) + (36 - 31). Suppose 2*i - d = 6*i - 7, -10 = -4*i + 2*d. Put i, 5, -5, x in ascending order. -5, i, x, 5
Suppose -20*n - 42 = 18. Sort 4, 24, n, 3 in decreasing order. 24, 4, 3, n
Suppose 0 = 3*w - 2*h + 2, 111*w + 4 = 108*w + h. Sort 1, w, 14, 5. w, 1, 5, 14
Suppose 5*y + 1 = 4*r, 5*r = 2*y - 0*y + 14. Suppose -r*v = -38 + 46. Let s = -30.054 - -0.054. Put s, -0.2, v in descending order. -0.2, v, s
Let a be (336/32 + -12)/15. Sort a, 503, -0.3 in decreasing order. 503, a, -0.3
Let f be 2 - (-5 - ((-119)/35 + -3)). Let r be ((-2)/9)/(-2)*3. Let g = -16.6 + 17. Put r, g, 4, f in decreasing order. 4, f, g, r
Let g = 3.7771 + -3.4771. Sort -176/9, -0.3, -3, g in descending order. g, -0.3, -3, -176/9
Let j be (-2 + 5)*(-140 + 135). Let i = -23 + 18. Put j, i, -3 in decreasing order. -3, i, j
Let s = 0.20978 + 4.79022. Let x = 0.1 - 1.1. Let q = -11/8 - -13/8. Put x, -4, s, q in descending order. s, q, x, -4
Let r = -1.532 + 1.732. Sort -1/3, r, -2, 9/2 in ascending order. -2, -1/3, r, 9/2
Let d be 26/(-2)*(63/(-252) - 10/(-8)). Put d, -4, 179 in decreasing order. 179, -4, d
Let o = -25549 + 25536. Sort -0.2, -133, o, -2 in increasing order. -133, o, -2, -0.2
Let s = 13142 + -13140. Put s, -5, -4, 1, 7 in increasing order. -5, -4, 1, s, 7
Suppose 32 = -0*o + 8*o. Let k be -4 + (0 + o)*-1. Let c be 0/(k/(-4) - 1). Sort c, 1, 6, -4 in decreasing order. 6, 1, c, -4
Let x(v) = v**3 - 3*v**2 - 4*v + 5. Let m be x(4). Suppose 13*q = m*q - 24. Sort 1, -5, q, -11 in decreasing order. 1, q, -5, -11
Let v(x) = -10*x + 286. Let z be v(29). Let l(b) = -b - 1. Let y(u) = u + 1. Let m(p) = 6*l(p) + 5*y(p). Let c be m(-3). Put -1, c, z, -5 in increasing order. -5, z, -1, c
Suppose -5*s - 8 + 3 = 0, 4*p - 3*s - 851 = 0. Let c = -222 + p. Let x = 0 - -5. Put 0, c, x in increasing order. c, 0, x
Let s be ((-2)/(-5))/((-54)/405). Put s, -4, -6, 2, -7 in decreasing order. 2, s, -4, -6, -7
Let a = -173 - -173.3. Let h = -1531 - -1534. Put -12, a, -2/5, h in decreasing order. h, a, -2/5, -12
Suppose -78 = -19*d - 7*d. Let k = -42/19 - -145/57. Sort d, 5, k, -1/4. -1/4, k, d, 5
Suppose -2*k = -56*i + 61*i - 442, -4*k + i + 928 = 0. Suppose d - u = 5, -2*u - u = 15. Sort d, k, -2, -0.4. -2, -0.4, d, k
Suppose -6*w + 19 = -5. Suppose -5*g = -w*k - 8*g - 139, -2*g + 6 = 0. Sort k, 2, 4, 0 in decreasing order. 4, 2, 0, k
Suppose 16*n + 135 = 519. Suppose q = -4*g + 4*q - n, -4*g + q - 16 = 0. Sort 2, g, 16. g, 2, 16
Let r = 25705 - 25700. Sort -13, -3, -1, r. -13, -3, -1, r
Suppose 15*a - 7 = 14*a - 4*m, -2*a = m + 35. Sort a, 3, -3. a, -3, 3
Let u(t) = t**3 - 18*t**2. Let a be u(18). Let j(v) = -v**3 - 25*v**2 - v - 30. Let l be j(-25). Put l, 11, a in ascending order. l, a, 11
Let g(f) = 30*f - 116. Let a be g(4). Suppose a*i - 36 = 10*i. Sort 7, 3, i, -3 in decreasing order. 7, 3, -3, i
Let m be (-9 - 1071/(-117)) + 814/286. Put -7, -80, -3, 5, m in decreasing order. 5, m, -3, -7, -80
Let y = -543/13 - -10707/247. Sort -3/5, -0.07, 0, y. -3/5, -0.07, 0, y
Let v = 0.189491 - -3.810509. Let a = -24 + 49/2. Put 1, a, v, 2/23 in ascending order. 2/23, a, 1, v
Let k = -12.27 - -188.87. Let q = -177 + k. Let y be 15/6 - 1 - 0. Put -6, y, q in ascending order. -6, q, y
Let m be (-7 - 169/(-130)) + (-3)/10. Suppose -4*p - 44 - 28 = 0. Let k be 10/(-15)*p/(-4). Sort -4, m, 4, k in decreasing order. 4, k, -4, m
Let a = 55 - 61. Let d = -153.05 - -159. Let f = a + d. Sort -2/9, f, -1 in decreasing order. f, -2/9, -1
Suppose 0 = -3*n - 0*n + m - 435, 0 = 2*n + 4*m + 290. Let s = 144 + n. Sort 4/5, 2, 0.5, s. s, 0.5, 4/5, 2
Let q = -21615 + 21600. Sort q, 2/3, -86 in decreasing order. 2/3, q, -86
Let a(w) = 6*w**2 + 266*w + 91. Let i be a(-44). Sort -19, -1, i in decreasing order. i, -1, -19
Suppose 0 = -4*s + f + 133 - 560, 0 = -s + 2*f - 112. Let a = s - -96. Sort a, 1, -7, 0. a, -7, 0, 1
Let j = 16/27 - 41/54. Sort 5/4, -4, 2, -24, j in ascending order. -24, -4, j, 5/4, 2
Let b = 3 + 0. Suppose 2*v + 191 = 5*s - 269, -3*v = 15. Let n be ((-6)/15)/(9/s) + 9. Put b, n, -3 in descending order. n, b, -3
Let b = -0.6 + 0.1. Let z = 167 + -166. Let r = 2/27 + 71/135. Put b, 0.1, z, r in increasing order. b, 0.1, r, z
Let s be 7/((-14)/(-11))*(-8)/(-2). Let q(p) = p**3 + 6*p**2 + 6*p. Let x be q(-4). Suppose s = x*r - 10. Put -33, r, 1, -1 in ascending order. -33, -1, 1, r
Suppose 0 = -q - 7 + 2. Suppose -650 = r - 645, -8*r = -4*b + 12. Sort q, 5, b. b, q, 5
Suppose 7*r - 23 = j + 223, 3*r = 15. Sort 1, -1, j. j, -1, 1
Let q = 32.468 + -34.468. Sort 0.06, 1, 0, 1.5, q in decreasing order. 1.5, 1, 0.06, 0, q
Suppose 8 = 4*t, -2*d = -3*d + 3*t - 290. Put d, -3, -4 in descending order. -3, -4, d
Suppose -2*j + 5*x = -15, j + 0*x = -x + 4. Sort 66, 0, j, 3 in decreasing order. 66, j, 3, 0
Let o be ((-4)/(-6))/(358/(-537)). Put o, 5, 4, 3, 1 in ascending order. o, 1, 3, 4, 5
Let c = 25.4 - 206.4. Let f = -179 - c. Put f, 0.2, 2/21 in decreasing order. f, 0.2, 2/21
Let k = 695.5796 + -0.1796. Let m = -695 + k. Put m, -12, -0.15 in decreasing order. m, -0.15, -12
Let x = -3431/27 - -147383/1161. Let p be (-162)/(-1032) + 1/(-4). Let v = x + p. Sort -3/5, -0.08, 3/4, v in decreasing order. 3/4, -0.08, v, -3/5
Let t = 10401 - 10400.8. Sort 6, 2, 12, 1, t. t, 1, 2, 6, 12
Let j be -15 - 1*(4 - 2). Let a = 31798 - 31795. Sort 1, 2, j, a. j, 1, 2, a
Suppose -59*d - 16 = -57 - 18. Put 2, -52, d in descending order. 2, d, -52
Let u = 90 - 100.94. Let h = -15.9 - u. Let q = -0.04 + h. Sort -2/5, 5, q in descending order. 5, -2/5, q
Let v = 2020 + -2020.2. Sort -11, v, 19, -2 in descending order. 19, v, -2, -11
Let m be -21 + 19 + 3/1. Sort m, 4, 5, -1, 6 in decreasing order. 6, 5, 4, m, -1
Let z(k) = 11*k - 3. Suppose 1 = -f + n, -5*f + 4*n - 4 = -7*f. Let g be z(f). Let m be (1 + 1)*(-1)/7. Sort 1, g, -5, m in descending order. 1, m, g, -5
Suppose 70 - 27 = -17*u - 26*u. Put u, 4, 10, 2, -0.018 in ascending order. u, -0.018, 2, 4, 10
Suppose -u - 3*q + 8 = -3*u, u = q - 3. Let h = 66 + -44. Let m be (-18)/54 - h/6. Put u, 2, m, 0 in decreasing order. 2, 0, u, m
Let h be ((-1)/(-4))/((-33)/5599). Let l = -125/3 - h. Let m = -18 + 19. Sort -1, m, l in decreasing order. m, l, -1
Suppose -129 = 4*r - 49. Let y be 9/r + 25/100. Put 5, y, 9, -4 in increasing order. -4, y, 5, 9
Let o = -2/2133 - 31987/8532. Let i = -10642 - -10642.1. Sort i, 1/2, o in increasing order. o, i, 1/2
Let t = 4/2245 + -902/2245. Put 0, 1, t, -11 in descending order. 1, 0, t, -11
Let c = -3788.03 + 3785.03. Let j = -1 - -1.2. Sort j, c, -5, -0.1 in descending order. j, -0.1, c, -5
Suppose -130 = 214*y - 149*y. Sort 1, -5, y, 140. -5, y, 1, 140
Let b be 1624/12 - (-2)/(-6). Suppose 4*a - 495 = -b. Let x be ((-18)/(-30))/((0 - -2)/a). Sort 2, x, 5. 2, 5, x
Let t(j) = -2*j**3 + 3*j - 7*j**2 + j**3 + 97 - 8*j - 86. Let b be t(-6). Let u = 151 - 150. Put b, 4, u in decreasing order. b, 4, u
Let m(a) = -a**3 + 11*a**2 + 95*a + 115. Let y be m(17). Put y, -1/243, 1/7, 0.1 in descending order. 1/7, 0.1, -1/243, y
Suppose -74*o - 47457 + 47605 = 0. Let y = -5 - -9. Suppose 3*j - j - y*v = -16, 5*j + 7 = -v. Sort -8, o, j in descending order. o, j, -8
Let s = 3.653 + -3.553. Put -1/5, s, -3/8, 0.5 in ascending order. -3/8, -1/5, s, 0.5
Let z = 0.4 - 0.9. Let m = -169.4 + 150. Let r = -16.4 - m. Put z, r, 0.2, -1 in ascending order. -1, z, 0.2, r
Suppose 2*d = -5*t + 5*d +
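Each of these exercises reduces to evaluating the defined quantities and comparing them numerically. As a minimal, illustrative JavaScript check of one self-contained exercise from the list (q = -21615 + 21600, sorted in decreasing order against 2/3 and -86); the snippet below is a sketch, not part of the dataset:

// Evaluate q, then sort the named values in decreasing numeric order.
const q = -21615 + 21600; // -15
const items = [['q', q], ['2/3', 2 / 3], ['-86', -86]];

items.sort((a, b) => b[1] - a[1]); // decreasing order

console.log(items.map(([name]) => name).join(', '));
// -> "2/3, q, -86", matching the listed answer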
The Growing Partisan Divide Over America’s Relationship With Israel

A new Pew Research survey shows that there is a growing partisan divide in the United States over the issues involved in the dispute between Israel and the Palestinians, and it’s one that could have an impact on American foreign policy and American politics in the future:

The partisan divide in Middle East sympathies, for Israel or the Palestinians, is now wider than at any point since 1978. Currently, 79% of Republicans say they sympathize more with Israel than the Palestinians, compared with just 27% of Democrats. Since 2001, the share of Republicans sympathizing more with Israel than the Palestinians has increased 29 percentage points, from 50% to 79%. Over the same period, the share of Democrats saying this has declined 11 points, from 38% to 27%.

The latest national survey by Pew Research Center, conducted Jan. 10-15 among 1,503 adults, finds that 42% say Donald Trump is “striking the right balance” in the situation in the Middle East, while 30% say he favors Israel too much (just 3% say Trump sides too much with the Palestinians; 25% do not offer an opinion). At a similar point in Barack Obama’s presidency, 47% of Americans said he had struck a proper balance in dealing with the Middle East; 21% said he sided too much with the Palestinians, while 7% said he favored Israel too much.

The survey finds that while Republicans and Democrats are deeply divided in views of Israel, so too do they differ markedly in opinions about Benjamin Netanyahu, Israel’s prime minister. Nearly three times as many Republicans (52%) as Democrats (18%) have favorable impressions of Israel’s leader.

About half of Americans say a two-state solution is possible in the Middle East: 49% say a way can be found for Israel and an independent Palestinian state “to coexist peacefully,” while 39% say this is not possible. Democrats are far more likely than Republicans to say a two-state solution is possible (58% vs. 40%).

When asked about the dispute between Israel and the Palestinians, 46% of Americans say they sympathize more with the Israelis, 16% say they sympathize more with the Palestinians and about four-in-ten (38%) either volunteer that their sympathies are with both (5%), neither (14%) or that they do not know (19%). The overall balance of opinion has fluctuated only modestly since 1978, when 45% said they sympathized more with Israel, 14% with the Palestinians and 42% could not decide. But the partisan divide has widened considerably, especially over the past two decades.

The share of Republicans who sympathize with Israel has never been higher, dating back four decades. Nearly eight-in-ten Republicans (79%) sympathize more with Israel than the Palestinians, while just 6% sympathize more with the Palestinians; another 7% say they sympathize with both or neither, while 9% say they do not know.

As was the case last year, Democrats are divided in views of the Middle East conflict: Currently, 27% of Democrats say they sympathize more with Israel, while 25% say they sympathize more with the Palestinians; another 23% say they sympathize with neither or both sides and one-quarter (25%) say they don’t know. Democrats also were divided last year, when 33% said they sympathized with Israel and 31% said the Palestinians. Since then, the share of Democrats saying they don’t know has increased from 17% to 25% and the share saying they sympathize with both or neither has ticked up slightly from 19% to 23%.

As recently as two years ago, in April 2016, Democrats were more likely to sympathize more with Israel (43%) than with the Palestinians (29%), with 16% saying they sympathized with both or neither. Among Democrats, the decline over the last few years in those who say they sympathize more with Israel is seen both among liberals and among conservatives and moderates.

The share of liberal Democrats who sympathize more with Israel than the Palestinians has declined from 33% to 19% since 2016. Currently, nearly twice as many liberal Democrats say they sympathize more with the Palestinians than with Israel (35% vs. 19%); 22% of liberal Democrats sympathize with both sides or neither side and 24% do not offer an opinion. Moderate and conservative Democrats continue to sympathize more with Israel (35%) than the Palestinians (17%). However, the share of conservative and moderate Democrats who sympathize more with Israel has declined 18 percentage points since 2016 (from 53% to 35%).

(…)

Opinions of Israel’s prime minister, Benjamin Netanyahu, are basically unchanged from last year. About as many say they have a favorable view (31%) as an unfavorable opinion (28%) of Netanyahu; 41% express no opinion of Israel’s prime minister.

What was once a mere five percentage-point difference between the parties over support for Israel is now a 52-point rift. The partisan polarization that’s produced a hollowing out of the ideological center in American public life on a growing number of issues has now reached the politics of the Middle East. The practical consequences are unlikely to be pretty.

For one thing, the growing gap between the parties opens the prospect of wild swings in policy from administration to administration. With the GOP’s military hawks and millenarian evangelicals firmly committed to defending the Jewish state regardless of its actions in the West Bank and Gaza Strip, Republican presidents will be increasingly likely to follow President Trump’s lead in siding unconditionally and unambivalently with Israel in its conflict with the Palestinians.

(…)

With the gap between the parties — and within the Democratic Party — growing ever-wider on the issue, get ready for the Israeli-Palestinian conflict to erupt in electoral form within the American political system. Add it to the lengthening list of issues on which finding common ground and consensus eludes us as a nation.

There are more details, including more detailed demographic breakdowns, at the link, but the general conclusion is clear. When it comes to attitudes toward Israel the last ten years or so have seen a distinct change from what used to be the status quo in the United States. This is most notable with respect to the Republican Party, where blind and unquestioning support for Israel generally and Netanyahu specifically has become something of a litmus test. As I’ve noted before, this wasn’t always the case:

There was a time when Republican Presidents and politicians were critical of Israeli actions and even openly defied the wishes of the Israeli government and its supporters in the United States. President Eisenhower put pressure on Israel, Britain, and France when those three nations invaded Egypt in an effort to seize the Suez Canal. President Nixon supported Israel during the Yom Kippur War, but was also critical of Israeli policy when it conflicted with his policy of currying favor with anti-Communist Arab nations that were also opposed to Israel. President George H.W.
Bush’s Administration was similarly critical of Israel and actively lobbied the nation against retaliating when Saddam Hussein began lobbing Scud missiles toward Israel during the Persian Gulf War in an effort to break the multinational coalition that was, quite literally, on Iraq’s doorstep. And, perhaps most significantly for contemporary Republicans, the policy of the Reagan Administration toward Israel in the 1980s was far from obsequious and often quite critical. For example, Reagan defied objections from Israel and its supporters in the U.S. and sold AWACS aircraft to Saudi Arabia, supported a United Nations resolution condemning Israel’s attack on a nuclear plant in Iraq, and strongly criticized the Israeli invasion of Lebanon in June 1982. Additionally, both the Reagan and Bush 41 Administrations called on Israel to reach out to Arabs as part of Middle East peace initiatives.

None of that would be welcome in the modern Republican Party. Not only is criticism of Israel seemingly not allowed, but even questioning the claim that Israel is “America’s most important ally,” or arguing that the policies of the Israeli government vis-à-vis its neighbors or the Palestinians are wrong, is met with attacks, derision, and the insinuation that the person making the argument may be bigoted. This kind of attitude is as wrong when it’s applied to Israel as it would be when applied to the United States. Even accepting the notion that Israel is our “most important” ally, a debatable assertion to be honest, that status must mean being willing to criticize the ally when it does something wrong. It also means recognizing that the interests of the United States and the interests of Israel, while often parallel, are not identical. President Reagan recognized that fact, but one has to wonder what the new Republican orthodoxy on Israel would have to say about him today.

Rather than bipartisan unity when it comes to American policy toward Israel, the last fifteen years or so have shown indications of the stark and increasing partisan divide that this poll finds. American foreign policy toward Israel and the Israeli/Palestinian issue, for example, has shown significant differences that began during the Administration of George W. Bush, who many people described as the most pro-Israeli President in American history. Bush, of course, was followed by Barack Obama, whose policies toward Israel were roughly the same as those of his predecessors. Despite that fact, though, it became clear that there was significant antipathy between Obama and Israeli Prime Minister Benjamin Netanyahu, principally over the issue of how to approach Iran and the Iranian nuclear research program. Because of that, many Republicans and conservatives characterized Obama as being anti-Israel, or even anti-Semitic, notwithstanding the fact that his Administration was, if anything, even more pro-Israeli than some Republican Presidents such as Ronald Reagan, who had significant differences with Israel over many things it did, such as its war in Lebanon in the early 1980s. Now, of course, we have the Trump Administration, which appears to have put the American thumb on the scale in Israel’s favor far more than any of its predecessors.
This can be seen most notably, of course, in the decision to recognize Jerusalem as Israel’s capital notwithstanding the outstanding issues regarding its status, and in the decision to decertify Iranian compliance with the 2015 nuclear deal despite the fact that all of the available evidence shows that Iran is complying with its obligations under the agreement.

If I were an Israeli official or politician, I would be deeply concerned about this evidence of an increasing partisan divide in the United States regarding Israel. In the past, Israeli leaders could count on the fact that the United States would largely be in their corner; polling in the United States now shows not only a deep divide between Republicans and Democrats over support for Israel but also a decline in pro-Israeli sentiment among self-identified Independents. In no small part, it strikes me that much of this can be laid at the feet of Israeli Prime Minister Benjamin Netanyahu, who has done more than any of his predecessors to stoke the partisan fires here in the United States when he believes it to be in the interests of his country or, more specifically, in his personal political interests. This was most apparent, of course, during the Obama years, when Netanyahu seemed to go out of his way to go behind the back of the Administration in communications with Republicans in Congress in which he clearly sought to undermine the ongoing negotiations with Iran over its nuclear program. This included a speech to Congress, delivered at the invitation of House Republicans in March 2015 while he was running for re-election, a decision that was opposed by most Americans.

All of this has no doubt contributed to the partisan gap when it comes to policy toward Israel, and if it continues it could mean changes in American policy in the future based solely on which party controls the White House. This would not be in Israel’s interests, of course, and it suggests that it would be better for them to be more mindful of the fact that there is more than one political party in the United States.

Comments

Netanyahu did a lot to damage his country’s bipartisan relationship with the US — and he got nothing for it in the end, as the Iran Nuclear Deal went through. Trump has solidified this divide by moving the embassy. I cannot imagine that turning your country’s relationship with its biggest ally into a political football in the domestic politics of that ally is a good idea, but what do I know?

Wonder what Netanyahu and the other Israeli politicians will do when they discover that a lot of the lunatics supporting them in the US are expecting them to all die in a nuclear holocaust in preparation for the Second Coming. Either that or convert to Christianity. The trouble with using a bunch of religious lunatics to keep yourself in power is at some point their “support” of you may end up pushing you down paths you don’t like to go down.
He declared at the start of Trump’s presidency that the Jewish state had no greater friend than Trump–a man who has made openly anti-Semitic tweets and played footsie with massive Jew-haters such as David Duke. Like Trump himself, Bibi waited several days before issuing a vague, anodyne condemnation of the neo-Nazis in Charlottesville. (In contrast, his right-wing rival Naftali Bennett issued a clear and unequivocal condemnation immediately. Not everyone on the Israeli right has got their head as far up Trump’s ample tuches as the Prime Minister.) But perhaps the most telling incident was when Bibi’s son retweeted an overtly anti-Semitic graphic about George Soros. These people have cast their lot in with the heirs to Hitler, all the while thinking they’re the ones saving the Jews from themselves. Being pro-Israel and being pro-Bibi/Likud are two different things. I can certainly tell the difference. But we do this with many of our foreign relations: we put all our eggs in the Yeltsin basket or the Mubarak basket or the Shah of Iran basket and then get really surprised when political forces we ignored turn out to have big impacts on their own countries. I often wonder what we’re missing in Israel or Iraq or Russia or Europe because we prefer to personalize relations with only one or two people per country.
@extends('layouts.master')

@section('page-header', 'Invade')

@section('content')
    <div class="row">
        <div class="col-sm-12 col-md-9">
            @if ($protectionService->isUnderProtection($selectedDominion))
                <div class="box box-primary">
                    <div class="box-header with-border">
                        <h3 class="box-title"><i class="ra ra-crossed-swords"></i> Invade</h3>
                    </div>
                    <div class="box-body">
                        You are currently under protection for
                        @if ($protectionService->getUnderProtectionHoursLeft($selectedDominion))
                            <b>{{ number_format($protectionService->getUnderProtectionHoursLeft($selectedDominion), 2) }}</b> more hours
                        @else
                            <b>{{ $selectedDominion->protection_ticks_remaining }}</b> ticks
                        @endif
                        and may not invade during that time.
                    </div>
                </div>
            @elseif ($selectedDominion->morale < 70)
                <div class="box box-primary">
                    <div class="box-header with-border">
                        <h3 class="box-title"><i class="ra ra-crossed-swords"></i> Invade</h3>
                    </div>
                    <div class="box-body">
                        Your military needs at least 70% morale to invade others. Your military currently has {{ $selectedDominion->morale }}% morale.
                    </div>
                </div>
            @else
                <form action="{{ route('dominion.invade') }}" method="post" role="form" id="invade_form">
                    @csrf
                    <div class="box box-primary">
                        <div class="box-header with-border">
                            <h3 class="box-title"><i class="ra ra-crossed-swords"></i> Invade</h3>
                        </div>
                        <div class="box-body">
                            <div class="form-group">
                                <label for="target_dominion">Select a target</label>
                                <select name="target_dominion" id="target_dominion" class="form-control select2" required style="width: 100%" data-placeholder="Select a target dominion" {{ $selectedDominion->isLocked() ? 'disabled' : null }}>
                                    <option></option>
                                    @foreach ($rangeCalculator->getDominionsInRange($selectedDominion, false) as $dominion)
                                        <option value="{{ $dominion->id }}"
                                                data-land="{{ number_format($landCalculator->getTotalLand($dominion)) }}"
                                                data-percentage="{{ number_format($rangeCalculator->getDominionRange($selectedDominion, $dominion), 1) }}"
                                                data-war="{{ $governmentService->isAtWarWithRealm($selectedDominion->realm, $dominion->realm) ? 1 : 0 }}">
                                            {{ $dominion->name }} (#{{ $dominion->realm->number }}) - {{ $dominion->race->name }}
                                        </option>
                                    @endforeach
                                </select>
                            </div>
                        </div>
                    </div>

                    <div class="box box-primary">
                        <div class="box-header with-border">
                            <h3 class="box-title"><i class="fa fa-users"></i> Units to send</h3>
                        </div>
                        <div class="box-body table-responsive no-padding">
                            <table class="table">
                                <colgroup>
                                    <col>
                                    <col width="10%">
                                    <col width="10%">
                                    <col width="10%">
                                    <col width="15%">
                                </colgroup>
                                <thead>
                                    <tr>
                                        <th>Unit</th>
                                        <th class="text-center">OP / DP</th>
                                        <th class="text-center">Available</th>
                                        <th class="text-center">Send</th>
                                        <th class="text-center">Total OP / DP</th>
                                    </tr>
                                </thead>
                                <tbody>
                                    @php $offenseVsBuildingTypes = []; @endphp
                                    @foreach (range(1, 4) as $unitSlot)
                                        @php
                                            $unit = $selectedDominion->race->units->filter(function ($unit) use ($unitSlot) {
                                                return ($unit->slot === $unitSlot);
                                            })->first();
                                        @endphp

                                        @if ($unit->power_offense == 0)
                                            @continue
                                        @endif

                                        @php
                                            $offensivePower = $militaryCalculator->getUnitPowerWithPerks($selectedDominion, null, null, $unit, 'offense');
                                            $defensivePower = $militaryCalculator->getUnitPowerWithPerks($selectedDominion, null, null, $unit, 'defense');

                                            $hasDynamicOffensivePower = $unit->perks->filter(static function ($perk) {
                                                return starts_with($perk->key, ['offense_from_', 'offense_staggered_', 'offense_vs_']);
                                            })->count() > 0;

                                            if ($hasDynamicOffensivePower) {
                                                $offenseVsBuildingPerk = $unit->getPerkValue('offense_vs_building');

                                                if ($offenseVsBuildingPerk) {
                                                    $offenseVsBuildingTypes[] = explode(',', $offenseVsBuildingPerk)[0];
                                                }
                                            }

                                            $hasDynamicDefensivePower = $unit->perks->filter(static function ($perk) {
                                                return starts_with($perk->key, ['defense_from_', 'defense_staggered_', 'defense_vs_']);
                                            })->count() > 0;
                                        @endphp

                                        <tr>
                                            <td>
                                                {!! $unitHelper->getUnitTypeIconHtml("unit{$unitSlot}", $selectedDominion->race) !!}
                                                <span data-toggle="tooltip" data-placement="top" title="{{ $unitHelper->getUnitHelpString("unit{$unitSlot}", $selectedDominion->race) }}">
                                                    {{ $unitHelper->getUnitName("unit{$unitSlot}", $selectedDominion->race) }}
                                                </span>
                                            </td>
                                            <td class="text-center">
                                                <span id="unit{{ $unitSlot }}_op">{{ (strpos($offensivePower, '.') !== false) ? number_format($offensivePower, 1) : number_format($offensivePower) }}</span>{{ $hasDynamicOffensivePower ? '*' : null }}
                                                /
                                                <span id="unit{{ $unitSlot }}_dp" class="text-muted">{{ (strpos($defensivePower, '.') !== false) ? number_format($defensivePower, 1) : number_format($defensivePower) }}</span><span class="text-muted">{{ $hasDynamicDefensivePower ? '*' : null }}</span>
                                            </td>
                                            <td class="text-center">
                                                {{ number_format($selectedDominion->{"military_unit{$unitSlot}"}) }}
                                            </td>
                                            <td class="text-center">
                                                <input type="number"
                                                       name="unit[{{ $unitSlot }}]"
                                                       id="unit[{{ $unitSlot }}]"
                                                       class="form-control text-center"
                                                       placeholder="0"
                                                       min="0"
                                                       max="{{ $selectedDominion->{"military_unit{$unitSlot}"} }}"
                                                       data-slot="{{ $unitSlot }}"
                                                       data-amount="{{ $selectedDominion->{"military_unit{$unitSlot}"} }}"
                                                       data-op="{{ $unit->power_offense }}"
                                                       data-dp="{{ $unit->power_defense }}"
                                                       data-need-boat="{{ (int)$unit->need_boat }}"
                                                       {{ $selectedDominion->isLocked() ? 'disabled' : null }}>
                                            </td>
                                            <td class="text-center" id="unit{{ $unitSlot }}_stats">
                                                <span class="op">0</span> / <span class="dp text-muted">0</span>
                                            </td>
                                        </tr>
                                    @endforeach
                                    @foreach ($offenseVsBuildingTypes as $buildingType)
                                        <tr>
                                            <td colspan="3" class="text-right">
                                                <b>Enter target {{ ucwords(str_replace('_', ' ', $buildingType)) }} percentage:</b>
                                            </td>
                                            <td>
                                                <input type="number" step="any" name="calc[target_{{ $buildingType }}_percent]" class="form-control text-center" min="0" max="100" placeholder="0" {{ $selectedDominion->isLocked() ? 'disabled' : null }}>
                                            </td>
                                            <td>&nbsp;</td>
                                        </tr>
                                    @endforeach
                                </tbody>
                            </table>
                        </div>
                    </div>

                    <div class="row">
                        <div class="col-sm-12 col-md-6">
                            <div class="box box-danger">
                                <div class="box-header with-border">
                                    <h3 class="box-title"><i class="ra ra-sword"></i> Invasion force</h3>
                                </div>
                                <div class="box-body table-responsive no-padding">
                                    <table class="table">
                                        <colgroup>
                                            <col width="50%">
                                            <col width="50%">
                                        </colgroup>
                                        <tbody>
                                            <tr>
                                                <td>OP:</td>
                                                <td>
                                                    <strong id="invasion-force-op" data-amount="0">0</strong>
                                                </td>
                                            </tr>
                                            <tr>
                                                <td>DP:</td>
                                                <td id="invasion-force-dp" data-amount="0">0</td>
                                            </tr>
                                            <tr>
                                                <td>Boats:</td>
                                                <td>
                                                    <span id="invasion-force-boats" data-amount="0">0</span>
                                                    / {{ number_format(floor($selectedDominion->resource_boats)) }}
                                                </td>
                                            </tr>
                                            <tr>
                                                <td>
                                                    Max OP:
                                                    <i class="fa fa-question-circle" data-toggle="tooltip" data-placement="top" title="You may send out a maximum of 125% of your new home DP in OP. (5:4 rule)"></i>
                                                </td>
                                                <td id="invasion-force-max-op" data-amount="0">0</td>
                                            </tr>
                                            <tr>
                                                <td>
                                                    Target Min DP:
                                                    <i class="fa fa-question-circle" data-toggle="tooltip" data-placement="top" title="The minimum defense for a dominion is 3x their land size."></i>
                                                </td>
                                                <td id="target-min-dp" data-amount="0">0</td>
                                            </tr>
                                        </tbody>
                                    </table>
                                </div>
                                <div class="box-footer">
                                    <button type="submit" class="btn btn-danger" id="invade-button" {{ $selectedDominion->isLocked() || $selectedDominion->round->hasOffensiveActionsDisabled() ? 'disabled' : null }}>
                                        <i class="ra ra-crossed-swords"></i> Invade
                                    </button>
                                </div>
                            </div>
                        </div>
                        <div class="col-sm-12 col-md-6">
                            <div class="box">
                                <div class="box-header with-border">
                                    <h3 class="box-title"><i class="fa fa-home"></i> New home forces</h3>
                                </div>
                                <div class="box-body table-responsive no-padding">
                                    <table class="table">
                                        <colgroup>
                                            <col width="50%">
                                            <col width="50%">
                                        </colgroup>
                                        <tbody>
                                            <tr>
                                                <td>OP:</td>
                                                <td id="home-forces-op" data-original="{{ $militaryCalculator->getOffensivePower($selectedDominion) }}" data-amount="0">
                                                    {{ number_format($militaryCalculator->getOffensivePower($selectedDominion), 2) }}
                                                </td>
                                            </tr>
                                            <tr>
                                                <td>DP:</td>
                                                <td id="home-forces-dp" data-original="{{ $militaryCalculator->getDefensivePower($selectedDominion) }}" data-amount="0">
                                                    {{ number_format($militaryCalculator->getDefensivePower($selectedDominion), 2) }}
                                                </td>
                                            </tr>
                                            <tr>
                                                <td>Boats:</td>
                                                <td id="home-forces-boats" data-original="{{ floor($selectedDominion->resource_boats) }}" data-amount="0">
                                                    {{ number_format(floor($selectedDominion->resource_boats)) }}
                                                </td>
                                            </tr>
                                            <tr>
                                                <td>
                                                    Min DP:
                                                    <i class="fa fa-question-circle" data-toggle="tooltip" data-placement="top" title="You must leave at least 33% of your total DP at home. (33% rule)"></i>
                                                </td>
                                                <td id="home-forces-min-dp" data-amount="0">0</td>
                                            </tr>
                                            <tr>
                                                <td>DPA:</td>
                                                <td id="home-forces-dpa" data-amount="0">
                                                    {{ number_format($militaryCalculator->getDefensivePower($selectedDominion) / $landCalculator->getTotalLand($selectedDominion), 3) }}
                                                </td>
                                            </tr>
                                        </tbody>
                                    </table>
                                </div>
                            </div>
                        </div>
                    </div>
                </form>
            @endif
        </div>

        <div class="col-sm-12 col-md-3">
            <div class="box">
                <div class="box-header with-border">
                    <h3 class="box-title">Information</h3>
                </div>
                <div class="box-body">
                    <p>Here you can invade other players to try to capture some of their land and to gain prestige. Invasions are successful if you send more OP than they have DP.</p>
                    <p>Find targets using <a href="{{ route('dominion.magic') }}">magic</a>, <a href="{{ route('dominion.espionage') }}">espionage</a> and the <a href="{{ route('dominion.op-center') }}">Op Center</a>. Communicate with your realmies using the <a href="{{ route('dominion.council') }}">council</a> to coordinate attacks.</p>
                    <p>Be sure to calculate your OP vs your target's DP to avoid blindly sending your units to their doom.</p>
                    <p>You can only invade dominions that are within your range, and you will only gain prestige and discounted construction on targets <b>75% or greater</b> relative to your own land size.</p>
                    @if ($selectedDominion->morale < 100)
                        <p>You have {{ $selectedDominion->morale }}% morale, which is reducing your offense and defense by {{ number_format(100 - $militaryCalculator->getMoraleMultiplier($selectedDominion) * 100, 2) }}%.</p>
                    @else
                        <p>You have {{ $selectedDominion->morale }}% morale.</p>
                    @endif
                </div>
            </div>
        </div>
    </div>
@endsection

@push('page-styles')
    <link rel="stylesheet" href="{{ asset('assets/vendor/select2/css/select2.min.css') }}">
@endpush

@push('page-scripts')
    <script type="text/javascript" src="{{ asset('assets/vendor/select2/js/select2.full.min.js') }}"></script>
@endpush

@push('inline-scripts')
    <script type="text/javascript">
        (function ($) {
            // Prevent accidental submit
            $(document).on("keydown", "form", function(event) {
                return event.key != "Enter";
            });

            var invasionForceOPElement = $('#invasion-force-op');
            var invasionForceDPElement = $('#invasion-force-dp');
            var invasionForceBoatsElement = $('#invasion-force-boats');
            var invasionForceMaxOPElement = $('#invasion-force-max-op');
            var targetMinDPElement = $('#target-min-dp');
            var homeForcesOPElement = $('#home-forces-op');
            var homeForcesDPElement = $('#home-forces-dp');
            var homeForcesBoatsElement = $('#home-forces-boats');
            var homeForcesMinDPElement = $('#home-forces-min-dp');
            var homeForcesDPAElement = $('#home-forces-dpa');
            var invadeButtonElement = $('#invade-button');
            var allUnitInputs = $('input[name^=\'unit\']');

            $('#target_dominion').select2({
                templateResult: select2Template,
                templateSelection: select2Template,
            });

            @if (!$protectionService->isUnderProtection($selectedDominion))
                updateUnitStats();
            @endif

            $('#target_dominion').change(function (e) {
                updateUnitStats();
            });

            $('input[name^=\'calc\']').change(function (e) {
                updateUnitStats();
            });

            $('input[name^=\'unit\']').change(function (e) {
                updateUnitStats();
            });

            function updateUnitStats() {
                // Update unit stats
                $.get(
                    "{{ route('api.dominion.invasion') }}?" + $('#invade_form').serialize(), {},
                    function(response) {
                        if (response.result == 'success') {
                            $.each(response.units, function(slot, stats) {
                                // Update unit stats data attributes
                                $('#unit\\['+slot+'\\]').data('dp', stats.dp);
                                $('#unit\\['+slot+'\\]').data('op', stats.op);
                                // Update unit stats display
                                $('#unit'+slot+'_dp').text(stats.dp.toLocaleString(undefined, {maximumFractionDigits: 2}));
                                $('#unit'+slot+'_op').text(stats.op.toLocaleString(undefined, {maximumFractionDigits: 2}));
                            });

                            // Update OP / DP data attributes
                            invasionForceOPElement.data('amount', response.away_offense);
                            invasionForceDPElement.data('amount', response.away_defense);
                            invasionForceBoatsElement.data('amount', response.boats_needed);
                            invasionForceMaxOPElement.data('amount', response.max_op);
                            targetMinDPElement.data('amount', response.target_min_dp);
                            homeForcesOPElement.data('amount', response.home_offense);
                            homeForcesDPElement.data('amount', response.home_defense);
                            homeForcesBoatsElement.data('amount', response.boats_remaining);
                            homeForcesMinDPElement.data('amount', response.min_dp);
                            homeForcesDPAElement.data('amount', response.home_dpa);

                            // Update OP / DP display
                            invasionForceOPElement.text(response.away_offense.toLocaleString(undefined, {maximumFractionDigits: 2}));
                            invasionForceDPElement.text(response.away_defense.toLocaleString(undefined, {maximumFractionDigits: 2}));
                            invasionForceBoatsElement.text(response.boats_needed.toLocaleString(undefined, {maximumFractionDigits: 2}));
                            invasionForceMaxOPElement.text(response.max_op.toLocaleString(undefined, {maximumFractionDigits: 2}));
                            targetMinDPElement.text(response.target_min_dp.toLocaleString(undefined, {maximumFractionDigits: 2}));
                            homeForcesOPElement.text(response.home_offense.toLocaleString(undefined, {maximumFractionDigits: 2}));
                            homeForcesDPElement.text(response.home_defense.toLocaleString(undefined, {maximumFractionDigits: 2}));
                            homeForcesBoatsElement.text(response.boats_remaining.toLocaleString(undefined, {maximumFractionDigits: 2}));
                            homeForcesMinDPElement.text(response.min_dp.toLocaleString(undefined, {maximumFractionDigits: 2}));
                            homeForcesDPAElement.text(response.home_dpa.toLocaleString(undefined, {maximumFractionDigits: 3}));

                            calculate();
                        }
                    }
                );
            }

            function calculate() {
                // Calculate subtotals for each unit
                allUnitInputs.each(function () {
                    var unitOP = parseFloat($(this).data('op'));
                    var unitDP = parseFloat($(this).data('dp'));
                    var amountToSend = parseInt($(this).val() || 0);
                    var totalUnitOP = amountToSend * unitOP;
                    var totalUnitDP = amountToSend * unitDP;

                    var unitSlot = parseInt($(this).data('slot'));
                    var unitStatsElement = $('#unit' + unitSlot + '_stats');
                    unitStatsElement.find('.op').text(totalUnitOP.toLocaleString(undefined, {maximumFractionDigits: 2}));
                    unitStatsElement.find('.dp').text(totalUnitDP.toLocaleString(undefined, {maximumFractionDigits: 2}));
                });

                // Check if we have enough of these bad bois
                /*
                            __--___
                          >_'--'__'
                     _________!__________
                    /   /   /   /   /   /
                   /   /   /   /   /   /
                  |   |   |   |   |   |   __^
                  |   |   |   |   |   |  _/@ \
                   \   \   \   \   \   \ S__  |
                  __(  |   |   \___\___\___\___\
                 /  \  |  \  |   |  |\|
                 \   \____________!________________/   /
                  \_______OOOOOOOOOOOOOOOOOOO________/
                   \________\\\\\\\\\\\\\\\\\\_______/
                 %%%^^^^^%%%%%^^^^!!^%%^^^^%%%%%!!!!^^^^^^!%^^^%%%%!!^^
                  ^^!!!!%%%%^^^^!!^^%%%%%^^!!!^^%%%%%!!!%%%%^^^!!^^%%%!!

                  Shamelessly stolen from http://www.asciiworld.com/-Boats-.html
                */
                var hasEnoughBoats = parseInt(invasionForceBoatsElement.data('amount')) <= {{ floor($selectedDominion->resource_boats) }};

                if (!hasEnoughBoats) {
                    invasionForceBoatsElement.addClass('text-danger');
                    homeForcesBoatsElement.addClass('text-danger');
                } else {
                    invasionForceBoatsElement.removeClass('text-danger');
                    homeForcesBoatsElement.removeClass('text-danger');
                }

                // Check 33% rule
                var minDefenseRule = parseFloat(homeForcesDPElement.data('amount')) < parseFloat(homeForcesMinDPElement.data('amount'));

                if (minDefenseRule) {
                    homeForcesDPElement.addClass('text-danger');
                } else {
                    homeForcesDPElement.removeClass('text-danger');
                }

                // Check 5:4 rule
                var maxOffenseRule = parseFloat(invasionForceOPElement.data('amount')) > parseFloat(invasionForceMaxOPElement.data('amount'));

                if (maxOffenseRule) {
                    invasionForceOPElement.addClass('text-danger');
                } else {
                    invasionForceOPElement.removeClass('text-danger');
                }

                // Check if invade button should be disabled
                if (!hasEnoughBoats || maxOffenseRule || {{ $selectedDominion->round->hasOffensiveActionsDisabled() ? 1 : 0 }}) {
                    invadeButtonElement.attr('disabled', 'disabled');
                } else {
                    invadeButtonElement.removeAttr('disabled');
                }
            }
        })(jQuery);

        function select2Template(state) {
            if (!state.id) {
                return state.text;
            }

            const land = state.element.dataset.land;
            const percentage = state.element.dataset.percentage;
            const war = state.element.dataset.war;
            let difficultyClass;

            if (percentage >= 120) {
                difficultyClass = 'text-red';
            } else if (percentage >= 75) {
                difficultyClass = 'text-green';
            } else if (percentage >= 66) {
                difficultyClass = 'text-muted';
            } else {
                difficultyClass = 'text-gray';
            }

            let warStatus = '';
            if (war == 1) {
                warStatus = '<div class="pull-left">&nbsp;<span class="text-red">WAR</span></div>';
            }

            return $(`
                <div class="pull-left">${state.text}</div>
                ${warStatus}
                <div class="pull-right">${land} land <span class="${difficultyClass}">(${percentage}%)</span></div>
                <div style="clear: both;"></div>
            `);
        }
    </script>
@endpush
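The calculate() routine above enforces the boat, 33% and 5:4 checks client-side. A standalone sketch of that arithmetic, with hypothetical names and inputs (in the template the authoritative numbers come from the api.dominion.invasion response, and the exact formulas live in the server-side calculators):

// Illustrative restatement of the two DP/OP rules named in the tooltips.
function checkInvasionRules(homeDefense, totalDefense, invasionOffense) {
    // 33% rule: at least a third of total DP must remain at home.
    const breaks33Rule = homeDefense < totalDefense / 3;

    // 5:4 rule: OP sent may not exceed 125% of the DP left at home.
    const breaks54Rule = invasionOffense > homeDefense * 1.25;

    return { breaks33Rule, breaks54Rule };
}

console.log(checkInvasionRules(4000, 12000, 5500));
// -> { breaks33Rule: false, breaks54Rule: true } (5500 > 4000 * 1.25)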
require 'spec_helper'

describe FastAttributes do
  describe '.type_casting' do
    it 'returns predefined type casting rules' do
      expect(FastAttributes.type_casting.keys).to include(String)
      expect(FastAttributes.type_casting.keys).to include(Integer)
      expect(FastAttributes.type_casting.keys).to include(Float)
      expect(FastAttributes.type_casting.keys).to include(Array)
      expect(FastAttributes.type_casting.keys).to include(Date)
      expect(FastAttributes.type_casting.keys).to include(Time)
      expect(FastAttributes.type_casting.keys).to include(DateTime)
      expect(FastAttributes.type_casting.keys).to include(BigDecimal)
    end
  end

  describe '.get_type_casting' do
    it 'returns type casting function' do
      expect(FastAttributes.get_type_casting(String)).to be_a(FastAttributes::TypeCast)
      expect(FastAttributes.get_type_casting(Time)).to be_a(FastAttributes::TypeCast)
    end
  end

  describe '.set_type_casting' do
    after do
      FastAttributes.remove_type_casting(OpenStruct)
    end

    it 'adds type to supported type casting list' do
      expect(FastAttributes.get_type_casting(OpenStruct)).to be(nil)
      FastAttributes.set_type_casting(OpenStruct, 'OpenStruct.new(a: %s)')
      expect(FastAttributes.get_type_casting(OpenStruct)).to be_a(FastAttributes::TypeCast)
    end
  end

  describe '.remove_type_casting' do
    before do
      FastAttributes.set_type_casting(OpenStruct, 'OpenStruct.new(a: %s)')
    end

    it 'removes type casting function from supported list' do
      FastAttributes.remove_type_casting(OpenStruct)
      expect(FastAttributes.get_type_casting(OpenStruct)).to be(nil)
    end
  end

  describe '.type_exists?' do
    it 'checks if type is registered' do
      expect(FastAttributes.type_exists?(DateTime)).to be(true)
      expect(FastAttributes.type_exists?(OpenStruct)).to be(false)
    end
  end

  describe '#attribute' do
    it 'raises an exception when type is not supported' do
      type  = Class.new(Object) { def self.inspect; 'CustomType' end }
      klass = Class.new(Object) { extend FastAttributes }

      expect{klass.attribute(:name, type)}.to raise_error(FastAttributes::UnsupportedTypeError, 'Unsupported attribute type "CustomType"')
      expect{klass.attribute(:name, :type)}.to raise_error(FastAttributes::UnsupportedTypeError, 'Unsupported attribute type ":type"')
    end

    it 'generates getter methods' do
      book = Book.new
      expect(book.respond_to?(:title)).to be(true)
      expect(book.respond_to?(:name)).to be(true)
      expect(book.respond_to?(:pages)).to be(true)
      expect(book.respond_to?(:price)).to be(true)
      expect(book.respond_to?(:authors)).to be(true)
      expect(book.respond_to?(:published)).to be(true)
      expect(book.respond_to?(:sold)).to be(true)
      expect(book.respond_to?(:finished)).to be(true)
      expect(book.respond_to?(:rate)).to be(true)
    end

    it 'is possible to override getter method' do
      toy = Toy.new
      expect(toy.name).to eq(' toy!')
      toy.name = 'bear'
      expect(toy.name).to eq('bear toy!')
    end

    it 'generates setter methods' do
      book = Book.new
      expect(book.respond_to?(:title=)).to be(true)
      expect(book.respond_to?(:name=)).to be(true)
      expect(book.respond_to?(:pages=)).to be(true)
      expect(book.respond_to?(:price=)).to be(true)
      expect(book.respond_to?(:authors=)).to be(true)
      expect(book.respond_to?(:published=)).to be(true)
      expect(book.respond_to?(:sold=)).to be(true)
      expect(book.respond_to?(:finished=)).to be(true)
      expect(book.respond_to?(:rate=)).to be(true)
    end

    it 'is possible to override setter method' do
      toy = Toy.new
      expect(toy.price).to be(nil)
      toy.price = 2
      expect(toy.price).to eq(4)
    end

    it 'setter methods convert values to correct datatype' do
      book = Book.new
      book.title     = 123
      book.name      = 456
      book.pages     = '250'
      book.price     = '2.55'
      book.authors   = 'Jobs'
      book.published = '2014-06-21'
      book.sold      = '2014-06-21 20:45:15'
      book.finished  = '2014-05-20 21:35:20'
      book.rate      = '4.1'

      expect(book.title).to eq('123')
      expect(book.name).to eq('456')
      expect(book.pages).to be(250)
      expect(book.price).to eq(BigDecimal.new('2.55'))
      expect(book.authors).to eq(%w[Jobs])
      expect(book.published).to eq(Date.new(2014, 6, 21))
      expect(book.sold).to eq(Time.new(2014, 6, 21, 20, 45, 15))
      expect(book.finished).to eq(DateTime.new(2014, 5, 20, 21, 35, 20))
      expect(book.rate).to eq(4.1)
    end

    it 'setter methods accept values which are already in a proper type' do
      book = Book.new
      book.title     = title     = 'One'
      book.name      = name      = 'Two'
      book.pages     = pages     = 250
      book.price     = price     = BigDecimal.new('2.55')
      book.authors   = authors   = %w[Jobs]
      book.published = published = Date.new(2014, 06, 21)
      book.sold      = sold      = Time.new(2014, 6, 21, 20, 45, 15)
      book.finished  = finished  = DateTime.new(2014, 05, 20, 21, 35, 20)
      book.rate      = rate      = 4.1

      expect(book.title).to be(title)
      expect(book.name).to be(name)
      expect(book.pages).to be(pages)
      expect(book.price).to eq(price)
      expect(book.authors).to be(authors)
      expect(book.published).to be(published)
      expect(book.sold).to be(sold)
      expect(book.finished).to be(finished)
      expect(book.rate).to be(rate)
    end

    it 'setter methods accept nil values' do
      book = Book.new
      book.title     = 'One'
      book.name      = 'Two'
      book.pages     = 250
      book.price     = BigDecimal.new('2.55')
      book.authors   = %w[Jobs]
      book.published = Date.new(2014, 06, 21)
      book.sold      = Time.new(2014, 6, 21, 20, 45, 15)
      book.finished  = DateTime.new(2014, 05, 20, 21, 35, 20)
      book.rate      = 4.1

      book.title     = nil
      book.name      = nil
      book.pages     = nil
      book.price     = nil
      book.authors   = nil
      book.published = nil
      book.sold      = nil
      book.finished  = nil
      book.rate      = nil

      expect(book.title).to be(nil)
      expect(book.name).to be(nil)
      expect(book.pages).to be(nil)
      expect(book.price).to be(nil)
      expect(book.authors).to be(nil)
      expect(book.published).to be(nil)
      expect(book.sold).to be(nil)
      expect(book.finished).to be(nil)
      expect(book.rate).to be(nil)
    end

    it 'setter methods raise an exception when cannot parse values' do
      object = BasicObject.new
      def object.to_s; 'BasicObject'; end
      def object.to_str; 1/0 end

      book = Book.new
      expect{ book.title = object        }.to raise_error(FastAttributes::TypeCast::InvalidValueError, 'Invalid value "BasicObject" for attribute "title" of type "String"')
      expect{ book.name = object         }.to raise_error(FastAttributes::TypeCast::InvalidValueError, 'Invalid value "BasicObject" for attribute "name" of type "String"')
      expect{ book.pages = 'number'      }.to raise_error(FastAttributes::TypeCast::InvalidValueError, 'Invalid value "number" for attribute "pages" of type "Integer"')
      expect{ book.price = 'bigdecimal'  }.to raise_error(FastAttributes::TypeCast::InvalidValueError, 'Invalid value "bigdecimal" for attribute "price" of type "BigDecimal"')
      expect{ book.published = 'date'    }.to raise_error(FastAttributes::TypeCast::InvalidValueError, 'Invalid value "date" for attribute "published" of type "Date"')
      expect{ book.sold = 'time'         }.to raise_error(FastAttributes::TypeCast::InvalidValueError, 'Invalid value "time" for attribute "sold" of type "Time"')
      expect{ book.finished = 'datetime' }.to raise_error(FastAttributes::TypeCast::InvalidValueError, 'Invalid value "datetime" for attribute "finished" of type "DateTime"')
      expect{ book.rate = 'float'        }.to raise_error(FastAttributes::TypeCast::InvalidValueError, 'Invalid value "float" for attribute "rate" of type "Float"')
    end

    it 'setter method can escape placeholder using double %' do
      placeholder = PlaceholderClass.new
      placeholder.value = 3
      expect(placeholder.value).to eq('value %s %value %%s 2')
    end

    it 'setter method can accept %a placeholder which return attribute name' do
      placeholder = PlaceholderClass.new
      placeholder.title = 'attribute name 1'
      expect(placeholder.title).to eq('title')

      placeholder.title = 'attribute name 2'
      expect(placeholder.title).to eq('title%a%title%title!')
    end

    it 'generates lenient attributes which do not correspond to a particular data type' do
      lenient_attribute = LenientAttributes.new
      expect(lenient_attribute.terms_of_service).to be(nil)

      lenient_attribute.terms_of_service = 'yes'
      expect(lenient_attribute.terms_of_service).to be(true)

      lenient_attribute.terms_of_service = 'no'
      expect(lenient_attribute.terms_of_service).to be(false)

      lenient_attribute.terms_of_service = 42
      expect(lenient_attribute.terms_of_service).to be(nil)
    end

    it 'allows to define attributes using symbols as a data type' do
      book = DefaultLenientAttributes.new
      book.title     = title     = 'One'
      book.pages     = pages     = 250
      book.price     = price     = BigDecimal.new('2.55')
      book.authors   = authors   = %w[Jobs]
      book.published = published = Date.new(2014, 06, 21)
      book.sold      = sold      = Time.new(2014, 6, 21, 20, 45, 15)
      book.finished  = finished  = DateTime.new(2014, 05, 20, 21, 35, 20)
      book.rate      = rate      = 4.1

      expect(book.title).to be(title)
      expect(book.pages).to be(pages)
      expect(book.price).to eq(price)
      expect(book.authors).to be(authors)
      expect(book.published).to be(published)
      expect(book.sold).to be(sold)
      expect(book.finished).to be(finished)
      expect(book.rate).to be(rate)
    end

    context 'boolean attribute' do
      let(:object) { DefaultLenientAttributes.new }

      context 'when value is not set' do
        it 'return nil' do
          expect(object.active).to be(nil)
        end
      end

      context 'when value represents true' do
        it 'returns true' do
          object.active = true
          expect(object.active).to be(true)

          object.active = 1
          expect(object.active).to be(true)

          object.active = '1'
          expect(object.active).to be(true)

          object.active = 't'
          expect(object.active).to be(true)

          object.active = 'T'
          expect(object.active).to be(true)

          object.active = 'true'
          expect(object.active).to be(true)

          object.active = 'TRUE'
          expect(object.active).to be(true)

          object.active = 'on'
          expect(object.active).to be(true)

          object.active = 'ON'
          expect(object.active).to be(true)
        end
      end

      context 'when value represents false' do
        it 'returns false' do
          object.active = false
          expect(object.active).to be(false)

          object.active = 0
          expect(object.active).to be(false)

          object.active = '0'
          expect(object.active).to be(false)

          object.active = 'f'
          expect(object.active).to be(false)

          object.active = 'F'
          expect(object.active).to be(false)

          object.active = 'false'
          expect(object.active).to be(false)

          object.active = 'FALSE'
          expect(object.active).to be(false)

          object.active = 'off'
          expect(object.active).to be(false)

          object.active = 'OFF'
          expect(object.active).to be(false)
        end
      end
    end
  end

  describe '#define_attributes' do
    describe 'option initialize: true' do
      it 'generates initialize method' do
        reader = Reader.new(name: 104, age: '23')
        expect(reader.name).to eq('104')
        expect(reader.age).to be(23)
      end

      it 'is possible to override initialize method' do
        window = Window.new
        expect(window.height).to be(200)
        expect(window.width).to be(80)

        window = Window.new(height: 210, width: 100)
        expect(window.height).to be(210)
        expect(window.width).to be(100)
      end
    end

    describe 'option attributes: true' do
      it 'generates attributes method' do
        publisher = Publisher.new
        expect(publisher.attributes).to eq({'name' => nil, 'books' => nil})

        reader = Reader.new
        expect(reader.attributes).to eq({'name' => nil, 'age' => nil})
      end

      it 'is possible to override attributes method' do
        window = Window.new(height: 220, width: 100)
        expect(window.attributes).to eq({'height' => 220, 'width' => 100, 'color' => 'white'})
      end

      it 'attributes method return all attributes with their values' do
        publisher = Publisher.new
        publisher.name  = 101
        publisher.books = '20'
        expect(publisher.attributes).to eq({'name' => '101', 'books' => 20})

        reader = Reader.new
        reader.name = 102
        reader.age  = '25'
        expect(reader.attributes).to eq({'name' => '102', 'age' => 25})
      end
    end

    describe 'option attributes: :accessors' do
      it 'doesn\'t interfere when you don\'t use the option' do
        klass = AttributesWithoutAccessors.new
        expect(klass.attributes).to eq({'title' => nil, 'pages' => nil, 'color' => 'white'})
      end

      it 'returns the values of accessors, not the ivars' do
        klass = AttributesWithAccessors.new(pages: 10, title: 'Something')
        expect(klass.attributes['pages']).to be(20)
        expect(klass.attributes['title']).to eq('A Longer Title: Something')
      end

      it 'is possible to override attributes method' do
        klass = AttributesWithAccessors.new(pages: 10, title: 'Something')
        expect(klass.attributes).to eq({'pages' => 20, 'title' => 'A Longer Title: Something', 'color' => 'white'})
      end

      it 'works with default attributes' do
        klass = AttributesWithAccessorsAndDefaults.new
        expect(klass.attributes).to eq({'pages' => 20, 'title' => 'a title'})
      end
    end
  end

  describe "default attributes" do
    it "sets the default values" do
      class_with_defaults = ClassWithDefaults.new
      expect(class_with_defaults.title).to eq('a title')
      expect(class_with_defaults.pages).to be(10)
      expect(class_with_defaults.authors).to eq([1, 2, 4])
    end

    it "allows you to override default values" do
      class_with_defaults = ClassWithDefaults.new(title: 'Something', authors: [1, 5, 7])
      expect(class_with_defaults.title).to eq('Something')
      expect(class_with_defaults.pages).to be(10)
      expect(class_with_defaults.authors).to eq([1, 5, 7])
    end

    it "allows callable default values" do
      class_with_defaults = ClassWithDefaults.new
      expect(class_with_defaults.callable).to eq("callable value")
    end

    it "doesn't use the same instance between multiple instances" do
      class_with_defaults = ClassWithDefaults.new
      class_with_defaults.authors << 2

      class_with_defaults2 = ClassWithDefaults.new
      expect(class_with_defaults2.authors).to eq([1, 2, 4])
    end
  end

  describe 'collection member coercions' do
    let(:instance) { ClassWithCollectionMemberAttribute.new }

    let(:invites) do
      [
        { name: 'Ivan', email: 'ivan@example.com' },
        { name: 'Igor', email: 'igor@example.com' }
      ]
    end

    let(:address_hash) do
      { address: '123 6th St.
Melbourne, FL 32904', locality: 'Melbourne', region: 'FL', postal_code: '32904' } end it 'must parse integer value' do instance.page_numbers = '1' expect(instance.page_numbers).to eq [1] end it 'must parse integer values' do instance.page_numbers = [1, '2', nil] expect(instance.page_numbers).to eq [1, 2, nil] end it 'must parse string values' do instance.words = ['one', 2, 'three', nil] expect(instance.words).to eq ['one', '2', 'three', nil] end it 'must parse custom class values' do instance.invites = invites expect(instance.invites.size).to eq invites.size expect(instance.invites[0].is_a?(InviteForm)).to be true expect(instance.invites[1].is_a?(InviteForm)).to be true expect(instance.invites[0].name).to eq invites[0][:name] expect(instance.invites[0].email).to eq invites[0][:email] expect(instance.invites[1].name).to eq invites[1][:name] expect(instance.invites[1].email).to eq invites[1][:email] end it 'must parse set values' do instance.addresses = [address_hash] item = instance.addresses.to_a[0] expect(instance.addresses.size).to eq 1 expect(item.is_a?(Address)).to eq true expect(item.address).to eq address_hash[:address] expect(item.postal_code).to eq address_hash[:postal_code] end end end
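The spec above exercises fixture classes (Book, Toy, Reader, Window, Publisher, PlaceholderClass, and so on) that are loaded via spec_helper and are not shown here. Purely for orientation, the following is a minimal, hypothetical sketch of what a Book-like fixture could look like, reconstructed only from the calls the examples themselves make (extend FastAttributes plus attribute(name, type)); the gem's real fixture file may differ.

require 'fast_attributes'
require 'bigdecimal'
require 'date'

# Hypothetical fixture, reconstructed from the API calls in the spec above.
class Book
  extend FastAttributes

  attribute :title,     String
  attribute :name,      String
  attribute :pages,     Integer
  attribute :price,     BigDecimal
  attribute :authors,   Array
  attribute :published, Date
  attribute :sold,      Time
  attribute :finished,  DateTime
  attribute :rate,      Float
end

book = Book.new
book.pages = '250'   # the generated setter casts the String to an Integer
book.pages           # => 250

Under this reading, each attribute call generates a getter/setter pair whose setter applies the registered type-casting rule, which is exactly what the datatype-conversion examples above exercise; the set_type_casting(OpenStruct, 'OpenStruct.new(a: %s)') call in the spec shows the same mechanism being extended with a custom rule, with %s standing in for the incoming value.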
1. Introduction {#sec0005}
===============

Canine coronavirus (CCoV; order *Nidovirales*, family *Coronaviridae*) is a large, enveloped, single-stranded RNA virus responsible for enteritis in dogs ([@bib0045]). Recently, due to changes in virus classification, the virus was classified as a member of the genus *Alphacoronavirus*, species *Alphacoronavirus-1*, together with transmissible gastroenteritis virus of swine (TGEV) and feline coronavirus (FCoV) ([@bib0015]). The genome, 27 kb in length, contains two large overlapping open reading frames (ORFs), ORF1a and ORF1b, which encompass the 5′ two thirds of the genomic RNA and encode polyproteins that give rise to the replicase complex. The ORFs encoding the structural spike (S), envelope (E), membrane (M) and nucleocapsid (N) proteins and the non-structural proteins (3a, 3b, 3c, 7a and 7b) are located downstream of the replicase gene ([@bib0045]).

Coronaviruses are characterized by constant genetic evolution and diversity. To date, two different CCoV types have been recognized, CCoV type I (CCoV-I) and CCoV type II (CCoV-II), which share significant genetic similarity with FCoV type I (FCoV-I) and FCoV type II (FCoV-II), respectively ([@bib0045]). Moreover, in 2009, TGEV-like CCoVs of potential recombinant origin were identified and characterized as a new CCoV subtype (CCoV-IIb) ([@bib0050], [@bib0055], [@bib0060]).

CCoV is the causative agent of gastroenteritis in dogs, characterized by high morbidity and low mortality. Clinical signs include anorexia, lethargy, vomiting, mild to severe diarrhoea (usually lasting 1--2 weeks) and occasionally death, mainly in puppies. The disease is more severe in young animals ([@bib0010]). Systemic infections are not usual; however, during the past few years there have been reports of fatal disease, with CCoV strains detected in the enteric tract as well as in the organs ([@bib0005], [@bib0050]).

In 2010, CCoV identification, molecular characterization and sequence analysis were carried out for the first time in Greece, on common enteric CCoV-II strains detected during a severe outbreak of diarrhoea in a kennel ([@bib0095]). In the current study we report the quantitation and molecular characterization of two TGEV-like CCoV strains detected in the organs of two puppies with fatal enteritis.

2. Materials and methods {#sec0010}
========================

2.1. Clinical case {#sec0015}
------------------

During the summer of 2009, two dead dogs were submitted for laboratory investigation. The dogs came from two different pet shops in Thessaloniki, a city of northern Greece. Both dogs, a 6-week-old Yorkshire Terrier (66/09) and a 16-week-old Pomeranian (68/09), had presented with fever, lethargy, inappetence, severe haemorrhagic diarrhoea and vomiting leading to death 2 days after the onset of the symptoms. The first puppy had been vaccinated with a single dose of a polyvalent vaccine against all major infectious diseases (canine distemper, infectious hepatitis, parvoviral enteritis, parainfluenza and leptospirosis) 2 weeks before the onset of symptoms, while the second one had never been vaccinated.

Necropsy examination of both dogs revealed linear haemorrhages of the intestinal wall, haemorrhagic enteritis and an ulcerated duodenum. Sero-sanguineous fluid was observed in the abdominal cavity of the Pomeranian. The lungs of both puppies were congested, with multiple areas of emphysema. No lesions were observed in the heart. The liver of both puppies appeared enlarged, friable and yellow-brown in colour, with multifocal discoloured spots.
Congested vessels in the dura mater of the brain were also observed.

2.2. Screening for viral pathogens {#sec0020}
----------------------------------

Samples from the faeces and the parenchymatous organs were subjected to virological investigation, using previously described methods, for common canine viral pathogens, e.g., canine parvovirus type 2 (CPV-2) (PCR and real time PCR) ([@bib0020], [@bib0030], [@bib0035]), canine distemper virus (CDV) (RT-PCR) ([@bib0065]), canine adenovirus type 1 and type 2 (CAV-1 and CAV-2) (PCR) ([@bib0075]) and CCoV (RT-PCR) ([@bib0100]).

2.3. Virus isolation {#sec0025}
--------------------

For virus isolation, the A-72 (canine fibrosarcoma) cell line was used. The cells were grown in Dulbecco's minimum essential medium (D-MEM) supplemented with 10% foetal bovine serum (FBS). Faecal and tissue samples were homogenized (10%, w/v) in D-MEM and centrifuged at 8000 × *g* for 10 min. Supernatants were treated with antibiotics (1000 IU/ml penicillin and 100 μg/ml streptomycin) for 30 min, inoculated onto partially confluent A-72 cell cultures and incubated at 37 °C in a 5% CO~2~ incubator. After an adsorption period of 30 min, D-MEM was added. Cells were observed daily for the cytopathic effect (cpe) of CCoV for 5 days. An immunofluorescence (IF) assay was used for the detection of CCoV in the infected cells; a 1:100 dilution of cat polyclonal serum specific for *Alphacoronavirus-1* and a 1:100 dilution of goat anti-cat IgG conjugated with fluorescein isothiocyanate (Sigma--Aldrich, USA) were used. Each sample was considered negative after 3 passages.

2.4. CCoV characterization and quantitation {#sec0030}
-------------------------------------------

RNA was extracted from faecal and organ samples of both dogs using the QIAamp Viral RNA Mini Kit and the RNeasy Mini Kit (Qiagen GmbH, Hilden, Germany), respectively. For CCoV type I and II detection and quantitation in faecal and organ samples, two real time RT-PCR assays with the same sensitivity were used ([@bib0025]). Reverse transcription was performed using GeneAmp^®^ RNA PCR (Applied Biosystems, Italy) according to the manufacturer's instructions. For the discrimination of classical (subtype IIa) and TGEV-like (subtype IIb) CCoVs, two RT-PCR assays with comparable levels of sensitivity were performed, as previously described ([@bib0055]). RT-PCRs with primers 20179/INS-R (CCoV-IIa) or 20179/174-268 (CCoV-IIb) were conducted using SuperScript One-Step RT-PCR for Long Templates (Invitrogen S.R.L.). In order to verify the absence of TGEV strains in the samples that were positive by the CCoV-IIb specific assay, an RT-PCR able to discriminate CCoV and TGEV according to amplicon size was used ([@bib0120]).

2.5. Sequencing and sequence analysis {#sec0035}
-------------------------------------

The 3′ end of the genome of the CCoV-IIb strains was amplified as previously described, using viral RNA extracted from the lungs, SuperScript One-Step RT-PCR for Long Templates (Invitrogen S.R.L.) and six pairs of primers specific for overlapping fragments encompassing ORFs 2, 3a, 3b, 3c, 4, 5, 6, 7a and 7b ([@bib0040]). The nucleotide sequences were determined in both directions by a commercial facility (Beckman Coulter Genomics, United Kingdom).
Sequence assembling and analysis were carried out using the BioEdit software package ([@bib0070]) and the National Center for Biotechnology Information (NCBI; [http://www.ncbi.nlm.nih.gov](http://www.ncbi.nlm.nih.gov/)) and European Molecular Biology Laboratory (EMBL; [http://www.ebi.ac.uk](http://www.ebi.ac.uk/)) analysis tools. Phylogenetic analysis was conducted using the MEGA4 program ([@bib0110]). Phylogenetic trees, based on the amino acid sequences of the S, E, M and N proteins, were elaborated using the neighbor-joining method, with statistical support supplied by bootstrapping over 1000 replicates. SimPlot was used for nucleotide sequence comparison of the two strains to *Alphacoronavirus-1* reference strains ([@bib0085]). The sequences of strains 66/09 and 68/09 were registered in GenBank under the accession numbers HQ450376 and HQ450377, respectively.

3. Results {#sec0040}
==========

3.1. CCoV detection, characterization and isolation {#sec0045}
---------------------------------------------------

By means of the nested PCR assay for CCoV, viral RNA was detected in the faeces, lungs, spleen, kidneys, pancreas, heart and liver of both puppies. In addition, the brain of the Pomeranian (68/09) tested positive, while the brain of the Yorkshire Terrier (66/09) tested negative. By genotype-specific real time RT-PCR assays, only CCoV-II was detected in all positive samples. CCoV-II RNA copies/μl of template in the samples are shown in [Table 1](#tbl0005){ref-type="table"}.

Table 1. CCoV-II RNA copies/μl of template in the samples of the two puppies, tested by genotype-specific real time RT-PCR.

  Sample     66/09 (Yorkshire Terrier)   68/09 (Pomeranian)
  ---------- --------------------------- --------------------
  Faeces     3.59 × 10^3^                7.22 × 10^5^
  Liver      4.64 × 10^4^                3.21 × 10^5^
  Spleen     5.20 × 10^5^                1.55 × 10^7^
  Pancreas   2.75 × 10^2^                2.03 × 10^4^
  Kidney     1.23 × 10^5^                3.37 × 10^6^
  Lung       5.99 × 10^6^                4.10 × 10^6^
  Heart      1.14 × 10^5^                7.08 × 10^6^
  Brain      n.d.                        2.47 × 10^3^

[^1]

In the faecal samples of the two puppies, both CCoV-II subtypes were detected, while in the organs that tested positive, only CCoV characterized as TGEV-like (CCoV-IIb) was detected. No TGEV strains were detected in the samples.

The CCoV-IIb strains (66/09 and 68/09) were isolated from the lung homogenates of both puppies. A-72 cells developed a cytopathic effect that consisted of cell rounding and lysis of the monolayer. In addition, the cells tested positive by the immunofluorescence assay. Viral titres on cell cultures were 10^4.25^ (66/09) and 10^4^ TCID~50~/50 μl (68/09) at the 3rd passage.

3.2. Detection of other viral pathogens {#sec0050}
---------------------------------------

Both puppies tested positive for CPV-2a field strains and negative for CDV, CAV-1 and CAV-2.

3.3. Sequencing results and phylogenetic analysis {#sec0055}
-------------------------------------------------

A total of 8822 and 8828 nucleotides were determined for strains 66/09 and 68/09, respectively, encompassing ORFs 2 (S protein), 3a, 3b, 3c, 4 (E protein), 5 (M protein), 6 (N protein), 7a and 7b. Alignment of the sequences with the TGEV, CCoV and FCoV reference strains available in GenBank showed the highest identity to CCoV-IIb reference strain 119/08 (EU924791) (98.2% and 98.9% for 66/09 and 68/09, respectively). The two Greek strains shared an identity of 98%.

The spike protein gene of both strains was 4374 nucleotides long, encoding a protein of 1457 amino acids. When compared to four TGEV-like reference strains (430/07, 119/08, 174/06 and 341/05), no insertions or deletions were observed.
The two strains shared 97.6% aa identity to each other, while they showed the highest aa identity to CCoV-IIb reference strain 119/08 (98.3%). By SimPlot analysis, the two strains displayed higher nucleotide conservation with the TGEV strain Purdue than with the pantropic CCoV-IIa strain CB/05 at the 5′ end of the S gene ([Fig. 1](#fig0005){ref-type="fig"}). Phylogenetic analysis revealed that the two Greek strains were most closely related to the four CCoV-IIb reference strains detected in dogs' organs ([Fig. 2](#fig0010){ref-type="fig"}a).

Fig. 1. S gene sequence analysis with SimPlot. The S genes of CCoV-IIb strain 68/09, TGEV strain Purdue and CCoV-IIa pantropic strain CB/05 were plotted against the S gene of CCoV-IIb strain 66/09.

Fig. 2. Neighbor-joining trees of the Greek strains, based on the S (a), E (b), M (c) and N (d) proteins. The trees are rooted on the group 2 canine respiratory coronavirus (CRCoV). The numbers represent the percentage of replicate trees based on 1000 bootstrap replicates.

The envelope protein was found to be 82 amino acids in length, as in most canine coronavirus strains and in three of the TGEV-like reference strains (119/08, 174/06 and 341/05), the exception being 430/07, which is 7 amino acids shorter. The Greek strains had high amino acid identity to each other (98.7%). The E protein of strains 66/09 and 68/09 had the highest amino acid identity (100% and 98.7%, respectively) to the CCoV-IIb strains 341/05 and 119/08, and to CCoV-IIa strain CB/05. In the E protein, phylogenetic analysis revealed that the two strains were closely related to CCoV type II strains ([Fig. 2](#fig0010){ref-type="fig"}b).

The membrane (M) protein of strains 66/09 and 68/09 was found to be 260 and 262 amino acids long, respectively. Two amino acids were missing from the N-terminal end of the M protein of strain 66/09, at positions 24 and 36, as has also been observed in reference CCoV-IIb strains 174/06 and 341/05. The two strains shared high amino acid similarity (94.6%). The M protein of strains 66/09 and 68/09 had the highest amino acid identity to the CCoV-IIb reference strains detected in the organs (97.3% and 100%, respectively). Phylogenetic analysis of the M protein showed that the two strains were closely related to CCoV-IIa and CCoV-IIb strains ([Fig. 2](#fig0010){ref-type="fig"}c).

The N (nucleocapsid) gene was found to be 1149 nucleotides in length, coding for a polypeptide of 382 amino acids. The two proteins were 98.1% similar. The amino acid sequences had the highest identity with CCoV-IIb 119/08 (98.6% and 99.4% for 66/09 and 68/09, respectively). Phylogenetic analysis revealed that the two Greek strains were most closely related to CCoV-II reference strains ([Fig. 2](#fig0010){ref-type="fig"}d).

4. Discussion {#sec0060}
=============

Homologous RNA recombination constitutes one of the major driving forces of genetic evolution and diversity in coronaviruses ([@bib0125]). Under field conditions, mixed infections are required to give rise to recombination events. So far, experimental infections of piglets with CCoV ([@bib0130]) and of dogs with TGEV strains ([@bib0080]), together with the fact that feline aminopeptidase N serves as a functional receptor for both CCoV and TGEV ([@bib0115]), strongly suggest that the two viruses can be found replicating in the same "environment" in nature, although the exact host in which recombination occurs remains unknown.
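As an aside on method, the SimPlot comparison in Fig. 1 and the pairwise identities quoted above both reduce to percent-identity calculations over an alignment, computed either globally or in sliding windows. Purely as an illustration (this sketch is not part of the study, and the window and step sizes are arbitrary), a SimPlot-style sliding-window identity scan can be written as:

# Illustrative only: a toy version of the sliding-window nucleotide identity
# scan that underlies a SimPlot analysis such as the one in Fig. 1.
# Assumes two pre-aligned, equal-length sequences.
def sliding_identity(seq_a, seq_b, window: 200, step: 20)
  raise ArgumentError, 'sequences must be aligned to equal length' unless seq_a.length == seq_b.length

  (0..seq_a.length - window).step(step).map do |start|
    matches = (start...start + window).count { |i| seq_a[i] == seq_b[i] && seq_a[i] != '-' }
    [start, 100.0 * matches / window] # [window start position, % identity]
  end
end

# e.g. sliding_identity(s_gene_66_09, s_gene_purdue).each { |pos, id| puts "#{pos}\t#{id.round(1)}" }

Plotting the per-window identities against window position is what produces curves like those in Fig. 1, where a recombinant genome shows the query tracking different reference strains in different regions.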
A canine coronavirus strain (UCD-1) of potential recombinant origin with TGEV was identified for the first time in the late 1990s ([@bib0120]). Recently, TGEV-like strains were reported circulating in dogs in different countries of Europe ([@bib0055]). The strains were detected in faecal samples of dogs with gastroenteritis and were classified as the new subtype CCoV-IIb, and it was suggested that they resulted from recombination events separate from, and occurring at different times than, the one that gave rise to the older strain UCD-1 ([@bib0055]).

In the present study, we present the first sequence and phylogenetic analysis of CCoV-IIb strains detected in Greece. Moreover, our findings suggest that TGEV-like CCoV strains able to spread to the internal organs are circulating in dogs; so far there has been only one such report, from Italy ([@bib0050]). By means of real time RT-PCR, tissue distribution and quantitation of both strains were assessed for the first time, revealing the spread of the virus to the internal organs. CPV-2 coinfection may contribute to the spread of TGEV-like CCoV strains, since so far they have been detected only in the organs of dogs also infected with CPV-2 ([@bib0050]). However, the detection of CCoV-IIa strains strictly in the faeces, in both cases, suggests that CCoV-IIb may have an advantage in disseminating through the dog. In the first report of TGEV-like strains detected in the organs, CCoV-I was likewise detected strictly in the intestinal content in two cases ([@bib0050]). These cases strongly suggest a difference in the pathobiology of CCoV-IIb with respect to CCoV-I/IIa.

By sequence and phylogenetic analysis, it was shown that both strains segregate consistently with the CCoV-IIb reference strains detected in the organs of dogs. Accordingly, the strains were highly similar to TGEV at the 5′ end of the S gene, whereas they clustered with the pantropic CCoV variant CB/05 (subtype CCoV-IIa) in the E, M and N proteins. In a previous study, CCoV-IIb strains detected in the organs were found to share higher amino acid identity with CB/05 than with common enteric CCoV strains at the level of the same proteins ([@bib0050]). Whether the ability of CCoV-IIb strains to spread to the organs is related to the recently recognized recombinant S protein or to the CB/05-like proteins (E, M and N) needs further research. However, the S-protein "scenario" seems the more plausible, since in coronaviruses the S protein mediates receptor attachment, and shifts in tissue tropism have been associated with mutations in the S gene ([@bib0090]).

In the last decade, new genotypes and subtypes of canine coronavirus have been recognized. Furthermore, a pantropic variant with the ability to cause fatal systemic infection was detected ([@bib0005]). Previous studies revealed that there are antigenic differences between CCoV-I and CCoV-II ([@bib0105]). In addition, antigenic differences were observed between the two subtypes, CCoV-IIa and CCoV-IIb (TGEV-like CCoVs) ([@bib0050]). Whether the currently circulating vaccines can protect against the TGEV-like recombinant isolates has to be verified via vaccination and experimental infection studies.

5. Conclusion {#sec0065}
=============

In conclusion, this is the first report of CCoV-IIb tissue distribution. Up to now, there has been only one report of TGEV-like strains detected in the internal organs of puppies, in Italy.
Based on sequence and phylogenetic analysis of the structural proteins, the two Greek isolates were found to be related to the Italian prototype CCoV-IIb strains. In addition, in all cases a mixed infection with CPV-2 was reported. However, the detection of CCoV-IIa strains strictly in the faeces suggests that, in contrast to common enteric CCoV-IIa strains, CCoV-IIb strains may have an advantage in disseminating throughout a dog with CPV-2 coinfection.

Ntafis Vasileios is grateful to the Alexander S. Onassis Public Benefit Foundation for doctoral funding.

[^1]: n.d., not detected.
September 28, 2010

Bedtime Stories

I curl up and tuck my bare feet under me, squeeze tight my eyes and try to think about bedtime as a little girl. If I had a routine, it included the tiny trial-size perfumes on my brass vanity; I'd smell them, and their lids clink. And sometimes I'd peek into my closet at my Guess? jean jacket. Shut the door and smile and feel lucky.

Bedtime is exhausting here, and I finally get them all to sleep and then my mind races about what they think about before they drift off, and I wonder, did my words and actions blanket them softly, or scratchy? I clean up half-heartedly, but I don't even really give that much. I haven't the energy to be more than a blob. I give myself quiet time, but then I feel guilty, and wasted. I'm tired but too selfish to sleep. I stay home all day with my kids, and then all night. And I know better, to cut myself some slack, but seriously, I'm not any good at this. If this is what I am and what I'm going to do with my life, if this is what I'm going to lay it all down for, I want to at least be a little good at it. What is my strong suit? I can't keep the house clean enough, stay ahead of the laundry pile; I try to make good meals but sometimes they are pitiful, including the ramen noodles they had last night. When do I get to punch out? And what do their hearts feel, when they see their life-less mother, that's let herself go, on the inside and the outside,

54 comments:

Oh Steph, I so know where you are coming from. This mothering, this parenting is trying and rewarding and taxing and amazing all at once, isn't it. Your strong suit is that you are you. You love them and they know that.

I love that you're so transparent & honest- in good & bad times...I think that you and your children are just precious. Don't be hard on yourself- we all over extend ourselves...it's those who know it & want to be better who already ARE better. :) That's you!

It's sooooo hard. I think that exact same thing. "Is this what I'm meant to do with my life? If it is, then I suck at it, and I should be getting better." But you know what, I bet if you asked Evie (or Ivy) if they thought we sucked at it, they would scream "helll nooo." I think our kids know they are loved. They feel it every time we hug them, brush their hair, make them breakfast, and kiss their boo boos. I bet you if our kids could communicate how they feel about us, it would be all hearts and rainbows.

By nightfall I'm a total grouch! Feeling the full weight on my shoulders of caring for a house, husband and three children. I co-sleep with my youngest while feeling guilty that I'm not cuddling all of them. Are they laying in bed sad and scared? Do they need me more than I'm able to give them? Are they going to remember all the good times or just when I yell at them? I wish I had it all together. I wish my house and meals resembled that of Martha Stewart, but they don't. I hope my boys appreciate my hard work, and the fact that I don't have all the answers.

I could have written this post. I feel like my words blanket Maggie in a scratchy way most nights. Then I go in for another hug, kiss and "I love you more". I never knew this job could be so hard or that I could feel SO bad about my mothering, cooking and housekeeping on a daily basis. My house is cleaner than most, but I have too many places where I just stick things to get them out of the way. I feel like I should be better at all of this since I don't have a "real" job. Uggghhh.

I've been feeling like this lately too.
Sometimes I think I'm a horrible Mom for just wanting to be away, by myself for a whole day (or two) and not feel guilty for it. It's hard being the one home ALL THE TIME and Lucy has been seeing the result of my frustrations a lot lately. I always make sure to give her lots of hugs and kisses and I Love You's but I also wonder if it's enough sometimes. I guess only time will tell...but I'll keep picking myself up after the bad days and try to make the next one better. That's all we can do.

I feel this way too, a lot! Thanks for being so honest. It's easy to tell other mothers to take time for themselves...but when it comes to myself, I have a hard time taking time for myself without feeling guilty. Lately, I've been giving myself a "punching out" time an hour before I go to bed, where I tell myself that I am NOT going to do housework. It has helped immensely. Also, believe me, your kids are never going to remember the dust bunnies under the couch, but they WILL remember a mother that loved them to pieces!

Oh dear sweet Stephanie, there are many a days that I have felt like this. My husband used to come home and see me this way and wonder why I didn't do this or that. But he sees our girls and he realizes how good they are. (especially when compared to some of their peers) He sees how they are so well adjusted and he hears good things about them from others and one day he said to me, "You are such an excellent mother and our girls show for that and while I would want everything to be perfect in this season of our lives the most important thing is them and I am thankful that you are doing a good job." Now that didn't come until the girls were 10 and 7. I used to try to do it all and I realized that I can't. It's no use beating yourself up over it, because when I realized that I couldn't do it all and things were going to be left undone, well, that is when I started to feel better about myself and I don't get so overwhelmed. Although there are many days when I feel overwhelmed, I realize that one day they will be gone and doing their own thing and how I would probably cherish the craziness and the dishes piling up and the days I sometimes go in between showers.

Oh hon. To know that even you feel this way...of course you do. You are human. But know that so many of us admire you -- and the way that you mother -- so much. You do need to carve out time to nurture yourself. Not just to catch a break, but to actually nurture yourself and feed your soul.

You are an incredible mother. It shines through more than you realize - we see it in the background of your stories and your photos, things you don't even realize you're showing us. Your children are loved, and they feel it. If we can feel it from here, you can bet that they can. And the house and the meals and the laundry? We are all in the same boat, my friend. I was just confessing to some other mothers this morning that I am certain my house has more layers of filth and grime than anyone else's, that I make quesadillas and call it 'dinner'. That the laundry baskets have taken on the status of living room furniture because they've been sitting there for so long. And these other mothers? They said -- me too. Same story, different details. These days of raising little people are busy and full and draining and the work keeps coming. Just when you catch a breath another wave hits, and you have to keep swimming, right? But every now and then you need to climb up on one of those long inflatable rafts for one. One with a cupholder.
And just float, because you need the rest and the pampering in order to keep swimming. Love to you, my sweet friend. Your honesty just helped many, many mothers feel a little less alone, a little less like failures. I hope the comments that pour in do the same for you.

What they know is that they are safe, happy, warm, and loved. We all fall short of our own (too high) expectations.

It made me cry with relief to know that I'm not the only one who feels this way many, many nights.

Sweet sweet Steph....this group of thoughts shows just how wonderful a mother you are. Questioning yourself is a great indicator of how much you care. If you weren't questioning your actions, how you are raising them, what your life is like and worrying about how your children are - well, then, you wouldn't be a great mother. Because if you think you are doing everything right, you are wrong. So take those thoughts that are scary and depressing, take a moment to accept the feeling...and then move on. Do something that doesn't take a long time that you can feel accomplished about. Not the laundry that will always pile up, or the toys that never seem put away for very long. Make your bed. Clean the sink. Wash the windows. Something that has an end point, at least for a few days. Then take a moment for yourself. Read a book. Sit outside with a cup of tea. Take a relaxing bubble bath. Do something that makes YOU happy. Don't make something for someone else. Not something that has a benefit for someone else. Something that is selfish. You can't take care of others if you aren't taking care of yourself.

sorry about erasing the above comment - too many typos! sometimes it just feels better to say it aloud doesn't it? i have a post sitting in my to be posted pile apologizing for yelling at my cora. at bedtime. gah. i still "hate" myself for doing it! it sits there 'cause i'm not quite ready for everyone to see that i do that. though, don't we all?

Your kids remember your heart...not the outside stuff. I yell at my babies more often than I want to and I ache because of it. My mother in law tells a story of a conversation she had with her now grown daughter. Her daughter told her friends her mom never yelled :-) She was overjoyed to hear her daughter had forgotten the grouchy days and the yelling, but remembered the love in her heart.

I think every one of us has been there. I'm there all the time. Doesn't help when the husband complains about the state of the house or some sort of thing. Hope you remember soon that you are their mom and love them...and that's what's most important. And here's to us all learning to take care of ourselves as much as the kids. Hugs!

You're probably doing better than you think you are. Bedtime is always hard for me too, with the last ebbing energy... Set yourself some really small positive goals tomorrow maybe. A story. An extra hug for each kid. Every bit helps you feel better about the job you're doing, and feels good to your kiddos too. Remember, God made kids to be raised by humans. Hugs, Mary

Oh honey, I only have one baby but I know how you feel. I collapsed in tears the other day because the house was a mess and the laundry was piled up and all I did was throw random stuff in the crock pot and call it chili. I was told I was amazing and that I was doing such a great job with the new baby. You are amazing. And you are doing such a great job. I can see it. Everyone who comes to your blog and sees your pictures can see it.

I think about this too. Because I think it does.
Many nights it brings the kind of shadows that blossom cruelly into nightmares and they're up screaming and I know some of the negative/sad/depressed/grieving/whatever energy somehow sloughed off my skin and onto them. Or maybe I'm just dramatic. But each little moment is a moment. Each one matters. And that isn't meant to be a bad thing, but a reminder to our souls that if we really tallied the moments, the "win" column would still be heavier than the "fail" column. But it's crazy for me to pretend my unhappiness won't be handed down to them as much as my happiness will...

This is me. And what an ironic statement too, because sleep really is part of self-care. Nighttime is MY time, even though I'm usually too beat to do anything I actually enjoy.

I remember feeling very lonely at night. I have always been a night owl, and I remember laying in my bed in the dark, just feeling alone, or reading with a rogue flashlight under the covers. It helped when we got a dog that would sleep with me. I think part of why I'm not worried about P sleeping with us 'forever' is because I don't want her to feel that way either. Nighttime can be a lonely place, and until she wants that time to herself, I am willing to share it.

I think there are so many mothers that could write these exact words, but yet are too scared to because of the judgement that awaits them. We've (as moms) piled so many expectations upon ourselves, it's impossible to meet them.... the perfect chef, impeccable housekeeper, loving yet firm mother who gives 110% without anything in return and still has that extra 100% to give to her husband as he walks through the door. That's not to mention all the other roles and hats that we as moms have; it's not next to impossible to do... it IS impossible. And on top of it, we feel as if we're failing if we admit how hard it is and we can't do it. How messed up is all that????? Steph, your kids know how much you love them... and sometimes yes, you're the grumpy mommy that puts them to bed, but I've found that sometimes I'm better off to admit it to my girls. Hey... guess what? Mommy's a bit tired and grumpy tonight, what should we do about that? Or I'll just apologize now for it.... I'm amazed by the amount of understanding my 5 and 7 year old have when I'm honest with them. And no, I don't remember my mom's grumpy days.... thinking back, I love the fact my mom stayed home with us, she made that sacrifice. She did it out of love and I know now, as a mom, oh yes... she had grumpy days and days with no energy, but I don't remember it. I remember the crafts, the homemade cookies and the love. So much love.....

I definitely identify with your feelings. You seem to be an amazing, engaged mother--and for that you will be remembered. Also, I must say in response to your question, "What is my strong suit?" Writing. Resonating with your readers. Being a voice for moms. It helps my heart to read your blog, and see my feelings echoed on this page. You are amazing!

Wow. Like the chorus of voices before me, I find this post so resonant and so bittersweet. Having read just a bit from you in the past, I suspect that your words and actions blanket your children in security and love. (Besides, I like to comfort myself with the thought that those of us who spend time asking these big questions are probably those least likely to be living problematic answers to them.) I don't get it -- selfish? I may hardly know you, but I know you're anything but selfish. You have many strong suits.
Writing being one of them. Loving your kids being another. So hard to strike the right balance between sacrificing yourself for your children, and not losing yourself in your children. Please know that it's more than okay - it's actually important - to take care of yourself, too. Try not to think of it as selfish. Think of it as making you a better mom and wife for the long haul. (Hope some of that was helpful at all.)

Me too. Sometimes I go back in to their rooms after I've had a few minutes to re-group and I apologize and hug and kiss them goodnight again. I hope to goodness that's what they remember when they think back to bedtimes when they were little and not Zombie Mom with the hollow eyes and deep, weary sighs. Maybe you could ask them - "What is bedtime like around here?" - and have a discussion about it. I'm willing to bet that they mostly love and cherish it, and that they completely love and cherish you.

I was just about to type the exact comment that Tiff above me said - how do you get in my head and make my thoughts sound much more eloquent than they actually are? I too am choosing to be home, but at times I wonder if my kids would be better off with other caregivers - ones that only do it from 9-5 and are therefore refreshed and have perspective and don't carry over the night or the previous day. I know it's such a cliche, but I think of this often: "motherhood is really hard if you're doing it right."

I just want to give you a huge hug and take you out for coffee and encourage you the way another mom recently just did for me when I was feeling so much of these feelings. This season of motherhood is so intense. It's so intense. xxoo.

I just wanted to tell you that I completely understand. And that when I think about "good parenting", a mom of which I aspire to be like, a mother I admire, I think of you, Steph. But I do know that we ALL feel exactly as you do sometimes. Just know it's normal to feel that way; and you are an incredible mother to those incredible children.

me too. all of this times a hundred. I've felt it, thought it, done it. tonight, after emma was in bed and ken had lucy, I felt like I had accomplished so much b/c I cleaned the kitchen, put laundry in, fed the dogs and took the trash out. but then I had to step over the toys and the blankets and the baby wipes and the nursing bra (on the floor!) to get to the stairs ... and I just came right up the stairs without picking up. and I know Emma will go downstairs in the morning and step over all of it to get to the breakfast table. and she'll probably hear me grumble about the mess. secretly? I am so glad you mentioned ramen b/c my kid has been living off of annie's mac and cheese and turkey dogs for lunch since July.

You are SOOO poetic and put into words so often what I feel but can't say. your strong suit from what I 'know' of you??? You are there with your kids. you give to them every ounce of who you are. you are extremely nurturing even when you don't feel it. I read your blog because it reminds me of the joys of going to the park with my kids and the joys of sitting outside while they ride their bikes carefree. you remind me to get off this computer, put down the broom and get outside!!! seriously.

You are awesome Steph. An awesome mom & an awesome person. I have read your words for over 2 years, so I KNOW that. You know that, too - most days. We all have those days where we don't feel good enough to be a mom.
I work 40 hours a week outside the home, so I obviously am away from my family for that time - but I still need ME time on the weekend...even if it's for an hour. And I yell and feel guilty. Make time for you - you'll feel renewed...but you know that already :) Just a gentle reminder.

we have ALL been there. There are nights I have laid in bed and cried over the millions of different ways I've failed my children in a single day. And yet...it's not all failure. I can't keep up with the laundry pile but I was there watching when they said "Look, mom!". I yelled too much but I made sure to give them an extra long hug and a heartfelt "I'm sorry." We none of us are as good at this as we think *everyone else* must be. We never get to punch out...but the demands motherhood makes on us do change and fluctuate. My oldest is almost 13 and I've seen the other side. I'm not saying it's easier but it's a different kind of hard. And on those long long days, even a different kind of hard seems somehow easier.

Steph: I know I am not a great commenter (sorry), but my computer time is so limited, but I do read your posts every day and so appreciate them. I read this yesterday and felt like I really wanted to comment; here I am finally with a minute ... I have tried to think of what I wanted to say, you know those perfect words, that I am just not good with. I could tell you that you are a wonderful mom, because I think you are! I could tell you that I get it, because as the mom to 6 my life is crazy! But as I read through some of the comments, I see that has all been said. So I guess I will just say THANKS, THANKS for being a mom (a great one too), THANKS for articulating what some moms feel, THANKS for sharing, and mostly THANKS for being you, it makes the world a better place.

such a sweet post. thank you for sharing. i am currently not winning mother of the year...baby is eating a late lunch b/c i forgot to bring a spoon to the park and therefore had to try to feed her with a fork. she didn't like it. and my sick four-year-old has been yelled at more than once today. none of it is his fault. i'm just feeling off and am praying that he isn't scarred for life.

I know the feeling. I always wonder if my mother felt this way too with THREE kids (I have one and feel the same way you do!). And I feel like she didn't. Or if she did, she had some magic trick of making life seem beautiful. :) I wish she would share it! :) Just remember "this too shall pass" :)
Primary neuroendocrine carcinoma (NEC) of the liver is a rare entity that behaves aggressively. Primary hepatocellular carcinoma (HCC) with a NEC component is very rare, accounting for about 0.46% of primary hepatic tumors \[[@b1-jptm-2018-05-17]\]. Eighteen cases of primary combined or collided NEC and HCC have been reported in the English literature to date \[[@b1-jptm-2018-05-17]-[@b14-jptm-2018-05-17]\]. None of these cases had paraneoplastic syndromes or proved to be functional.

Hypercalcemia is a well-known paraneoplastic metabolic condition associated with many malignancies \[[@b15-jptm-2018-05-17]\]. In HCC, hypercalcemia accounts for 7.8% of the paraneoplastic syndromes \[[@b16-jptm-2018-05-17]\]. While primary hyperparathyroidism is the most common cause of hypercalcemia in the absence of malignancy, hypercalcemia can occur in association with malignancies through other mechanisms. Most malignancy-associated hypercalcemia proves to be caused by parathyroid hormone (PTH)--related peptide (PTHrP) \[[@b17-jptm-2018-05-17]\]. Metastasis of a malignancy to bone can also cause osteolysis leading to hypercalcemia \[[@b17-jptm-2018-05-17]\]. Only rare cases are considered to be the result of ectopic PTH production by the tumor.

Here, we present a rare case of combined hepatic NEC and HCC with malignancy-associated hypercalcemia caused by ectopic PTH production. Previously reported primary mixed HCC and NEC cases and ectopic PTH-producing HCC cases are also summarized and discussed.

CASE REPORT
===========

A 44-year-old man presented with a hepatic mass discovered during a regular abdominal ultrasound for hepatitis B virus associated chronic liver disease. The chronic liver disease had been diagnosed 9 years earlier, and the patient was on Tenofovir. Laboratory findings showed elevated white blood cells (17,000/μL), mildly elevated aspartate aminotransferase (4 IU/L) and alanine transaminase (22 IU/L), and normal calcium and phosphate levels. Computed tomographic scan identified one huge mass in segment (S) 8 and another small mass in S6, with thrombi in the right portal and hepatic veins. No other systemic lesion was found. The patient underwent right hemihepatectomy with partial diaphragm resection and lymph node dissection.

On pathological examination, the cut section of S8 revealed a yellow-whitish mass measuring 10.5 × 8.0 cm with irregular margins and necrosis. The mass in S6 was a yellowish multinodular mass that measured 1.3 × 1.0 cm. Tumor thrombosis was noted in the right portal vein, and cirrhosis was observed in the non-neoplastic liver. Histologically, the main mass in S8 consisted of two components: a dominant poorly differentiated carcinoma component (60%) composed of small tumor cells with enlarged vesicular irregular nuclei, a high nuclear to cytoplasmic ratio, large nucleoli and frequent mitoses, and multiple foci of a typical HCC component (40%) showing trabecular architecture and grade 2 nuclei ([Fig. 1](#f1-jptm-2018-05-17){ref-type="fig"}). The tumor penetrated the Glisson's capsule, directly invading the diaphragm, and showed extensive necrosis and microvessel invasion. The poorly differentiated carcinoma component was focally positive for cytokeratin (CK) 7 and negative for α-fetoprotein, hepatocyte, glypican-3 and CK19 immunohistochemistry, and was initially interpreted as a poorly differentiated cholangiocarcinoma component. The pathologic diagnosis of the S8 mass was combined HCC and cholangiocarcinoma. The other mass, in S6, showed typical histologic features of HCC.
There was no metastasis in 22 lymph nodes. The patient subsequently received adjuvant concurrent chemoradiation therapy (CCRT) consisting of one cycle of 5-fluorouracil chemotherapy and two cycles of 5-fraction radiation.

On postoperative day 59, he visited the emergency room for nausea and vomiting. Laboratory results showed elevated levels of total calcium (13.2 mg/dL; normal range, 8.8 to 10.5), ionized calcium (2.3 mmol/L; normal range, 1.05 to 1.35), blood urea nitrogen (33 mg/dL; normal range, 10 to 26), and creatinine (2.16 mg/dL; normal range, 0.7 to 1.4), with normal to low levels of phosphate. Further evaluation of the hypercalcemia revealed markedly increased PTH (3,859 pg/mL by enzyme-linked immunosorbent assay; normal range, 15 to 65) and neuron-specific enolase (101.04 ng/mL; normal range, 0 to 16.3). A parathyroid scan performed to exclude primary hyperparathyroidism showed no abnormality. Whole body positron emission tomography revealed multiple hypermetabolic lesions in the liver and throughout the skeleton, and biopsy of an osteolytic lesion involving a left rib identified metastatic poorly differentiated carcinoma. Only the poorly differentiated carcinoma component, not the HCC component, was identified in the metastatic lesion.

Regarding the hypercalcemia, the elevated PTH could not be explained by bone metastasis or PTHrP, and the hypercalcemia persisted despite management. Finally, ectopic PTH production by the tumor was suggested as the cause of the hypercalcemia. Meanwhile, the clinician in charge asked the pathologist about the presence of a NEC component in the tumor, based on the possibility that an ectopic hormone could be secreted by NEC, the rapid progression of the tumor, and the elevated neuron-specific enolase level. Subsequent immunohistochemistry for neuroendocrine markers and PTH was performed on both the primary (S8 mass) and metastatic tumor specimens. CD56 stained positive, while chromogranin and synaptophysin were focally positive, in the poorly differentiated area of both specimens, implying neuroendocrine differentiation ([Fig. 2](#f2-jptm-2018-05-17){ref-type="fig"}). The component with typical HCC morphology was negative for all three markers ([Fig. 2](#f2-jptm-2018-05-17){ref-type="fig"}). There was no immunoreactivity for PTH on either specimen.

Symptomatic treatment including continuous renal replacement therapy was applied for the acute renal failure induced by hypercalcemia. However, the patient died of disease progression 2 months after diagnosis.

This study was approved by the Institutional Review Board of Seoul National University Bundang Hospital (IRB No. B-1801-442-702), and patient consent was waived.

DISCUSSION
==========

Primary combined HCC and NEC is very rare. The initial pathologic diagnosis in this case was combined HCC and cholangiocarcinoma, because the poorly differentiated component bore little resemblance to typical NEC morphology. However, with clinical suspicion, immunohistochemistry revealed multifocal areas within the poorly differentiated component that stained positive for neuroendocrine markers. Therefore, we classified it as combined HCC and NEC.

The clinical characteristics of the 18 reported cases of primary mixed HCC and NEC are summarized in [Table 1](#t1-jptm-2018-05-17){ref-type="table"}. Most cases were associated with chronic hepatitis B or C. The reported carcinomas have been classified into two types according to their spatial histologic arrangement.
Combined types have a transition zone in which HCC and NEC intermingle with each other, whereas collision types show clear separation of the histologically different components, usually by fibrous septa. In our case, the HCC was tightly intermingled with the NEC component, their borders almost indiscernible due to transition zones. Therefore, we classified it as combined HCC and NEC.

Primary mixed HCC and NEC generally tends to have a poorer prognosis than conventional HCC \[[@b1-jptm-2018-05-17]\]. Of the 18 cases summarized, eight patients experienced recurrence, six patients died of the disease within a year of operation, and only two patients were confirmed to be alive 2 years after surgery ([Table 1](#t1-jptm-2018-05-17){ref-type="table"}). Remarkably, in the cases with biopsy-confirmed metastasis, only the NEC component was found, on every occasion, as in the present case. This indicates that the NEC component behaves more aggressively and carries a much poorer prognosis than primary HCC \[[@b1-jptm-2018-05-17]\]. Therefore, it is important to identify the neuroendocrine component and ensure that proper treatment is given to the patient.

None of the reported combined HCC-NEC cases described a paraneoplastic syndrome or ectopic hormone production. To our knowledge, ours may be the first report of primary mixed HCC and NEC associated with malignancy-related hypercalcemia caused by ectopic PTH production. The patient had multiple bone metastases, one of which was histologically confirmed. In hypercalcemia caused by osteolytic lesions or by tumor-produced PTHrP, however, PTH levels are usually suppressed \[[@b15-jptm-2018-05-17]\]. This led us to favor ectopic PTH production over bone metastasis or PTHrP as the cause of the hypercalcemia, even though a serum PTHrP level was not available.

Hypercalcemia accounts for 7.8% of the paraneoplastic syndromes observed in HCC and is associated with short survival \[[@b16-jptm-2018-05-17]\]. Ectopic PTH production has been reported in only three HCC cases ([Table 2](#t2-jptm-2018-05-17){ref-type="table"}) and in no primary hepatic NEC case. In all three cases, PTH immunohistochemistry performed on the biopsy specimens was negative. Our case also showed negative results. These findings, rather than acting as counter-evidence of hormone production, may suggest that the tumor cells do not store PTH but secrete it into the circulation soon after synthesis \[[@b18-jptm-2018-05-17],[@b19-jptm-2018-05-17]\]. We were not able to perform genetic analysis or RNA sequencing for PTH mRNA. As the hypercalcemia developed during adjuvant CCRT, comparison of intact PTH levels before and after the operation or CCRT was impossible. However, in our case, the patient developed hypercalcemia with elevated intact PTH as the metastatic lesions formed. Considering that the metastatic component was NEC, it is possible to suggest that the intact PTH was synthesized by the NEC cells.

Primary hepatic NEC has a poor prognosis, and the NEC component of primary mixed HCC and NEC behaves aggressively. Clinicians and pathologists are advised to consider neuroendocrine differentiation when diagnosing poorly differentiated HCC.

**Conflicts of Interest**

No potential conflict of interest relevant to this article was reported.
![Representative histologic image of the main hepatic mass.](jptm-2018-05-17f1){#f1-jptm-2018-05-17}

![(A) The main hepatic tumor consists of neuroendocrine carcinoma (right side) and hepatocellular carcinoma (left side) components. On immunohistochemistry, the neuroendocrine carcinoma component is focally positive for CD56 (B), chromogranin (C), and synaptophysin (D).](jptm-2018-05-17f2){#f2-jptm-2018-05-17}

###### Summary of previously reported primary mixed hepatocellular and neuroendocrine carcinoma cases

| Study | Age (yr)/sex | Chronic hepatitis type | Tumor size (cm) | Nodal metastasis | Type | Ectopic hormone production | Clinical course | Treatment | Survival |
|---|---|---|---|---|---|---|---|---|---|
| Barsky *et al*. \[[@b2-jptm-2018-05-17]\] | 43/M | B | Large | Negative | Combined | None | - | Chemotherapy (doxorubicin, 5-fluorouracil) | Dead (26 mo) |
| Artopoulos and Destuni \[[@b3-jptm-2018-05-17]\] | 69/M | B | 10 | Negative | Combined | None | - | Surgery | Not given |
| Ishida *et al*. \[[@b4-jptm-2018-05-17]\] | 72/M | C | 3 | Positive (NEC) | Collision | None | - | Surgery | Not given |
| Yamaguchi *et al*. \[[@b5-jptm-2018-05-17]\] | 71/M | C | 4.1 | Negative | Combined | None | Recurred (5 mo, bone) | Surgery | Alive (F/U 5 mo) |
| Garcia *et al*. \[[@b6-jptm-2018-05-17]\] | 50/M | C | 5.3 | Negative | Collision | None | Recurred (4 mo, liver) | Surgery → recur: chemotherapy (doxorubicin, thalidomide, bevacizumab) | Alive (F/U 16 mo) |
| Yang *et al*. \[[@b7-jptm-2018-05-17]\] | 65/M | B | 7.5 | Positive (NEC) | Combined | None | Recurred (3 mo, liver) | Surgery | Dead (12 mo) |
| Tazi *et al*. \[[@b8-jptm-2018-05-17]\] | 68/M | B | 4.0 | Positive (NEC) | Collision | None | - | Surgery → chemotherapy (cisplatin, etoposide) | Alive (F/U 28 mo) |
| Nakanishi *et al*. \[[@b9-jptm-2018-05-17]\] | 76/M | C | 3.0 | Negative | Combined | None | Recurred (6 mo, bone) | TACE → surgery | Dead (7 mo) |
| Aboelenen *et al*. \[[@b10-jptm-2018-05-17]\] | 51/M | C | 7.5 | Negative | Combined | None | - | Surgery | Alive (F/U 6 mo) |
| Nishino *et al*. \[[@b11-jptm-2018-05-17]\] | 72/M | C | 2.5 | Negative | Combined | None | Recurred (1 wk, lymph nodes) | Surgery → recur: chemotherapy (cisplatin, etoposide) | Dead (2 mo) |
| Nomura *et al*. \[[@b1-jptm-2018-05-17]\] | 71/M | C | 4.1 | Not given | Combined | None | Recurred (liver) | Surgery | Dead (8 mo) |
| Nomura *et al*. \[[@b1-jptm-2018-05-17]\] | 71/M | C | 3.0 | Not given | Collision | None | Recurred (liver) | RFA → surgery | Dead (2 mo) |
| Nomura *et al*. \[[@b1-jptm-2018-05-17]\] | 58/M | B | 4.3 | Not given | Combined | None | - | Surgery | Alive (F/U 20 mo) |
| Nomura *et al*. \[[@b1-jptm-2018-05-17]\] | 50/M | B | 1.8 | Not given | Combined | None | - | Surgery | Alive (F/U 19 mo) |
| Nomura *et al*. \[[@b1-jptm-2018-05-17]\] | 63/M | C | 3.0 | Not given | Combined | None | - | IFN → surgery | Alive (24 mo) |
| Baker *et al*. \[[@b12-jptm-2018-05-17]\] | 76/M | None | 5.5 | Negative | Collision | None | - | Surgery → chemotherapy (platinum-based) | Alive (F/U not given) |
| Choi *et al*. \[[@b13-jptm-2018-05-17]\] | 72/M | C | 2.5 | Negative | Collision | None | Recurred (6 mo, liver) | Surgery → recur: chemotherapy (cisplatin, etoposide) | Alive (F/U 10 mo) |
| Liu *et al*. \[[@b14-jptm-2018-05-17]\] | 65/M | C | 4.3 | Positive (NEC) | Collision | None | - | Surgery | Dead (1.3 mo) |

M, male; NEC, neuroendocrine carcinoma; F/U, follow-up; TACE, transarterial chemoembolization; RFA, radiofrequency ablation; IFN, interferon therapy.
###### Summary of previously reported hepatocellular carcinoma cases with ectopic PTH production

| Study | Age/Sex | Chronic hepatitis type | Hepatocellular carcinoma | Calcium (reference) | Intact PTH (reference) | PTHrP (reference) | AFP (reference) | Parathyroid lesion | Treatment | Method of ectopic PTH confirmation | Survival |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Koyama *et al*. \[[@b20-jptm-2018-05-17]\] | 83/M | C | Single 8 cm mass | 13 (8.9-10.1) | 360 (15-50) | 18.7 (13.8-55.3) | 29.348 (0-10) | None | TAE | Venous sampling; decreased serum calcium and intact PTH after TAE | Alive (F/U 24 mo) |
| Mahoney *et al*. \[[@b19-jptm-2018-05-17]\] | 72/M | None | Multiple large lesions, extending into portal vein | 14.5 (8.5-10.5) | 92 (12-65) | < 0.7 (< 1.3) | Not given | Parathyroid adenoma | Parathyroid resection and TACE | Sestamibi SPECT scan; immunoradiometric assay and rapid assay | Dead (not given) |
| Abe *et al*. \[[@b18-jptm-2018-05-17]\] | 73/F | B | Large mass with multiple metastasis | 12.9 (8.5-10.5) | 99 (< 60) | < 1 (not given) | 189.3 (not given) | None | TACE | Decreased serum calcium and intact PTH after TACE | Dead (2 mo) |

PTH, parathyroid hormone; PTHrP, PTH-related peptide; AFP, α-fetoprotein; M, male; TAE, transcatheter arterial embolization; F/U, follow-up; TACE, transarterial chemoembolization; SPECT, single-photon emission computed tomographic.
The present invention relates to a magnetic head, in particular to a magnetic head for recording and/or reproducing a magnetic signal on a magnetic medium with high recording density. The present invention relates in particular to a digital signal recording/reproducing system. The high density magnetic recording technique has been considerably improved, with the recording density becoming ten times as large as that of ten years ago. For instance, a recording density of up to 8000 bits/mm has been reported in an experiment with a single pole head. However, that value (8000 bits/mm) is obtained merely in an experiment, and the practical value is less than 3000 bits/mm even when a single pole head for vertical recording is used. Some of the important problems for achieving high recording density are (1) to improve the remanent magnetization of a medium, (2) to keep the spacing between a head and a medium small (less than 1 µm), and/or (3) to improve the sensitivity of a head. Some of the prior magnetic heads are first described. (1) A single pole head; A single pole head as shown in FIG. 1 has the highest recording/reproducing density at present. In FIG. 1, the reference numeral 1.1 is a main magnetic pole, 1.2 is an auxiliary magnetic pole, 1.3 is a coil wound on the auxiliary magnetic pole 1.2, 1.4 is a recording medium made of, for instance, Co-Cr, 1.5 is a base support for supporting said medium 1.4, and 1.6 shows the width of said main magnetic pole 1.1. In FIG. 1, the leakage flux generated by the recorded signal on the recording medium 1.4 magnetizes the end of the main pole 1.1; the leakage flux from the main pole 1.1 is then detected by the coil 1.3 wound around the auxiliary magnetic pole 1.2. In this case, the main magnetic pole 1.1 must be in direct contact with the recording medium 1.4, since the leakage flux from the recorded signal is very weak, and the recording medium 1.4 and the base support 1.5 must be flexible and thin, since the spacing between the main pole 1.1 and the auxiliary pole 1.2 must be less than 50 or 60 microns for detecting the leakage flux from the small main pole 1.1 (the width 1.6 of which is usually the same as the bit size, 0.2-5.0 microns). Accordingly, the single pole head of FIG. 1 is used only for a floppy disc; it cannot be used for a hard disc, which has high recording density, since the thickness of a hard disc is larger than 1-2 mm and a single pole head cannot be used with such a thick recording disc. (2) A magneto-resistance head (MR head); An MR head is shown in FIG. 2, in which the reference numeral 2.1 is a magneto-resistance element made of, for instance, permalloy film with thickness (t), width (w) and length (L), and 2.2 denotes conductors provided at both ends of said element 2.1. The MR head operates on the principle that the resistance of the element 2.1 depends upon the magnetic flux provided by the recording medium 1.4. In FIG. 2, when a predetermined current flows through element 2.1, the voltage across the element 2.1 changes according to the magnetic flux recorded on the medium 1.4, and said voltage is the output voltage of the head. The detailed analysis of an MR head is given by R. P. Hunt of Ampex (IEEE Trans. on Mag., Vol. MAG-7, No. 1, pp. 150-154, 1971, USA), and according to that article, the output voltage V is proportional to $(1-e^{-kw})/kw$, where $k = 2\pi/\lambda$ and $\lambda$ is the recording wavelength, which is twice the recording bit length.
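As a quick numerical check of this width dependence, the factor $(1-e^{-kw})/kw$ can be tabulated for a few element widths. This is only an illustrative sketch (the function name and the chosen widths are ours, not from the Hunt article), written in Go:

```go
package main

import (
	"fmt"
	"math"
)

// outputFactor evaluates (1 - e^(-kw))/kw, the width-dependent factor of the
// MR head output voltage, with k = 2*pi/lambda (lambda = recording wavelength).
func outputFactor(lambdaUm, widthUm float64) float64 {
	kw := 2 * math.Pi / lambdaUm * widthUm
	return (1 - math.Exp(-kw)) / kw
}

func main() {
	const lambda = 0.2 // recording wavelength in microns, as in the example below
	for _, w := range []float64{0.1, 0.5, 1.0, 5.0, 20.0} {
		fmt.Printf("w = %5.1f um: relative output factor %.4f\n", w, outputFactor(lambda, w))
	}
}
```

The factor falls roughly as 1/w once kw is large (about 0.30 at w = 0.1 µm but only about 0.03 at w = 1 µm for this wavelength), which is the quantitative basis for the statement that follows.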
According to the above equation, when the wavelength is small, the width (w) must be small in order to obtain sufficient output voltage. For instance, when $\lambda = 0.2$ micron, the width (w) must be less than 1.0 micron, which is impractical for the manufacturing process. The increase in loss with width (w) in an MR head comes from the open magnetic loop of the magnetic circuit. FIG. 3 shows an improvement of an MR head; the head of FIG. 3 has a closed magnetic circuit (article MR 82-24 in the Japanese Institute of Electronics and Communication, magnetic recording study group). In FIG. 3, the reference numeral 3.1 is a return path of flux and is made of ferrite, 3.2 is a non-magnetic portion, and 2.1 and 2.2 show the same members as those of FIG. 2. The flux signal applied to the end of the MR element 2.1 returns to the recording medium through the return path 3.1. Thus, reproduction of a signal with a width of 0.13 micron is possible using an MR element with a width of 20 microns, on the condition that a relative output level of -45 dB is acceptable. When the relative output level is -6 dB, said signal width must be 1.27 microns. Further, said output level is obtained only on the condition that the medium is in direct contact with the head. If the head is separated from the medium by a distance L', the output level decreases by a factor of $e^{-kL'}$. For instance, when the bit period is 0.1 micron and the distance between the head and the medium is 0.1 micron, the output level decreases to 0.04, which cannot be reproduced even if the improved MR head of FIG. 3 is used. Concerning the decrease of the output level due to the gap between the head and the medium, the vertical flux component $H_y$ from the vertically recorded signal as shown in FIG. 4 is given by the following equation: $$H_y = 2\pi M_r e^{-(\pi/d)y} \ \text{(Oe)} \tag{1}$$ where $M_r$ is the remanent magnetization of the medium, d is the bit width, and the thickness loss due to the thickness of the medium is neglected on the assumption that the thickness (t) of the medium is considerably larger than the bit width (d). The relation of equation (1) is shown in the curves of FIG. 5, where $M_r = 1000$ emu/cc. (3) Optical magnetic reproduction; FIG. 6 shows a prior optical magnetic reproduction head, in which the reference numeral 6.1 is an optical source implemented by a semiconductor laser, 6.2 is a polarizer, 6.3 is a beam splitter, 6.4 is an analyzer, 6.5 is an optical detector implemented by a photodiode, and 6.6 is magnetization. The optical beam generated by the optical source 6.1 is converted to linearly polarized light by the polarizer 6.2, and the polarized beam is applied to the recording medium 1.4. The numeral 1.5 is a base support. The input beam is reflected by the medium, and the polarization direction of the reflected beam rotates, on the principle of the magneto-optical effect, according to the magnetization on the medium. The reflected beam is applied to the detector 6.5 through the optical analyzer 6.4 (which has the same structure as the polarizer). The strength of the optical beam at the output of the analyzer 6.4 depends upon the direction of the magnetization on the medium; therefore, the output voltage of the optical detector 6.5 depends upon the magnetization on the medium. In an optical magnetic head, the resolving power for the recorded bits is restricted by the diffraction limit. When a semiconductor laser with a wavelength of 0.8 micron is used, the diffraction limit of that laser beam is about 0.4 micron.
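Returning briefly to the spacing loss of equation (1): the quoted figure of 0.04 for a bit width and spacing of 0.1 micron each can be verified directly, since the decay factor reduces to $e^{-\pi} \approx 0.043$. A minimal sketch in Go (the function name and printed quantities are ours):

```go
package main

import (
	"fmt"
	"math"
)

// spacingLoss evaluates the decay factor e^(-(pi/d)*y) of equation (1),
// where d is the bit width and y the head-medium spacing (same units).
func spacingLoss(d, y float64) float64 {
	return math.Exp(-math.Pi * y / d)
}

func main() {
	const Mr = 1000.0 // remanent magnetization in emu/cc, as in FIG. 5
	d, y := 0.1, 0.1  // bit width and head-medium spacing in microns
	f := spacingLoss(d, y)
	fmt.Printf("decay factor e^(-pi) = %.3f\n", f) // ~0.043, the "0.04" quoted above
	fmt.Printf("Hy = %.0f Oe\n", 2*math.Pi*Mr*f)   // vertical flux component per equation (1)
}
```

The exponential form also makes clear why the loss compounds so quickly: each additional spacing equal to one bit width costs another factor of $e^{-\pi}$, roughly 27 dB.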
A laser source with a shorter wavelength would be required to improve the resolving power; however, a wavelength of 0.8 micron is the limit at present, and no improvement in recording density is expected so long as the present laser is used. (4) A copy type optical head (Magnetic recording study group report MR 79-11, Japanese Institute of Electronics and Communications); FIG. 7 shows a prior copy type optical head, in which 7.1 is a soft magnetic film made of, for instance, garnet or permalloy, 7.2 is magnetic flux in said soft magnetic film 7.1, 7.3 is leakage flux from the recording medium 1.4, and the other numerals show the same members as those of the previous figures. In FIG. 7, the soft magnetic film is magnetized by the leakage flux 7.3 from the recording medium 1.4; thus, a magnetic copy of the recording medium is obtained in the soft magnetic film 7.1. The magnetic flux in the film 7.1 is optically read out on the same principle as that of FIG. 6. Although the head of FIG. 7 has the advantage that medium noise is reduced, since the recording medium is not directly illuminated, the copy type head of FIG. 7 still has the restriction that the resolving power for recorded bits depends upon the diffraction limit of the optical beam. Accordingly, the minimum size of a reproducible bit is about 0.5 micron with such a head. FIG. 8 shows a prior modification of a copy type optical head; the configuration of FIG. 8 is shown in Japanese patent publication 33781/81, in which 8.1 is a reflection mirror, 8.2 is an optical beam, and the other members in FIG. 8 are the same as the same-numbered members in the previous figures. The feature of the structure of FIG. 8 is that the soft magnetic film 7.1 contacts the medium 1.4 at an angle P; thus, the reproduction of shorter wavelength signals is improved. However, as discussed with reference to FIG. 5, the magnetic flux at the tip of the head is very small when there is some spacing between the head and the recording medium. Further, the optical head has the general disadvantage that only 1/100 of the saturated level of the magnetic change can be used because of the shot noise of the detector; thus, the sensitivity of an optical head is low. Further, since the structure of FIG. 8 provides no means to illuminate the area within several microns of the end of the soft magnetic film 7.1, the reproduction of a small bit of less than 1 micron is impossible. Another modification of a copy type optical head, shown in U.S. Pat. No. 3,737,236, is shown in FIG. 9, in which 9.1 is an optical fiber, 9.2 is the core of that optical fiber, and the other numerals are the same as the same-numbered items in the previous figures. The soft magnetic film 7.1 in FIG. 9 is positioned at the top of the optical fiber 9.1. Since the diameter of the core 9.2 is less than 50-60 microns, the optical beam can be concentrated on a small area of the soft magnetic film 7.1, and thus the problem of FIG. 8 is solved by the structure of FIG. 9. However, the head of FIG. 9 still has the disadvantage that no means is presented for detecting a signal when the leakage flux is weak due to a small recording bit. Further, no means is presented for compensating for the change of polarization direction in an optical fiber, despite the fact that an optical head reproduces a signal through the change of the polarization direction of an optical beam.
Meanwhile, the technique of applying bias flux along the magnetization hard axis to improve the sensitivity of flux detection has been known from "Determination of Low-Intensity Magnetic Fields by Means of Ferromagnetic Film" by F. G. West et al., J. Appl. Phys. 34, p. 1163, 1963, and/or "Vapor-Deposited Thin Film Recording Heads", IEEE Trans. on Mag., Vol. MAG-7, p. 675, 1971. In those prior arts, bias flux is applied in the magnetization hard axis direction, and the flux in a core is detected by a winding wound around the core. Due to the presence of the winding, the size of the core must be larger than 500 microns (on each side). Therefore, the flux to be detected must be uniform over a wide area equal to or larger than the size of the core. Further, due to the large size of the core, a plurality of magnetic domains exist in the core, and the magnetic flux in each domain may be random. Of course, the random flux in each domain decreases the sensitivity of flux detection. Accordingly, the above two prior arts cannot be applied to the detection of magnetic flux that is weak and confined to a very narrow, limited area, although their sensitivity for detecting flux that is uniform over a large area is somewhat improved. Therefore, the above two prior arts are not suitable for a magnetic head for high recording density, in which the magnetic flux of each magnetic cell to be detected is limited to a very small area. As described above in detail, a prior magnetic head has the disadvantage that a small bit (less than 1 micron) cannot be reproduced, and it is therefore not capable of reproducing a high recording density signal. Accordingly, an improved magnetic head for use at higher recording densities has been desired.
Pritsiolas v Apple Bankcorp, Inc. (2014 NY Slip Op 05851) Decided on August 20, 2014 Appellate Division, Second Department Published by New York State Law Reporting Bureau pursuant to Judiciary Law § 431. This opinion is uncorrected and subject to revision before publication in the Official Reports. Decided on August 20, 2014 SUPREME COURT OF THE STATE OF NEW YORK Appellate Division, Second Judicial Department WILLIAM F. MASTRO, J.P. THOMAS A. DICKERSON JEFFREY A. COHEN ROBERT J. MILLER, JJ. 2013-06033 2013-10980 (Index No. 12364/12) [*1]James Pritsiolas, et al., appellants, v Apple Bankcorp, Inc., doing business as Apple Bank for Savings, respondent. William A. DiConza, Oyster Bay, N.Y., for appellants. Albanese & Albanese LLP, Garden City, N.Y. (Barry A. Oster of counsel), for respondent. DECISION & ORDER In an action pursuant to RPAPL article 15 to determine claims to real property, the plaintiffs appeal, as limited by their brief, from so much of (1) an order of the Supreme Court, Nassau County (Galasso, J.), entered April 9, 2013, as granted the defendant's cross motion for summary judgment, inter alia, dismissing the complaint, and (2) a judgment of the same court entered May 22, 2013, as, upon the order, inter alia, is in favor of the defendant and against them dismissing the complaint. By decision and order on motion of this Court dated January 15, 2014, as amended by a decision and order on motion dated January 29, 2014, the notice of appeal from the order was deemed also to be a notice of appeal from the judgment (see CPLR 5501[c]). ORDERED that the appeal from the order is dismissed, without costs or disbursements; and it is further, ORDERED that the judgment is modified, on the law, by (1) deleting the first and fifth decretal paragraphs thereof, (2) adding to the second decretal paragraph thereof, following the words "plaintiffs' complaint is hereby dismissed," the words "to the extent that the complaint seeks a determination that the plaintiffs are the owners of that portion of the disputed area of the subject real property which was unfenced as of the date of commencement of the action," and (3) adding to the third decretal paragraph thereof, following the words "Lots 21 and 22," the words "except for that portion of the disputed area of the subject real property which was fenced as of the date of the commencement of the action"; as so modified, the judgment is affirmed insofar as appealed from, without costs or disbursements, that branch of the defendant's motion which was for summary judgment dismissing the complaint is granted only to the extent that the complaint sought a determination that the plaintiffs are the owners of that portion of the disputed area of the subject real property which was unfenced as of the date of the commencement of the action, the order is modified accordingly, and the claim with respect to the unfenced portion of the disputed area is severed. The appeal from the intermediate order must be dismissed because the right of direct appeal therefrom terminated with the entry of judgment in the action (see Matter of Aho, 39 NY2d 241, 248). The issues raised on the appeal from the order are brought up for review and have been [*2]considered on the appeal from the judgment (see CPLR 5501[a][1]). The plaintiffs and the defendant own parcels of real property that are adjacent to one another. The plaintiffs acquired title to their parcel by deed dated January 29, 2001.
On September 28, 2012, the plaintiffs commenced this action, seeking a judgment determining that they are the owners of a strip of property, measuring approximately 5 feet in width and 95 feet in length (hereinafter the disputed area), that runs along the southern boundary of their parcel and encroaches on the northern portion of the defendant's parcel. It is undisputed that a portion of the disputed area has been fenced in since 1992 as part of the rear yard of the residence currently occupied by the plaintiffs. The plaintiffs alleged in their complaint that they acquired title to the entire disputed area in 2002 by adverse possession, arising out of the combined use of the area by themselves and their immediate predecessor in title. The Supreme Court awarded summary judgment in favor of the defendant, inter alia, dismissing the complaint, and determined that the erection of the fence and the actions taken by the plaintiffs and their predecessor with respect to the disputed area were permissive, and not adverse within the meaning of RPAPL 543. The Supreme Court erred in applying RPAPL 543 to this action. Although that statute is generally applicable to actions involving claims of adverse possession that are commenced after its effective date of July 7, 2008, it does not apply where, as in this case, the property interest is alleged to have vested by adverse possession prior to the enactment of the statute (see Shilkoff v Longhitano, 94 AD3d 974, 976), since the statute "cannot be retroactively applied to deprive a claimant of a property right which vested prior to [its] enactment" (Hogan v Kelly, 86 AD3d 590, 592; see Franza v Olin, 73 AD3d 44). Therefore, the law in effect at the time the plaintiffs claim to have acquired title must be applied. In order to demonstrate adverse possession, the plaintiffs were required to satisfy the common-law elements that the possession was (1) hostile and under a claim of right, (2) actual, (3) open and notorious, (4) exclusive, and (5) continuous for the statutory period of 10 years (see Ram v Dann, 84 AD3d 1204, 1205; Corigliano v Sunick, 56 AD3d 1121). Additionally, under the former version of RPAPL 522 that was in effect at the relevant time, the plaintiffs were obligated to establish that the disputed area was either "usually cultivated or improved" or "protected by a substantial inclosure" (Skyview Motel, LLC v Wald, 82 AD3d 1081, 1082 [internal quotation marks omitted]; see BTJ Realty v Caradonna, 65 AD3d 657, 658). Contrary to the determination of the Supreme Court, under the circumstances presented here, the plaintiffs are entitled to tack any period of adverse possession enjoyed by their predecessor in title onto their own period of adverse possession (see Brand v Prince, 35 NY2d 634, 637; Stroem v Plackis, 96 AD3d 1040, 1042). The defendant demonstrated its prima facie entitlement to judgment as a matter of law dismissing the plaintiffs' claim of adverse possession of that portion of the disputed area which was unfenced by submitting evidence that the plaintiffs did not engage in any cultivation or improvement of that portion of the property. In response, the plaintiffs merely alleged in vague and conclusory terms that they "planted, watered, landscaped and maintained the entire area," although they simultaneously admitted that "there is absolutely nothing to maintain" in that portion of the area. 
At best, the plaintiffs assert that they merely attempted to keep the unfenced portion in presentable condition, which is inadequate to satisfy the requirement that the real property in dispute was usually cultivated or improved (see e.g. Walsh v Ellis, 64 AD3d 702, 704; Giannone v Trotwood Corp., 266 AD2d 430, 431; Simpson v Chien Yuan Kao, 222 AD2d 666, 667; Yamin v Daly, 205 AD2d 870, 871). Since the plaintiffs failed to raise a triable issue of fact in opposition to that branch of the defendant's motion which was for summary judgment pertaining to the unfenced portion of the disputed area, the Supreme Court correctly granted that branch of the motion. With regard to the fenced portion, the defendant demonstrated its prima facie entitlement to judgment as a matter of law by submitting the affidavits of its senior vice president and of a professional landscaper who had maintained the defendant's property for some 16 years. These affidavits indicated that the defendant had permitted the encroachment of the fence onto its property as a neighborly accommodation, and that the defendant's landscaper routinely entered the fenced area, with the knowledge and at least the implicit approval of the plaintiffs and their [*3]predecessor, in order to maintain the defendant's property beyond the fence. These affidavits, in conjunction with various documents submitted by the defendants, indicated that the requirement that the possession of the fenced portion by the plaintiffs and their predecessor occurred under a claim of right was not satisfied (see generally Koudellou v Sakalis, 29 AD3d 640, 641; Beyer v Patierno, 29 AD3d 613, 614-615; Bockowski v Malak, 280 AD2d 572; Soukup v Nardone, 212 AD2d 772, 774-775). However, the plaintiffs raised a triable issue of fact in opposition to this branch of the motion by submitting the affidavits of the plaintiff James Pritsiolas and of the plaintiffs' predecessor in title, both of whom denied that the defendant's landscaper had ever entered onto the fenced portion of the disputed area, and who further averred that they at all times considered the fenced portion to be part of the parcel that was conveyed to them. Accordingly, a triable issue of fact exists with regard to whether their possession of the fenced portion was under a claim of right (see generally Corigliano v Sunick, 56 AD3d 1121, 1122) and, therefore, the Supreme Court should have denied that branch of the motion. MASTRO, J.P., DICKERSON, COHEN and MILLER, JJ., concur. ENTER: Aprilanne Agostino Clerk of the Court
/* * threadpool-ms-io.c: Microsoft IO threadpool runtime support * * Author: * Ludovic Henry (ludovic.henry@xamarin.com) * * Copyright 2015 Xamarin, Inc (http://www.xamarin.com) * Licensed under the MIT license. See LICENSE file in the project root for full license information. */ #include "il2cpp-config.h" #if NET_4_0 #ifndef DISABLE_SOCKETS #if IL2CPP_PLATFORM_WIN32 #include "os/Win32/WindowsHeaders.h" #else #include <errno.h> #include <fcntl.h> #endif #include <vector> #include "gc/Allocator.h" #include "mono/ThreadPool/threadpool-ms.h" #include "mono/ThreadPool/threadpool-ms-io.h" #include "mono/ThreadPool/threadpool-ms-io-poll.h" #include "object-internals.h" #include "os/ConditionVariable.h" #include "os/Mutex.h" #include "os/Socket.h" #include "utils/CallOnce.h" #include "utils/Il2CppHashMap.h" #include "vm/Domain.h" #include "vm/Runtime.h" #include "vm/Thread.h" #include "vm/ThreadPool.h" #define UPDATES_CAPACITY 128 typedef std::vector<Il2CppObject*, il2cpp::gc::Allocator<Il2CppObject*> > ManagedList; struct ThreadPoolStateHasher { size_t operator()(int thread) const { return thread; } }; typedef Il2CppHashMap<int, ManagedList*, ThreadPoolStateHasher> ThreadPoolStateHash; typedef enum { UPDATE_EMPTY = 0, UPDATE_ADD, UPDATE_REMOVE_SOCKET, UPDATE_REMOVE_DOMAIN, } ThreadPoolIOUpdateType; typedef struct { int fd; Il2CppIOSelectorJob *job; } ThreadPoolIOUpdate_Add; typedef struct { int fd; } ThreadPoolIOUpdate_RemoveSocket; typedef struct { Il2CppDomain *domain; } ThreadPoolIOUpdate_RemoveDomain; typedef struct { ThreadPoolIOUpdateType type; union { ThreadPoolIOUpdate_Add add; ThreadPoolIOUpdate_RemoveSocket remove_socket; ThreadPoolIOUpdate_RemoveDomain remove_domain; } data; } ThreadPoolIOUpdate; typedef struct { il2cpp::vm::ThreadPool::ThreadPoolIOBackend backend; ThreadPoolIOUpdate* updates; int updates_size; il2cpp::os::FastMutex updates_lock; il2cpp::os::ConditionVariable updates_cond; il2cpp::os::Socket* wakeup_pipes [2]; } ThreadPoolIO; static il2cpp::utils::OnceFlag lazy_init_io_status; static bool io_selector_running = false; static ThreadPoolIO* threadpool_io; static il2cpp::vm::ThreadPool::ThreadPoolIOBackend backend_poll = { poll_init, poll_register_fd, poll_remove_fd, poll_event_wait }; static Il2CppIOSelectorJob* get_job_for_event (ManagedList *list, int32_t event) { IL2CPP_ASSERT(list); Il2CppIOSelectorJob* foundJob = NULL; int matchIndex = -1; for (size_t i = 0; i < list->size(); i++) { Il2CppIOSelectorJob *job = (Il2CppIOSelectorJob*)(*list)[i]; if (job->operation == event) { foundJob = job; matchIndex = (int)i; break; } } if (foundJob == NULL) return NULL; list->erase(list->begin() + matchIndex); return foundJob; } static int get_operations_for_jobs (ManagedList *list) { int operations = 0; for (size_t i = 0; i < list->size(); i++) { operations |= ((Il2CppIOSelectorJob*)(*list)[i])->operation; } return operations; } static void selector_thread_wakeup (void) { const char msg = 'c'; for (;;) { int32_t written = 0; const il2cpp::os::WaitStatus status = threadpool_io->wakeup_pipes[1]->Send((const uint8_t*)&msg, 1, il2cpp::os::kSocketFlagsNone, &written); if (written == 1) break; if (written == -1) { //g_warning ("selector_thread_wakeup: write () failed, error (%d)\n", WSAGetLastError ()); break; } if (status == il2cpp::os::kWaitStatusFailure) break; } } static void selector_thread_wakeup_drain_pipes (void) { uint8_t buffer [128]; for (;;) { int32_t received; il2cpp::os::WaitStatus status = threadpool_io->wakeup_pipes[0]->Receive(buffer, 128, il2cpp::os::kSocketFlagsNone, 
&received); if (received == 0) break; if (status == il2cpp::os::kWaitStatusFailure) break; } } typedef struct { Il2CppDomain *domain; ThreadPoolStateHash *states; } FilterSockaresForDomainData; static void filter_jobs_for_domain (void* key, void* value, void* user_data) { //FilterSockaresForDomainData *data; //MonoMList *list = (MonoMList *)value, *element; //MonoDomain *domain; //MonoGHashTable *states; //IL2CPP_ASSERT(user_data); //data = (FilterSockaresForDomainData *)user_data; //domain = data->domain; //states = data->states; //for (element = list; element; element = mono_mlist_next (element)) { // Il2CppIOSelectorJob *job = (Il2CppIOSelectorJob*) mono_mlist_get_data (element); // if (il2cpp::vm::Domain::GetCurrent() == domain) // mono_mlist_set_data (element, NULL); //} ///* we skip all the first elements which are NULL */ //for (; list; list = mono_mlist_next (list)) { // if (mono_mlist_get_data (list)) // break; //} //if (list) { // IL2CPP_ASSERT(mono_mlist_get_data (list)); // /* we delete all the NULL elements after the first one */ // for (element = list; element;) { // MonoMList *next; // if (!(next = mono_mlist_next (element))) // break; // if (mono_mlist_get_data (next)) // element = next; // else // mono_mlist_set_next (element, mono_mlist_next (next)); // } //} //mono_g_hash_table_replace (states, key, list); NOT_IMPLEMENTED("TODO"); } static void wait_callback (int fd, int events, void* user_data) { //Il2CppError error; if (il2cpp::vm::Runtime::IsShuttingDown ()) return; if (fd == threadpool_io->wakeup_pipes [0]->GetDescriptor()) { //mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_IO_THREADPOOL, "io threadpool: wke"); selector_thread_wakeup_drain_pipes (); } else { ThreadPoolStateHash *states; ManagedList *list = NULL; //void* k; bool remove_fd = false; int operations; IL2CPP_ASSERT(user_data); states = (ThreadPoolStateHash *)user_data; /*mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_IO_THREADPOOL, "io threadpool: cal fd %3d, events = %2s | %2s | %3s", fd, (events & EVENT_IN) ? "RD" : "..", (events & EVENT_OUT) ? "WR" : "..", (events & EVENT_ERR) ? "ERR" : "...");*/ ThreadPoolStateHash::iterator iter = states->find(fd); bool exists = iter != states->end(); if (!exists) IL2CPP_ASSERT(0 && "wait_callback: fd not found in states table"); /* a bare string literal is always true, so the assert could never fire as written */ else list = iter->second; if (list && (events & EVENT_IN) != 0) { Il2CppIOSelectorJob *job = get_job_for_event (list, EVENT_IN); if (job) { threadpool_ms_enqueue_work_item (il2cpp::vm::Domain::GetCurrent(), (Il2CppObject*) job); } } if (list && (events & EVENT_OUT) != 0) { Il2CppIOSelectorJob *job = get_job_for_event (list, EVENT_OUT); if (job) { threadpool_ms_enqueue_work_item (il2cpp::vm::Domain::GetCurrent(), (Il2CppObject*) job); } } remove_fd = (events & EVENT_ERR) == EVENT_ERR; if (!remove_fd) { //mono_g_hash_table_replace (states, int_TO_POINTER (fd), list); states->insert(ThreadPoolStateHash::value_type(fd, list)); operations = get_operations_for_jobs (list); /*mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_IO_THREADPOOL, "io threadpool: res fd %3d, events = %2s | %2s | %3s", fd, (operations & EVENT_IN) ? "RD" : "..", (operations & EVENT_OUT) ? "WR" : "..", (operations & EVENT_ERR) ?
"ERR" : "...");*/ threadpool_io->backend.register_fd (fd, operations, false); } else { //mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_IO_THREADPOOL, "io threadpool: err fd %d", fd); states->erase(ThreadPoolStateHash::key_type(fd)); //mono_g_hash_table_remove (states, int_TO_POINTER (fd)); threadpool_io->backend.remove_fd (fd); } } } static void selector_thread (void* data) { //Il2CppError error; ThreadPoolStateHash *states; io_selector_running = true; if (il2cpp::vm::Runtime::IsShuttingDown ()) { io_selector_running = false; return; } states = new ThreadPoolStateHash(); //states = mono_g_hash_table_new_type (g_direct_hash, g_direct_equal, MONO_HASH_VALUE_GC, MONO_ROOT_SOURCE_THREAD_POOL, "i/o thread pool states table"); for (;;) { int i, j; int res; threadpool_io->updates_lock.Lock(); for (i = 0; i < threadpool_io->updates_size; ++i) { ThreadPoolIOUpdate *update = &threadpool_io->updates [i]; switch (update->type) { case UPDATE_EMPTY: break; case UPDATE_ADD: { int fd; int operations; //void* k; bool exists; ManagedList *list = NULL; Il2CppIOSelectorJob *job; fd = update->data.add.fd; IL2CPP_ASSERT(fd >= 0); job = update->data.add.job; IL2CPP_ASSERT(job); ThreadPoolStateHash::iterator iter = states->find(fd); exists = iter != states->end(); if (!exists) list = new ManagedList(); else list = iter->second; //exists = mono_g_hash_table_lookup_extended (states, int_TO_POINTER (fd), &k, (void**) &list); list->push_back((Il2CppObject*)job); states->insert(ThreadPoolStateHash::value_type(fd, list)); //mono_g_hash_table_replace (states, int_TO_POINTER (fd), list); operations = get_operations_for_jobs (list); /*mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_IO_THREADPOOL, "io threadpool: %3s fd %3d, operations = %2s | %2s | %3s", exists ? "mod" : "add", fd, (operations & EVENT_IN) ? "RD" : "..", (operations & EVENT_OUT) ? "WR" : "..", (operations & EVENT_ERR) ? 
"ERR" : "...");*/ threadpool_io->backend.register_fd (fd, operations, !exists); break; } case UPDATE_REMOVE_SOCKET: { int fd; //void* k; ManagedList *list = NULL; fd = update->data.remove_socket.fd; IL2CPP_ASSERT(fd >= 0); ThreadPoolStateHash::iterator iter = states->find(fd); bool exists = iter != states->end(); /*if (mono_g_hash_table_lookup_extended (states, int_TO_POINTER (fd), &k, (void**) &list))*/ if (exists) { states->erase(ThreadPoolStateHash::key_type(fd)); //mono_g_hash_table_remove (states, int_TO_POINTER (fd)); for (j = i + 1; j < threadpool_io->updates_size; ++j) { ThreadPoolIOUpdate *update = &threadpool_io->updates [j]; if (update->type == UPDATE_ADD && update->data.add.fd == fd) memset (update, 0, sizeof (ThreadPoolIOUpdate)); } for (size_t i = 0; i < list->size(); i++) { threadpool_ms_enqueue_work_item(il2cpp::vm::Domain::GetCurrent(), (*list)[i]); } list->clear(); //mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_IO_THREADPOOL, "io threadpool: del fd %3d", fd); threadpool_io->backend.remove_fd (fd); } break; } case UPDATE_REMOVE_DOMAIN: { Il2CppDomain *domain; domain = update->data.remove_domain.domain; IL2CPP_ASSERT(domain); FilterSockaresForDomainData user_data = { domain, states }; //mono_g_hash_table_foreach (states, filter_jobs_for_domain, &user_data); for (j = i + 1; j < threadpool_io->updates_size; ++j) { ThreadPoolIOUpdate *update = &threadpool_io->updates [j]; if (update->type == UPDATE_ADD && il2cpp::vm::Domain::GetCurrent() == domain) memset (update, 0, sizeof (ThreadPoolIOUpdate)); } break; } default: IL2CPP_ASSERT(0 && "Should not be reached"); } } threadpool_io->updates_cond.Broadcast(); if (threadpool_io->updates_size > 0) { threadpool_io->updates_size = 0; memset (threadpool_io->updates, 0, UPDATES_CAPACITY * sizeof (ThreadPoolIOUpdate)); } threadpool_io->updates_lock.Unlock(); //mono_trace (G_LOG_LEVEL_DEBUG, MONO_TRACE_IO_THREADPOOL, "io threadpool: wai"); res = threadpool_io->backend.event_wait (wait_callback, states); if (res == -1 || il2cpp::vm::Runtime::IsShuttingDown ()) break; } delete states; io_selector_running = false; } /* Locking: threadpool_io->updates_lock must be held */ static ThreadPoolIOUpdate* update_get_new (void) { ThreadPoolIOUpdate *update = NULL; IL2CPP_ASSERT(threadpool_io->updates_size <= UPDATES_CAPACITY); while (threadpool_io->updates_size == UPDATES_CAPACITY) { /* we wait for updates to be applied in the selector_thread and we loop * as long as none are available. 
if it happens too much, then we need * to increase UPDATES_CAPACITY */ threadpool_io->updates_cond.Wait(&threadpool_io->updates_lock); } IL2CPP_ASSERT(threadpool_io->updates_size < UPDATES_CAPACITY); update = &threadpool_io->updates [threadpool_io->updates_size ++]; return update; } static void wakeup_pipes_init(void) { il2cpp::os::Socket serverSock(NULL); serverSock.Create(il2cpp::os::kAddressFamilyInterNetwork, il2cpp::os::kSocketTypeStream, il2cpp::os::kProtocolTypeTcp); threadpool_io->wakeup_pipes[1] = new il2cpp::os::Socket(NULL); il2cpp::os::WaitStatus status = threadpool_io->wakeup_pipes[1]->Create(il2cpp::os::kAddressFamilyInterNetwork, il2cpp::os::kSocketTypeStream, il2cpp::os::kProtocolTypeTcp); IL2CPP_ASSERT(status != il2cpp::os::kWaitStatusFailure); if (serverSock.Bind("127.0.0.1", 0) == il2cpp::os::kWaitStatusFailure) { serverSock.Close(); IL2CPP_ASSERT(0 && "wakeup_pipes_init: bind () failed"); } il2cpp::os::EndPointInfo info; memset(&info, 0x00, sizeof(il2cpp::os::EndPointInfo)); if (serverSock.GetLocalEndPointInfo(info) == il2cpp::os::kWaitStatusFailure) { serverSock.Close(); IL2CPP_ASSERT(0 && "wakeup_pipes_init: getsockname () failed"); } if (serverSock.Listen(1024) == il2cpp::os::kWaitStatusFailure) { serverSock.Close(); IL2CPP_ASSERT(0 && "wakeup_pipes_init: listen () failed"); } if (threadpool_io->wakeup_pipes[1]->Connect(info.data.inet.address, info.data.inet.port) == il2cpp::os::kWaitStatusFailure) { serverSock.Close(); IL2CPP_ASSERT(0 && "wakeup_pipes_init: connect () failed"); } status = serverSock.Accept(&threadpool_io->wakeup_pipes[0]); IL2CPP_ASSERT(status != il2cpp::os::kWaitStatusFailure); status = threadpool_io->wakeup_pipes[0]->SetBlocking(false); if (status == il2cpp::os::kWaitStatusFailure) threadpool_io->wakeup_pipes[0]->Close(); /* close the listening socket exactly once on all paths (it was previously closed twice on the failure path) */ serverSock.Close(); } static bool lazy_is_initialized() { return lazy_init_io_status.IsSet(); } static void initialize(void* args) { IL2CPP_ASSERT(!threadpool_io); threadpool_io = new ThreadPoolIO(); IL2CPP_ASSERT(threadpool_io); threadpool_io->updates = (ThreadPoolIOUpdate*)il2cpp::gc::GarbageCollector::AllocateFixed(sizeof(ThreadPoolIOUpdate) * UPDATES_CAPACITY, NULL); threadpool_io->updates_size = 0; threadpool_io->backend = backend_poll; // if (g_getenv ("MONO_ENABLE_AIO") != NULL) { //#if defined(HAVE_EPOLL) // threadpool_io->backend = backend_epoll; //#elif defined(HAVE_KQUEUE) // threadpool_io->backend = backend_kqueue; //#endif // } wakeup_pipes_init (); if (!threadpool_io->backend.init ((int)threadpool_io->wakeup_pipes [0]->GetDescriptor())) IL2CPP_ASSERT(0 && "initialize: backend->init () failed"); if (!il2cpp::vm::Thread::CreateInternal(selector_thread, NULL, true, SMALL_STACK)) IL2CPP_ASSERT(0 && "initialize: vm::Thread::CreateInternal () failed "); } static void lazy_initialize() { il2cpp::utils::CallOnce(lazy_init_io_status, initialize, NULL); } static void cleanup (void) { /* we make the assumption along the code that we are * cleaning up only if the runtime is shutting down */ IL2CPP_ASSERT(il2cpp::vm::Runtime::IsShuttingDown ()); selector_thread_wakeup (); while (io_selector_running) il2cpp::vm::Thread::Sleep(1000); } void threadpool_ms_io_cleanup (void) { if (lazy_init_io_status.IsSet()) cleanup(); } void ves_icall_System_IOSelector_Add (Il2CppIntPtr handle, Il2CppIOSelectorJob *job) { ThreadPoolIOUpdate *update; IL2CPP_ASSERT(handle.m_value >= 0); IL2CPP_ASSERT((job->operation == EVENT_IN) ^ (job->operation == EVENT_OUT)); IL2CPP_ASSERT(job->callback); if
(il2cpp::vm::Runtime::IsShuttingDown ()) return; /*if (mono_domain_is_unloading (mono_object_domain (job))) return;*/ lazy_initialize (); threadpool_io->updates_lock.Lock(); update = update_get_new (); il2cpp::os::SocketHandleWrapper socketHandle(il2cpp::os::PointerToSocketHandle(handle.m_value)); update->type = UPDATE_ADD; update->data.add.fd = (int)socketHandle.GetSocket()->GetDescriptor(); update->data.add.job = job; il2cpp::os::Atomic::MemoryBarrier(); /* Ensure this is safely published before we wake up the selector */ selector_thread_wakeup (); threadpool_io->updates_lock.Unlock(); } void ves_icall_System_IOSelector_Remove (Il2CppIntPtr handle) { il2cpp::os::SocketHandleWrapper socketHandle(il2cpp::os::PointerToSocketHandle(handle.m_value)); threadpool_ms_io_remove_socket ((int)socketHandle.GetSocket()->GetDescriptor()); } void threadpool_ms_io_remove_socket (int fd) { ThreadPoolIOUpdate *update; if (!lazy_is_initialized ()) return; threadpool_io->updates_lock.Lock(); update = update_get_new (); update->type = UPDATE_REMOVE_SOCKET; update->data.remove_socket.fd = fd; /* write through the matching union member (was data.add.fd, which only worked because the members share layout) */ il2cpp::os::Atomic::MemoryBarrier(); /* Ensure this is safely published before we wake up the selector */ selector_thread_wakeup (); threadpool_io->updates_cond.Wait(&threadpool_io->updates_lock); threadpool_io->updates_lock.Unlock(); } #else void ves_icall_System_IOSelector_Add (Il2CppIntPtr handle, Il2CppIOSelectorJob *job) { IL2CPP_ASSERT(0 && "Should not be called"); } void ves_icall_System_IOSelector_Remove (Il2CppIntPtr handle) { IL2CPP_ASSERT(0 && "Should not be called"); } void threadpool_ms_io_cleanup (void) { IL2CPP_ASSERT(0 && "Should not be called"); } void threadpool_ms_io_remove_socket (int fd) { IL2CPP_ASSERT(0 && "Should not be called"); } #endif #endif
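The wakeup mechanism above (wakeup_pipes_init, selector_thread_wakeup, and selector_thread_wakeup_drain_pipes) emulates a self-pipe with a pair of connected loopback TCP sockets: a throwaway listener on 127.0.0.1 is dialed exactly once, the accepted end becomes the non-blocking read side polled by the selector, and writing a single byte to the other end wakes the poll loop. Below is a minimal sketch of the same pattern, in Go for brevity; the helper name loopbackPair is ours, not part of any of the APIs above.

```go
package main

import (
	"fmt"
	"net"
)

// loopbackPair builds the equivalent of the wakeup "pipes" in wakeup_pipes_init:
// a throwaway listener on 127.0.0.1 is dialed exactly once, yielding two
// connected ends of a loopback TCP stream.
func loopbackPair() (recv, send net.Conn, err error) {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return nil, nil, err
	}
	defer ln.Close() // the listener is only needed to establish the pair

	send, err = net.Dial("tcp", ln.Addr().String())
	if err != nil {
		return nil, nil, err
	}
	recv, err = ln.Accept()
	if err != nil {
		send.Close()
		return nil, nil, err
	}
	return recv, send, nil
}

func main() {
	recv, send, err := loopbackPair()
	if err != nil {
		panic(err)
	}
	defer recv.Close()
	defer send.Close()

	// selector_thread_wakeup: a single byte is enough to wake the poll loop.
	if _, err := send.Write([]byte{'c'}); err != nil {
		panic(err)
	}

	// selector_thread_wakeup_drain_pipes: read off the pending wakeup bytes.
	buf := make([]byte, 128)
	n, err := recv.Read(buf)
	if err != nil {
		panic(err)
	}
	fmt.Printf("drained %d wakeup byte(s)\n", n)
}
```

Building the pair from TCP sockets rather than anonymous pipes is presumably a portability choice: a socket descriptor can be handed to the same poll backend as the real I/O sockets on every supported platform.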
// Copyright (c) 2020 Tigera, Inc. All rights reserved. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package intdataplane import ( "fmt" "io" "net" "os" "reflect" "regexp" "strings" log "github.com/sirupsen/logrus" "github.com/projectcalico/felix/ifacemonitor" "github.com/projectcalico/felix/ip" "github.com/projectcalico/felix/iptables" "github.com/projectcalico/felix/proto" "github.com/projectcalico/felix/routetable" "github.com/projectcalico/felix/rules" "github.com/projectcalico/libcalico-go/lib/set" ) // routeTableSyncer is the interface used to manage data-sync of route table managers. This includes notification of // interface state changes, hooks to queue a full resync and apply routing updates. type routeTableSyncer interface { OnIfaceStateChanged(string, ifacemonitor.State) QueueResync() Apply() error } // routeTable is the interface provided by the standard routetable module used to program the RIB. type routeTable interface { routeTableSyncer SetRoutes(ifaceName string, targets []routetable.Target) SetL2Routes(ifaceName string, targets []routetable.L2Target) } type endpointManagerCallbacks struct { addInterface *AddInterfaceFuncs removeInterface *RemoveInterfaceFuncs updateInterface *UpdateInterfaceFuncs updateHostEndpoint *UpdateHostEndpointFuncs removeHostEndpoint *RemoveHostEndpointFuncs updateWorkloadEndpoint *UpdateWorkloadEndpointFuncs removeWorkloadEndpoint *RemoveWorkloadEndpointFuncs } func newEndpointManagerCallbacks(callbacks *callbacks, ipVersion uint8) endpointManagerCallbacks { if ipVersion == 4 { return endpointManagerCallbacks{ addInterface: callbacks.AddInterfaceV4, removeInterface: callbacks.RemoveInterfaceV4, updateInterface: callbacks.UpdateInterfaceV4, updateHostEndpoint: callbacks.UpdateHostEndpointV4, removeHostEndpoint: callbacks.RemoveHostEndpointV4, updateWorkloadEndpoint: callbacks.UpdateWorkloadEndpointV4, removeWorkloadEndpoint: callbacks.RemoveWorkloadEndpointV4, } } else { return endpointManagerCallbacks{ addInterface: &AddInterfaceFuncs{}, removeInterface: &RemoveInterfaceFuncs{}, updateInterface: &UpdateInterfaceFuncs{}, updateHostEndpoint: &UpdateHostEndpointFuncs{}, removeHostEndpoint: &RemoveHostEndpointFuncs{}, updateWorkloadEndpoint: &UpdateWorkloadEndpointFuncs{}, removeWorkloadEndpoint: &RemoveWorkloadEndpointFuncs{}, } } } func (c *endpointManagerCallbacks) InvokeInterfaceCallbacks(old, new map[string]proto.HostEndpointID) { for ifaceName, oldEpID := range old { if newEpID, ok := new[ifaceName]; ok { if oldEpID != newEpID { c.updateInterface.Invoke(ifaceName, newEpID) } } else { c.removeInterface.Invoke(ifaceName) } } for ifaceName, newEpID := range new { if _, ok := old[ifaceName]; !ok { c.addInterface.Invoke(ifaceName, newEpID) } } } func (c *endpointManagerCallbacks) InvokeUpdateHostEndpoint(hostEpID proto.HostEndpointID) { c.updateHostEndpoint.Invoke(hostEpID) } func (c *endpointManagerCallbacks) InvokeRemoveHostEndpoint(hostEpID proto.HostEndpointID) { c.removeHostEndpoint.Invoke(hostEpID) } func (c *endpointManagerCallbacks)
InvokeUpdateWorkload(old, new *proto.WorkloadEndpoint) { c.updateWorkloadEndpoint.Invoke(old, new) } func (c *endpointManagerCallbacks) InvokeRemoveWorkload(old *proto.WorkloadEndpoint) { c.removeWorkloadEndpoint.Invoke(old) } // endpointManager manages the dataplane resources that belong to each endpoint as well as // the "dispatch chains" that fan out packets to the right per-endpoint chain. // // It programs the relevant iptables chains (via the iptables.Table objects) along with // per-endpoint routes (via the RouteTable). // // Since calculating the dispatch chains is fairly expensive, the main OnUpdate method // simply records the pending state of each interface and defers the actual calculation // to CompleteDeferredWork(). This is also the basis of our failure handling; updates // that fail are left in the pending state so they can be retried later. type endpointManager struct { // Config. ipVersion uint8 wlIfacesRegexp *regexp.Regexp kubeIPVSSupportEnabled bool // Our dependencies. rawTable iptablesTable mangleTable iptablesTable filterTable iptablesTable ruleRenderer rules.RuleRenderer routeTable routeTable writeProcSys procSysWriter osStat func(path string) (os.FileInfo, error) epMarkMapper rules.EndpointMarkMapper // Pending updates, cleared in CompleteDeferredWork as the data is copied to the activeXYZ // fields. pendingWlEpUpdates map[proto.WorkloadEndpointID]*proto.WorkloadEndpoint pendingIfaceUpdates map[string]ifacemonitor.State // Active state, updated in CompleteDeferredWork. activeWlEndpoints map[proto.WorkloadEndpointID]*proto.WorkloadEndpoint activeWlIfaceNameToID map[string]proto.WorkloadEndpointID activeUpIfaces set.Set activeWlIDToChains map[proto.WorkloadEndpointID][]*iptables.Chain activeWlDispatchChains map[string]*iptables.Chain activeEPMarkDispatchChains map[string]*iptables.Chain // Workload endpoints that would be locally active but are 'shadowed' by other endpoints // with the same interface name. shadowedWlEndpoints map[proto.WorkloadEndpointID]*proto.WorkloadEndpoint // wlIfaceNamesToReconfigure contains names of workload interfaces that need to have // their configuration (sysctls etc.) refreshed. wlIfaceNamesToReconfigure set.Set // epIDsToUpdateStatus contains IDs of endpoints that we need to report status for. // Mix of host and workload endpoint IDs. epIDsToUpdateStatus set.Set // hostIfaceToAddrs maps host interface name to the set of IPs on that interface (reported // from the dataplane). hostIfaceToAddrs map[string]set.Set // rawHostEndpoints contains the raw (i.e. not resolved to interface) host endpoints. rawHostEndpoints map[proto.HostEndpointID]*proto.HostEndpoint // hostEndpointsDirty is set to true when host endpoints are updated. hostEndpointsDirty bool // activeHostIfaceToChains maps host interface name to the chains that we've programmed. activeHostIfaceToRawChains map[string][]*iptables.Chain activeHostIfaceToFiltChains map[string][]*iptables.Chain activeHostIfaceToMangleChains map[string][]*iptables.Chain // Dispatch chains that we've programmed for host endpoints. activeHostRawDispatchChains map[string]*iptables.Chain activeHostFilterDispatchChains map[string]*iptables.Chain activeHostMangleDispatchChains map[string]*iptables.Chain // activeHostEpIDToIfaceNames records which interfaces we resolved each host endpoint to. activeHostEpIDToIfaceNames map[proto.HostEndpointID][]string // activeIfaceNameToHostEpID records which endpoint we resolved each host interface to.
activeIfaceNameToHostEpID map[string]proto.HostEndpointID needToCheckDispatchChains bool needToCheckEndpointMarkChains bool // Callbacks OnEndpointStatusUpdate EndpointStatusUpdateCallback callbacks endpointManagerCallbacks bpfEnabled bool } type EndpointStatusUpdateCallback func(ipVersion uint8, id interface{}, status string) type procSysWriter func(path, value string) error func newEndpointManager( rawTable iptablesTable, mangleTable iptablesTable, filterTable iptablesTable, ruleRenderer rules.RuleRenderer, routeTable routeTable, ipVersion uint8, epMarkMapper rules.EndpointMarkMapper, kubeIPVSSupportEnabled bool, wlInterfacePrefixes []string, onWorkloadEndpointStatusUpdate EndpointStatusUpdateCallback, bpfEnabled bool, callbacks *callbacks, ) *endpointManager { return newEndpointManagerWithShims( rawTable, mangleTable, filterTable, ruleRenderer, routeTable, ipVersion, epMarkMapper, kubeIPVSSupportEnabled, wlInterfacePrefixes, onWorkloadEndpointStatusUpdate, writeProcSys, os.Stat, bpfEnabled, callbacks, ) } func newEndpointManagerWithShims( rawTable iptablesTable, mangleTable iptablesTable, filterTable iptablesTable, ruleRenderer rules.RuleRenderer, routeTable routeTable, ipVersion uint8, epMarkMapper rules.EndpointMarkMapper, kubeIPVSSupportEnabled bool, wlInterfacePrefixes []string, onWorkloadEndpointStatusUpdate EndpointStatusUpdateCallback, procSysWriter procSysWriter, osStat func(name string) (os.FileInfo, error), bpfEnabled bool, callbacks *callbacks, ) *endpointManager { wlIfacesPattern := "^(" + strings.Join(wlInterfacePrefixes, "|") + ").*" wlIfacesRegexp := regexp.MustCompile(wlIfacesPattern) return &endpointManager{ ipVersion: ipVersion, wlIfacesRegexp: wlIfacesRegexp, kubeIPVSSupportEnabled: kubeIPVSSupportEnabled, bpfEnabled: bpfEnabled, rawTable: rawTable, mangleTable: mangleTable, filterTable: filterTable, ruleRenderer: ruleRenderer, routeTable: routeTable, writeProcSys: procSysWriter, osStat: osStat, epMarkMapper: epMarkMapper, // Pending updates, we store these up as OnUpdate is called, then process them // in CompleteDeferredWork and transfer the important data to the activeXYX fields. pendingWlEpUpdates: map[proto.WorkloadEndpointID]*proto.WorkloadEndpoint{}, pendingIfaceUpdates: map[string]ifacemonitor.State{}, activeUpIfaces: set.New(), activeWlEndpoints: map[proto.WorkloadEndpointID]*proto.WorkloadEndpoint{}, activeWlIfaceNameToID: map[string]proto.WorkloadEndpointID{}, activeWlIDToChains: map[proto.WorkloadEndpointID][]*iptables.Chain{}, shadowedWlEndpoints: map[proto.WorkloadEndpointID]*proto.WorkloadEndpoint{}, wlIfaceNamesToReconfigure: set.New(), epIDsToUpdateStatus: set.New(), hostIfaceToAddrs: map[string]set.Set{}, rawHostEndpoints: map[proto.HostEndpointID]*proto.HostEndpoint{}, hostEndpointsDirty: true, activeHostIfaceToRawChains: map[string][]*iptables.Chain{}, activeHostIfaceToFiltChains: map[string][]*iptables.Chain{}, activeHostIfaceToMangleChains: map[string][]*iptables.Chain{}, // Caches of the current dispatch chains indexed by chain name. We use these to // calculate deltas when we need to update the chains. activeWlDispatchChains: map[string]*iptables.Chain{}, activeHostFilterDispatchChains: map[string]*iptables.Chain{}, activeHostMangleDispatchChains: map[string]*iptables.Chain{}, activeHostRawDispatchChains: map[string]*iptables.Chain{}, activeEPMarkDispatchChains: map[string]*iptables.Chain{}, needToCheckDispatchChains: true, // Need to do start-of-day update. needToCheckEndpointMarkChains: true, // Need to do start-of-day update. 
OnEndpointStatusUpdate: onWorkloadEndpointStatusUpdate, callbacks: newEndpointManagerCallbacks(callbacks, ipVersion), } } func (m *endpointManager) OnUpdate(protoBufMsg interface{}) { log.WithField("msg", protoBufMsg).Debug("Received message") switch msg := protoBufMsg.(type) { case *proto.WorkloadEndpointUpdate: m.pendingWlEpUpdates[*msg.Id] = msg.Endpoint case *proto.WorkloadEndpointRemove: m.pendingWlEpUpdates[*msg.Id] = nil case *proto.HostEndpointUpdate: log.WithField("msg", msg).Debug("Host endpoint update") m.callbacks.InvokeUpdateHostEndpoint(*msg.Id) m.rawHostEndpoints[*msg.Id] = msg.Endpoint m.hostEndpointsDirty = true m.epIDsToUpdateStatus.Add(*msg.Id) case *proto.HostEndpointRemove: log.WithField("msg", msg).Debug("Host endpoint removed") m.callbacks.InvokeRemoveHostEndpoint(*msg.Id) delete(m.rawHostEndpoints, *msg.Id) m.hostEndpointsDirty = true m.epIDsToUpdateStatus.Add(*msg.Id) case *ifaceUpdate: log.WithField("update", msg).Debug("Interface state changed.") m.pendingIfaceUpdates[msg.Name] = msg.State case *ifaceAddrsUpdate: log.WithField("update", msg).Debug("Interface addrs changed.") if m.wlIfacesRegexp.MatchString(msg.Name) { log.WithField("update", msg).Debug("Workload interface, ignoring.") return } if msg.Addrs != nil { m.hostIfaceToAddrs[msg.Name] = msg.Addrs } else { delete(m.hostIfaceToAddrs, msg.Name) } m.hostEndpointsDirty = true } } func (m *endpointManager) CompleteDeferredWork() error { // Copy the pending interface state to the active set and mark any interfaces that have // changed state for reconfiguration by resolveWorkload/HostEndpoints() for ifaceName, state := range m.pendingIfaceUpdates { if state == ifacemonitor.StateUp { m.activeUpIfaces.Add(ifaceName) if m.wlIfacesRegexp.MatchString(ifaceName) { log.WithField("ifaceName", ifaceName).Info( "Workload interface came up, marking for reconfiguration.") m.wlIfaceNamesToReconfigure.Add(ifaceName) } } else { m.activeUpIfaces.Discard(ifaceName) } // If this interface is linked to any already-existing endpoints, mark the endpoint // status for recalculation. If the matching endpoint changes when we do // resolveHostEndpoints() then that will mark old and new matching endpoints for // update. m.markEndpointStatusDirtyByIface(ifaceName) // Clean up as we go... delete(m.pendingIfaceUpdates, ifaceName) } m.resolveWorkloadEndpoints() if m.hostEndpointsDirty { log.Debug("Host endpoints updated, resolving them.") m.resolveHostEndpoints() m.hostEndpointsDirty = false } if m.kubeIPVSSupportEnabled && m.needToCheckEndpointMarkChains { m.resolveEndpointMarks() m.needToCheckEndpointMarkChains = false } // Now send any endpoint status updates. m.updateEndpointStatuses() return nil } func (m *endpointManager) GetRouteTableSyncers() []routeTableSyncer { return []routeTableSyncer{m.routeTable} } func (m *endpointManager) markEndpointStatusDirtyByIface(ifaceName string) { logCxt := log.WithField("ifaceName", ifaceName) if epID, ok := m.activeWlIfaceNameToID[ifaceName]; ok { logCxt.Info("Workload interface state changed; marking for status update.") m.epIDsToUpdateStatus.Add(epID) } else if epID, ok := m.activeIfaceNameToHostEpID[ifaceName]; ok { logCxt.Info("Host interface state changed; marking for status update.") m.epIDsToUpdateStatus.Add(epID) } else { // We don't know about this interface yet (or it's already been deleted). // If the endpoint gets created, we'll do the update then. If it's been // deleted, we've already cleaned it up. 
logCxt.Debug("Ignoring interface state change for unknown interface.") } } func (m *endpointManager) updateEndpointStatuses() { log.WithField("dirtyEndpoints", m.epIDsToUpdateStatus).Debug("Reporting endpoint status.") m.epIDsToUpdateStatus.Iter(func(item interface{}) error { switch id := item.(type) { case proto.WorkloadEndpointID: status := m.calculateWorkloadEndpointStatus(id) m.OnEndpointStatusUpdate(m.ipVersion, id, status) case proto.HostEndpointID: status := m.calculateHostEndpointStatus(id) m.OnEndpointStatusUpdate(m.ipVersion, id, status) } return set.RemoveItem }) } func (m *endpointManager) calculateWorkloadEndpointStatus(id proto.WorkloadEndpointID) string { logCxt := log.WithField("workloadEndpointID", id) logCxt.Debug("Re-evaluating workload endpoint status") var operUp, adminUp, failed bool workload, known := m.activeWlEndpoints[id] if known { adminUp = workload.State == "active" operUp = m.activeUpIfaces.Contains(workload.Name) failed = m.wlIfaceNamesToReconfigure.Contains(workload.Name) } // Note: if endpoint is not known (i.e. has been deleted), status will be "", which signals // a deletion. var status string if known { if failed { status = "error" } else if operUp && adminUp { status = "up" } else { status = "down" } } logCxt = logCxt.WithFields(log.Fields{ "known": known, "failed": failed, "operUp": operUp, "adminUp": adminUp, "status": status, }) logCxt.Info("Re-evaluated workload endpoint status") return status } func (m *endpointManager) calculateHostEndpointStatus(id proto.HostEndpointID) (status string) { logCxt := log.WithField("hostEndpointID", id) logCxt.Debug("Re-evaluating host endpoint status") var resolved, operUp bool _, known := m.rawHostEndpoints[id] // Note: if endpoint is not known (i.e. has been deleted), status will be "", which signals // a deletion. if known { ifaceNames := m.activeHostEpIDToIfaceNames[id] if len(ifaceNames) > 0 { resolved = true operUp = true for _, ifaceName := range ifaceNames { if ifaceName == allInterfaces { // For * host endpoints we don't let particular interfaces // impact their reported status, because it's unclear what // the semantics would be, and we'd potentially have to look // at every interface on the host. continue } ifaceUp := m.activeUpIfaces.Contains(ifaceName) logCxt.WithFields(log.Fields{ "ifaceName": ifaceName, "ifaceUp": ifaceUp, }).Debug("Status of matching interface.") operUp = operUp && ifaceUp } } if resolved && operUp { status = "up" } else if resolved { status = "down" } else { // Known but failed to resolve, map that to error. status = "error" } } logCxt = logCxt.WithFields(log.Fields{ "known": known, "resolved": resolved, "operUp": operUp, "status": status, }) logCxt.Info("Re-evaluated host endpoint status") return status } func (m *endpointManager) resolveWorkloadEndpoints() { if len(m.pendingWlEpUpdates) > 0 { // We're about to make endpoint updates, make sure we recheck the dispatch chains. m.needToCheckDispatchChains = true } removeActiveWorkload := func(logCxt *log.Entry, oldWorkload *proto.WorkloadEndpoint, id proto.WorkloadEndpointID) { m.callbacks.InvokeRemoveWorkload(oldWorkload) m.filterTable.RemoveChains(m.activeWlIDToChains[id]) delete(m.activeWlIDToChains, id) if oldWorkload != nil { m.epMarkMapper.ReleaseEndpointMark(oldWorkload.Name) // Remove any routes from the routing table. The RouteTable will remove any // conntrack entries as a side-effect. 
logCxt.Info("Workload removed, deleting old state.") m.routeTable.SetRoutes(oldWorkload.Name, nil) m.wlIfaceNamesToReconfigure.Discard(oldWorkload.Name) delete(m.activeWlIfaceNameToID, oldWorkload.Name) } delete(m.activeWlEndpoints, id) } // Repeat the following loop until the pending update map is empty. Note that it's possible // for an endpoint deletion to add a further update into the map (for a previously shadowed // endpoint), so we cannot assume that a single iteration will always be enough. for len(m.pendingWlEpUpdates) > 0 { // Handle pending workload endpoint updates. for id, workload := range m.pendingWlEpUpdates { logCxt := log.WithField("id", id) oldWorkload := m.activeWlEndpoints[id] if workload != nil { // Check if there is already an active workload endpoint with the same // interface name. if existingId, ok := m.activeWlIfaceNameToID[workload.Name]; ok && existingId != id { // There is. We need to decide which endpoint takes preference. // (We presume this is some kind of make before break logic, and the // situation will shortly be resolved by one of the endpoints being // removed. But in the meantime we must have predictable // behaviour.) logCxt.WithFields(log.Fields{ "interfaceName": workload.Name, "existingId": existingId, }).Info("New endpoint has same iface name as existing") if wlIdsAscending(&existingId, &id) { logCxt.Info("Existing endpoint takes preference") m.shadowedWlEndpoints[id] = workload delete(m.pendingWlEpUpdates, id) continue } logCxt.Info("New endpoint takes preference; remove existing") m.shadowedWlEndpoints[existingId] = m.activeWlEndpoints[existingId] removeActiveWorkload(logCxt, m.activeWlEndpoints[existingId], existingId) } logCxt.Info("Updating per-endpoint chains.") if oldWorkload != nil && oldWorkload.Name != workload.Name { logCxt.Debug("Interface name changed, cleaning up old state") m.epMarkMapper.ReleaseEndpointMark(oldWorkload.Name) if !m.bpfEnabled { m.filterTable.RemoveChains(m.activeWlIDToChains[id]) } m.routeTable.SetRoutes(oldWorkload.Name, nil) m.wlIfaceNamesToReconfigure.Discard(oldWorkload.Name) delete(m.activeWlIfaceNameToID, oldWorkload.Name) } var ingressPolicyNames, egressPolicyNames []string if len(workload.Tiers) > 0 { ingressPolicyNames = workload.Tiers[0].IngressPolicies egressPolicyNames = workload.Tiers[0].EgressPolicies } adminUp := workload.State == "active" if !m.bpfEnabled { chains := m.ruleRenderer.WorkloadEndpointToIptablesChains( workload.Name, m.epMarkMapper, adminUp, ingressPolicyNames, egressPolicyNames, workload.ProfileIds, ) m.filterTable.UpdateChains(chains) m.activeWlIDToChains[id] = chains } // Collect the IP prefixes that we want to route locally to this endpoint: logCxt.Info("Updating endpoint routes.") var ( ipStrings []string natInfos []*proto.NatInfo addrSuffix string ) if m.ipVersion == 4 { ipStrings = workload.Ipv4Nets natInfos = workload.Ipv4Nat addrSuffix = "/32" } else { ipStrings = workload.Ipv6Nets natInfos = workload.Ipv6Nat addrSuffix = "/128" } if len(natInfos) != 0 { old := ipStrings ipStrings = make([]string, len(old)+len(natInfos)) copy(ipStrings, old) for ii, natInfo := range natInfos { ipStrings[len(old)+ii] = natInfo.ExtIp + addrSuffix } } var mac net.HardwareAddr if workload.Mac != "" { var err error mac, err = net.ParseMAC(workload.Mac) if err != nil { logCxt.WithError(err).Error( "Failed to parse endpoint's MAC address") } } var routeTargets []routetable.Target if adminUp { logCxt.Debug("Endpoint up, adding routes") for _, s := range ipStrings { routeTargets = append(routeTargets, 
routetable.Target{ CIDR: ip.MustParseCIDROrIP(s), DestMAC: mac, }) } } else { logCxt.Debug("Endpoint down, removing routes") } m.routeTable.SetRoutes(workload.Name, routeTargets) m.wlIfaceNamesToReconfigure.Add(workload.Name) m.activeWlEndpoints[id] = workload m.activeWlIfaceNameToID[workload.Name] = id delete(m.pendingWlEpUpdates, id) m.callbacks.InvokeUpdateWorkload(oldWorkload, workload) } else { logCxt.Info("Workload removed, deleting its chains.") removeActiveWorkload(logCxt, oldWorkload, id) delete(m.pendingWlEpUpdates, id) delete(m.shadowedWlEndpoints, id) if oldWorkload != nil { // Check for another endpoint with the same interface name, // that should now become active. bestShadowedId := proto.WorkloadEndpointID{} for sId, sWorkload := range m.shadowedWlEndpoints { logCxt.Infof("Old workload %v", oldWorkload) logCxt.Infof("Shadowed workload %v", sWorkload) if sWorkload.Name == oldWorkload.Name { if bestShadowedId.EndpointId == "" || wlIdsAscending(&sId, &bestShadowedId) { bestShadowedId = sId } } } if bestShadowedId.EndpointId != "" { m.pendingWlEpUpdates[bestShadowedId] = m.shadowedWlEndpoints[bestShadowedId] delete(m.shadowedWlEndpoints, bestShadowedId) } } } // Update or deletion, make sure we update the interface status. m.epIDsToUpdateStatus.Add(id) } } if !m.bpfEnabled && m.needToCheckDispatchChains { // Rewrite the dispatch chains if they've changed. newDispatchChains := m.ruleRenderer.WorkloadDispatchChains(m.activeWlEndpoints) m.updateDispatchChains(m.activeWlDispatchChains, newDispatchChains, m.filterTable) m.needToCheckDispatchChains = false // Set flag to update endpoint mark chains. m.needToCheckEndpointMarkChains = true } m.wlIfaceNamesToReconfigure.Iter(func(item interface{}) error { ifaceName := item.(string) err := m.configureInterface(ifaceName) if err != nil { if exists, err := m.interfaceExistsInProcSys(ifaceName); err == nil && !exists { // Suppress log spam if interface has been removed. log.WithError(err).Debug("Failed to configure interface and it seems to be gone") } else { log.WithError(err).Warn("Failed to configure interface, will retry") } return nil } return set.RemoveItem }) } func wlIdsAscending(id1, id2 *proto.WorkloadEndpointID) bool { if id1.OrchestratorId == id2.OrchestratorId { // Need to compare WorkloadId. if id1.WorkloadId == id2.WorkloadId { // Need to compare EndpointId. return id1.EndpointId < id2.EndpointId } return id1.WorkloadId < id2.WorkloadId } return id1.OrchestratorId < id2.OrchestratorId } func (m *endpointManager) resolveEndpointMarks() { if m.bpfEnabled { return } // Render endpoint mark chains for active workload and host endpoint. newEndpointMarkDispatchChains := m.ruleRenderer.EndpointMarkDispatchChains(m.epMarkMapper, m.activeWlEndpoints, m.activeIfaceNameToHostEpID) m.updateDispatchChains(m.activeEPMarkDispatchChains, newEndpointMarkDispatchChains, m.filterTable) } func (m *endpointManager) resolveHostEndpoints() { // Host endpoint resolution // ------------------------ // // There is a set of non-workload interfaces on the local host, each possibly with // IP addresses, that might be controlled by HostEndpoint resources in the Calico // data model. 
The data model syntactically allows multiple HostEndpoint // resources to match a given interface - for example, an interface 'eth1' might // have address 10.240.0.34 and 172.19.2.98, and the data model might include: // // - HostEndpoint A with Name 'eth1' // // - HostEndpoint B with ExpectedIpv4Addrs including '10.240.0.34' // // - HostEndpoint C with ExpectedIpv4Addrs including '172.19.2.98'. // // but at runtime, at any given time, we only allow one HostEndpoint to govern // that interface. That HostEndpoint becomes the active one, and the others // remain inactive. (But if, for example, the active HostEndpoint resource was // deleted, then one of the inactive ones could take over.) Given multiple // matching HostEndpoint resources, the one that wins is the one with the // alphabetically earliest HostEndpointId // // So the process here is not about 'resolving' a particular HostEndpoint on its // own. Rather it is looking at the set of local non-workload interfaces and // seeing which of them are matched by the current set of HostEndpoints as a // whole. newIfaceNameToHostEpID := map[string]proto.HostEndpointID{} newPreDNATIfaceNameToHostEpID := map[string]proto.HostEndpointID{} newUntrackedIfaceNameToHostEpID := map[string]proto.HostEndpointID{} newHostEpIDToIfaceNames := map[proto.HostEndpointID][]string{} for ifaceName, ifaceAddrs := range m.hostIfaceToAddrs { ifaceCxt := log.WithFields(log.Fields{ "ifaceName": ifaceName, "ifaceAddrs": ifaceAddrs, }) bestHostEpId := proto.HostEndpointID{} var bestHostEp proto.HostEndpoint HostEpLoop: for id, hostEp := range m.rawHostEndpoints { logCxt := ifaceCxt.WithField("id", id) if forAllInterfaces(hostEp) { logCxt.Debug("Skip all-interfaces host endpoint") continue } logCxt.WithField("bestHostEpId", bestHostEpId).Debug("See if HostEp matches interface") if (bestHostEpId.EndpointId != "") && (bestHostEpId.EndpointId < id.EndpointId) { // We already have a HostEndpointId that is better than // this one, so no point looking any further. logCxt.Debug("No better than existing match") continue } if hostEp.Name == ifaceName { // The HostEndpoint has an explicit name that matches the // interface. logCxt.Debug("Match on explicit iface name") bestHostEpId = id bestHostEp = *hostEp continue } else if hostEp.Name != "" { // The HostEndpoint has an explicit name that isn't this // interface. Continue, so as not to allow it to match on // an IP address instead. logCxt.Debug("Rejected on explicit iface name") continue } for _, wantedList := range [][]string{hostEp.ExpectedIpv4Addrs, hostEp.ExpectedIpv6Addrs} { for _, wanted := range wantedList { logCxt.WithField("wanted", wanted).Debug("Address wanted by HostEp") if ifaceAddrs.Contains(wanted) { // The HostEndpoint expects an IP address // that is on this interface. logCxt.Debug("Match on address") bestHostEpId = id bestHostEp = *hostEp continue HostEpLoop } } } } if bestHostEpId.EndpointId != "" { logCxt := log.WithFields(log.Fields{ "ifaceName": ifaceName, "bestHostEpId": bestHostEpId, }) logCxt.Debug("Got HostEp for interface") newIfaceNameToHostEpID[ifaceName] = bestHostEpId if len(bestHostEp.UntrackedTiers) > 0 { // Optimisation: only add the endpoint chains to the raw (untracked) // table if there's some untracked policy to apply. This reduces // per-packet latency since every packet has to traverse the raw // table. 
logCxt.Debug("Endpoint has untracked policies.") newUntrackedIfaceNameToHostEpID[ifaceName] = bestHostEpId } if len(bestHostEp.PreDnatTiers) > 0 { // Similar optimisation (or neatness) for pre-DNAT policy. logCxt.Debug("Endpoint has pre-DNAT policies.") newPreDNATIfaceNameToHostEpID[ifaceName] = bestHostEpId } // Record that this host endpoint is in use, for status reporting. newHostEpIDToIfaceNames[bestHostEpId] = append( newHostEpIDToIfaceNames[bestHostEpId], ifaceName) } oldID, wasKnown := m.activeIfaceNameToHostEpID[ifaceName] newID, isKnown := newIfaceNameToHostEpID[ifaceName] if oldID != newID { logCxt := ifaceCxt.WithFields(log.Fields{ "oldID": m.activeIfaceNameToHostEpID[ifaceName], "newID": newIfaceNameToHostEpID[ifaceName], }) logCxt.Info("Endpoint matching interface changed") if wasKnown { logCxt.Debug("Endpoint was known, updating old endpoint status") m.epIDsToUpdateStatus.Add(oldID) } if isKnown { logCxt.Debug("Endpoint is known, updating new endpoint status") m.epIDsToUpdateStatus.Add(newID) } } } // Similar loop to find the best all-interfaces host endpoint. bestHostEpId := proto.HostEndpointID{} var bestHostEp proto.HostEndpoint for id, hostEp := range m.rawHostEndpoints { logCxt := log.WithField("id", id) if !forAllInterfaces(hostEp) { logCxt.Debug("Skip interface-specific host endpoint") continue } if (bestHostEpId.EndpointId != "") && (bestHostEpId.EndpointId < id.EndpointId) { // We already have a HostEndpointId that is better than // this one, so no point looking any further. logCxt.Debug("No better than existing match") continue } logCxt.Debug("New best all-interfaces host endpoint") bestHostEpId = id bestHostEp = *hostEp } if bestHostEpId.EndpointId != "" { logCxt := log.WithField("bestHostEpId", bestHostEpId) logCxt.Debug("Got all interfaces HostEp") if len(bestHostEp.PreDnatTiers) > 0 { logCxt.Debug("Endpoint has pre-DNAT policies.") newPreDNATIfaceNameToHostEpID[allInterfaces] = bestHostEpId } newIfaceNameToHostEpID[allInterfaces] = bestHostEpId // Record that this host endpoint is in use, for status reporting. newHostEpIDToIfaceNames[bestHostEpId] = append( newHostEpIDToIfaceNames[bestHostEpId], allInterfaces) } if !m.bpfEnabled { // Set up programming for the host endpoints that are now to be used. newHostIfaceFiltChains := map[string][]*iptables.Chain{} for ifaceName, id := range newIfaceNameToHostEpID { log.WithField("id", id).Info("Updating host endpoint chains.") hostEp := m.rawHostEndpoints[id] // Update the filter chain, for normal traffic. 
var ingressPolicyNames, egressPolicyNames []string var ingressForwardPolicyNames, egressForwardPolicyNames []string if len(hostEp.Tiers) > 0 { ingressPolicyNames = hostEp.Tiers[0].IngressPolicies egressPolicyNames = hostEp.Tiers[0].EgressPolicies } if len(hostEp.ForwardTiers) > 0 { ingressForwardPolicyNames = hostEp.ForwardTiers[0].IngressPolicies egressForwardPolicyNames = hostEp.ForwardTiers[0].EgressPolicies } filtChains := m.ruleRenderer.HostEndpointToFilterChains( ifaceName, m.epMarkMapper, ingressPolicyNames, egressPolicyNames, ingressForwardPolicyNames, egressForwardPolicyNames, hostEp.ProfileIds, ) if !reflect.DeepEqual(filtChains, m.activeHostIfaceToFiltChains[ifaceName]) { m.filterTable.UpdateChains(filtChains) } newHostIfaceFiltChains[ifaceName] = filtChains delete(m.activeHostIfaceToFiltChains, ifaceName) } newHostIfaceMangleChains := map[string][]*iptables.Chain{} for ifaceName, id := range newPreDNATIfaceNameToHostEpID { log.WithField("id", id).Info("Updating host endpoint mangle chains.") hostEp := m.rawHostEndpoints[id] // Update the mangle table, for preDNAT policy. var ingressPolicyNames []string if len(hostEp.PreDnatTiers) > 0 { ingressPolicyNames = hostEp.PreDnatTiers[0].IngressPolicies } mangleChains := m.ruleRenderer.HostEndpointToMangleChains( ifaceName, ingressPolicyNames, ) if !reflect.DeepEqual(mangleChains, m.activeHostIfaceToMangleChains[ifaceName]) { m.mangleTable.UpdateChains(mangleChains) } newHostIfaceMangleChains[ifaceName] = mangleChains delete(m.activeHostIfaceToMangleChains, ifaceName) } newHostIfaceRawChains := map[string][]*iptables.Chain{} for ifaceName, id := range newUntrackedIfaceNameToHostEpID { log.WithField("id", id).Info("Updating host endpoint raw chains.") hostEp := m.rawHostEndpoints[id] // Update the raw chain, for untracked traffic. var ingressPolicyNames, egressPolicyNames []string if len(hostEp.UntrackedTiers) > 0 { ingressPolicyNames = hostEp.UntrackedTiers[0].IngressPolicies egressPolicyNames = hostEp.UntrackedTiers[0].EgressPolicies } rawChains := m.ruleRenderer.HostEndpointToRawChains( ifaceName, ingressPolicyNames, egressPolicyNames, ) if !reflect.DeepEqual(rawChains, m.activeHostIfaceToRawChains[ifaceName]) { m.rawTable.UpdateChains(rawChains) } newHostIfaceRawChains[ifaceName] = rawChains delete(m.activeHostIfaceToRawChains, ifaceName) } // Remove programming for host endpoints that are not now in use. for ifaceName, chains := range m.activeHostIfaceToFiltChains { log.WithField("ifaceName", ifaceName).Info( "Host interface no longer protected, deleting its normal chains.") m.filterTable.RemoveChains(chains) } for ifaceName, chains := range m.activeHostIfaceToMangleChains { log.WithField("ifaceName", ifaceName).Info( "Host interface no longer protected, deleting its preDNAT chains.") m.mangleTable.RemoveChains(chains) } for ifaceName, chains := range m.activeHostIfaceToRawChains { log.WithField("ifaceName", ifaceName).Info( "Host interface no longer protected, deleting its untracked chains.") m.rawTable.RemoveChains(chains) } m.callbacks.InvokeInterfaceCallbacks(m.activeIfaceNameToHostEpID, newIfaceNameToHostEpID) m.activeHostIfaceToFiltChains = newHostIfaceFiltChains m.activeHostIfaceToMangleChains = newHostIfaceMangleChains m.activeHostIfaceToRawChains = newHostIfaceRawChains } // Remember the host endpoints that are now in use. m.activeIfaceNameToHostEpID = newIfaceNameToHostEpID m.activeHostEpIDToIfaceNames = newHostEpIDToIfaceNames if m.bpfEnabled { return } // Rewrite the filter dispatch chains if they've changed. 
log.WithField("resolvedHostEpIds", newIfaceNameToHostEpID).Debug("Rewrite filter dispatch chains?") defaultIfaceName := "" if _, ok := newIfaceNameToHostEpID[allInterfaces]; ok { // All-interfaces host endpoint is active. Arrange for it to be the default, // instead of trying to dispatch to it directly based on the non-existent interface // name *. defaultIfaceName = allInterfaces delete(newIfaceNameToHostEpID, allInterfaces) } newFilterDispatchChains := m.ruleRenderer.HostDispatchChains(newIfaceNameToHostEpID, defaultIfaceName, true) m.updateDispatchChains(m.activeHostFilterDispatchChains, newFilterDispatchChains, m.filterTable) // Set flag to update endpoint mark chains. m.needToCheckEndpointMarkChains = true // Rewrite the mangle dispatch chains if they've changed. log.WithField("resolvedHostEpIds", newPreDNATIfaceNameToHostEpID).Debug("Rewrite mangle dispatch chains?") defaultIfaceName = "" if _, ok := newPreDNATIfaceNameToHostEpID[allInterfaces]; ok { // All-interfaces host endpoint is active. Arrange for it to be the // default. This is handled the same as the filter dispatch chains above. defaultIfaceName = allInterfaces delete(newPreDNATIfaceNameToHostEpID, allInterfaces) } newMangleDispatchChains := m.ruleRenderer.FromHostDispatchChains(newPreDNATIfaceNameToHostEpID, defaultIfaceName) m.updateDispatchChains(m.activeHostMangleDispatchChains, newMangleDispatchChains, m.mangleTable) // Rewrite the raw dispatch chains if they've changed. log.WithField("resolvedHostEpIds", newUntrackedIfaceNameToHostEpID).Debug("Rewrite raw dispatch chains?") newRawDispatchChains := m.ruleRenderer.HostDispatchChains(newUntrackedIfaceNameToHostEpID, "", false) m.updateDispatchChains(m.activeHostRawDispatchChains, newRawDispatchChains, m.rawTable) log.Debug("Done resolving host endpoints.") } // updateDispatchChains updates one of the sets of dispatch chains. It sends the changes to the // given iptables.Table and records the updates in the activeChains map. // // Calculating the minimum update prevents log spam and reduces the work needed in the Table. func (m *endpointManager) updateDispatchChains( activeChains map[string]*iptables.Chain, newChains []*iptables.Chain, table iptablesTable, ) { seenChains := set.New() for _, newChain := range newChains { seenChains.Add(newChain.Name) oldChain := activeChains[newChain.Name] if !reflect.DeepEqual(newChain, oldChain) { table.UpdateChain(newChain) activeChains[newChain.Name] = newChain } } for name := range activeChains { if !seenChains.Contains(name) { table.RemoveChainByName(name) delete(activeChains, name) } } } func (m *endpointManager) interfaceExistsInProcSys(name string) (bool, error) { var directory string if m.ipVersion == 4 { directory = fmt.Sprintf("/proc/sys/net/ipv4/conf/%s", name) } else { directory = fmt.Sprintf("/proc/sys/net/ipv6/conf/%s", name) } _, err := m.osStat(directory) if os.IsNotExist(err) { return false, nil } if err != nil { return false, err } return true, nil } func (m *endpointManager) configureInterface(name string) error { if !m.activeUpIfaces.Contains(name) { log.WithField("ifaceName", name).Info( "Skipping configuration of interface because it is oper down.") return nil } // Special case: for security, even if our IPv6 support is disabled, try to disable RAs on the interface. 
	acceptRAPath := fmt.Sprintf("/proc/sys/net/ipv6/conf/%s/accept_ra", name)
	err := m.writeProcSys(acceptRAPath, "0")
	if err != nil {
		if exists, err := m.interfaceExistsInProcSys(name); err == nil && !exists {
			log.WithField("file", acceptRAPath).Debug(
				"Failed to set accept_ra flag. Interface is missing in /proc/sys.")
		} else {
			log.WithField("ifaceName", name).Warnf("Could not set accept_ra: %v", err)
		}
	}

	log.WithField("ifaceName", name).Info("Applying /proc/sys configuration to interface.")
	if m.ipVersion == 4 {
		// Enable routing to localhost. This is required to allow for NAT to the local
		// host.
		err := m.writeProcSys(fmt.Sprintf("/proc/sys/net/ipv4/conf/%s/route_localnet", name), "1")
		if err != nil {
			return err
		}
		// Normally, the kernel has a delay before responding to proxy ARP but we know
		// that's not needed in a Calico network so we disable it.
		err = m.writeProcSys(fmt.Sprintf("/proc/sys/net/ipv4/neigh/%s/proxy_delay", name), "0")
		if err != nil {
			return err
		}
		// Enable proxy ARP; this makes the host respond to all ARP requests with its own
		// MAC. This has a couple of advantages:
		//
		// - In OpenStack, we're forced to configure the guest's networking using DHCP.
		//   Since DHCP requires a subnet and gateway, representing the Calico network
		//   in the natural way would lose a lot of IP addresses. For IPv4, we'd have to
		//   advertise a distinct /30 to each guest, which would use up 4 IPs per guest.
		//   Using proxy ARP, we can advertise the whole pool to each guest as its subnet
		//   but have the host respond to all ARP requests and route all the traffic whether
		//   it is on or off subnet.
		//
		// - For containers, we install explicit routes into the container's network
		//   namespace and we use a link-local address for the gateway. Turning on proxy ARP
		//   means that we don't need to assign the link-local address explicitly to each
		//   host side of the veth, which is one fewer thing to maintain and one fewer
		//   thing we may clash over.
		err = m.writeProcSys(fmt.Sprintf("/proc/sys/net/ipv4/conf/%s/proxy_arp", name), "1")
		if err != nil {
			return err
		}
		// Enable IP forwarding of packets coming _from_ this interface. For packets to
		// be forwarded in both directions we need this flag to be set on the fabric-facing
		// interface too (or for the global default to be set).
		err = m.writeProcSys(fmt.Sprintf("/proc/sys/net/ipv4/conf/%s/forwarding", name), "1")
		if err != nil {
			return err
		}
	} else {
		// Enable proxy NDP, similarly to proxy ARP, described above.
		err := m.writeProcSys(fmt.Sprintf("/proc/sys/net/ipv6/conf/%s/proxy_ndp", name), "1")
		if err != nil {
			return err
		}
		// Enable IP forwarding of packets coming _from_ this interface. For packets to
		// be forwarded in both directions we need this flag to be set on the fabric-facing
		// interface too (or for the global default to be set).
		err = m.writeProcSys(fmt.Sprintf("/proc/sys/net/ipv6/conf/%s/forwarding", name), "1")
		if err != nil {
			return err
		}
	}
	return nil
}

// writeProcSys writes a value into the given /proc/sys file, reporting short writes
// as errors.
func writeProcSys(path, value string) error {
	f, err := os.OpenFile(path, os.O_WRONLY, 0)
	if err != nil {
		return err
	}
	n, err := f.Write([]byte(value))
	if err == nil && n < len(value) {
		err = io.ErrShortWrite
	}
	if err1 := f.Close(); err == nil {
		err = err1
	}
	return err
}

// The interface name that we use to mean "all interfaces". This is intentionally longer than
// IFNAMSIZ (16) characters, so that it can't possibly match a real interface name.
var allInterfaces = "any-interface-at-all"

// forAllInterfaces returns true if the given host endpoint is for all interfaces, as opposed
// to for a specific interface.
func forAllInterfaces(hep *proto.HostEndpoint) bool {
	return hep.Name == "*"
}

// GetRawHostEndpoints implements the endpointsSource interface.
func (m *endpointManager) GetRawHostEndpoints() map[proto.HostEndpointID]*proto.HostEndpoint {
	return m.rawHostEndpoints
}
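The precedence rules used above can be hard to follow in context: when two workload endpoints claim the same interface name, the "ascending" ID wins (wlIdsAscending compares OrchestratorId, then WorkloadId, then EndpointId), and the loser is shadowed until the winner is removed. Below is a minimal, self-contained sketch of that tie-break; the workloadEndpointID struct and idsAscending function are illustrative stand-ins for the generated proto.WorkloadEndpointID type and the wlIdsAscending function above, not the real definitions.

package main

import "fmt"

// workloadEndpointID is a simplified stand-in for proto.WorkloadEndpointID.
type workloadEndpointID struct {
	OrchestratorId string
	WorkloadId     string
	EndpointId     string
}

// idsAscending mirrors the lexicographic comparison used to pick which of two
// endpoints sharing an interface name stays active.
func idsAscending(id1, id2 *workloadEndpointID) bool {
	if id1.OrchestratorId == id2.OrchestratorId {
		if id1.WorkloadId == id2.WorkloadId {
			return id1.EndpointId < id2.EndpointId
		}
		return id1.WorkloadId < id2.WorkloadId
	}
	return id1.OrchestratorId < id2.OrchestratorId
}

func main() {
	// Hypothetical IDs: two endpoints both claiming interface "eth0".
	existing := &workloadEndpointID{"k8s", "ns/pod-a", "eth0"}
	incoming := &workloadEndpointID{"k8s", "ns/pod-b", "eth0"}
	if idsAscending(existing, incoming) {
		fmt.Println("existing endpoint keeps the interface; incoming is shadowed")
	} else {
		fmt.Println("incoming endpoint takes over; existing is shadowed")
	}
}

The same "earliest ID wins" idea appears again in resolveHostEndpoints, where, among multiple HostEndpoints matching an interface, the one with the alphabetically earliest EndpointId becomes the active one.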
Related literature {#sec1}
==================

For background to *N*-heterocyclic sulfanilamide derivatives, see: Kuz'mina *et al.* (1962[@bb6]); Jensen & Thorsteinsson (1941[@bb5]); Hunter & Kolloff (1943[@bb4]); Hultquist *et al.* (1951[@bb3]). For a related synthesis, see: Razvodovskaya *et al.* (1990[@bb7]).

Experimental {#sec2}
============

{#sec2.1}

### Crystal data {#sec2.1.1}

C~17~H~18~N~2~O~4~S~3~
*M*~r~ = 410.51
Monoclinic
*a* = 9.3825 (2) Å
*b* = 14.4047 (2) Å
*c* = 14.2279 (3) Å
β = 102.666 (1)°
*V* = 1876.14 (6) Å^3^
*Z* = 4
Mo *K*α radiation
μ = 0.42 mm^−1^
*T* = 296 K
0.40 × 0.40 × 0.40 mm

### Data collection {#sec2.1.2}

Bruker APEXII CCD diffractometer
Absorption correction: multi-scan (*SADABS*; Bruker, 2001[@bb1])
*T*~min~ = 0.845, *T*~max~ = 0.845
17749 measured reflections
4652 independent reflections
3756 reflections with *I* \> 2σ(*I*)
*R*~int~ = 0.025

### Refinement {#sec2.1.3}

*R*\[*F*^2^ \> 2σ(*F*^2^)\] = 0.037
*wR*(*F*^2^) = 0.107
*S* = 1.04
4652 reflections
236 parameters
H-atom parameters constrained
Δρ~max~ = 0.29 e Å^−3^
Δρ~min~ = −0.29 e Å^−3^

{#d5e499}

Data collection: *APEX2* (Bruker, 2010[@bb2]); cell refinement: *SAINT* (Bruker, 2010[@bb2]); data reduction: *SAINT*; program(s) used to solve structure: *SHELXS97* (Sheldrick, 2008[@bb8]); program(s) used to refine structure: *SHELXL97* (Sheldrick, 2008[@bb8]); molecular graphics: *SHELXTL* (Sheldrick, 2008[@bb8]); software used to prepare material for publication: *SHELXL97*.

Supplementary Material
======================

Crystal structure: contains datablock(s) I, global. DOI: [10.1107/S1600536811028807/fj2445sup1.cif](http://dx.doi.org/10.1107/S1600536811028807/fj2445sup1.cif)

Structure factors: contains datablock(s) I. DOI: [10.1107/S1600536811028807/fj2445Isup2.hkl](http://dx.doi.org/10.1107/S1600536811028807/fj2445Isup2.hkl)

Supplementary material file. DOI: [10.1107/S1600536811028807/fj2445Isup3.cml](http://dx.doi.org/10.1107/S1600536811028807/fj2445Isup3.cml)

Supplementary data and figures for this paper are available from the IUCr electronic archives (Reference: [FJ2445](http://scripts.iucr.org/cgi-bin/sendsup?fj2445)).

We are grateful to the National Science Council of the Republic of China and the Nanya Institute of Technology for support.

Comment
=======

In a series of *N*-heterocyclic sulfanilamide derivatives that were prepared and are being investigated biologically, one of the compounds, 2-sulfanilylaminothiazoline, proved to be of particular interest, both chemically and therapeutically (Kuz'mina *et al.*, 1962; Jensen & Thorsteinsson, 1941; Hunter & Kolloff, 1943; Hultquist *et al.*, 1951). The synthesis and characterization of 3-substituted 2-(thiophosphorylimino)thiazolidine compounds have also been reported (Razvodovskaya *et al.*, 1990). Within this project, the crystal structure of the title compound was determined. The crystal structure features inversion-related dimers linked by weak intermolecular C---H···π interactions in the solid state; *Cg*1 and *Cg*2 are the centroids of the C4---C9 and C11---C16 rings, whose carbon atoms show mean deviations from the ring planes of 0.0008 and 0.0043 Å, respectively. Weak C---H···O hydrogen bonds between the molecules are also observed in the solid state.
The thiazolidine ring and the two phenyl rings are not coplanar but are twisted from each other by interplanar angles of 79.1 (1) and 85.0 (1)°, respectively, while the dihedral angle between the two phenyl groups is 76.0 (1)°.

Experimental {#experimental}
============

The title compound was prepared according to a published procedure (Razvodovskaya *et al.*, 1990). Block-like crystals suitable for X-ray crystallography were obtained by slow evaporation of the solvent from a solution of the title compound in methanol.

Refinement {#refinement}
==========

All the hydrogen atoms were discernible in the difference Fourier maps. However, they were placed in idealized positions and constrained by the riding-atom approximation: *C*---H~methyl~ = 0.96 Å and *C*---H~methylene~ = 0.97 Å, while the methyl and methylene groups were allowed to rotate about their respective axes; *C*---H~aryl~ = 0.93 Å; *U*~iso~(H~methyl~) = 1.5*U*~eq~(C~methyl~); *U*~iso~(H~aryl or methylene~) = 1.2*U*~eq~(C~aryl or methylene~).

Figures
=======

![Crystal structure of the title compound with atom labeling and displacement ellipsoids drawn at the 30% probability level.](e-67-o2103-fig1){#Fap1}

Crystal data {#tablewrapcrystaldatalong}
============

------------------------- ---------------------------------------
C~17~H~18~N~2~O~4~S~3~    *F*(000) = 856
*M~r~* = 410.51           *D*~x~ = 1.453 Mg m^−3^
Monoclinic, *P*2~1~/*n*   Mo *K*α radiation, λ = 0.71073 Å
Hall symbol: -P 2yn       Cell parameters from 8638 reflections
*a* = 9.3825 (2) Å        θ = 2.4--28.2°
*b* = 14.4047 (2) Å       µ = 0.42 mm^−1^
*c* = 14.2279 (3) Å       *T* = 296 K
β = 102.666 (1)°          Block, colourless
*V* = 1876.14 (6) Å^3^    0.40 × 0.40 × 0.40 mm
*Z* = 4
------------------------- ---------------------------------------

Data collection {#tablewrapdatacollectionlong}
===============

------------------------------------------------------------ --------------------------------------
Bruker APEXII CCD diffractometer                             4652 independent reflections
Radiation source: fine-focus sealed tube                     3756 reflections with *I* \> 2σ(*I*)
graphite                                                     *R*~int~ = 0.025
phi and ω scans                                              θ~max~ = 28.3°, θ~min~ = 2.0°
Absorption correction: multi-scan (*SADABS*; Bruker, 2001)   *h* = −12→12
*T*~min~ = 0.845, *T*~max~ = 0.845                           *k* = −15→19
17749 measured reflections                                   *l* = −17→18
------------------------------------------------------------ --------------------------------------

Refinement {#tablewraprefinementdatalong}
==========

---------------------------------------------------------------- -------------------------------------------------------------------------------------------------
Refinement on *F*^2^                                             Secondary atom site location: difference Fourier map
Least-squares matrix: full                                       Hydrogen site location: inferred from neighbouring sites
*R*\[*F*^2^ \> 2σ(*F*^2^)\] = 0.037                              H-atom parameters constrained
*wR*(*F*^2^) = 0.107                                             *w* = 1/\[σ^2^(*F*~o~^2^) + (0.0497*P*)^2^ + 0.6237*P*\] where *P* = (*F*~o~^2^ + 2*F*~c~^2^)/3
*S* = 1.04                                                       (Δ/σ)~max~ \< 0.001
4652 reflections                                                 Δρ~max~ = 0.29 e Å^−3^
236 parameters                                                   Δρ~min~ = −0.29 e Å^−3^
0 restraints                                                     Extinction correction: *SHELXL*, *F*~c~^\*^ = *kF*~c~\[1 + 0.001*xF*~c~^2^λ^3^/sin(2θ)\]^−1/4^
Primary atom site location: structure-invariant direct methods   Extinction coefficient: 0.0091 (9)
---------------------------------------------------------------- -------------------------------------------------------------------------------------------------

Special details {#specialdetails}
===============
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Geometry. All e.s.d.\'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.\'s are taken into account individually in the estimation of e.s.d.\'s in distances, angles and torsion angles; correlations between e.s.d.\'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.\'s is used for estimating e.s.d.\'s involving l.s. planes. Refinement. Refinement of *F*^2^ against ALL reflections. The weighted *R*-factor *wR* and goodness of fit *S* are based on *F*^2^, conventional *R*-factors *R* are based on *F*, with *F* set to zero for negative *F*^2^. The threshold expression of *F*^2^ \> σ(*F*^2^) is used only for calculating *R*-factors(gt) *etc*. and is not relevant to the choice of reflections for refinement. *R*-factors based on *F*^2^ are statistically about twice as large as those based on *F*, and *R*- factors based on ALL data will be even larger. ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å^2^) {#tablewrapcoords} ================================================================================================== ------ --------------- -------------- --------------- -------------------- -- *x* *y* *z* *U*~iso~\*/*U*~eq~ N1 0.16502 (15) 0.17067 (10) 0.38640 (10) 0.0446 (3) N2 0.08736 (15) 0.26664 (10) 0.25505 (10) 0.0448 (3) S1 0.34585 (5) 0.17353 (4) 0.27357 (4) 0.06158 (16) S2 0.10549 (5) 0.31474 (3) 0.15432 (3) 0.04983 (13) S3 0.01808 (5) 0.19186 (3) 0.43270 (3) 0.04805 (13) O1 −0.00727 (19) 0.38361 (9) 0.13422 (10) 0.0693 (4) O2 0.25206 (18) 0.34381 (12) 0.15775 (11) 0.0739 (4) O3 0.04496 (18) 0.13807 (11) 0.51854 (10) 0.0702 (4) O4 0.00217 (15) 0.28956 (9) 0.43817 (10) 0.0580 (3) C1 0.2585 (2) 0.08847 (14) 0.41762 (15) 0.0590 (5) H1A 0.2091 0.0321 0.3909 0.071\* H1B 0.2818 0.0836 0.4873 0.071\* C2 0.3947 (2) 0.10280 (16) 0.38077 (16) 0.0664 (6) H2A 0.4335 0.0436 0.3656 0.080\* H2B 0.4684 0.1338 0.4290 0.080\* C3 0.18420 (17) 0.21042 (11) 0.30207 (11) 0.0407 (3) C4 0.06041 (19) 0.22754 (11) 0.06650 (12) 0.0431 (4) C5 −0.08309 (19) 0.19554 (12) 0.04084 (13) 0.0481 (4) H5A −0.1533 0.2177 0.0723 0.058\* C6 −0.1198 (2) 0.13072 (13) −0.03153 (14) 0.0534 (4) H6A −0.2156 0.1094 −0.0486 0.064\* C7 −0.0173 (2) 0.09647 (13) −0.07954 (13) 0.0542 (4) C8 0.1244 (2) 0.12907 (15) −0.05287 (14) 0.0607 (5) H8A 0.1943 0.1068 −0.0844 0.073\* C9 0.1647 (2) 
0.19422 (14) 0.01986 (14) 0.0538 (4) H9A 0.2606 0.2152 0.0370 0.065\* C10 −0.0606 (3) 0.02670 (16) −0.15964 (16) 0.0787 (7) H10A −0.1623 0.0119 −0.1675 0.118\* H10B −0.0034 −0.0287 −0.1439 0.118\* H10C −0.0438 0.0524 −0.2185 0.118\* C11 −0.12963 (18) 0.14496 (12) 0.34887 (12) 0.0460 (4) C12 −0.1544 (2) 0.04982 (13) 0.35063 (15) 0.0578 (5) H12A −0.0965 0.0124 0.3971 0.069\* C13 −0.2660 (2) 0.01228 (15) 0.28240 (17) 0.0652 (5) H13A −0.2837 −0.0512 0.2836 0.078\* C14 −0.3526 (2) 0.06605 (15) 0.21206 (15) 0.0579 (5) C15 −0.3267 (2) 0.16119 (15) 0.21243 (15) 0.0557 (5) H15A −0.3850 0.1986 0.1661 0.067\* C16 −0.21635 (19) 0.20075 (13) 0.28026 (14) 0.0507 (4) H16A −0.2002 0.2644 0.2800 0.061\* C17 −0.4719 (3) 0.0231 (2) 0.13607 (19) 0.0864 (8) H17A −0.4738 −0.0427 0.1461 0.130\* H17B −0.4536 0.0354 0.0735 0.130\* H17C −0.5644 0.0494 0.1403 0.130\* ------ --------------- -------------- --------------- -------------------- -- Atomic displacement parameters (Å^2^) {#tablewrapadps} ===================================== ----- ------------- ------------- ------------- --------------- -------------- --------------- *U*^11^ *U*^22^ *U*^33^ *U*^12^ *U*^13^ *U*^23^ N1 0.0461 (7) 0.0435 (7) 0.0418 (7) 0.0049 (6) 0.0045 (6) −0.0002 (6) N2 0.0453 (7) 0.0459 (8) 0.0429 (7) 0.0017 (6) 0.0095 (6) 0.0013 (6) S1 0.0508 (3) 0.0734 (3) 0.0623 (3) 0.0140 (2) 0.0163 (2) −0.0073 (2) S2 0.0627 (3) 0.0401 (2) 0.0470 (2) −0.00536 (18) 0.0128 (2) 0.00129 (17) S3 0.0555 (3) 0.0491 (3) 0.0406 (2) −0.00057 (18) 0.01281 (18) −0.00071 (17) O1 0.1039 (12) 0.0425 (7) 0.0589 (8) 0.0189 (7) 0.0122 (8) 0.0043 (6) O2 0.0802 (10) 0.0758 (10) 0.0684 (9) −0.0367 (8) 0.0217 (8) −0.0044 (8) O3 0.0878 (11) 0.0805 (10) 0.0427 (7) 0.0010 (8) 0.0149 (7) 0.0108 (7) O4 0.0645 (8) 0.0504 (7) 0.0619 (8) 0.0006 (6) 0.0197 (7) −0.0130 (6) C1 0.0684 (12) 0.0496 (10) 0.0521 (10) 0.0146 (9) −0.0017 (9) 0.0011 (8) C2 0.0637 (12) 0.0655 (13) 0.0631 (12) 0.0251 (10) −0.0009 (10) −0.0143 (10) C3 0.0399 (8) 0.0402 (8) 0.0402 (8) −0.0027 (6) 0.0052 (6) −0.0082 (6) C4 0.0484 (9) 0.0396 (8) 0.0416 (8) 0.0018 (7) 0.0110 (7) 0.0055 (7) C5 0.0472 (9) 0.0468 (9) 0.0520 (10) 0.0050 (7) 0.0144 (7) 0.0031 (8) C6 0.0529 (10) 0.0479 (10) 0.0546 (10) −0.0025 (8) 0.0014 (8) 0.0027 (8) C7 0.0726 (12) 0.0458 (10) 0.0403 (9) 0.0106 (9) 0.0041 (8) 0.0038 (7) C8 0.0659 (12) 0.0680 (13) 0.0519 (11) 0.0153 (10) 0.0210 (9) −0.0008 (9) C9 0.0484 (9) 0.0636 (11) 0.0512 (10) 0.0007 (8) 0.0153 (8) 0.0020 (8) C10 0.1124 (19) 0.0631 (13) 0.0513 (12) 0.0143 (13) −0.0025 (12) −0.0083 (10) C11 0.0473 (9) 0.0452 (9) 0.0478 (9) −0.0052 (7) 0.0155 (7) 0.0038 (7) C12 0.0651 (12) 0.0470 (10) 0.0626 (12) −0.0066 (9) 0.0171 (9) 0.0100 (9) C13 0.0710 (13) 0.0470 (11) 0.0826 (15) −0.0152 (9) 0.0279 (11) −0.0052 (10) C14 0.0491 (10) 0.0687 (12) 0.0607 (11) −0.0102 (9) 0.0226 (9) −0.0147 (10) C15 0.0441 (9) 0.0665 (12) 0.0578 (11) −0.0001 (8) 0.0139 (8) 0.0053 (9) C16 0.0467 (9) 0.0451 (9) 0.0622 (11) −0.0022 (7) 0.0157 (8) 0.0064 (8) C17 0.0689 (14) 0.0996 (19) 0.0894 (18) −0.0144 (13) 0.0148 (13) −0.0413 (15) ----- ------------- ------------- ------------- --------------- -------------- --------------- Geometric parameters (Å, °) {#tablewrapgeomlong} =========================== ---------------- ------------- ------------------- ------------- N1---C3 1.376 (2) C6---H6A 0.9300 N1---C1 1.483 (2) C7---C8 1.382 (3) N1---S3 1.6811 (15) C7---C10 1.507 (3) N2---C3 1.289 (2) C8---C9 1.387 (3) N2---S2 1.6340 (15) C8---H8A 0.9300 S1---C3 1.7368 (16) C9---H9A 0.9300 S1---C2 1.808 (2) C10---H10A 0.9600 
S2---O2 1.4282 (15) C10---H10B 0.9600 S2---O1 1.4327 (15) C10---H10C 0.9600 S2---C4 1.7566 (17) C11---C16 1.383 (3) S3---O4 1.4192 (14) C11---C12 1.391 (3) S3---O3 1.4215 (14) C12---C13 1.373 (3) S3---C11 1.7536 (18) C12---H12A 0.9300 C1---C2 1.498 (3) C13---C14 1.380 (3) C1---H1A 0.9700 C13---H13A 0.9300 C1---H1B 0.9700 C14---C15 1.392 (3) C2---H2A 0.9700 C14---C17 1.508 (3) C2---H2B 0.9700 C15---C16 1.375 (3) C4---C9 1.383 (2) C15---H15A 0.9300 C4---C5 1.394 (2) C16---H16A 0.9300 C5---C6 1.377 (3) C17---H17A 0.9600 C5---H5A 0.9300 C17---H17B 0.9600 C6---C7 1.386 (3) C17---H17C 0.9600 C3---N1---C1 114.30 (15) C7---C6---H6A 119.2 C3---N1---S3 122.77 (11) C8---C7---C6 118.23 (17) C1---N1---S3 120.66 (13) C8---C7---C10 121.2 (2) C3---N2---S2 121.62 (12) C6---C7---C10 120.6 (2) C3---S1---C2 92.76 (9) C7---C8---C9 121.55 (18) O2---S2---O1 117.85 (10) C7---C8---H8A 119.2 O2---S2---N2 112.27 (9) C9---C8---H8A 119.2 O1---S2---N2 104.73 (8) C4---C9---C8 119.15 (18) O2---S2---C4 108.29 (9) C4---C9---H9A 120.4 O1---S2---C4 107.52 (9) C8---C9---H9A 120.4 N2---S2---C4 105.40 (8) C7---C10---H10A 109.5 O4---S3---O3 119.63 (9) C7---C10---H10B 109.5 O4---S3---N1 107.86 (8) H10A---C10---H10B 109.5 O3---S3---N1 103.39 (8) C7---C10---H10C 109.5 O4---S3---C11 110.02 (9) H10A---C10---H10C 109.5 O3---S3---C11 109.83 (9) H10B---C10---H10C 109.5 N1---S3---C11 104.90 (8) C16---C11---C12 120.69 (18) N1---C1---C2 106.21 (17) C16---C11---S3 120.74 (14) N1---C1---H1A 110.5 C12---C11---S3 118.52 (15) C2---C1---H1A 110.5 C13---C12---C11 118.67 (19) N1---C1---H1B 110.5 C13---C12---H12A 120.7 C2---C1---H1B 110.5 C11---C12---H12A 120.7 H1A---C1---H1B 108.7 C12---C13---C14 121.87 (19) C1---C2---S1 107.17 (13) C12---C13---H13A 119.1 C1---C2---H2A 110.3 C14---C13---H13A 119.1 S1---C2---H2A 110.3 C13---C14---C15 118.41 (19) C1---C2---H2B 110.3 C13---C14---C17 121.1 (2) S1---C2---H2B 110.3 C15---C14---C17 120.5 (2) H2A---C2---H2B 108.5 C16---C15---C14 120.96 (19) N2---C3---N1 120.10 (15) C16---C15---H15A 119.5 N2---C3---S1 128.55 (13) C14---C15---H15A 119.5 N1---C3---S1 111.35 (12) C15---C16---C11 119.39 (17) C9---C4---C5 120.25 (17) C15---C16---H16A 120.3 C9---C4---S2 120.28 (14) C11---C16---H16A 120.3 C5---C4---S2 119.40 (13) C14---C17---H17A 109.5 C6---C5---C4 119.28 (17) C14---C17---H17B 109.5 C6---C5---H5A 120.4 H17A---C17---H17B 109.5 C4---C5---H5A 120.4 C14---C17---H17C 109.5 C5---C6---C7 121.53 (18) H17A---C17---H17C 109.5 C5---C6---H6A 119.2 H17B---C17---H17C 109.5 ---------------- ------------- ------------------- ------------- Hydrogen-bond geometry (Å, °) {#tablewraphbondslong} ============================= --------------------------------------------------------------------------------------- Cg1 and Cg2 are the centroids of the C4--C9 and C11--C16 benzene rings, respectively. --------------------------------------------------------------------------------------- ---------------------- --------- --------- ----------- --------------- *D*---H···*A* *D*---H H···*A* *D*···*A* *D*---H···*A* C10---H10B···Cg1^i^ 0.97 2.91 3.567 (1) 127 C2---H2B···Cg2^ii^ 0.97 3.09 3.821 (1) 134 C1---H1B···O1^ii^ 0.97 2.59 3.394 (3) 141 C12---H12A···O3^iii^ 0.93 2.47 3.318 (2) 151 ---------------------- --------- --------- ----------- --------------- Symmetry codes: (i) −*x*, −*y*, −*z*; (ii) *x*+1/2, −*y*+1/2, *z*+1/2; (iii) −*x*, −*y*, −*z*+1. ###### Hydrogen-bond geometry (Å, °) *Cg*1 and *Cg*2 are the centroids of the C4--C9 and C11--C16 benzene rings, respectively. 
  *D*---H⋯*A*             *D*---H   H⋯*A*   *D*⋯*A*     *D*---H⋯*A*
  ----------------------- --------- ------- ----------- -------------
  C10---H10*B*⋯*Cg*1^i^   0.97      2.91    3.567 (1)   127
  C2---H2*B*⋯*Cg*2^ii^    0.97      3.09    3.821 (1)   134
  C1---H1*B*⋯O1^ii^       0.97      2.59    3.394 (3)   141
  C12---H12*A*⋯O3^iii^    0.93      2.47    3.318 (2)   151

Symmetry codes: (i) −*x*, −*y*, −*z*; (ii) *x*+1/2, −*y*+1/2, *z*+1/2; (iii) −*x*, −*y*, −*z*+1.
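For readability, the weighting scheme and extinction correction quoted in the refinement table above can be restated in standard notation; the following is only a transcription of the expressions already given, not additional information:

$$w = \frac{1}{\sigma^2(F_o^2) + (0.0497P)^2 + 0.6237P}, \qquad P = \frac{F_o^2 + 2F_c^2}{3},$$

$$F_c^{*} = k F_c \left[ 1 + 0.001\, x\, F_c^2 \lambda^3 / \sin(2\theta) \right]^{-1/4},$$

where $k$ is the overall scale factor and $x = 0.0091\,(9)$ is the refined extinction coefficient.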
s picked without replacement from {s: 1, i: 3, o: 9, n: 4, j: 2}? 1/323 Calculate prob of sequence aa when two letters picked without replacement from {a: 17, n: 1}. 8/9 Two letters picked without replacement from {w: 5, i: 1, z: 1, x: 2}. What is prob of sequence xw? 5/36 What is prob of sequence eehh when four letters picked without replacement from eeheeehee? 1/36 What is prob of sequence wnwk when four letters picked without replacement from {n: 4, y: 1, w: 6, l: 1, k: 1}? 1/143 Three letters picked without replacement from {r: 4, x: 15}. Give prob of sequence rxx. 140/969 What is prob of sequence cmi when three letters picked without replacement from {i: 5, o: 6, m: 1, c: 6}? 5/816 Calculate prob of sequence ceqe when four letters picked without replacement from znjcqzqzznzee. 1/4290 Four letters picked without replacement from buakuamme. Give prob of sequence ebmm. 1/1512 Three letters picked without replacement from {f: 4, a: 9, w: 3}. What is prob of sequence awa? 9/140 What is prob of sequence ccs when three letters picked without replacement from ssscckcsskckscccc? 7/85 What is prob of sequence xu when two letters picked without replacement from guuuxxxsuuuxg? 2/13 What is prob of sequence ufm when three letters picked without replacement from {m: 2, u: 1, f: 1}? 1/12 Two letters picked without replacement from vyyytttytv. What is prob of sequence ty? 8/45 Three letters picked without replacement from {m: 3, w: 1}. Give prob of sequence wmm. 1/4 Calculate prob of sequence zjcz when four letters picked without replacement from {z: 2, j: 2, c: 3, g: 4}. 1/660 Two letters picked without replacement from gebjwjyy. What is prob of sequence ey? 1/28 Three letters picked without replacement from rsbrrbrsbbb. What is prob of sequence srb? 4/99 What is prob of sequence eeyv when four letters picked without replacement from {e: 2, y: 1, v: 2}? 1/30 Calculate prob of sequence oott when four letters picked without replacement from ttottttoooot. 7/99 Two letters picked without replacement from ucwjnnnjnknncn. What is prob of sequence nk? 1/26 What is prob of sequence ucc when three letters picked without replacement from urrptrrrtctuttrtrc? 1/1224 Calculate prob of sequence dm when two letters picked without replacement from {o: 1, t: 1, d: 1, m: 1, a: 1}. 1/20 Calculate prob of sequence zz when two letters picked without replacement from zzozzvkzozpv. 5/22 What is prob of sequence ve when two letters picked without replacement from zzvzkevjevkvkkkezvve? 6/95 Four letters picked without replacement from lxlxxllxxxlll. What is prob of sequence lxxl? 21/286 Three letters picked without replacement from {y: 11, e: 2}. Give prob of sequence yye. 5/39 Three letters picked without replacement from {c: 2, d: 2}. Give prob of sequence dcd. 1/6 Calculate prob of sequence rd when two letters picked without replacement from {r: 1, x: 1, u: 2, d: 9}. 3/52 Calculate prob of sequence zm when two letters picked without replacement from {m: 1, y: 1, o: 1, r: 2, w: 8, z: 3}. 1/80 Three letters picked without replacement from {p: 3, i: 9}. Give prob of sequence iii. 21/55 Three letters picked without replacement from orpdpopdososlsd. What is prob of sequence sds? 3/455 Three letters picked without replacement from {u: 1, y: 3, b: 1, p: 2, a: 3, f: 1}. What is prob of sequence pba? 1/165 What is prob of sequence cch when three letters picked without replacement from {h: 3, c: 7}? 7/40 What is prob of sequence bb when two letters picked without replacement from ygbphpi? 
0 What is prob of sequence zx when two letters picked without replacement from {x: 6, n: 3, z: 1, a: 4}? 3/91 Four letters picked without replacement from {n: 3, c: 8, r: 7}. What is prob of sequence rnrc? 7/510 What is prob of sequence dvv when three letters picked without replacement from vddvv? 1/5 Calculate prob of sequence aaah when four letters picked without replacement from {a: 3, h: 1, q: 16}. 1/19380 Calculate prob of sequence ehh when three letters picked without replacement from eheeheehheehhhhh. 3/20 What is prob of sequence qqq when three letters picked without replacement from {q: 5}? 1 Three letters picked without replacement from {d: 1, c: 1, s: 4, v: 3, q: 2, t: 1}. What is prob of sequence dss? 1/110 What is prob of sequence vbe when three letters picked without replacement from uhphvbbhhehbp? 1/572 What is prob of sequence zr when two letters picked without replacement from {z: 2, h: 2, c: 10, r: 4, i: 1}? 4/171 What is prob of sequence ydd when three letters picked without replacement from ydyyydycydycyccdy? 9/340 Two letters picked without replacement from pbppyxbeebbbbbxeepb. What is prob of sequence pb? 16/171 What is prob of sequence yddu when four letters picked without replacement from udduduy? 3/140 Three letters picked without replacement from {u: 1, i: 1, r: 2, e: 1, t: 2}. What is prob of sequence ute? 1/105 Four letters picked without replacement from {x: 1, q: 1, m: 1, o: 1, z: 6, l: 3}. Give prob of sequence oqxl. 1/5720 Two letters picked without replacement from umm. Give prob of sequence um. 1/3 Four letters picked without replacement from {m: 5, y: 2}. Give prob of sequence mymm. 1/7 Calculate prob of sequence cwoc when four letters picked without replacement from wcswwnwewwwcwwcocwc. 25/11628 What is prob of sequence xrn when three letters picked without replacement from {r: 1, d: 1, s: 1, x: 1, n: 1}? 1/60 Calculate prob of sequence hhee when four letters picked without replacement from ehhhheh. 1/21 What is prob of sequence aahh when four letters picked without replacement from vavvvavvvvvvhhv? 1/8190 Calculate prob of sequence ba when two letters picked without replacement from {i: 6, b: 7, a: 1, e: 3}. 7/272 Two letters picked without replacement from ojeooeejoffffoejo. Give prob of sequence ef. 1/17 Calculate prob of sequence ya when two letters picked without replacement from ayavfwvl. 1/28 Two letters picked without replacement from cgabw. Give prob of sequence gw. 1/20 Two letters picked without replacement from ljiyiy. Give prob of sequence iy. 2/15 Four letters picked without replacement from {c: 2, s: 1, b: 2, l: 1, j: 3, v: 3}. What is prob of sequence jsjl? 1/1980 Two letters picked without replacement from ibcdiggm. Give prob of sequence mi. 1/28 Calculate prob of sequence ge when two letters picked without replacement from oeuoueogeeuouuuuuoe. 5/342 Three letters picked without replacement from {d: 2, t: 1, a: 1, c: 1, r: 1, o: 2}. Give prob of sequence rtd. 1/168 Calculate prob of sequence ff when two letters picked without replacement from {f: 5, g: 2, x: 6}. 5/39 Three letters picked without replacement from {u: 2, m: 5, g: 4}. Give prob of sequence umu. 1/99 Calculate prob of sequence rm when two letters picked without replacement from myyyyyyllrxhyhhlhhll. 1/380 Three letters picked without replacement from {b: 6, z: 1, l: 5, t: 6}. Give prob of sequence lbl. 5/204 What is prob of sequence tttt when four letters picked without replacement from {t: 5}? 1 Two letters picked without replacement from {y: 1, f: 8, s: 5, e: 2}. 
What is prob of sequence sf? 1/6 Two letters picked without replacement from xuxqyee. What is prob of sequence uq? 1/42 Two letters picked without replacement from {d: 2, q: 6, x: 2, j: 1, o: 7}. What is prob of sequence jd? 1/153 Four letters picked without replacement from {v: 4, t: 5, l: 8}. Give prob of sequence tvll. 1/51 Calculate prob of sequence wk when two letters picked without replacement from wcwcwckcccwcww. 3/91 What is prob of sequence yg when two letters picked without replacement from bbjbbxybbbxxbgbbxj? 1/306 What is prob of sequence tt when two letters picked without replacement from {t: 9}? 1 Three letters picked without replacement from {c: 1, i: 5, d: 11}. Give prob of sequence iii. 1/68 What is prob of sequence jk when two letters picked without replacement from kxjkjjkjjkkjkj? 3/13 Two letters picked without replacement from {w: 4, k: 2, o: 1, f: 6, c: 3, r: 3}. Give prob of sequence fw. 4/57 What is prob of sequence lk when two letters picked without replacement from ppllslskllpps? 5/156 Three letters picked without replacement from butwtwbwbtwwbwwtbie. What is prob of sequence wbi? 35/5814 Four
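All of these exercises are the same computation: multiply, draw by draw, the count of the wanted letter over the number of letters remaining. Below is a minimal sketch of that calculation in Go (the language used by the code earlier in this document); seqProb is a name introduced here for illustration, and the two printed checks correspond to answers quoted above.

package main

import (
	"fmt"
	"math/big"
)

// seqProb returns the probability of drawing the given sequence of letters,
// in order and without replacement, from a multiset of letter counts.
func seqProb(counts map[rune]int, seq string) *big.Rat {
	total := 0
	for _, n := range counts {
		total += n
	}
	remaining := make(map[rune]int, len(counts))
	for r, n := range counts {
		remaining[r] = n
	}
	p := big.NewRat(1, 1)
	for _, r := range seq {
		n := remaining[r]
		if n == 0 {
			// Letter absent or already exhausted: the sequence is impossible.
			return big.NewRat(0, 1)
		}
		p.Mul(p, big.NewRat(int64(n), int64(total)))
		remaining[r]--
		total--
	}
	return p
}

func main() {
	// Sequence "aa" from {a: 17, n: 1}: 17/18 * 16/17 = 8/9, as above.
	fmt.Println(seqProb(map[rune]int{'a': 17, 'n': 1}, "aa")) // 8/9
	// Sequence "xw" from {w: 5, i: 1, z: 1, x: 2}: 2/9 * 5/8 = 5/36, as above.
	fmt.Println(seqProb(map[rune]int{'w': 5, 'i': 1, 'z': 1, 'x': 2}, "xw")) // 5/36
}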
Story Transcript

Recently, Bill Maher accused liberals, American liberals, of being soft on Islam while they're being hypercritical of the American right. Here's what he had to say. …

Our guest on Reality Asserts Itself thinks all of that is Islamophobic. And now joining us in the studio to talk about all of this is Deepa Kumar. Deepa is an associate professor of media studies at Rutgers University and also serves as an officer in the teachers union. She's authored two books, including Outside the Box: Corporate Media, Globalization, and the UPS Strike, and her latest is Islamophobia and the Politics of Empire. She's currently working on her next book, titled Constructing the Terrorist Threat: The Cultural Politics of the National Security State. Thanks for joining us.

DEEPA KUMAR, ASSOC. PROF. MEDIA STUDIES AND MIDEAST STUDIES, RUTGERS UNIV.: Thank you for having me. It's an honor to be here.

JAY: Now, usually we start with your personal back story, but because I've already teased the Bill Maher thing, we're going to have to do the Bill Maher thing and then in the next segment do the back story,–

KUMAR: That sounds good.

JAY: –because I think everyone's going to want to hear how you trash Bill Maher and why.

KUMAR: Yes.

JAY: So what's wrong with what Bill Maher said? Bill Maher is saying that there's a great denial of rights in much of the Muslim world, not just Islamic State, but Saudi Arabia and so on, and there isn't a lot of loud critique about it. And certainly the American left spends a lot more time critiquing the domestic right. So doesn't he have a point?

KUMAR: Well, what Bill Maher said is a perfect example of what I call liberal Islamophobia, which is to take up liberal themes, such as human rights, women's rights, the rights of gays and lesbians, the right to free speech, and so on, make a case about the so-called Muslim world as if it were one big monolith in which these rights are uniformly denied to people, and then proceed to equate, in essence, the politics of ISIS with the politics of the 1.5 billion people who practice Islam, when in fact, if you actually look at Muslim-majority countries, which is the term that I prefer, they vary widely in terms of, for instance, the status of women. In Bangladesh, for instance, we've had two women heads of state voted into power, Khaleda Zia and Sheikh Hasina. But in Saudi Arabia, women aren't allowed to drive. And so of course there are these kinds of examples from Muslim-majority countries like Saudi Arabia, like Iran, where women's rights are restricted. But by focusing just on those and somehow equating this to a problem of Islam as opposed to a problem of politics, he winds up perpetuating this notion that all Muslims are backward, which is the very essence of Islamophobia.

JAY: He does something else, too. He ascribes, essentially, fundamentalism about the Quran, and then all believers in Islam are somehow also fundamentalists.

KUMAR: Right.

JAY: But he doesn't do that for Christianity, because there's just as much craziness in the Bible as there is in the Quran, I think. I don't know. Maybe it could be there's a little more. I don't know.

KUMAR: No, you know, any religious text, whether it's the Quran or the Bible and so on, can be interpreted in multiple ways. There are progressive interpretations of it and then there are reactionary interpretations of it. And therefore–you know, I mean, think of this example.
Imagine if for every act of terror committed by a Christian fundamentalist, a far-right militia person like Wade Michael Page, who went to Oak Creek, Wisconsin, and shot a gun at a Sikh temple and killed people and so on, now imagine if we were to generalize from Wade Michael Page to all of Christendom, to all of the United States, and say, now everybody else should denounce this man and distance themselves from him; otherwise, you're all culpable. Now, that's completely ridiculous, and of course that would be ridiculous if we talk about Christians in the West, but apparently it's completely acceptable when it comes to talking about Muslims. And so even President Obama said moderate Muslims should separate themselves from ISIS and from other groups and so on. Why? In what way, shape, or form are regular Muslims responsible for fundamentalism any more than regular Christians are responsible for Christian fundamentalism or regular Jews for Jewish fundamentalism? You get the idea. We see those people as being the extreme wing of a particular religious interpretation.

JAY: But is there a little reluctance in the American left, especially when it comes to the Islamic State, of being really harsh about what they are? I mean, you may not want to use the word fascistic, just because fascism has sort of more modern implications in terms of Europe, but it has a lot of things in common in terms of they seem quite capable of genocide, they seem quite capable of killing people just because of their beliefs.

KUMAR: Oh, of course.

JAY: I mean, it's a pretty barbaric thing.

KUMAR: No doubt.

JAY: Now, I just want to add one thing in saying that, just to the audience–they've heard me say this before. There is nothing the Islamic State has done that compares to the barbaric activity that the United States has done in Iraq and on and on, going right back to the atomic bombing of Japan. So if we're talking scale here, the Islamic State is a whisper of what the United States has done. That being said, one does not need to hold back on describing IS as a barbaric, brutal force that the people of the region on the whole, you would think, will despise, just as much as most Afghans despise the Taliban.

KUMAR: Absolutely. I completely agree with that. I think that–you know, in my book, actually, I have a pretty strong critique of the parties of political Islam, and I don't think we should be soft on that. There was a tendency back in the 1970s, you know, when Foucault goes to Iran and so on–particularly in France there was a tendency to somehow see the Islamists as being progressive and to paint the Iranian Revolution in progressive colors and so forth. But I don't think that tendency exists anymore. If anything, one of the first few pieces that I wrote on this topic, about the left and its attitudes towards political Islam, is about how ignorant the left was about Islamophobia and how it tended to equate the Islamists with all of Islam and all Muslims. So I think that there is, in the United States especially, a sort of blind spot around Islamophobia and a lack of a nuanced analysis of who these groups are, why they come to power, and what the historic conditions are for their rise.

JAY: It seems to me where he is–one part is where he extends this to anyone who believes in Islam, tries to make it about Islam itself, and comes up with some quotes from the Quran that are particularly backward; that's one thing.
But the second thing, which is this idea that he talks about our society having liberal values and free speech and this and that, you can argue what that's becoming, and with this national surveillance state and so on, but it's still true. I mean, compared to a lot of societies,–

KUMAR: Yes, but I'll take–.

JAY: –we can have this–well, let me just finish my point. I mean, we can have this conversation, and we're not going to walk out and get arrested.

KUMAR: Right.

JAY: That being said, it's at home you have those things. The United States has 50, 60 years of supporting the worst kind of dictatorships everywhere, and particularly in the Middle East, whether it's supporting the Saudis and so on and so on. There's no support for these kinds of values when it conflicts with American interests abroad.

KUMAR: Right. The part where I interrupted you, what I was going to say is that the narrative that gets constructed in the West, and that Bill Maher and people like that are echoing, is the clash of civilizations rhetoric, right, which was coined by Bernard Lewis and then popularized by Samuel Huntington, which is the idea that in the post-Cold War period, conflict would no longer be political, it would be cultural, and that there were seven or eight civilizations, each with their own unique cultures–the West and the Islamic world and so on–and that they are bound to conflict with each other. The problem–I mean, there are any number of problems. It's ahistorical, it's just wrong, and so on. But the problem I have with it, one problem, at least, is that it negates the fact that the rights that people do have in this country, whether you're talking about workers' rights, the rights of African Americans to vote, the right of women to vote, this didn't happen automatically because some benevolent president decided. It's people's movements, it's women fighting for 100 years and their male allies, that caused suffrage, right? And, therefore, somehow to assume the liberal mantle as being the natural inheritance of what it means to be the West, starting from Greece to the present, and seeing the East, particularly Muslim-majority countries, as being mired in barbarism, this is the classic language of colonialism, which, whether Bill Maher knows it or not, is what he's echoing. And, in fact, even in the East you've had–you know, in Iran, in Egypt, you've had feminist movements, you've had women's rights movements, which we barely ever hear of.

JAY: Well, the roots of this go right back to the early days of the Catholic Church and the fighting against–the Crusades, and then the Ottoman Empire. I mean, it wasn't about liberal values then. It was about the true god, and they got the bad god, and we're going to fight it out. But it's kind of–the roots of it go very deep.

KUMAR: Absolutely, which is why my book is called Islamophobia and the Politics of Empire and it starts with the Crusades, because every empire needs an enemy. And at least one of the motivations for the Crusades was to create this ideal Muslim enemy, which could then motivate people to go out and fight wars. But it was always–there was always a very contradictory notion of how to look at the East, because even while the Crusades are going on, you have the most horrific stereotypes of Muslims and all the rest of it. In al-Andalus, which is the name given to Muslim rule in the Iberian Peninsula–Spain and Portugal–you see the most advanced civilization.
Remember, Europe is in the Dark Ages at this time, right, and here in al-Andalus you have street lighting, you have developments in science, medicine, and so on. In that region, Europeans had a very positive idea of Muslims and what they developed, because they had actual contact with them. So, typically, this idea of a Muslim enemy works when people have never met anybody from the Middle East or North Africa or have never traveled. And then the stereotypes can work, just as they're working in the case of ISIS and scaring people to death.

JAY: But I think one of the things that always gets missed in this conversation in the mainstream media is that these are class societies we're talking about, the Muslim societies, Arab societies. And in many of them, the classes that are in power are barbaric and they are backward, and they do call up the worst of whatever you can call up in Islam, the same way you can find it in Christian fanatical regimes in Latin America and at other times.

KUMAR: Absolutely.

JAY: But you do get–I go back to Maher's point, even though I think–I mean, I agree with you; the way he formulates it is Islamophobic. But there is a kind of reluctance. Like, even on Iran, for example–and this is not so much an Islamic issue, although it's a bit of it–much of the left in the United States is very reluctant to say anything critical of the Iranian regime. The Iranian regime suppresses–I'm not talking liberal now. Democratic liberals, of course they trash Iran. I'm talking about the left, who only want to talk about U.S. sanctions against Iran and the U.S. aggression against Iran and so on and so on, and they just won't say a word about the way the Iranian state suppresses its own people. And you get a bit of that. I think that's part of what Maher is trying to put his finger on, even though I think he's doing it in a brutal and–you know.

KUMAR: I mean, I think there are two ways of doing this, right? On the one hand, there's the Bill Maher way of doing it, which is then to present the U.S. government as somehow a force for good in the region, right? This is the old white man's burden.

JAY: But when he talks about the Iraq War, he'll trash American policy on Iraq. It's on this issue where he doesn't like what he thinks is a kind of hypocrisy. I mean, I think he's completely naive in how he formulates it.

KUMAR: I would formulate it differently, which is that I certainly have critiques of the dictatorships and of the censorship and of the violation of workers' rights, for instance, in Iran and so on. But the fact of the matter is that you've had very important people from Iran, like Shirin Ebadi, the Nobel Peace Prize winner, who have exposed the extent of civil rights violations, human rights violations, and so on. And the key, I think, is to have solidarity, international solidarity, and to speak about the problems, whether it's workers' rights, women's rights, what have you, in terms of how we can get together at a grassroots level to fight back. That's a very different kind of framing, as opposed to Bill Maher saying, well, all of these Muslim societies are no different from ISIS.

JAY: I think part of it is–and I think you mentioned this in the beginning–it's completely ahistorical, what he says. Like, if you're in any of the Muslim countries and you see what American policy in the Middle East has been, you are going to–unless you're in the elite and you somehow benefit from it, but even amongst the elites I think there's going to be resentment, and you're going to have–.
The thing is: what else is there to have some sympathy for than the Islamist opposition? Why? Because American policy and Israeli policy destroyed the secular opposition.

KUMAR: That's right. That's right. And this has been happening through the course of the Cold War: when it was clear that the secular nationalists, whether you're talking about Gamal Abdel Nasser in Egypt or Mohammad Mosaddegh in Iran, couldn't be co-opted to serve the U.S.'s interests in the region, the key policy from 1958 on, with the Eisenhower Doctrine, was to create an Islamic bulwark to act as a counter to secular nationalism. And you read some of the accounts of what the CIA was doing–they're putting poison into Nasser's cigarettes, they're trying to put poison into his chocolates, some of these sorts of awful things that you think happen only in the movies. But at a very systemic level what they're doing is funding and sponsoring all sorts of radical Islamist groups, all the way from Iran and across the region.

JAY: Well, and most importantly it starts with Roosevelt, the deal with the Sauds. I mean, the Saudis are the heart of all of this. And this was the deal: that the Saudis would use the defense of Mecca to be the force to spread Wahhabism throughout the region. And all this extremism is part of American policy.

KUMAR: Absolutely. In fact, the language that they used in the State Department is that they wanted the Saudi monarch to be an "Islamic Pope" and to use the legitimacy of being the guardians of Mecca and Medina to actually push people away from secularism. So absolutely. And Saudi Arabia had a very systematic program of Islamization, whether it was distributing Qurans for free or giving tons of petrodollars for setting up madrasas all over. Not just in the Middle East: even in Pakistan they set up schools and colleges and sent their preachers there and so forth. And the end result is the mujahideen, is al-Qaeda. And, I mean, I think that's really important to bring up, because there's a tendency to somehow think of the parties of political Islam as being the sort of logical outcome of this region–you know, this is all that Muslims can produce. But if you don't talk about how left-secular alternatives were systematically crushed by the U.S., by Saudi Arabia, by Israel, and so on, then you don't get a sense that these are people just like anybody else who have a range of politics.

JAY: And not only left-secular; they destroyed in Afghanistan a more normal capitalist development. They had a king who was a modernizer. They wanted to have a more modern capitalism. And they threw it all out the window to suck Russia into a war, and then armed all the jihadists and, I mean, village elders who didn't know anything–gave them rocket launchers, and they became the new power brokers. And then you wonder where the Taliban comes from.

KUMAR: Right. In fact, every single reformist and pro-democratic movement that has come into being in the Middle East and North Africa, the U.S. has always been on the wrong side of it, even in Saudi Arabia. There was a modest movement called the Free Princes Movement, which wanted a constitutional monarchy. Would the U.S. have any of it? Absolutely not. They immediately dispatched forces to make sure those forces were marginalized. There was a workers' movement in the Shia eastern region trying to form unions, but Aramco, at that time American-owned, would have none of it.
And so they got rid of that. So with every step towards creating rights for a whole group of people, from workers to women and so on and so forth, the U.S. has always been on the wrong side, including after the Arab Spring of 2011, right? So you look at the role that the U.S. has played: support the dictators till the very last second, and then back the counterrevolutionaries, whether it's Egypt and the military, or giving the green light to Saudi Arabia to crush the resistance in Bahrain, what have you. The U.S. has always–.

JAY: Or, first of all, make a deal with Qatar to have the Muslim Brotherhood.

KUMAR: Exactly.

JAY: And then let–they knew if that were to perpetuate as a regime, they weren't headed towards any grand democracy.

KUMAR: Right. Right. And so I think that framework is important, because then you start to see that the people of the Middle East and North Africa are just like everybody else. They want economic rights. They want political rights and so on. And if the U.S. just stopped interfering, we would see a flowering of a different kind of society.

JAY: Okay. So here's a challenge to Bill Maher. If somebody out there knows Bill Maher, get him to watch this, 'cause, you know, he's not bad on a lot of things. You know, he seems to be kind of evolving in his thinking. But he seems rather stuck on this issue. So anybody who knows Bill Maher–and we know we've got lots of viewers in L.A., in Hollywood–shove this thing in front of his eyeballs and see if he has the courage to have Deepa, or Deepa and me, on the show and let's talk about this, because so far he's kind of buying into some ignorance. So we're going to continue this conversation. Please join us for the next part of Reality Asserts Itself on The Real News Network.

End

DISCLAIMER: Please note that transcripts for The Real News Network are typed from a recording of the program. TRNN cannot guarantee their complete accuracy.

Related Bios

Deepa Kumar is an Associate Professor of Media Studies and Middle Eastern Studies at Rutgers University. Her work is driven by an active engagement with the key issues that characterize our era–neoliberalism and imperialism. Her first book, Outside the Box: Corporate Media, Globalization and the UPS Strike (University of Illinois Press, 2007), is about the…
Q: Compact command line argument parser

So, I decided to write my own little command line argument parser for various other projects I work on. I am aware that there are many good command line parser libraries, but I wrote my own anyway (practice & implementation-specific reasons). The parser works fine, but I have a feeling that it can be improved a lot. Mainly the following things come to mind:

- The actual parser, CommandLineParser.cs. It seems very badly structured and I find it hard to read myself.
- Abstraction. I wonder if I can abstract it a bit more without making it a pain to use? Maybe by introducing some interfaces?
- Naming. I went with Option for the command line switch and with Value for the possible parameters. Are my methods/classes self-descriptive?
- Optimizations. I am sure there are segments that can be done more efficiently, mainly in CommandLineParser.ParseArguments(string[] args).

A couple of things to note:

- I'd like to keep the structure of CommandLineValue.cs and CommandLineOption.cs mostly the same, as they are part of a plugin architecture used to communicate command line arguments between the plugins and the main application.
- No usage of Attributes to store the command line options.

I did write a couple of unit tests to verify the parser's functionality. Despite them not being the main classes to review, I appreciate feedback there too :)

Parser:

public class CommandLineParser { /// <summary> /// Defines all possible command line options the plugin can process /// </summary> public List<CommandLineOption> SupportedOptions { get; } /// <summary> /// Initialize the commandline parser with a list of commandline options the plugin exposes /// </summary> /// <param name="supportedOptions"></param> public CommandLineParser(List<CommandLineOption> supportedOptions) { SupportedOptions = supportedOptions; } /// <summary> /// Parses the command line arguments and returns a list of commandline values that can be passed to the /// plugin for further processing.
The function also handles an invalid number and/or format of options and values, /// as well as missing required arguments etc. /// </summary> /// <param name="args">The arguments to parse</param> /// <returns>A list of parsed commandline values + options</returns> /// <exception cref="InvalidCommandLineOptionException"></exception> /// <exception cref="InsufficientCommandLineValuesException"></exception> /// <exception cref="InvalidCommandLineValueException"></exception> /// <exception cref="MissingRequiredCommandLineOptionException"></exception> public IEnumerable<CommandLineValue> ParseArguments(string[] args) { var result = new List<CommandLineValue>(); if (args.Length == 0) return Enumerable.Empty<CommandLineValue>(); // Process all command line arguments for (int i = 0; i < args.Length; i++) { CommandLineOption option = null; if (!IsSupportedOption(args[i], out option)) throw new InvalidCommandLineOptionException($"{args[i]} is not a valid command line option"); // Verify if the option expects additional values if (HasAdditionalValues(option)) { // Check if enough additional values are given int additionalValues = option.ParameterTypes.Count; if (i + additionalValues + 1 > args.Length) throw new InsufficientCommandLineValuesException( $"{args[i]} expects {additionalValues} values."); // Check if the additional values are in the right format // ToDo: Find more elegant solution var values = args.ToList().GetRange(i + 1, additionalValues); var types = option.ParameterTypes.ToList(); var castedValues = values.Zip(types, (value, type) => { try { return Convert.ChangeType(value, type); } catch { throw new InvalidCommandLineValueException( $"Cannot cast value {value} to type {type}"); } }); result.Add(new CommandLineValue(option, castedValues.ToList())); // Increase i to skip to the next option i += additionalValues; } else { result.Add(new CommandLineValue(option, null)); } } // Collect required arguments List<string> requiredOptions = new List<string>(); foreach (var option in SupportedOptions) { if (option.Required) foreach (var tag in option.Tags) { requiredOptions.Add(tag); } } // Check that no required arguments are missing (or occur twice) var missing = GetMissingRequiredArgs<string>(requiredOptions, args.ToList()); if (missing == null) return result; throw new MissingRequiredCommandLineOptionException( $"The required argument(s) {string.Join(",", missing)} are missing"); } /// <summary> /// Check that all required options are used and that they (the required options) don't occur multiple times /// </summary> /// <param name="required">A list of required options</param> /// <param name="arguments">The args to check</param> /// <typeparam name="T">Any primitive type</typeparam> /// <exception cref="DuplicateRequiredCommandLineOptionException">Thrown if any distinct required argument occurs more than once</exception> /// <returns>A list of missing required args, if any.
Null if none are missing.</returns> static List<T> GetMissingRequiredArgs<T>(List<T> required, List<T> arguments) { // convert to a dictionary where each required item is a key mapped to its occurrence count var requiredDict = required.ToDictionary(k => k, v => 0); foreach (var item in arguments) { if (!requiredDict.ContainsKey(item)) continue; requiredDict[item]++; // count the occurrences of this required option if (requiredDict[item] <= 1) continue; throw new DuplicateRequiredCommandLineOptionException( $"Required option {item} appeared more than once!"); } var result = new List<T>(); // now we are checking for missing items foreach (var key in requiredDict.Keys) { if (requiredDict[key] == 0) { result.Add(key); } } return result.Any() ? result : null; } /// <summary> /// Verify if a given option is part of the supported options /// </summary> /// <returns>true if the option is supported, otherwise false</returns> private bool IsSupportedOption(string optionIdentifier, out CommandLineOption option) { for (var index = 0; index < SupportedOptions.Count; index++) { var supportedOption = SupportedOptions[index]; if (supportedOption.Tags.Any(tag => tag == optionIdentifier)) { option = supportedOption; return true; } } option = null; return false; } /// <summary> /// Indicates if a command line option has multiple values or if it's just a flag /// </summary> /// <param name="option">Command line option to check</param> /// <returns>true if the option has multiple values, otherwise false</returns> private bool HasAdditionalValues(CommandLineOption option) { var noParameters = option.ParameterTypes == null || option.ParameterTypes.Count == 0; return !noParameters; } }

Classes to store commandline information:

public class CommandLineOption { /// <summary> /// The identifier of the commandline option, e.g. -h or --help /// </summary> public ICollection<string> Tags { get; } /// <summary> /// Description of the commandline option /// </summary> public string Description { get; } /// <summary> /// Indicates if the argument is optional or required /// </summary> public bool Required { get; } /// <summary> /// Types of the additional provided values such as directory paths, values etc.
/// </summary> public IList<Type> ParameterTypes { get; } /// <summary> /// Create a new true/false commandline option /// </summary> /// <param name="tags">Identifier of the command line option</param> /// <param name="description">Description of the command line option</param> /// <param name="required">Indicates if the command line option is optional or not</param> public CommandLineOption(IEnumerable<string> tags, string description, bool required = false) { Tags = tags.ToList(); Description = description; Required = required; } /// <summary> /// Create a new commandline option that expects additional typed values /// </summary> /// <param name="tags">Identifier of the command line option</param> /// <param name="description">Description of the command line option</param> /// <param name="required">Indicates if the command line option is optional or not</param> /// <param name="parameterTypes">Types of the additional values the option expects</param> public CommandLineOption(IEnumerable<string> tags, string description, bool required = false, params Type[] parameterTypes): this(tags, description, required) { ParameterTypes = new List<Type>(parameterTypes); } }

public class CommandLineValue : IEqualityComparer<CommandLineValue> { /// <summary> /// Holds all the values specified after a command line option /// </summary> public IList<object> Values { get; } /// <summary> /// The command line option the value(s) belong to /// </summary> public CommandLineOption Option { get; set; } /// <summary> /// Stores the values that correspond to a commandline option /// </summary> /// <param name="option">The commandline option the values refer to</param> /// <param name="values">The values that are stored</param> public CommandLineValue(CommandLineOption option, IList<object> values) { Option = option; Values = values; } public bool Equals(CommandLineValue x, CommandLineValue y) { if (x.Option.Description == y.Option.Description && x.Option.Required == y.Option.Required && x.Option.Tags.SequenceEqual(y.Option.Tags) && x.Option.ParameterTypes.SequenceEqual(y.Option.ParameterTypes) && x.Values.SequenceEqual(y.Values)) return true; return false; } public int GetHashCode(CommandLineValue obj) { return base.GetHashCode(); } }

Custom Exception Classes:

public class DuplicateRequiredCommandLineOptionException : Exception { public DuplicateRequiredCommandLineOptionException(string message) : base(message) { } } public class InsufficientCommandLineValuesException : Exception { public InsufficientCommandLineValuesException(string message) : base(message) { } } public class InvalidCommandLineOptionException : Exception { public InvalidCommandLineOptionException(string message) : base(message) { } } public class InvalidCommandLineValueException : Exception { public InvalidCommandLineValueException(string message) : base(message) { } } public class MissingRequiredCommandLineOptionException : Exception { public MissingRequiredCommandLineOptionException(string message) : base(message) { } }

Unit Tests:

public class CommandLineParserTests { [Fact] public void ParseDuplicateRequiredArguments() { var args = new[] {"--randomize", "-o", "/home/user/Documents", "--randomize", "-d"}; var supportedOptions = new List<CommandLineOption> { new CommandLineOption( new[] {"-r", "--randomize"}, "Random flag", true), new CommandLineOption( new[] {"-o", "--output-directory"}, "Specifies the output directory", true, typeof(string)), new CommandLineOption( new[] {"-d", "--dummy"}, "Just another unused flag"), }; var parser = new CommandLineParser(supportedOptions); Assert.Throws<DuplicateRequiredCommandLineOptionException>(() =>
parser.ParseArguments(args) ); } [Fact] public void ParseMissingRequiredArguments() { var args = new[] {"--randomize", "--output-directory", "/home/user/Documents"}; var supportedOptions = new List<CommandLineOption> { new CommandLineOption( new[] {"-r", "--randomize"}, "Random flag"), new CommandLineOption( new[] {"-o", "--output-directory"}, "Specifies the output directory", true, typeof(string)), new CommandLineOption( new[] {"-d", "--dummy"}, "Just another unused flag"), }; var parser = new CommandLineParser(supportedOptions); Assert.Throws<MissingRequiredCommandLineOptionException>(() => parser.ParseArguments(args) ); } [Fact] public void ParseMatchingTypeCommandLineValues() { var args = new[] {"--log", "info", "1337", "3.1415"}; var supportedOptions = new List<CommandLineOption> { new CommandLineOption( new[] {"-l", "--log"}, "Logs info from exactly three data sources", false, typeof(string), typeof(int), typeof(float)) }; var parser = new CommandLineParser(supportedOptions); var expectedValue = new CommandLineValue(new CommandLineOption( new[] {"-l", "--log"}, "Logs info from exactly three data sources", false, typeof(string), typeof(int), typeof(float)), new object[] {"info", 1337, (float) 3.1415}); var actualValue = parser.ParseArguments(args).ToList()[0]; Assert.True(expectedValue.Equals(actualValue, expectedValue)); } [Fact] public void ParseMismatchingTypeCommandLineValues() { var args = new[] {"--log", "info", "1337", "3.1415"}; var supportedOptions = new List<CommandLineOption> { new CommandLineOption( new[] {"-l", "--log"}, "Logs info from exactly three data sources", false, typeof(string), typeof(int), typeof(long)), }; var parser = new CommandLineParser(supportedOptions); Assert.Throws<InvalidCommandLineValueException>(() => parser.ParseArguments(args) ); } [Fact] public void ParseInsufficientCommandLineValues() { var args = new[] {"-l", "info", "info2"}; var supportedOptions = new List<CommandLineOption> { new CommandLineOption( new[] {"-l", "--log"}, "Logs info from exactly three data sources", false, typeof(string), typeof(string), typeof(string)), }; var parser = new CommandLineParser(supportedOptions); Assert.Throws<InsufficientCommandLineValuesException>(() => parser.ParseArguments(args) ); } [Fact] public void ParseInvalidCommandLineOption() { var args = new[] {"--force"}; var supportedOptions = new List<CommandLineOption> { new CommandLineOption(new[] {"-h", "--help"}, "Show the help menu"), }; var parser = new CommandLineParser(supportedOptions); Assert.Throws<InvalidCommandLineOptionException>(() => parser.ParseArguments(args) ); } [Fact] public void ParseNoCommandLineOptions() { var args = new string[] { }; var parser = new CommandLineParser(null); var result = parser.ParseArguments(args); Assert.Equal(Enumerable.Empty<CommandLineValue>(), result); } }

I appreciate all suggestions. Feel free to be very nitpicky. :)

A:

Design Issues

There are a couple of issues concerning your design.

Lack of specification

It is unclear which features should be supported by your API. This makes reviewing a bit fuzzy.

Dependencies

The parser depends on arguments already pre-parsed correctly by a shell. This limits the control you have over command line parsing.

var args = new[] {"--log", "info", "1337", "3.1415"};

Consider breaking free from the shell and taking on pre-parsing yourself.

var args = "--log info 1337 3.1415"; // <- unparsed command line string

Pollution

The API mixes language constructs with user-defined options.
new CommandLineOption(new[] {"-l", "--log"}

You do not want - and -- to be part of the Tags. These are delimiters in the lexing phase of your parser. By separating lexing from parsing, you could extend the API more fluently by allowing other command line languages, for instance /log.

Review

Exception Classes

Define a base class CommandLineException for all your exceptions. This way, you allow calling code to determine the granularity of exception handling. Since you make several custom exceptions, take advantage of storing some data on them. DuplicateRequiredCommandLineOptionException could store the duplicate option, and so on. Also provide constructors that take an inner exception.

public class DuplicateRequiredCommandLineOptionException : CommandLineException { public CommandLineOption Option { get; } // include more constructors .. public DuplicateRequiredCommandLineOptionException(string message, CommandLineOption option) : base(message) { Option = option; } }

CommandLineOption & CommandLineValue

You have indicated that you don't want to see too many changes for legacy reasons. I do propose to override the default Equals and GetHashCode on both classes and substitute IEqualityComparer with IEquatable. This way, you could improve your code.

public bool Equals(CommandLineValue other) { return Option.Equals(other.Option) && Values.SequenceEqual(other.Values); }

CommandLineParser

You have indicated yourself that you have problems parsing a flattened list into a hierarchical structure. There are common techniques for handling such situations. Have a look at Abstract Syntax Tree. You should create a syntax tree from the provided string[] args. This can be done with a Stack and an Iterator. There are tons of examples online of how to create an AST.

// Check if the additional values are in the right format // ToDo: Find more elegant solution var values = args.ToList().GetRange(i + 1, additionalValues); var types = option.ParameterTypes.ToList();

The second issue is - what I called pollution before - the lack of separation of concerns. Your API is basically a simple compiler. The link shows you it's good practice to provide the following phases when building a compiler: pre-processing, lexing, parsing, optimizing, pretty printing. Your API should definitely include lexing and parsing as separate phases.

- lexing: create command line tokens and strip all the keywords and language-specific delimiters
- parsing: create an AST from the lexed tokens, then create CommandLineValue instances from the AST

Conclusion

In the end, the quality of the API depends on a good specification covered by many unit tests. I feel you haven't established this yet.
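As an addendum to the Equals suggestion above, here is a minimal sketch (not the author's code) of how CommandLineValue could pair the proposed IEquatable<CommandLineValue> implementation with a consistent GetHashCode, assuming CommandLineOption also receives matching Equals/GetHashCode overrides, which the reviewed code does not yet have. System.HashCode needs .NET Core 2.1+ or the Microsoft.Bcl.HashCode package; on older targets the hashes would be combined manually.

```
public class CommandLineValue : IEquatable<CommandLineValue>
{
    // ... constructor and properties as in the original class ...

    public override bool Equals(object obj) =>
        obj is CommandLineValue other && Equals(other);

    public bool Equals(CommandLineValue other)
    {
        if (other is null) return false;
        // Note: flag options store null Values in the original design,
        // so a production version would need null guards here.
        return Option.Equals(other.Option) && Values.SequenceEqual(other.Values);
    }

    public override int GetHashCode()
    {
        // Equals compares Values element by element, so the hash must
        // incorporate each element too, keeping equal instances hashing equally.
        var hash = new HashCode();
        hash.Add(Option);
        foreach (var value in Values)
            hash.Add(value);
        return hash.ToHashCode();
    }
}
```

With overrides like these in place, the test assertion can become the more idiomatic Assert.Equal(expectedValue, actualValue).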
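Likewise, to make the separate lexing phase concrete, here is one possible shape as a minimal sketch only: the Token, TokenKind, and CommandLineLexer names are hypothetical and not part of the reviewed API. The point is that delimiters are stripped during lexing, so the parser only ever matches bare tag names, and supporting another syntax such as /log would mean swapping the lexer rather than touching the parser.

```
using System;
using System.Collections.Generic;

public enum TokenKind { Option, Value }

public sealed class Token
{
    public TokenKind Kind { get; }
    public string Text { get; }

    public Token(TokenKind kind, string text)
    {
        Kind = kind;
        Text = text;
    }
}

public static class CommandLineLexer
{
    /// <summary>
    /// Splits raw arguments into option and value tokens, stripping the
    /// "-"/"--" delimiters so later phases only see bare tag names.
    /// </summary>
    public static IEnumerable<Token> Tokenize(IEnumerable<string> args)
    {
        foreach (var arg in args)
        {
            if (arg.StartsWith("--", StringComparison.Ordinal))
                yield return new Token(TokenKind.Option, arg.Substring(2));
            else if (arg.Length > 1 && arg[0] == '-' && !char.IsDigit(arg[1]))
                // The digit check keeps negative numbers like "-1" as values.
                yield return new Token(TokenKind.Option, arg.Substring(1));
            else
                yield return new Token(TokenKind.Value, arg);
        }
    }
}
```

A parsing phase can then walk these tokens with an iterator, using a stack of expected value counts to build the syntax tree mentioned above before mapping it to CommandLineValue instances.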
Exclusive: How Elizabeth Holmes's House of Cards Came Tumbling Down In a searing investigation into the once lauded biotech start-up Theranos, Nick Bilton discovers that its precocious founder defied medical experts—even her own chief scientist—about the veracity of its now discredited blood-testing technology. She built a corporation based on secrecy in the hope that she could still pull it off. Then, it all fell apart. The War Room It was late morning on Friday, October 16, when Elizabeth Holmes realized that she had no other choice. She finally had to address her employees at Theranos, the blood-testing start-up that she had founded as a 19-year-old Stanford dropout, which was now valued at some $9 billion. Two days earlier, a damning report published in The Wall Street Journal had alleged that the company was, in effect, a sham—that its vaunted core technology was actually faulty and that Theranos administered almost all of its blood tests using competitors' equipment. The article created tremors throughout Silicon Valley, where Holmes, the world's youngest self-made female billionaire, had become a near universally praised figure. Curiosity about the veracity of the Journal story was also bubbling throughout the company's mustard-and-green Palo Alto headquarters, which was nearing the end of a $6.7 million renovation. Everyone at Theranos, from its scientists to its marketers, wondered what to make of it all. For two days, according to insiders, Holmes, who is now 32, had refused to address these concerns. Instead, she remained largely holed up in a conference room, surrounded by her inner circle. Half-empty food containers and cups of stale coffee and green juice were strewn on the table as she strategized with a phalanx of trusted advisers, including Ramesh "Sunny" Balwani, then Theranos's president and C.O.O.; Heather King, the company's general counsel; lawyers from Boies, Schiller & Flexner, the intrepid law firm; and crisis-management consultants. Most of the people in the war room had been there for two days and nights straight, according to an insider, leaving mainly to shower or make a feeble attempt at a couple of hours of shut-eye. There was also an uncomfortable chill in the room. At Theranos, Holmes preferred that the temperature be maintained in the mid-60s, which facilitated her preferred daily uniform of a black turtleneck with a puffy black vest—a homogeneity that she had borrowed from her idol, the late Steve Jobs. Holmes had learned a lot from Jobs. Like Apple, Theranos was secretive, even internally. Just as Jobs had famously insisted at 1 Infinite Loop, 10 minutes away, that departments were generally siloed, Holmes largely forbade her employees from communicating with one another about what they were working on—a culture that resulted in a rare form of executive omniscience. At Theranos, Holmes was founder, C.E.O., and chairwoman. There wasn't a decision—from the number of American flags framed in the company's hallway (they are ubiquitous) to the compensation of each new hire—that didn't cross her desk. And like Jobs, crucially, Holmes also paid indefatigable attention to her company's story, its "narrative." Theranos was not simply endeavoring to make a product that sold off the shelves and lined investors' pockets; rather, it was attempting something far more poignant.
In interviews, Holmes reiterated that Theranos’s proprietary technology could take a pinprick’s worth of blood, extracted from the tip of a finger, instead of intravenously, and test for hundreds of diseases—a remarkable innovation that was going to save millions of lives and, in a phrase she often repeated, “change the world.” In a technology sector populated by innumerable food-delivery apps, her quixotic ambition was applauded. Holmes adorned the covers of Fortune, Forbes, and Inc., among other publications. She was profiled in The New Yorker and featured on a segment of Charlie Rose. In the process, she amassed a net worth of around $4 billion. Theranos blood-testing machines. By Jim Wilson/The New York Times/Redux. One of the only journalists who seemed unimpressed by this narrative was John Carreyrou, a recalcitrant health-care reporter from The Wall Street Journal. Carreyrou came away from The New Yorker story surprised by Theranos’s secrecy—such behavior was to be expected at a tech company but not a medical operation. Moreover, he was also struck by Holmes’s limited ability to explain how it all worked. When The New Yorker reporter asked about Theranos’s technology, she responded, somewhat cryptically, “a chemistry is performed so that a chemical reaction occurs and generates a signal from the chemical interaction with the sample, which is translated into a result, which is then reviewed by certified laboratory personnel.” Shortly after reading the article, Carreyrou started investigating Theranos’s medical practices. As it turned out, there was an underside to Theranos’s story that had not been told—one that involved questionable lab procedures and results, among other things. Soon after Carreyrou began his reporting, David Boies, the superstar lawyer—and Theranos board member—who had taken on Bill Gates in the 1990s and represented Al Gore during the 2000 Florida recount case, visited the Journal newsroom for a five-hour meeting. Boies subsequently returned to the Journal to meet with the paper’s editor in chief, Gerard Baker. Eventually, on October 16, 2015, the Journal published the article: HOT STARTUP THERANOS HAS STRUGGLED WITH ITS BLOOD-TEST TECHNOLOGY. During the two days in the war room, according to numerous insiders, Holmes heard various response strategies. The most cogent suggestion advocated enlisting members of the scientific community to publicly defend Theranos—its name an amalgam of “therapy” and “diagnosis.” But no scientist could credibly vouch for Theranos. Under Holmes’s direction, the secretive company had barred other scientists from writing peer-review papers on its technology. Absent a plan, Holmes embarked on a familiar course—she doubled down on her narrative. She left the war room for her car—she is often surrounded by her security detail, which sometimes numbers as many as four men, who (for safety reasons) refer to the young C.E.O. as “Eagle 1”—and headed to the airport. (She has been known to fly alone on a $6.5 million Gulfstream G150.) Holmes subsequently took off for Boston to attend a luncheon for a previously scheduled appearance at the Harvard Medical School Board of Fellows, where she would be honored as an inductee. During the trip, Holmes fielded calls from her advisers in the war room. She and her team decided on an interview with Jim Cramer, the host of CNBC’s Mad Money, with whom she had a friendship that dated from a previous interview. It was quickly arranged. Cramer generously began the interview by asking Holmes what had happened. 
Holmes, who talks slowly and deliberately, and blinks with alarming irregularity, replied with a variation of a line from Jobs. “This is what happens when you work to change things,” she said, her long blond hair tousled, her smile amplified by red lipstick. “First they think you’re crazy, then they fight you, and then, all of a sudden, you change the world.” When Cramer asked Holmes for a terse true-or-false answer about an accusation in the article, she replied with a meandering 198-word retort. By the time she returned to Palo Alto, the consensus was that it was time, at last, for Holmes to address her hundreds of employees. A company-wide e-mail instructed technicians in lab coats, programmers in T-shirts and jeans, and a slew of support staff to meet in the cafeteria. There, Holmes, with Balwani at her side, began an eloquent speech in her typical baritone, explaining to her loyal colleagues that they were changing the world. As she continued, Holmes grew more impassioned. The Journal, she said, had gotten the story wrong. Carreyrou, she insisted, with a tinge of fury, was simply picking a fight. She handed the stage to Balwani, who echoed her sentiments. After he wrapped up, the leaders of Theranos stood before their employees and surveyed the room. Then a chant erupted. “Fuck you . . .,” employees began yelling in unison, “Carreyrou.” It began to grow louder still. “Fuck you, Carreyrou!” Soon men and women in lab coats, and programmers in T-shirts and jeans, joined in. They were chanting with fervor: “Fuck you, Carreyrou!,” they cried out. “Fuck you, Carreyrou! Fuck. You. Carrey-rou!” The Game In Silicon Valley, every company has an origin story—a fable, often slightly embellished, that humanizes its mission for the purpose of winning over investors, the press, and, if it ever gets to that point, customers, too. These origin stories can provide a unique, and uniquely powerful, lubricant in the Valley. After all, while Silicon Valley is responsible for some truly astounding companies, its business dealings can also replicate one big confidence game in which entrepreneurs, venture capitalists, and the tech media pretend to vet one another while, in reality, functioning as cogs in a machine that is designed to not question anything—and buoy one another all along the way. It generally works like this: the venture capitalists (who are mostly white men) don’t really know what they’re doing with any certainty—it’s impossible, after all, to truly predict the next big thing—so they bet a little bit on every company that they can with the hope that one of them hits it big. The entrepreneurs (also mostly white men) often work on a lot of meaningless stuff, like using code to deliver frozen yogurt more expeditiously or apps that let you say “Yo!” (and only “Yo!”) to your friends. The entrepreneurs generally glorify their efforts by saying that their innovation could change the world, which tends to appease the venture capitalists, because they can also pretend they’re not there only to make money. And this also helps seduce the tech press (also largely comprised of white men), which is often ready to play a game of access in exchange for a few more page views of their story about the company that is trying to change the world by getting frozen yogurt to customers more expeditiously. The financial rewards speak for themselves. Silicon Valley, which is 50 square miles, has created more wealth than any place in human history. In the end, it isn’t in anyone’s interest to call bullshit. 
When Elizabeth Holmes emerged on the tech scene, around 2003, she had a preternaturally good story. She was a woman. She was building a company that really aimed to change the world. And, as a then dark-haired 19-year-old first-year at Stanford University's School of Chemical Engineering, she already comported herself in a distinctly Jobsian fashion. She adopted black turtlenecks, would boast of never taking a vacation, and would come to practice veganism. She quoted Jane Austen by heart and referred to a letter that she had written to her father when she was nine years old insisting, "What I really want out of life is to discover something new, something that mankind didn't know was possible to do." And it was this instinct, she said, coupled with a childhood fear of needles, that led her to come up with her revolutionary company. Holmes had indeed mastered the Silicon Valley game. Revered venture capitalists, such as Tim Draper and Steve Jurvetson, invested in her; Marc Andreessen called her the next Steve Jobs. She was plastered on the covers of magazines, featured on TV shows, and offered keynote-speaker slots at tech conferences. (Holmes spoke at Vanity Fair's 2015 New Establishment Summit less than two weeks before Carreyrou's first story appeared in the Journal.) In some ways, the near-universal adoration of Holmes reflected her extraordinary comportment. In others, however, it reflected the Valley's own narcissism. Finally, it seemed, there was a female innovator who was indeed able to personify the Valley's vision of itself—someone who was endeavoring to make the world a better place. The original Theranos laboratory, in Palo Alto, 2014. By Drew Kelly. Holmes's real story, however, was a little more complicated. When she first came up with the precursor to the idea of Theranos, which eventually aimed to reap vast amounts of data from a few droplets of blood derived from the tip of a finger, she approached several of her professors at Stanford, according to someone who knew Holmes back then. But most explained to the chemical-engineering major that it was virtually impossible to do so with any real efficacy. "I told her, I don't think your idea is going to work," Phyllis Gardner, a professor of medicine at Stanford, said to me, about Holmes's seminal pitch for Theranos. As Gardner explained, it is impossible to get a precise result from the tip of a finger for most of the tests that Theranos would claim to conduct accurately. When a finger is pricked, the probe breaks up cells, allowing debris, among other things, to escape into the interstitial fluid. While it is feasible to test for pathogens this way, a pinprick is too unreliable for obtaining more nuanced readings. Furthermore, there isn't that much reliable data that you can reap from such a small amount of blood. But Holmes was nothing if not determined. Rather than drop her idea, she tried to persuade Channing Robertson, her adviser at Stanford, to back her in her quest. He did. ("It would not be unusual for finger-stick testing to be met with skepticism," says a spokesman for Theranos. "Patents from that period explain Elizabeth's ideas and were foundational for the company's current technologies.") Holmes subsequently raised $6 million in funding, the first of almost $700 million that would follow. Money often comes with strings attached in Silicon Valley, but even by its byzantine terms, Holmes's were unusual.
She took the money on the condition that she would not divulge to investors how her technology actually worked, and that she had final say and control over every aspect of her company. This surreptitiousness scared off some investors. When Google Ventures, which focuses more than 40 percent of its investments on medical technology, tried to perform due diligence on Theranos to weigh an investment, Theranos never responded. Eventually, Google Ventures sent a venture capitalist to a Theranos Walgreens Wellness Center to take the revolutionary pinprick blood test. As the V.C. sat in a chair and had several large vials of blood drawn from his arm, far more than a pinprick, it became apparent that something was amiss with Theranos’s promise. Google Ventures wasn’t the only group with knowledge of blood testing which felt that way. One of Holmes’s first major hires, thanks to an introduction by Channing Robertson, was Ian Gibbons, an accomplished British scientist who had a slew of degrees from Cambridge University and had spent 30 years working on diagnostic and therapeutic products. Gibbons was tall and handsome, with straight reddish-brown hair and blue eyes. He had never owned a pair of jeans and spoke with a British accent that was a combination of colloquial and posh. In 2005, Holmes named him chief scientist. Gibbons, who was diagnosed with cancer shortly after joining the company, encountered a host of issues with the science at Theranos, but the most glaring was simple: the results were off. This conclusion soon led Gibbons to realize that Holmes’s invention was more of an idea than a reality. Still, bound by the scientific method, Gibbons wanted to try every possible direction and exhaust every option. So, for years, while Holmes put her fund-raising talents to use—hiring hundreds of marketers, salespeople, communications specialists, and even the Oscar-winning filmmaker Errol Morris, who was commissioned to make short industrial documentaries—Gibbons would wake early, walk his dogs along a trail near his home, and then set off for the office before seven A.M. In his downtime, he would read I, Claudius, a novel about a man who plays dumb to unwittingly become the most powerful person on earth. While Gibbons grew ever more desperate to come up with a solution to the inaccuracies of the blood-testing technology, Holmes presented her company to more investors, and even potential partners, as if it had a working, fully realized product. Holmes adorned her headquarters and Web site with slogans claiming, “One tiny drop changes everything,” and “All the same tests. One tiny sample,” and went into media overdrive. She also proved an effective crisis manager. In 2012, for instance, Holmes began talking to the Department of Defense about using Theranos’s technology on the battlefield in Afghanistan. But specialists at the D.O.D. soon uncovered that the technology wasn’t entirely accurate, and that it had not been vetted by the Food and Drug Administration. When the department notified the F.D.A. that something was amiss, according to The Washington Post, Holmes contacted Marine general James Mattis, who had initiated the pilot program. He immediately e-mailed his colleagues about moving the project forward. Mattis was later added to the company board when he retired from the service. (Mattis says he never tried to interfere with the F.D.A. 
but rather was “interested in rapidly having the company’s technologies tested legally and ethically.”) At around the same time, Theranos also decided to sue Richard Fuisz, an old friend and neighbor of Holmes’s family, alleging that he had stolen secrets that belonged to Theranos. As the suit progressed—it was eventually settled—Fuisz’s lawyers issued subpoenas to Theranos executives involved with the “proprietary” aspects of the technology. This included Ian Gibbons. But Gibbons didn’t want to testify. If he told the court that the technology did not work, he would harm the people he worked with; if he wasn’t honest about the technology’s problems, however, consumers could potentially harm their health, maybe even fatally. The late scientist Ian Gibbons. Holmes, meanwhile, did not seem willing to tolerate his resistance, according to his wife, Rochelle Gibbons. Even though Gibbons had warned that the technology wasn’t ready for the public, Holmes was preparing to open “Theranos Wellness Centers” in dozens of Walgreens across Arizona. “Ian felt like he would lose his job if he told the truth,” Rochelle told me as she wept one summer morning in Palo Alto. “Ian was a real obstacle for Elizabeth. He started to be very vocal. They kept him around to keep him quiet.” Channing Robertson, who had brought Gibbons to Theranos, recalls a different conversation, noting, “He suggested to me on numerous occasions that what we had accomplished at that time was sufficient to commercialize.” A few months later, on May 16, 2013, Gibbons was sitting in the family room with Rochelle, the afternoon light draping the couple, when the telephone rang. He answered. It was one of Holmes’s assistants. When Gibbons hung up, he was beside himself. “Elizabeth wants to meet with me tomorrow in her office,” he told his wife in a quivering voice. “Do you think she’s going to fire me?” Rochelle Gibbons, who had spent a lot of time with Holmes, knew that she wanted control. “Yes,” she said to her husband, reluctantly. She told him she thought he was going to be fired. Later that evening, gripped and overwhelmed with worry, Ian Gibbons tried to commit suicide. He was rushed to the hospital. A week later, with his wife by his side, Ian Gibbons died. When Rochelle called Holmes’s office to explain what had happened, the secretary was devastated and offered her sincere condolences. She told Rochelle Gibbons that she would let Holmes know immediately. But a few hours later, rather than a condolence message from Holmes, Rochelle instead received a phone call from someone at Theranos demanding that she immediately return any and all confidential Theranos property. The Enforcer In hundreds of interviews with the media and on panels, Holmes honed her story to near perfection. She talked about how she didn’t play with Barbies as a child, and how her father, Christian Holmes IV, who worked in environmental technology for Enron before going on to work in a number of senior government jobs in Washington, was one of her idols. But her reverence for Steve Jobs was perhaps most glaring. Besides the turtlenecks, Holmes’s proprietary blood-analysis device, which she named “Edison” after Thomas Edison, resembled Jobs’s NeXT computer. She designed her Theranos office with Le Corbusier black leather chairs, a Jobs favorite. She also adhered to a strange diet of only green juices (cucumber, parsley, kale, spinach, romaine lettuce, and celery), to be drunk only at specific times of the day. Like Jobs, too, her company was her life. 
She rarely ever left the office, only going home to sleep. To celebrate her birthday, Holmes held a party at Theranos headquarters with her employees. (Her brother, Christian, also works at Theranos.) But the most staggering characteristic that she borrowed from the late C.E.O. was his obsession with secrecy. And while Jobs had a fearsome security force who ensured that confidential information rarely, if ever, left Apple’s headquarters, Holmes had a single enforcer: Sunny Balwani, the company’s president and chief operating officer, until he stepped down in May. Balwani, who had previously worked at Lotus and Microsoft, had no experience in medicine. He was hired in 2009 to focus on e-commerce. Nevertheless, he was soon put in charge of the company’s most secret medical technology. According to a number of people with knowledge of the situation, the two had met years before he began at the company, when Holmes took a trip to China after she graduated from high school. The two eventually started dating, numerous people told me, and remained very loyal even after their relationship ended. Among Holmes’s security detail, Balwani was known as “Eagle 2.” When employees questioned the accuracy of the company’s blood-testing technology, it was Balwani who would chastise them in e-mails (or in person), sternly telling staffers, “This must stop,” as The Wall Street Journal reported. He ensured that scientists and engineers at Theranos did not talk to one another about their work. Applicants who came for job interviews were told that they wouldn’t know what the actual job was unless they were hired. Employees who spoke publicly about the company were met with legal threats. On LinkedIn, one former employee noted next to his job description, “I worked here, but every time I say what I did I get a letter from a lawyer. I probably will get a letter from a lawyer for writing this.” If people visited any of Theranos’s offices and refused to sign the company’s lengthy non-disclosure agreement, they were not allowed inside. Balwani’s lack of medical experience might have seemed unusual at such a company. But few at Theranos were in a position to point fingers. As Holmes started to assemble her board of directors, she chose a dozen older white men, almost none of whom had a background in anything related to health care. This included former secretary of state Henry Kissinger, former secretary of state George Shultz, former Georgia senator and chairman of the Armed Services Committee Sam Nunn, and William J. Perry, the former defense secretary. (Bill Frist, the former Senate majority leader, and former cardiovascular doctor, was an exception.) “This was a board that was better suited to decide if America should invade Iraq than vet a blood-testing company,” one person said to me. Gibbons told his wife that Holmes commanded their attention masterfully. Theranos’s board may not have been equipped to ask what exactly the company was building, or how, but others were. While Holmes was bounding around the world on a private plane, speaking on panels with Bill Clinton, and giving passionate TED talks, two government organizations started quietly inspecting the company. On August 25, 2015, months before the Journal story broke, three investigators from the F.D.A. arrived, unannounced, at Theranos’s headquarters, on Page Mill Road, with two more investigators sent to the company’s blood-testing lab in Newark, California, demanding to inspect the facilities. 
According to someone close to the company, Holmes was sent into a panic, calling advisers to try to resolve the issue. At around the same time, regulators from the Centers for Medicare and Medicaid Services, which regulates laboratories, visited the labs and found major inaccuracies in the testing being done on patients. (The Newark lab was run by an employee who was criticized for insufficient laboratory experience.) C.M.S. also soon discovered that some of the tests Theranos was performing were so inaccurate that they could leave patients at risk of internal bleeding, or of stroke among those prone to blood clots. The agency found that Theranos appeared to ignore erratic results from its own quality-control checks during a six-month period last year and supplied 81 patients with questionable test results. While the government was scouring through Theranos's inaccurate files and data, Carreyrou was approaching the story not as a fawning tech blogger, but rather as a diligent investigative reporter. Carreyrou, who had worked at the Journal since 1999, had covered topics ranging from terrorism to European politics and financial misdeeds before returning to the New York newsroom and taking over the health-and-sciences bureau. As a reporter of obscure and often faceless subjects, he was not enticed by access, nor was he afraid of lawyers. In fact, he had won two Pulitzer Prizes for taking on nemeses as significant as Vivendi and the U.S. government. After a team of seasoned lawyers arrived at the Journal newsroom, Carreyrou was simply emboldened. "It's O.K. if you've got a smartphone app or a social network, and you go live with it before it's ready; people aren't going to die," he told me. "But with medicine, it's different." Meanwhile, Theranos had its lawyers send a letter to Rochelle Gibbons's attorney, threatening legal action for talking to a reporter. "It has been the Company's desire not to pursue legal action against Mrs. Gibbons," a lawyer for Boies, Schiller & Flexner wrote. "Unless she immediately ceases these actions, she will leave the Company no other option but to pursue litigation to definitively put an end [to] these actions once and for all." Others who spoke to the Journal were met with similar threats. The End Back in March 2009, Holmes returned to the Stanford campus, where her story had begun, to talk to a group of students at the Stanford Technology Ventures Program. Her hair wasn't yet bleached blond, but she had started to wear her uniform of a black turtleneck, and she was just beginning to morph into the idol she would soon become in Silicon Valley. For 57 minutes, Holmes paced in front of a chalkboard and answered questions about her vision. "It became clear to me," she said with conviction, "that if I needed to, I'd re-start this company as much as possible to make this thing happen." This is exactly what Holmes seems to be doing now. Executives from Theranos, including Holmes and Balwani, declined to sit for interviews. But on a recent July afternoon, I traveled to the company's headquarters anyway. From the outside, Theranos seems to be in a sad state. The parking lot was devoid of cars, with more than half the spaces empty (or half full, depending on your outlook). The giant American flag that hangs in front of the building was flaccid at half-staff. On the edge of the parking lot, a couple of employees were smoking cigarettes as a single security guard stood nearby, taking a selfie.
On the Friday morning that they gathered in the war room, Holmes and her team of advisers had believed that there would be one negative story from the Journal, and that Holmes would be able to squash the controversy. Then it would be back to business as usual, telling her flawlessly curated story to investors, to the media, and now to patients who used her technology. Holmes and her advisers couldn’t have been more wrong. Carreyrou subsequently wrote more than two dozen articles about the problems at Theranos. Walgreens severed its relationship with Holmes, shuttering all of its Wellness Centers. The F.D.A. banned the company from using its Edison device. In July, the Centers for Medicare and Medicaid Services banned Holmes from owning or running a medical laboratory for two years. (This decision is currently under appeal.) Then came the civil and criminal investigations by the U.S. Securities and Exchange Commission and the U.S. Attorney’s Office for the Northern District of California and two class-action fraud lawsuits. Theranos’s board has subsequently been cleaved in two, with Kissinger, Shultz, and Frist now merely “Counselors.” Holmes, meanwhile, isn’t going anywhere. As the C.E.O. and chairwoman of Theranos, only she can elect to replace herself. Forbes, clearly embarrassed by its cover story, removed Holmes from its list of “America’s Richest Self-Made Women.” A year earlier, it had estimated her wealth at $4.5 billion. “Today, Forbes is lowering our estimate of her net worth to nothing,” the editors wrote. Fortune had its mea culpa, with the author stating boldly that “Theranos misled me.” Director Adam McKay, fresh off his Oscar for The Big Short, has even signed on to make a movie based on Holmes, tentatively titled Bad Blood. (On the bright side for Holmes, Jennifer Lawrence is attached as the lead.) Silicon Valley, once so taken by Holmes, has turned its back, too. Countless investors have been quick to point out that they did not invest in the company—that much of its money came from the relatively somnolent worlds of mutual funds, which often accrue the savings of pensioners and retirees; private equity; and smaller venture-capital operations on the East Coast. In the end, one of the only Valley V.C. shops that actually invested in Theranos was Draper Fisher Jurvetson. Many may have liked what Holmes represented about their industry, but they didn’t seem to trust her with their money. Meanwhile, Holmes has somehow compartmentalized it all. In August, she flew to Philadelphia to speak at the American Association for Clinical Chemistry’s annual conference. Before she stepped out onstage, the conference organizers played the song “Sympathy for the Devil” for the ballroom, packed with more than 2,500 doctors and scientists. Holmes was wearing a blue button-up shirt and black blazer (she has recently abandoned the black turtleneck), and she spoke for an hour while rapidly flicking through her presentation. The audience was hoping that Holmes would answer questions about her Edison technology and explain whether or not she knew it was a sham. But instead Holmes showed off a new blood-testing technology that a lot of people in the room insisted was not new or groundbreaking. Later that day she was featured on Sanjay Gupta’s CNN show and a few weeks later appeared in San Francisco at a splashy dinner celebrating women in technology. “Elizabeth Holmes won’t stop,” Phyllis Gardner, the Stanford professor, told me. 
“She’s holding on to her story like a barnacle on the side of a ship.” Holmes may not be prepared to compartmentalize what comes next. When I arrived in Palo Alto in July, I wasn’t the only person setting out to interview anyone associated with Theranos and Holmes. The Federal Bureau of Investigation was, too. When I knocked on a door, I was only a day or two behind F.B.I. agents who were trying to put together a time line of what Holmes knew and when she knew it—adding the most unpredictable twist to a story she could no longer control.
---
title: Gas (Fuel)
actions: ['checkAnswer', 'hints']
requireLogin: true
material:
  editor:
    language: sol
    startingCode:
      "zombiefactory.sol": |
        pragma solidity ^0.4.19;

        import "./ownable.sol";

        contract ZombieFactory is Ownable {

            event NewZombie(uint zombieId, string name, uint dna);

            uint dnaDigits = 16;
            uint dnaModulus = 10 ** dnaDigits;

            struct Zombie {
                string name;
                uint dna;
                // Add new data here
            }

            Zombie[] public zombies;

            mapping (uint => address) public zombieToOwner;
            mapping (address => uint) ownerZombieCount;

            function _createZombie(string _name, uint _dna) internal {
                uint id = zombies.push(Zombie(_name, _dna)) - 1;
                zombieToOwner[id] = msg.sender;
                ownerZombieCount[msg.sender]++;
                NewZombie(id, _name, _dna);
            }

            function _generateRandomDna(string _str) private view returns (uint) {
                uint rand = uint(keccak256(_str));
                return rand % dnaModulus;
            }

            function createRandomZombie(string _name) public {
                require(ownerZombieCount[msg.sender] == 0);
                uint randDna = _generateRandomDna(_name);
                randDna = randDna - randDna % 100;
                _createZombie(_name, randDna);
            }

        }
      "zombiefeeding.sol": |
        pragma solidity ^0.4.19;

        import "./zombiefactory.sol";

        contract KittyInterface {
            function getKitty(uint256 _id) external view returns (
                bool isGestating,
                bool isReady,
                uint256 cooldownIndex,
                uint256 nextActionAt,
                uint256 siringWithId,
                uint256 birthTime,
                uint256 matronId,
                uint256 sireId,
                uint256 generation,
                uint256 genes
            );
        }

        contract ZombieFeeding is ZombieFactory {

            KittyInterface kittyContract;

            function setKittyContractAddress(address _address) external onlyOwner {
                kittyContract = KittyInterface(_address);
            }

            function feedAndMultiply(uint _zombieId, uint _targetDna, string _species) public {
                require(msg.sender == zombieToOwner[_zombieId]);
                Zombie storage myZombie = zombies[_zombieId];
                _targetDna = _targetDna % dnaModulus;
                uint newDna = (myZombie.dna + _targetDna) / 2;
                if (keccak256(_species) == keccak256("kitty")) {
                    newDna = newDna - newDna % 100 + 99;
                }
                _createZombie("NoName", newDna);
            }

            function feedOnKitty(uint _zombieId, uint _kittyId) public {
                uint kittyDna;
                (,,,,,,,,,kittyDna) = kittyContract.getKitty(_kittyId);
                feedAndMultiply(_zombieId, kittyDna, "kitty");
            }

        }
      "ownable.sol": |
        /**
         * @title Ownable
         * @dev The Ownable contract has an owner address, and provides basic authorization control
         * functions, this simplifies the implementation of "user permissions".
         */
        contract Ownable {
          address public owner;

          event OwnershipTransferred(address indexed previousOwner, address indexed newOwner);

          /**
           * @dev The Ownable constructor sets the original `owner` of the contract to the sender
           * account.
           */
          function Ownable() public {
            owner = msg.sender;
          }

          /**
           * @dev Throws if called by any account other than the owner.
           */
          modifier onlyOwner() {
            require(msg.sender == owner);
            _;
          }

          /**
           * @dev Allows the current owner to transfer control of the contract to a newOwner.
           * @param newOwner The address to transfer ownership to.
           */
          function transferOwnership(address newOwner) public onlyOwner {
            require(newOwner != address(0));
            OwnershipTransferred(owner, newOwner);
            owner = newOwner;
          }

        }
  answer: >
    pragma solidity ^0.4.19;

    import "./ownable.sol";

    contract ZombieFactory is Ownable {

        event NewZombie(uint zombieId, string name, uint dna);

        uint dnaDigits = 16;
        uint dnaModulus = 10 ** dnaDigits;

        struct Zombie {
            string name;
            uint dna;
            uint32 level;
            uint32 readyTime;
        }

        Zombie[] public zombies;

        mapping (uint => address) public zombieToOwner;
        mapping (address => uint) ownerZombieCount;

        function _createZombie(string _name, uint _dna) internal {
            uint id = zombies.push(Zombie(_name, _dna)) - 1;
            zombieToOwner[id] = msg.sender;
            ownerZombieCount[msg.sender]++;
            NewZombie(id, _name, _dna);
        }

        function _generateRandomDna(string _str) private view returns (uint) {
            uint rand = uint(keccak256(_str));
            return rand % dnaModulus;
        }

        function createRandomZombie(string _name) public {
            require(ownerZombieCount[msg.sender] == 0);
            uint randDna = _generateRandomDna(_name);
            randDna = randDna - randDna % 100;
            _createZombie(_name, randDna);
        }

    }
---

Well done! Now you know how to update key portions of the DApp while preventing other users from messing with our contracts.

Let me teach you about another way in which Solidity is quite different from other programming languages:

## Gas: the fuel that powers Ethereum DApps

In Solidity, your users have to pay every time they execute one of your functions, using a currency called **_gas_**. Users buy gas with Ether (the currency of Ethereum) in order to run your app's functions.

How much gas is required to execute a function depends on how complex that function's logic is. Each individual operation has a **_gas cost_** based roughly on how much computing resource it takes to perform (for example, writing to storage is much more expensive than adding two integers). The total **_gas cost_** of your function is the sum of the gas costs of all its individual operations.

Because your users pay real money to run your functions, code optimization matters much more on Ethereum than in other programming languages. If your code is sloppy, your users will end up paying a premium, and across thousands of users that could add up to millions of dollars in wasted fees.

## Why is gas necessary?

Ethereum is like a big, slow, but extremely secure computer. When you execute a function, every node on the network runs that same function to verify its output. Thousands of nodes verifying every function execution is what makes Ethereum decentralized, and its data immutable and censorship-resistant.

The creators of Ethereum wanted to make sure no one could clog up the network with an infinite loop, or devour all of its computing resources with some extremely heavy computation. That is why transactions are not free: users pay for computation time as well as storage.

> Note: This is not necessarily true for sidechains, like the ones the CryptoZombies authors are building at Loom Network. Running a game like World of Warcraft directly on the Ethereum mainnet would be out of the question because of the gas costs, but it is entirely conceivable on a sidechain running a different consensus algorithm. In a later lesson we will cover which kinds of DApps belong on sidechains rather than on the Ethereum mainnet.

## Struct packing to save gas

In Lesson 1, I taught you that there are other types of `uint`: `uint8`, `uint16`, `uint32`, and so on.

Normally there is no benefit to using these sub-types, because Solidity reserves 256 bits of storage regardless of the `uint` size. Using `uint8` instead of `uint` (`uint256`), for example, will not save you any gas.

But there is an exception to this: inside a `struct`.

If you have multiple `uint`s inside a struct, using the smallest `uint` sizes you can will let Solidity pack these variables together so they take up less storage. An example:

```
struct NormalStruct {
  uint a;
  uint b;
  uint c;
}

struct MiniMe {
  uint32 a;
  uint32 b;
  uint c;
}

// `mini` will cost less gas than `normal` because its variables get packed together
NormalStruct normal = NormalStruct(10, 20, 30);
MiniMe mini = MiniMe(10, 20, 30);
```

For this reason, inside a struct you should use the smallest integer sub-types you can get away with.

You will also want to cluster identical data types together (that is, put them next to each other in the struct) so that Solidity can minimize the required storage space. For example, `uint c; uint32 a; uint32 b;` will cost less gas than `uint32 a; uint c; uint32 b;`, because the two `uint32` variables can be packed together. (A minimal sketch of this packing effect follows the exercise below.)

## Put it to the test

In this chapter we want to give our zombies two new features: `level` and `readyTime`. `readyTime` will be used as a cooldown timer that limits how often a zombie can feed.

So let's head back to `zombiefactory.sol`.

1. Add two more properties to the `Zombie` struct: `level` (a `uint32`) and `readyTime` (also a `uint32`). Put them at the end of the struct so these data types can be packed together.

32 bits is plenty for storing a zombie's level and a timestamp, so by packing the data more tightly than a regular `uint` (256 bits) would allow, we save ourselves some gas.
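To make the packing effect above concrete, here is a minimal standalone sketch. It is not part of this lesson's exercise or answer; the `PackingDemo` contract and its names are made up for illustration only.

```
pragma solidity ^0.4.19;

contract PackingDemo {

  // Three full-width uints: each occupies its own 256-bit storage slot,
  // so this struct takes 3 slots.
  struct Loose {
    uint a;
    uint b;
    uint c;
  }

  // The two uint32s are declared next to each other, so Solidity packs
  // them into a single 256-bit slot; this struct takes 2 slots in total.
  struct Packed {
    uint32 a;
    uint32 b;
    uint c;
  }

  Loose loose;
  Packed packed;

  // Writing `loose` touches 3 storage slots...
  function saveLoose() public {
    loose = Loose(1, 2, 3);
  }

  // ...while writing `packed` touches only 2, so calling savePacked
  // should cost noticeably less gas than calling saveLoose.
  function savePacked() public {
    packed = Packed(1, 2, 3);
  }
}
```

The exact gas numbers depend on the compiler version, so treat this as an illustration of the packing rule rather than a benchmark.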
389 Pa. 304 (1957) Commonwealth v. Moon, Appellant. Supreme Court of Pennsylvania. Argued April 23, 1957. May 27, 1957. *305 Before JONES, C.J., BELL, CHIDSEY, MUSMANNO, ARNOLD and JONES, JJ. Edward Dumbauld, with him Thomas A. Waggoner, Jr., and E.H. Beshlin, for appellant. Frank P. Lawley, Jr., Deputy Attorney General, with him Harrington Adams, Deputy Attorney General, for appellee. OPINION BY MR. JUSTICE ARNOLD, May 27, 1957: The defendant, Norman W. Moon, murdered the Honorable Allison D. Wade, President Judge of Warren County. At his trial before a jury he was convicted of murder in the first degree and the jury imposed the death penalty, and defendant appeals. *306 Moon was directed to appear before Judge Wade on a charge of failure to comply with an order of support. (See Commonwealth v. Moon, 174 Pa. Superior Ct. 334, 101 A. 2d 147). The day before the date fixed for his appearance he cashed a check for $1,500 and purchased a forty-five calibre pistol, a box of shells and two clips, both of which were loaded when he entered the court room. Moon never paid anything on the support order entered against him. Having been ordered to appear before the Court of Quarter Sessions of Warren County for sentence (he was then in default $1,600 on the support order), he appeared before said court on January 13, 1954. When asked if he intended to comply with the order of support, Moon replied, "Absolutely not." When he stood before the court he drew the pistol in question, which had been loaded with one clip, and emptied the gun in various directions and at Judge Wade. Judge Wade stumbled from the dais and was prone on the floor of the court room when Moon fired two more shots. Judge Wade cried out, "Don't shoot, please don't shoot, I won't sentence you," to which Moon replied, "You God-damned . . . you will never get a chance to." Judge Wade's death was almost instantaneous. Moon then reloaded his gun and while making his escape from the court room threatened to kill attorney Bonavita who attempted to stop him. Moon got into his automobile and soon after officers stopped progress of the automobile by shooting the tires. He then got out of the car and shot himself in the neck, and was taken to the Warren Hospital, where he recovered. Repeatedly during his trial the defendant stated that he was extremely mad or furious. Subsequent to the verdict Moon presented a petition to appoint a commission to determine his mental condition. The court found him to be sane, but on appeal *307 this finding was reversed and this Court ordered a reexamination of the commission's findings and recommendation, and reconsideration of the evidence in the light of the statutory definition of mental illness. See Commonwealth v. Moon, 383 Pa. 18, 117 A. 2d 96. Thereafter the lower court executed the mandate of this Court, and entered an order refusing to commit him to a mental hospital and finding that it was not satisfied that he was mentally ill. On appeal this order was affirmed. See Commonwealth v. Moon, 386 Pa. 205, 125 A. 2d 594. Thereafter the court below heard the defendant's motion for new trial, denied the same, and this appeal followed. The defendant, who took the stand on his own behalf, admitted all of the circumstances of the killing, but claimed the following trial errors: (1) When examining on the voir dire the first juror drawn, counsel for appellant sought to ask the following hypothetical question: "Mrs. 
Knapp, again under the law of Pennsylvania, a person who at the time of the commission of any act which would otherwise be criminal, is unable to tell the difference between right and wrong and to appreciate the consequences of his acts, such a person is entitled to be found not guilty by reason of insanity. If you found from a fair preponderance of the evidence, that the accused at the time of the commission of this act, was unable to distinguish right from wrong and unable to appreciate the consequences of his act, would you then find him not guilty by reason of insanity?" The Commonwealth's objection was sustained and properly so. This Court declared in Commonwealth v. Bentley, 287 Pa. 539, at page 546, 135 A. 310: "In conducting the preliminary examination, considerable latitude must be permitted to elicit the necessary information, but it is to be strictly confined to inquiries disclosing qualifications, *308 or lack of them, and not extended so as to include hypothetical questions, when their evident purpose is to have the jurors indicate in advance what their decisions will be under a certain state of the evidence or upon a certain state of facts, and thus possibly commit them to definite ideas or views when the case shall be finally submitted to them for their decision." (2) The appellant claims the court below committed error in not sustaining the challenge for cause as to several jurors following examination on voir dire. These challenges for cause were made at a time when the defendant's peremptory challenges were not exhausted, and hence the refusal of the challenge cannot be prejudicial error. Sayres v. Commonwealth, 88 Pa. 291, 306, 307; Commonwealth v. Bibalo, 375 Pa. 257, 264, 100 A. 2d 45. The defendant is not entitled to the services of any particular juror but only as to twelve unprejudiced jurors. (3) The defendant was not prejudiced by the action of the court in belatedly permitting the Commonwealth to challenge peremptorily one Honhart after he had been passed as a juror by the Commonwealth. The court, after excusing said juror, immediately, of its own motion, granted an additional peremptory challenge to the defendant. In Commonwealth v. Schroeder, 302 Pa. 1, 152 A. 835, the defendant was convicted of murder in the first degree and sentenced to death. The court there charged that certain evidence on credibility of one of the defendant's witnesses was substantive evidence of defendant's guilt. This Court commented, at page 11: "Apparently the court did inadvertently treat the evidence as substantive proof. We are convinced that this instruction did defendant no harm and did not bring about her conviction." (Italics supplied). Thus this Court there determined that even though the instruction was erroneous, if the court was convinced *309 that the defendant was not harmed thereby, the judgment and sentence would be affirmed. "`The defendant in a homicide case has no standing in an appellate court to complain of an erroneous instruction unless the error contributed to the result reached by the jury': Com. v. Winter, 289 Pa. 284 [137 A. 261]; Com. v. Divomte, 262 Pa. 504 [105 A. 821].": Commonwealth v. Schroeder, supra. (Italics supplied). (4) The Commonwealth introduced the photograph of Judge Wade taken very shortly after the assault by the defendant. It was offered to show the location of the body in the court room and to show the direction of the bullet wounds in vital parts. 
Having examined the photograph carefully, we find that it was not inflammatory or prejudicial. It simply showed a body prone on the floor of the court room with stains, evidently blood, on its left side. Introduction of such exhibits is a purely discretionary matter for the trial judge, and we find no abuse of discretion: Commonwealth v. Ballem, 386 Pa. 20, 27, 123 A. 2d 728. (5) The defendant next complains that he was not allowed to show the bias of the witness, Bernice R. Seavy. We have examined this assignment and find no merit in it. Evidently the appellant misunderstood the answer of the witness. (6) Next the defendant objects to the memorandum made by Commonwealth's witness during an interview with the defendant while he was in the hospital. This memorandum was not signed or written by the defendant, nor is such necessary. The defendant was unable to speak due to his self-inflicted wounds and was instructed to nod his head yes or no, and to hold up fingers to indicate "how many." The memorandum was clearly admissible just as an unsigned or oral statement of the defendant would have been. Its weight was for the jury, and the defendant was not harmed *310 by the admission in evidence of the memorandum. The answers of the defendant to the questions asked were fully testified to by officers Naddeo and Mehallic. (7) The defendant complains about the lower court's instructions on the question of the alleged insanity of the defendant. There was no prejudicial error in the court's charge on the question of insanity, although much is attempted to be made of the self-inflicted wounds of the defendant. See Commonwealth v. Lewis, 222 Pa. 302, 303, 71 A. 18; Commonwealth v. Wireback, 190 Pa. 138, 42 A. 542; Commonwealth v. Barner, 199 Pa. 335, 49 A. 60. This was examined by the jury, which rejected the plea of sympathy based thereon. See also Commonwealth v. Moon, 383 Pa. 18, 117 A. 2d 96, and Commonwealth v. Moon, 386 Pa. 205, 125 A. 2d 594. Here the defendant was actuated by hatred of his wife and quite evidently had made up his mind that he would not comply with the court order against him for her support. The day before the killing he had obtained sufficient money to pay the back support, but refused to pay it, and instead murdered the judge who was about to order him to do so. The defendant had a night's sleep after he had purchased the gun and shells and obtained the $1,500. As long as we are to have the death penalty in Pennsylvania certainly this is a clear case for its imposition. In accordance with the Act of February 15, 1870, P.L. 15, Section 2, 19 PS § 1187, we have reviewed both the law and the evidence in this record, and have determined that all the ingredients necessary to constitute murder in the first degree have been proved to exist. The judgment is affirmed, and the record is remitted to the court below for the purpose of execution. *311 DISSENTING OPINION BY MR. JUSTICE MUSMANNO: The Majority Opinion in this case says: "As long as we are to have the death penalty in Pennsylvania certainly this is a clear case for its imposition." It is indeed difficult to imagine a more flagrant violation of law and order than the assassination of a judge sitting in Court — provided, of course, the killer is sane. If the heinousness of the act is to dictate the punishment, regardless of sanity or insanity, then Norman W. Moon should be executed. But, as long as we have in Pennsylvania the rule that an insane person should not be executed, certainly this is the case to apply it. 
I would think that the very act of shooting a judge suggests insanity at the outset. The Majority thinks otherwise. In addition, however, to the outer aspects of the case, we have the findings of a Sanity Commission, duly appointed by the Court, which unanimously found: "a. Norman W. Moon is in fact mentally ill. b. Norman W. Moon's mental illness is that of dementia praecox of the paranoid type. c. This illness is chronic and continuing. d. Norman W. Moon is a proper subject for commitment to a mental hospital." The lower Court which appointed the commission, after lauding the abilities and integrity of its members, declined to follow its recommendations. This Court, on appeal, affirmed the declination. (386 Pa. 205). I dissented from this Court's decision and I still believe that, in accordance with the Commission's recommendations, the defendant should be committed to a mental hospital until such time as he will have regained sanity, when the matter of the verdict against him in the trial for murder may be disposed of in accordance with law and justice. In my Dissenting Opinion (386 Pa. 219-231), I pointed out, as further evidence of Moon's insanity, the fact that he had set out to kill two other judges against whom he could not have had a sane animosity, namely, *312 Judge GUNTHER of the Superior Court, who had not written any Opinion against Moon, and the writer of this Opinion who had never theretofore participated in any way in any decision involving Moon. Moon had also tried to kill the district attorney of Warren County as well as the court stenographer, with neither of whom had he had any quarrel. And then he tried to kill himself, (a deed which could not possibly have brought him any advantage) by shooting himself in the neck. Moon was a man who had gone berserk. There was no rhyme, reason, purpose, or objective to his actions, so obviously maniacal. In my previous Dissenting Opinion I showed how the lower Court ignored the findings of a highly trained and qualified commission made up of two experienced doctors and a lawyer, and founded its conclusion mostly on the testimony of lay prison guards who were considerably limited in their appraisement of the subject of whom they spoke. I still believe the Court's action to have been serious error. This is a case where the Courts should have been extremely cautious in reaching their conclusions so that it could not be said, no matter how incorrectly, that they were influenced in their decision by the fact that a judge had been killed. Defendant's counsel in this appeal complains also of various trial errors. Without expressing my views on all the reasons advanced for a new trial, I wish to indicate my agreement with counsel's complaint that the prosecuting attorney improperly introduced in evidence a photograph of the body of Judge Wade, as it lay on the courtroom floor after the shooting. I believe that defense counsel is justified in complaining, as he does in his brief: "The prosecuting attorney advanced two reasons in support of the offer: (1) to show the location of the *313 body in the courtroom; and (2) to show the location of the bullet wounds in vital parts causing death. "Obviously the exhibit had no probative value and was not admissible under the first ground advanced. The testimony shows that the photograph does not depict the position or location of the body at the time of his death. The body had been moved, and the clothing had also been arranged so as to display prominently the gruesome spectacle of blood stains. . . 
"With respect to the second reason advanced in support of the offer, it is obvious that Exhibit 4 was merely cumulative. It was not necessary to offer the photograph in order to prove the corpus delicti or cause of death. "Before the photograph was offered, the Coroner had testified with respect to this point: `Q. What was the cause of death? A. Two wounds on the left side between, about at the elbow (Witness indicating on his own body) made by bullets.' "This testimony was subsequently elaborated and the witness illustrated his testimony adequately by indicating the location of the bullet marks on his own body. . . "Thus the real purpose of offering the photograph was manifestly to shock and horrify the jury. For this purpose, as the trial Court concedes and as the decisions of this Court have often emphasized, the exhibit should not have been admitted. It should have been excluded as inflammatory and undoubtedly prejudicial to defendant."
/*
 * =============================================================================
 *
 * Copyright (c) 2011-2018, The THYMELEAF team (http://www.thymeleaf.org)
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 * =============================================================================
 */
package org.thymeleaf.templateparser.markup;

import org.attoparser.AbstractChainedMarkupHandler;
import org.attoparser.IMarkupHandler;
import org.attoparser.ParseException;
import org.thymeleaf.IEngineConfiguration;
import org.thymeleaf.exceptions.TemplateProcessingException;
import org.thymeleaf.standard.inline.IInlinePreProcessorHandler;
import org.thymeleaf.standard.inline.OutputExpressionInlinePreProcessorHandler;
import org.thymeleaf.templatemode.TemplateMode;

/*
 * This class converts inlined output expressions into their equivalent element events, which makes it possible
 * to cache parsed inlined expressions.
 *
 * Some examples:
 *
 *     [[${someVar}]]  ->  [# th:text="${someVar}"/]   (decomposed into the corresponding events)
 *     [(${someVar})]  ->  [# th:utext="${someVar}"/]  (decomposed into the corresponding events)
 *
 * NOTE: The inlining mechanism is a part of the Standard Dialects, so the conversion performed by this handler
 *       on inlined output expressions should only be applied if one of the Standard Dialects has been configured.
 *
 * ---------------------------------------------------------------------------------------------------------------------
 * NOTE: Any changes here should probably go too to org.thymeleaf.templateparser.text.InlinedOutputExpressionTextHandler
 * ---------------------------------------------------------------------------------------------------------------------
 *
 * @author Daniel Fernandez
 * @since 3.0.0
 */
final class InlinedOutputExpressionMarkupHandler extends AbstractChainedMarkupHandler {

    private final OutputExpressionInlinePreProcessorHandler inlineHandler;


    InlinedOutputExpressionMarkupHandler(
            final IEngineConfiguration configuration,
            final TemplateMode templateMode,
            final String standardDialectPrefix,
            final IMarkupHandler handler) {
        super(handler);
        this.inlineHandler = new OutputExpressionInlinePreProcessorHandler(
                configuration, templateMode, standardDialectPrefix,
                new InlineMarkupAdapterPreProcessorHandler(handler));
    }


    @Override
    public void handleText(
            final char[] buffer, final int offset, final int len,
            final int line, final int col) throws ParseException {
        this.inlineHandler.handleText(buffer, offset, len, line, col);
    }

    @Override
    public void handleStandaloneElementStart(
            final char[] buffer, final int nameOffset, final int nameLen,
            final boolean minimized, final int line, final int col) throws ParseException {
        this.inlineHandler.handleStandaloneElementStart(buffer, nameOffset, nameLen, minimized, line, col);
    }

    @Override
    public void handleStandaloneElementEnd(
            final char[] buffer, final int nameOffset, final int nameLen,
            final boolean minimized, final int line, final int col) throws ParseException {
        this.inlineHandler.handleStandaloneElementEnd(buffer, nameOffset, nameLen, minimized, line, col);
    }

    @Override
    public void handleOpenElementStart(
            final char[] buffer, final int nameOffset, final int nameLen,
            final int line, final int col) throws ParseException {
        this.inlineHandler.handleOpenElementStart(buffer, nameOffset, nameLen, line, col);
    }

    @Override
    public void handleOpenElementEnd(
            final char[] buffer, final int nameOffset, final int nameLen,
            final int line, final int col) throws ParseException {
        this.inlineHandler.handleOpenElementEnd(buffer, nameOffset, nameLen, line, col);
    }

    @Override
    public void handleAutoOpenElementStart(
            final char[] buffer, final int nameOffset, final int nameLen,
            final int line, final int col) throws ParseException {
        this.inlineHandler.handleAutoOpenElementStart(buffer, nameOffset, nameLen, line, col);
    }

    @Override
    public void handleAutoOpenElementEnd(
            final char[] buffer, final int nameOffset, final int nameLen,
            final int line, final int col) throws ParseException {
        this.inlineHandler.handleAutoOpenElementEnd(buffer, nameOffset, nameLen, line, col);
    }

    @Override
    public void handleCloseElementStart(
            final char[] buffer, final int nameOffset, final int nameLen,
            final int line, final int col) throws ParseException {
        this.inlineHandler.handleCloseElementStart(buffer, nameOffset, nameLen, line, col);
    }

    @Override
    public void handleCloseElementEnd(
            final char[] buffer, final int nameOffset, final int nameLen,
            final int line, final int col) throws ParseException {
        this.inlineHandler.handleCloseElementEnd(buffer, nameOffset, nameLen, line, col);
    }

    @Override
    public void handleAutoCloseElementStart(
            final char[] buffer, final int nameOffset, final int nameLen,
            final int line, final int col) throws ParseException {
        this.inlineHandler.handleAutoCloseElementStart(buffer, nameOffset, nameLen, line, col);
    }

    @Override
    public void handleAutoCloseElementEnd(
            final char[] buffer, final int nameOffset, final int nameLen,
            final int line, final int col) throws ParseException {
        this.inlineHandler.handleAutoCloseElementEnd(buffer, nameOffset, nameLen, line, col);
    }

    /*
     * No need to care about 'unmatched close' events - they don't influence the execution level nor inlining operations
     */

    @Override
    public void handleAttribute(
            final char[] buffer,
            final int nameOffset, final int nameLen, final int nameLine, final int nameCol,
            final int operatorOffset, final int operatorLen, final int operatorLine, final int operatorCol,
            final int valueContentOffset, final int valueContentLen,
            final int valueOuterOffset, final int valueOuterLen,
            final int valueLine, final int valueCol) throws ParseException {
        this.inlineHandler.handleAttribute(
                buffer,
                nameOffset, nameLen, nameLine, nameCol,
                operatorOffset, operatorLen, operatorLine, operatorCol,
                valueContentOffset, valueContentLen,
                valueOuterOffset, valueOuterLen,
                valueLine, valueCol);
    }


    private static final class InlineMarkupAdapterPreProcessorHandler implements IInlinePreProcessorHandler {

        private IMarkupHandler handler;


        InlineMarkupAdapterPreProcessorHandler(final IMarkupHandler handler) {
            super();
            this.handler = handler;
        }


        public void handleText(
                final char[] buffer, final int offset, final int len,
                final int line, final int col) {
            try {
                this.handler.handleText(buffer, offset, len, line, col);
            } catch (final ParseException e) {
                throw new TemplateProcessingException("Parse exception during processing of inlining", e);
            }
        }

        public void handleStandaloneElementStart(
                final char[] buffer, final int nameOffset, final int nameLen,
                final boolean minimized, final int line, final int col) {
            try {
                this.handler.handleStandaloneElementStart(buffer, nameOffset, nameLen, minimized, line, col);
            } catch (final ParseException e) {
                throw new TemplateProcessingException("Parse exception during processing of inlining", e);
            }
        }

        public void handleStandaloneElementEnd(
                final char[] buffer, final int nameOffset, final int nameLen,
                final boolean minimized, final int line, final int col) {
            try {
                this.handler.handleStandaloneElementEnd(buffer, nameOffset, nameLen, minimized, line, col);
            } catch (final ParseException e) {
                throw new TemplateProcessingException("Parse exception during processing of inlining", e);
            }
        }

        public void handleOpenElementStart(
                final char[] buffer, final int nameOffset, final int nameLen,
                final int line, final int col) {
            try {
                this.handler.handleOpenElementStart(buffer, nameOffset, nameLen, line, col);
            } catch (final ParseException e) {
                throw new TemplateProcessingException("Parse exception during processing of inlining", e);
            }
        }

        public void handleOpenElementEnd(
                final char[] buffer, final int nameOffset, final int nameLen,
                final int line, final int col) {
            try {
                this.handler.handleOpenElementEnd(buffer, nameOffset, nameLen, line, col);
            } catch (final ParseException e) {
                throw new TemplateProcessingException("Parse exception during processing of inlining", e);
            }
        }

        public void handleAutoOpenElementStart(
                final char[] buffer, final int nameOffset, final int nameLen,
                final int line, final int col) {
            try {
                this.handler.handleAutoOpenElementStart(buffer, nameOffset, nameLen, line, col);
            } catch (final ParseException e) {
                throw new TemplateProcessingException("Parse exception during processing of inlining", e);
            }
        }

        public void handleAutoOpenElementEnd(
                final char[] buffer, final int nameOffset, final int nameLen,
                final int line, final int col) {
            try {
                this.handler.handleAutoOpenElementEnd(buffer, nameOffset, nameLen, line, col);
            } catch (final ParseException e) {
                throw new TemplateProcessingException("Parse exception during processing of inlining", e);
            }
        }

        public void handleCloseElementStart(
                final char[] buffer, final int nameOffset, final int nameLen,
                final int line, final int col) {
            try {
                this.handler.handleCloseElementStart(buffer, nameOffset, nameLen, line, col);
            } catch (final ParseException e) {
                throw new TemplateProcessingException("Parse exception during processing of inlining", e);
            }
        }

        public void handleCloseElementEnd(
                final char[] buffer, final int nameOffset, final int nameLen,
                final int line, final int col) {
            try {
                this.handler.handleCloseElementEnd(buffer, nameOffset, nameLen, line, col);
            } catch (final ParseException e) {
                throw new TemplateProcessingException("Parse exception during processing of inlining", e);
            }
        }

        public void handleAutoCloseElementStart(
                final char[] buffer, final int nameOffset, final int nameLen,
                final int line, final int col) {
            try {
                this.handler.handleAutoCloseElementStart(buffer, nameOffset, nameLen, line, col);
            } catch (final ParseException e) {
                throw new TemplateProcessingException("Parse exception during processing of inlining", e);
            }
        }

        public void handleAutoCloseElementEnd(
                final char[] buffer, final int nameOffset, final int nameLen,
                final int line, final int col) {
            try {
                this.handler.handleAutoCloseElementEnd(buffer, nameOffset, nameLen, line, col);
            } catch (final ParseException e) {
                throw new TemplateProcessingException("Parse exception during processing of inlining", e);
            }
        }

        public void handleAttribute(
                final char[] buffer,
                final int nameOffset, final int nameLen, final int nameLine, final int nameCol,
                final int operatorOffset, final int operatorLen, final int operatorLine, final int operatorCol,
                final int valueContentOffset, final int valueContentLen,
                final int valueOuterOffset, final int valueOuterLen,
                final int valueLine, final int valueCol) {
            try {
                this.handler.handleAttribute(
                        buffer,
                        nameOffset, nameLen, nameLine, nameCol,
                        operatorOffset, operatorLen, operatorLine, operatorCol,
                        valueContentOffset, valueContentLen,
                        valueOuterOffset, valueOuterLen,
                        valueLine, valueCol);
            } catch (final ParseException e) {
                throw new TemplateProcessingException("Parse exception during processing of inlining", e);
            }
        }

    }

}
(e) -2 a Which is the biggest value? (a) 3/2 (b) 4 (c) -108 (d) -1/3 (e) 2 (f) -0.1 b What is the third biggest value in -4/11, 0.4, 0.03, 3/7, 0.16? 0.16 What is the smallest value in -350, 0.2, 1/36? -350 Which is the smallest value? (a) 1 (b) 7 (c) -8/15 (d) -15 (e) 4 d Which is the third biggest value? (a) -0.3 (b) -0.1 (c) 1/27 (d) -3 a Which is the third smallest value? (a) 86 (b) -0.1 (c) 1780 (d) 0.3 a What is the second biggest value in 24, 1/8, 1189? 24 Which is the smallest value? (a) 34 (b) 1/5 (c) 10/23 (d) -4/7 (e) -0.3 d Which is the fifth biggest value? (a) 9 (b) 3 (c) 0.5 (d) -7.8 (e) -12 e Which is the second smallest value? (a) -20 (b) 74 (c) 3 (d) -0.05 d Which is the fourth smallest value? (a) -5 (b) -2/109 (c) 12/13 (d) 15 d Which is the second smallest value? (a) -41704 (b) 3 (c) 2/15 (d) 5/2 (e) 1/6 (f) -1/4 f Which is the second biggest value? (a) 1.6 (b) 13 (c) 41 b Which is the second smallest value? (a) -7 (b) 320 (c) 7 c Which is the biggest value? (a) -524 (b) 0.06 (c) -1/3 (d) -0.5 (e) -0.024 b What is the second biggest value in 1/4, 1, 26.3? 1 What is the third biggest value in 0, -0.12773, 6? -0.12773 What is the biggest value in 5, 1862, 0.3, -1/5? 1862 Which is the second smallest value? (a) 0.3 (b) -317 (c) 3 (d) 2 (e) 0.2 e What is the smallest value in 1.1, 1, 172/9, 5, -2/9? -2/9 Which is the third smallest value? (a) 2 (b) 0 (c) 11749 (d) -4 (e) 3 a Which is the second biggest value? (a) 0.5 (b) -234 (c) -9 c Which is the third biggest value? (a) -0.5 (b) -3 (c) 0.0541 (d) 0.3 (e) 0.4 c Which is the sixth biggest value? (a) -5.1 (b) -8 (c) 3 (d) 0.1 (e) -1/3 (f) 5 b What is the biggest value in -5, -0.4, -2/7, 52/3, 0.3? 52/3 What is the fifth biggest value in -513, 4/7, 2, -3, -104? -513 Which is the third smallest value? (a) -0.2 (b) -545 (c) -0.5 (d) 5 a What is the second smallest value in 4, -2, -4, 96/11? -2 What is the smallest value in 6, 1/11, -4, 5/2, 3/4, -30/67? -4 What is the third smallest value in 1, -0.0623, 7? 7 What is the second smallest value in -8, 3690, 1/2, -0.2, -3/4? -3/4 What is the biggest value in 1/5, -3, -1015, 2/7? 2/7 Which is the second biggest value? (a) 0.7 (b) 1017 (c) 7/6 c What is the biggest value in 1/2, 4, 40, -0.29, -8? 40 Which is the second smallest value? (a) 4/3 (b) -28 (c) -1/8 (d) 1/4 (e) -2/23 c What is the fourth smallest value in 5, -187, 20, -3? 20 Which is the second smallest value? (a) -0.2 (b) -4/7 (c) 1 (d) 0.372 (e) -0.5 (f) 2/7 e Which is the fifth smallest value? (a) 4 (b) -3 (c) -2/53 (d) 3 (e) 1.5 a Which is the second smallest value? (a) 20 (b) 2/3 (c) 25.82 a What is the fourth smallest value in 11, -3.15, -5/2, 5, 3, -0.5? 3 Which is the third smallest value? (a) 0.378 (b) 1.27 (c) 2/7 b What is the third biggest value in -0.36, 2/83, 19? -0.36 What is the third smallest value in 0.6, 1/4, 6, 36, 4? 4 Which is the fourth smallest value? (a) -0.4 (b) -3/13 (c) -2.42 (d) 0.2 (e) -3 b Which is the second biggest value? (a) 1 (b) -0.16 (c) -1 (d) -52 b What is the biggest value in 3, -0.1, -294? 3 Which is the second biggest value? (a) 0 (b) -2 (c) -1/5 (d) 12/19 (e) -9 (f) -0.5 a What is the second biggest value in 0.1, 2/5, -2, 30, -21, -1/2? 2/5 Which is the biggest value? (a) 0.4 (b) 5 (c) 0.06 (d) -964 (e) 4 (f) -2 b Which is the third smallest value? (a) 29 (b) 11165 (c) 2 b Which is the smallest value? (a) -1 (b) -8/17 (c) 5 (d) 20/7 a What is the biggest value in 332, 0.03, 43? 332 Which is the third biggest value? (a) 152 (b) -2/3 (c) 0.7 b Which is the third biggest value? 
(a) 1/5 (b) -2 (c) 0.03 (d) 5 (e) 0.3 (f) 1719 e What is the smallest value in -50, -17, -1/73? -50 What is the fourth smallest value in 2, -4, 0.09, -10446? 2 What is the second biggest value in -99, 7, -13, 1? 1 Which is the second smallest value? (a) -0.07 (b) -51 (c) -22 (d) -0.4 c What is the smallest value in -1/2, -2/7, -1.86, 9.1? -1.86 What is the third biggest value in 0.4, 3, 10, 2, -4/3, -0.9? 2 What is the third biggest value in -2/15, 0.3, -1636? -1636 What is the sixth smallest value in -9, 5, 4, 3, -3/8, 1.1081? 5 Which is the third smallest value? (a) 1/2 (b) -4/3 (c) 1370 c What is the fifth smallest value in 2.4612, -4, -1, -1/7, 2/9, -3? 2/9 Which is the biggest value? (a) -5 (b) -2/23 (c) -4 (d) 17 (e) 5 d What is the biggest value in 3, -645, 0.1, 2/297? 3 What is the second smallest value in 0.2, 0.1, 1, -1/211? 0.1 Which is the fifth biggest value? (a) 47/2 (b) -2/3 (c) -2 (d) 0.02 (e) 1 c Which is the third smallest value? (a) 2/5 (b) 9 (c) -4 (d) -1/3 (e) 1/11 (f) 130 e What is the fifth biggest value in -4, 11/2, 5, -49, -1, 4? -4 Which is the fifth biggest value? (a) 1 (b) -2/5 (c) -112 (d) -1 (e) -2 c What is the second biggest value in -18/11, -1/3, 0.06, 4/5, -2/5, -53? 0.06 Which is the third biggest value? (a) 226 (b) 2/5 (c) -4 (d) 0.71 b What is the fourth biggest value in -0.8, 172, 18, 3/7, 0? 0 Which is the second biggest value? (a) 0.1 (b) 1 (c) -2 (d) 735 (e) 1/2 b Which is the fifth biggest value? (a) 1/8 (b) 6 (c) 20 (d) 35 (e) -2 e Which is the biggest value? (a) 1/8 (b) 0 (c) -1 (d) 987/37 d What is the smallest value in 2.487, 13, -1/5? -1/5 Which is the fourth smallest value? (a) -1011 (b) 0.3 (c) 0.2 (d) -5 (e) -1/7 c Which is the third biggest value? (a) -134636 (b) -3 (c) 4/3 a Which is the fifth smallest value? (a) -2/19 (b) -2/31 (c) -5 (d) 0.9 (e) 0.2 (f) -0.1 e What is the sixth smallest value in 1.1, 2, -58, -2, 0.5, -4? 2 Which is the smallest value? (a) 39/8 (b) -5/14 (c) 0 b Which is the fourth biggest value? (a) -4 (b) 5 (c) 80/7 (d) -1/2 a What is the third biggest value in 1/18, 3, 39, 4/15? 4/15 What is the fourth smallest value in -0.3, 3, -8, -2, -1, -17/5? -1 Which is the third biggest value? (a) 2 (b) -2/9 (c) -10 (d) 3 (e) -0.175 e What is the second smallest value in 2/7, -0.4, -3674? -0.4 What is the third smallest value in 0.479, 0.2, -0.07, 264? 0.479 Which is the fifth biggest value? (a) -0.21 (b) 0.3 (c) -3/5 (d) -56 (e) 0.2 (f) 2/3 c What is the second smallest value in 0.04, -0.3, 13, -94, -4/7? -4/7 What is the second smallest value in -1, 3/2, -4, -3/145? -1 Which is the fourth smallest value? (a) 0.4 (b) 1/2 (c) -2 (d) 42/13 (e) -3/2 (f) 3/7 f Which is the third biggest value? (a) 0.2 (b) -1/5 (c) -16/1137 b What is the third smallest value in -4, 224.1, 4, 76, 6, -0.3? 4 What is the smallest value in 50, 7, 2843/6? 7 Which is the fifth smallest value? (a) -0.1 (b) 1/4 (c) 2/7 (d) 0 (e) 2/111 c What is the fourth biggest value in 1554, -0.2, -3/2, 3.8? -3/2 Which is the third smallest value? (a) -2 (b) -5/11 (c) 1 (d) 0.5 (e) -6.8 (f) -4 a Which is the third smallest value? (a) -11 (b) -2/231 (c) 0.5 (d) 0.02 d What is the second smallest value in 0.15, -1/3, 2/7, -20, 6? -1/3 What is the fourth biggest value in 4, 1, -1/4, -4, 2/7, 308? 2/7 What is the biggest value in 94, -22, -1? 94 Which is the smallest value? (a) -3/5 (b) 12 (c) 74 (d) 3 a Which is the third biggest value? (a) -2 (b) -21 (c) -8/5 (d) -0.2 a Which is the second biggest value? 
(a) -0.193 (b) 3 (c) -54 (d) -3/2 a Which is the third biggest value? (a) 8 (b) 0.1 (c) -0.5 (d) 3 (e) -1608 b Which is the fourth biggest value? (a) 5/3 (b) -5520 (c) 2/15 (d) -7 b Which is the third smallest value? (a) -0.04 (b) -1/3 (c) -1713 a Which is the second biggest value? (a) 0.01849 (b) 2 (c) 5 b Which is the fourth biggest value? (a) -22 (b) 0.01 (c) 0.4 (d) -43 (e) 0.2 a What is the second smallest value in 1.2, 2/11, 0.16, -5? 0.16 What is the biggest value in -0.4, -1.59, -5, 1.04? 1.04 Which is the smallest value? (a) 1/9 (b) 1/4 (c) 0.9 (d) 3054 (e) -2/7 (f) 0 e What is the second smallest value in -94, -113/4, 3/5, 3? -113/4 Which is the fourth biggest value? (a) -1 (b) 3/7 (c) -74942 (d) 0.8 c What is the third biggest value in -2, 2, 316907, -0.1? -0.1 What is the second biggest value in 33, -11, -39, 6? 6 What is the fifth smallest value
COPIAPO, Chile — Chilean officials are taking measures to alleviate depression among the 33 miners trapped in a collapsed mine after telling them it might be months before rescuers reach them, according to a report.

Health Minister Jaime Manalich said the officials told the group that "they would not be rescued before the Fiestas Patrias, and that we hoped to get them out before Christmas," the AFP news agency reported. Fiestas Patrias is Chile's Independence Day celebration, held on Sept. 18.

Manalich told AFP that the miners, who are trapped 2,300 feet underground, reacted calmly to the news. The group has been trapped since Aug. 5.

The news service said the government was taking steps — from getting doses of anti-depressants for the men to sending down fresh clothes and games — to help keep them physically and mentally fit for the grueling wait ahead.

"We expect that after the initial euphoria of being found, we will likely see a period of depression and anguish," Manalich said. "We are preparing medication for them. It would be naive to think they can keep their spirits up like this."

The government has asked NASA and Chile's submarine fleet for tips on survival in extreme, confined conditions, and is looking to send the men space mission-like rations.

"We hope to define a secure area where they can establish various places — one for resting and sleeping, one for diversion, one for food, another for work," Manalich said. Establishing a daily and nightly routine is important, the minister said, adding that having fun also will be critical. The rescue team is creating an entertainment program "that includes singing, games of movement, playing cards. We want them to record songs, to make videos, to create works of theater for the family."

Second bore hole finished

Some mining experts believe it will take far less than four months to dig the tunnel. Larry Grayson, a professor of mining engineering at Penn State University, said it could take just 25 to 30 days to reach the miners. Gustavo Lagos, a professor at the Catholic University of Chile's Center for Mining, estimated the job could be done in two months if all goes well and four months if it all bogs down.

Still, officials are also planning exercise and other activities to keep the miners healthy and trim, using some of the passages that remain accessible to the miners, Manalich said. Even though the miners have lost around 22 pounds each, Chilean officials are trying to ensure they don't bulk up before their rescue. They said the miners would have to be no more than 35 inches around the waist to make it out of the tunnel. They remain days away from being able to eat solid food because they went hungry for so long. Rescuers have sent down a high-energy glucose gel, and on Wednesday they gave the miners cans of a milk-like drink enriched with calories and protein.

The escape tunnel will be about 26 inches wide — the diameter of a typical bike tire — and stretch for more than 2,200 feet through solid rock. That gives it a circumference of more than 80 inches (π × 26 ≈ 82), but rescuers also have to account for the space of the basket that will be used to pull the miners to safety.

'My soul ached'

The miners and their relatives are exchanging letters via the shaft, a crucial part of maintaining their mental health. "You have no idea how much my soul ached to have been underground and unable to tell you I was alive," trapped miner Edison Pena said in a letter to his family. "The hardest thing is not being able to see you."

Fellow miner Esteban Rojas promised his wife he would finally buy her a wedding dress as soon as he gets out, and hold a church marriage ceremony, 25 years after they wed in a registry office.

Officials have been vetting letters sent by relatives, to avoid any shocks. Some disagree with the method. "It's very important for the miners' mental health that they communicate openly with their families, and without filters, either by letter or by phone," said Claudio Barrales, a psychologist at the Universidad Central in Santiago.

Outside, Chilean flags are everywhere — including the torn one that became a symbol of Chile's resistance when a young man was photographed holding it just after a massive earthquake rocked the South American nation last year. That flag was raised above 33 others that sit on a hill over the mine, each representing one of the trapped men.

Trapped miners' relatives, who have been living in plastic tents at the mine head in a makeshift settlement dubbed Camp Hope, have been gradually returning to their normal lives, but some were drawing up rosters to take turns being at the mine.

Push for mining reform

The accident in the small gold and copper mine has turned a spotlight on mine safety in Chile, the world's No. 1 copper producer, although accidents are rare at major mines. The incident is not seen as having a significant impact on output.

President Sebastian Pinera has fired officials of Chile's mining regulator and vowed to overhaul the agency. Analysts say the feel-good factor of finding the miners alive, coupled with the government's hands-on approach, could help Pinera as he tries to push through changes to mining royalties that the center-left opposition had shot down.

Some family members filed suit Wednesday against the mine's owner, Compania Minera San Esteban. Attorney Remberto Valdes, representing the miner Raul Bustos, accused the company of fraud and serious injury based on the lack of safety measures like the escape tunnel that the state-owned Codelco copper company is now preparing to dig. Four municipal governments in the area were preparing a similar claim.

On Aug. 31, the men will have been trapped underground longer than any other miners in history. Last year, three miners survived 25 days trapped in a flooded mine in southern China. Few other rescues have taken more than two weeks.

Video: Trapped miners need to watch their waistlines

Transcript: BRIAN WILLIAMS, anchor (New Orleans): We have an update tonight on those miners, 33 of them in that 2,000-foot-deep copper mine in Chile. They were all discovered alive and mostly well after 17 days. Now, while it's true they may not be rescued until Christmas because boring a big enough hole is a gingerly business, and while it's now all about keeping them healthy and sane until then, we learned today they cannot get out if they are any bigger around than a 35-inch waistline. Sadly, it shouldn't be a problem as many of them have already lost a lot of pounds. Hearing this, we were reminded today, the average American waistline is almost 40 inches for men, 37 inches for American women.
/*
 * Copyright (C) 2007-2015 Lonelycoder AB
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 *
 * This program is also available under a commercial proprietary license.
 * For more information, contact andreas@lonelycoder.com
 */
#ifndef PROP_I_H__
#define PROP_I_H__

#include "prop.h"
#include "misc/pool.h"
#include "misc/redblack.h"
#include "misc/lockmgr.h"

extern hts_mutex_t prop_mutex;
extern hts_mutex_t prop_tag_mutex;
extern pool_t *prop_pool;
extern pool_t *notify_pool;
extern pool_t *sub_pool;

TAILQ_HEAD(prop_queue, prop);
LIST_HEAD(prop_list, prop);
RB_HEAD_NFL(prop_tree, prop);
LIST_HEAD(prop_sub_list, prop_sub);
TAILQ_HEAD(prop_sub_dispatch_queue, prop_sub_dispatch);

/**
 *
 */
struct prop_courier {
  struct prop_notify_queue pc_queue_nor;
  struct prop_notify_queue pc_queue_exp;
  struct prop_notify_queue pc_dispatch_queue;
  struct prop_notify_queue pc_free_queue;

  void *pc_entry_lock;
  lockmgr_fn_t *pc_lockmgr;

  hts_cond_t pc_cond;
  int pc_has_cond;

  hts_thread_t pc_thread;
  int pc_run;
  int pc_detached;
  int pc_flags;

  void (*pc_notify)(void *opaque);
  void *pc_opaque;

  void (*pc_prologue)(void);
  void (*pc_epilogue)(void);

  int pc_refcount;
  char *pc_name;
};

/**
 *
 */
typedef struct prop_notify {
  TAILQ_ENTRY(prop_notify) hpn_link;
  prop_sub_t *hpn_sub;
  prop_event_t hpn_event;

  union {
    prop_t *p;
    prop_vec_t *pv;
    struct {
      float f;
      int how;
    } f;
    int i;
    struct {
      rstr_t *rstr;
      prop_str_type_t type;
    } rstr;
    struct event *e;
    struct {
      rstr_t *title;
      rstr_t *uri;
    } uri;
    const char *str;
  } u;

#define hpn_prop      u.p
#define hpn_propv     u.pv
#define hpn_float     u.f.f
#define hpn_int       u.i
#define hpn_rstring   u.rstr.rstr
#define hpn_rstrtype  u.rstr.type
#define hpn_cstring   u.str
#define hpn_ext_event u.e
#define hpn_uri_title u.uri.title
#define hpn_uri       u.uri.uri

  prop_t *hpn_prop_extra;
  int hpn_flags;

} prop_notify_t;

prop_notify_t *prop_get_notify(prop_sub_t *s);

/**
 * Property types
 */
typedef enum {
  PROP_VOID,
  PROP_DIR,
  PROP_RSTRING,
  PROP_CSTRING,
  PROP_FLOAT,
  PROP_INT,
  PROP_URI,
  PROP_PROP,   /* A simple reference to a prop */
  PROP_ZOMBIE, /* Destroyed, can never be changed again */
  PROP_PROXY,  /* Proxy property, real property is remote */
} prop_type_t;

/**
 *
 */
struct prop {
#ifdef PROP_DEBUG
  uint32_t hp_magic;
#endif

  /**
   * Refcount. Not protected by mutex. Modification needs to be issued
   * using atomic ops. This refcount only protects the memory allocated
   * for this property; in other words, you can assume that a pointer
   * to a prop_t is valid as long as you own a reference to it.
   *
   * Note: hp_xref is another refcount protecting the contents of the
   * entire property.
   */
  atomic_t hp_refcount;

  /**
   * Property name. Protected by mutex
   */
  const char *hp_name;

  union {
    struct {
      /**
       * Parent linkage. Protected by mutex
       */
      struct prop *hp_parent;
      TAILQ_ENTRY(prop) hp_parent_link;

      /**
       * Subscriptions. Protected by mutex
       */
      struct prop_sub_list hp_value_subscriptions;
      struct prop_sub_list hp_canonical_subscriptions;
    };

    // When hp_type == PROP_PROXY
    struct {
      struct prop_list hp_owned;
      union {
        // if PROP_PROXY_OWNED_BY_PROP is NOT set
        struct {
          RB_ENTRY(prop) hp_owner_sub_link;
          struct prop_sub *hp_owner_sub;
        };
        // if PROP_PROXY_OWNED_BY_PROP is set
        struct {
          LIST_ENTRY(prop) hp_owned_prop_link;
        };
      };
    };
  };

  /**
   * Originating property. Used when reflecting properties
   * in the tree (aka symlinks). Protected by mutex
   */
  struct prop *hp_originator;
  LIST_ENTRY(prop) hp_originator_link;

  /**
   * Properties receiving our values. Protected by mutex
   */
  struct prop_list hp_targets;

  /**
   * Payload type
   * Protected by mutex
   */
#ifdef PROP_DEBUG
  prop_type_t hp_type;
#else
  uint8_t hp_type;
#endif

  /**
   * Extended refcount. Used to keep the contents of the property alive.
   * We limit this to 255, which should never be a problem, and it's
   * checked in the code as well.
   * Protected by mutex
   */
  uint8_t hp_xref;

  /**
   * Various flags
   * Protected by mutex
   */
  uint16_t hp_flags;

  /**
   * The float/int prop should be clipped according to min/max
   */
#define PROP_CLIPPED_VALUE        0x1

  /**
   * hp_name is not malloc()ed but rather points to a compile const string
   * that should not be free()d upon prop finalization
   */
#define PROP_NAME_NOT_ALLOCATED   0x2

  /**
   * We hold an xref to the prop pointed to by hp_originator,
   * so do a prop_destroy0() when we unlink/destroy this prop
   */
#define PROP_XREFED_ORIGINATOR    0x4

  /**
   * This property is monitored by one or more of its subscribers
   */
#define PROP_MONITORED            0x8

  /**
   * This property has a PROP_SUB_MULTI subscription attached to it
   */
#define PROP_MULTI_SUB            0x10

  /**
   * This property has a PROP_MULTI_SUB property above it in the hierarchy
   */
#define PROP_MULTI_NOTIFY         0x20

#define PROP_REF_TRACED           0x40

  /**
   * For mark and sweep
   */
#define PROP_MARKED               0x80

  /**
   * For unlink mark and sweep
   */
#define PROP_INT_MARKED           0x100

  /**
   * Special debug
   */
#define PROP_DEBUG_THIS           0x200

  /**
   * Indicates that this is a proxy property that should follow symbolic
   * links when referenced on the remote end.
   */
#define PROP_PROXY_FOLLOW_SYMLINK 0x400

  /**
   * Set if a prop proxy is owned by a property. This basically means
   * that when the owning property is destroyed, this property should be
   * destroyed as well.
   */
#define PROP_PROXY_OWNED_BY_PROP  0x800

  /**
   * These two are used to carry the have_more_childs information
   * to subscriptions that arrive after the have_more_childs()
   * call has been made.
   */
#define PROP_HAVE_MORE            0x1000
#define PROP_HAVE_MORE_YES        0x2000

  /**
   * Tags. Protected by prop_tag_mutex
   */
  struct prop_tag *hp_tags;

  /**
   * Actual payload
   * Protected by mutex
   */
  union {
    struct {
      float val, min, max;
    } f;
    struct {
      int val, min, max;
    } i;
    struct {
      rstr_t *rstr;
      prop_str_type_t type;
    } rstr;
    const char *cstr;
    struct {
      struct prop_queue childs;
      struct prop *selected;
    } c;
    struct pixmap *pixmap;
    struct {
      rstr_t *title;
      rstr_t *uri;
    } uri;
    struct {
      struct prop_proxy_connection *ppc;
      char **pfx;
      uint32_t id;
    } proxy;
    struct prop *prop;
  } u;

#define hp_cstring   u.cstr
#define hp_rstring   u.rstr.rstr
#define hp_rstrtype  u.rstr.type
#define hp_float     u.f.val
#define hp_int       u.i.val
#define hp_childs    u.c.childs
#define hp_selected  u.c.selected
#define hp_pixmap    u.pixmap
#define hp_uri_title u.uri.title
#define hp_uri       u.uri.uri
#define hp_prop      u.prop
#define hp_proxy_ppc u.proxy.ppc
#define hp_proxy_id  u.proxy.id
#define hp_proxy_pfx u.proxy.pfx

#ifdef PROP_DEBUG
  SIMPLEQ_HEAD(, prop_ref_trace) hp_ref_trace;
  const char *hp_file;
  int hp_line;
#endif
};

/**
 * This struct is used in the global dispatch (ie, where we don't
 * have an appointed courier) to maintain partial ordering of
 * notifications.
 *
 * Basically we need to make sure that we don't deliver notifications
 * out of order to subscriptions, which could happen if we just
 * spawn a bunch of threads that dequeue notifications without
 * any control.
 *
 * With this struct we make sure that a single subscription can only
 * be served by one thread at a time.
 */
typedef struct prop_sub_dispatch {
  struct prop_notify_queue psd_notifications;
  TAILQ_ENTRY(prop_sub_dispatch) psd_link;
  struct prop_sub_dispatch_queue psd_wait_queue;

  // The refcount is only in use for subscriptions in
  // PROP_SUB_DISPATCH_MODE_GROUP
  int psd_refcount;
} prop_sub_dispatch_t;

/**
 *
 */
typedef struct prop_originator_tracking {
  prop_t *pot_p;
  struct prop_originator_tracking *pot_next;
} prop_originator_tracking_t;

/**
 *
 */
struct prop_sub {
#ifdef PROP_SUB_STATS
  LIST_ENTRY(prop_sub) hps_all_sub_link;
#endif

  /**
   * Callback. May never be changed. Not protected by mutex
   */
  void *hps_callback;

  /**
   * Opaque value for callback
   */
  void *hps_opaque;

  /**
   * Trampoline. A transform function that invokes the actual user
   * supplied callback.
   * May never be changed. Not protected by mutex.
   */
  prop_trampoline_t *hps_trampoline;

  /**
   * Pointer to dispatch structure
   *
   * If hps_global_dispatch is set this points to a prop_sub_dispatch when
   * there are active notifications on this subscription. If no
   * notifications are pending it will be NULL.
   *
   * If hps_global_dispatch is not set this points to a prop_courier.
   */
  void *hps_dispatch;

  /**
   * Lock to be held when invoking callback. It must also be held
   * when destroying the subscription.
   */
  void *hps_lock;

  /**
   * Function to call to obtain / release the lock.
   */
  lockmgr_fn_t *hps_lockmgr;

  /**
   * Linkage to property or proxy connection. Protected by global mutex
   */
  LIST_ENTRY(prop_sub) hps_value_prop_link;

  /**
   * Property backing this subscription.
   *
   * For non-proxied properties this points to the property with the value,
   * and hps_value_prop_link is linked to that property's list.
   *
   * For proxied properties this is only set if we are subscribing to the
   * value prop (PROP_SUB_SEND_VALUE_PROP), and if set we own the property
   * and must destroy it via prop_destroy0() when the subscription dies.
   */
  prop_t *hps_value_prop;

  union {
    struct {
      // If hps_proxy is not set, these are the "active" members
      prop_t *hps_canonical_prop;
      LIST_ENTRY(prop_sub) hps_canonical_prop_link;
      union {
        prop_originator_tracking_t *hps_pots;
        prop_t *hps_origin;
      };
    };

    // If hps_proxy is set, these are the "active" members
    struct {
      struct prop_proxy_connection *hps_ppc;
      struct prop_tree hps_prop_tree;
      int hps_proxy_subid;
    };
  };

  /**
   * Refcount. Not protected by mutex. Modification needs to be issued
   * using atomic ops.
   */
  atomic_t hps_refcount;

  /**
   * Set when a subscription is destroyed. Protected by hps_lock.
   * In other words, it's impossible to destroy a subscription
   * if no lock is specified.
   */
  uint8_t hps_zombie;

  /**
   * Used to avoid sending two notifications when relinking
   * to another tree. Protected by global mutex
   */
  uint8_t hps_pending_unlink : 1;
  uint8_t hps_multiple_origins : 1;
  uint8_t hps_dispatch_mode : 2;
#define PROP_SUB_DISPATCH_MODE_COURIER 0
#define PROP_SUB_DISPATCH_MODE_GLOBAL  1
#define PROP_SUB_DISPATCH_MODE_GROUP   2
  uint8_t hps_proxy : 1;

  /**
   * Flags as passed to prop_subscribe(). May never be changed
   */
  uint16_t hps_flags;

  /**
   * Extra value for use by caller
   */
  int hps_user_int;

#ifdef PROP_SUB_RECORD_SOURCE
  const char *hps_file;
  int hps_line;
#endif
};

#ifdef PROP_DEBUG
#define prop_ref_dec_locked(p) prop_ref_dec_traced_locked(p, __FILE__, __LINE__)
void prop_ref_dec_traced_locked(prop_t *p, const char *file, int line);
#else
void prop_ref_dec_locked(prop_t *p);
#endif

prop_t *prop_create0(prop_t *parent, const char *name, prop_sub_t *skipme, int flags);
prop_t *prop_make(const char *name, int noalloc, prop_t *parent);
void prop_make_dir(prop_t *p, prop_sub_t *skipme, const char *origin);
void prop_move0(prop_t *p, prop_t *before, prop_sub_t *skipme);
void prop_req_move0(prop_t *p, prop_t *before, prop_sub_t *skipme);
void prop_link0(prop_t *src, prop_t *dst, prop_sub_t *skipme, int hard, int debug);
int prop_set_parent0(prop_t *p, prop_t *parent, prop_t *before, prop_sub_t *skipme);
void prop_unparent0(prop_t *p, prop_sub_t *skipme);
int prop_destroy0(prop_t *p);
void prop_suggest_focus0(prop_t *p);
void prop_unsubscribe0(prop_sub_t *s);
rstr_t *prop_get_name0(prop_t *p);
void prop_notify_child2(prop_t *child, prop_t *parent, prop_t *sibling,
                        prop_event_t event, prop_sub_t *skipme, int flags);
void prop_notify_childv(prop_vec_t *childv, prop_t *parent, prop_event_t event,
                        prop_sub_t *skipme, prop_t *p2);
void prop_print_tree0(prop_t *p, int indent, int followlinks);
void prop_have_more_childs0(prop_t *p, int yes);
void prop_want_more_childs0(prop_sub_t *s);
void prop_set_string_exl(prop_t *p, prop_sub_t *skipme, const char *str,
                         prop_str_type_t type);
void prop_sub_ref_dec_locked(prop_sub_t *s);
int prop_dispatch_one(prop_notify_t *n, int lockmode);
void prop_courier_enqueue(prop_sub_t *s, prop_notify_t *n);
const char *prop_get_DN(prop_t *p, int compact);

#endif // PROP_I_H__
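The prop_sub_dispatch comment above describes the key constraint of the global dispatcher: notifications for any single subscription must be delivered in order, by at most one thread at a time, while unrelated subscriptions may be served in parallel. The following is a minimal Python sketch of that ordering idea only, not the library's actual C implementation; the names Subscription and Dispatcher are hypothetical.

# Sketch (assumed names, not the prop library's API): each subscription
# owns a FIFO queue, and at most one worker drains a given subscription.
import threading
from collections import deque
from concurrent.futures import ThreadPoolExecutor

class Subscription:
    def __init__(self, callback):
        self.callback = callback
        self.pending = deque()   # queued notifications, FIFO
        self.active = False      # True while a worker is draining us

class Dispatcher:
    def __init__(self, workers=4):
        self.lock = threading.Lock()
        self.pool = ThreadPoolExecutor(max_workers=workers)

    def notify(self, sub, value):
        with self.lock:
            sub.pending.append(value)
            if sub.active:       # a worker already owns this subscription;
                return           # it will pick the new item up in order
            sub.active = True
        self.pool.submit(self._drain, sub)

    def _drain(self, sub):
        while True:
            with self.lock:
                if not sub.pending:
                    sub.active = False   # release ownership
                    return
                value = sub.pending.popleft()
            sub.callback(value)          # delivered in enqueue order

# Usage: values for one subscription arrive in order even with 4 workers.
if __name__ == "__main__":
    d = Dispatcher()
    s = Subscription(lambda v: print("got", v))
    for i in range(5):
        d.notify(s, i)
    d.pool.shutdown(wait=True)

The design choice mirrors the comment: ordering is enforced per subscription (the active flag), not globally, so independent subscriptions can still be dispatched concurrently.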
#!/usr/bin/env python
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai

__license__ = 'GPL v3'
__copyright__ = '2012, Kovid Goyal <kovid@kovidgoyal.net>'
__docformat__ = 'restructuredtext en'

import re, codecs, os, numbers
from collections import namedtuple

from calibre import strftime
from calibre.customize import CatalogPlugin
from calibre.library.catalogs import FIELDS, TEMPLATE_ALLOWED_FIELDS
from calibre.customize.conversion import DummyReporter
from calibre.ebooks.metadata import format_isbn
from polyglot.builtins import filter, string_or_bytes, unicode_type


class BIBTEX(CatalogPlugin):

    'BIBTEX catalog generator'

    Option = namedtuple('Option', 'option, default, dest, action, help')

    name = 'Catalog_BIBTEX'
    description = 'BIBTEX catalog generator'
    supported_platforms = ['windows', 'osx', 'linux']
    author = 'Sengian'
    version = (1, 0, 0)
    file_types = {'bib'}

    cli_options = [
        Option('--fields', default='all', dest='fields', action=None,
            help=_('The fields to output when cataloging books in the '
                'database. Should be a comma-separated list of fields.\n'
                'Available fields: %(fields)s.\n'
                'plus user-created custom fields.\n'
                'Example: %(opt)s=title,authors,tags\n'
                "Default: '%%default'\n"
                "Applies to: BIBTEX output format") % dict(
                    fields=', '.join(FIELDS), opt='--fields')),

        Option('--sort-by', default='id', dest='sort_by', action=None,
            help=_('Output field to sort on.\n'
                'Available fields: author_sort, id, rating, size, timestamp, title.\n'
                "Default: '%default'\n"
                "Applies to: BIBTEX output format")),

        Option('--create-citation', default='True', dest='impcit', action=None,
            help=_('Create a citation for BibTeX entries.\n'
                'Boolean value: True, False\n'
                "Default: '%default'\n"
                "Applies to: BIBTEX output format")),

        Option('--add-files-path', default='True', dest='addfiles', action=None,
            help=_('Create a file entry if formats is selected for BibTeX entries.\n'
                'Boolean value: True, False\n'
                "Default: '%default'\n"
                "Applies to: BIBTEX output format")),

        Option('--citation-template', default='{authors}{id}', dest='bib_cit', action=None,
            help=_('The template for citation creation from database fields.\n'
                'Should be a template with {} enclosed fields.\n'
                'Available fields: %s.\n'
                "Default: '%%default'\n"
                "Applies to: BIBTEX output format") % ', '.join(TEMPLATE_ALLOWED_FIELDS)),

        Option('--choose-encoding', default='utf8', dest='bibfile_enc', action=None,
            help=_('BibTeX file encoding output.\n'
                'Available types: utf8, cp1252, ascii.\n'
                "Default: '%default'\n"
                "Applies to: BIBTEX output format")),

        Option('--choose-encoding-configuration', default='strict', dest='bibfile_enctag', action=None,
            help=_('BibTeX file encoding flag.\n'
                'Available types: strict, replace, ignore, backslashreplace.\n'
                "Default: '%default'\n"
                "Applies to: BIBTEX output format")),

        Option('--entry-type', default='book', dest='bib_entry', action=None,
            help=_('Entry type for BibTeX catalog.\n'
                'Available types: book, misc, mixed.\n'
                "Default: '%default'\n"
                "Applies to: BIBTEX output format"))]

    def run(self, path_to_output, opts, db, notification=DummyReporter()):
        from calibre.utils.date import isoformat
        from calibre.utils.html2text import html2text
        from calibre.utils.bibtex import BibTeX
        from calibre.library.save_to_disk import preprocess_template
        from calibre.utils.logging import default_log as log
        from calibre.utils.filenames import ascii_text

        library_name = os.path.basename(db.library_path)

        def create_bibtex_entry(entry, fields, mode, template_citation,
                                bibtexdict, db, citation_bibtex=True, calibre_files=True):

            # BibTeX doesn't like UTF-8, but keep unicode until writing.
            # Define the starting chain; in strict book mode a non-book entry returns ''.
            bibtex_entry = []
            if mode != "misc" and check_entry_book_valid(entry):
                bibtex_entry.append('@book{')
            elif mode != "book":
                bibtex_entry.append('@misc{')
            else:
                # Case strict book
                return ''

            if citation_bibtex:
                # Citation tag
                bibtex_entry.append(make_bibtex_citation(entry, template_citation, bibtexdict))
                bibtex_entry = [' '.join(bibtex_entry)]

            for field in fields:
                if field.startswith('#'):
                    item = db.get_field(entry['id'], field, index_is_id=True)
                    if isinstance(item, (bool, numbers.Number)):
                        item = repr(item)
                elif field == 'title_sort':
                    item = entry['sort']
                elif field == 'library_name':
                    item = library_name
                else:
                    item = entry[field]

                # Check if the field should be included (None or empty)
                if item is None:
                    continue
                try:
                    if len(item) == 0:
                        continue
                except TypeError:
                    pass

                if field == 'authors':
                    bibtex_entry.append('author = "%s"' % bibtexdict.bibtex_author_format(item))
                elif field == 'id':
                    bibtex_entry.append('calibreid = "%s"' % int(item))
                elif field == 'rating':
                    bibtex_entry.append('rating = "%s"' % int(item))
                elif field == 'size':
                    bibtex_entry.append('%s = "%s octets"' % (field, int(item)))
                elif field == 'tags':
                    # A list to flatten
                    bibtex_entry.append('tags = "%s"' % bibtexdict.utf8ToBibtex(', '.join(item)))
                elif field == 'comments':
                    # \n removal
                    item = item.replace('\r\n', ' ')
                    item = item.replace('\n', ' ')
                    # Unmatched brace removal (users should use \leftbrace
                    # or \rightbrace for single braces)
                    item = bibtexdict.stripUnmatchedSyntax(item, '{', '}')
                    # HTML to text
                    try:
                        item = html2text(item)
                    except:
                        log.warn("Failed to convert comments to text")
                    bibtex_entry.append('note = "%s"' % bibtexdict.utf8ToBibtex(item))
                elif field == 'isbn':
                    # Could be 9, 10 or 13 digits
                    bibtex_entry.append('isbn = "%s"' % format_isbn(item))
                elif field == 'formats':
                    # Add file path if formats is selected
                    formats = [format.rpartition('.')[2].lower() for format in item]
                    bibtex_entry.append('formats = "%s"' % ', '.join(formats))
                    if calibre_files:
                        files = [':%s:%s' % (format, format.rpartition('.')[2].upper())
                                 for format in item]
                        bibtex_entry.append('file = "%s"' % ', '.join(files))
                elif field == 'series_index':
                    bibtex_entry.append('volume = "%s"' % int(item))
                elif field == 'timestamp':
                    bibtex_entry.append('timestamp = "%s"' % isoformat(item).partition('T')[0])
                elif field == 'pubdate':
                    bibtex_entry.append('year = "%s"' % item.year)
                    bibtex_entry.append('month = "%s"' % bibtexdict.utf8ToBibtex(strftime("%b", item)))
                elif field.startswith('#') and isinstance(item, string_or_bytes):
                    bibtex_entry.append('custom_%s = "%s"' % (field[1:], bibtexdict.utf8ToBibtex(item)))
                elif isinstance(item, string_or_bytes):
                    # elif field in ['title', 'publisher', 'cover', 'uuid', 'ondevice',
                    #     'author_sort', 'series', 'title_sort'] :
                    bibtex_entry.append('%s = "%s"' % (field, bibtexdict.utf8ToBibtex(item)))

            bibtex_entry = ',\n '.join(bibtex_entry)
            bibtex_entry += ' }\n\n'

            return bibtex_entry

        def check_entry_book_valid(entry):
            # Check that the required fields are ok for a book entry
            for field in ['title', 'authors', 'publisher']:
                if entry[field] is None or len(entry[field]) == 0:
                    return False
            if entry['pubdate'] is None:
                return False
            else:
                return True

        def make_bibtex_citation(entry, template_citation, bibtexclass):

            # Define a function to replace each template field by its value
            def tpl_replace(objtplname):
                tpl_field = re.sub(r'[\{\}]', '', objtplname.group())

                if tpl_field in TEMPLATE_ALLOWED_FIELDS:
                    if tpl_field in ['pubdate', 'timestamp']:
                        tpl_field = isoformat(entry[tpl_field]).partition('T')[0]
                    elif tpl_field in ['tags', 'authors']:
                        tpl_field = entry[tpl_field][0]
                    elif tpl_field in ['id', 'series_index']:
                        tpl_field = unicode_type(entry[tpl_field])
                    else:
                        tpl_field = entry[tpl_field]
                    return ascii_text(tpl_field)
                else:
                    return ''

            if len(template_citation) > 0:
                tpl_citation = bibtexclass.utf8ToBibtex(
                    bibtexclass.ValidateCitationKey(re.sub(r'\{[^{}]*\}',
                        tpl_replace, template_citation)))

                if len(tpl_citation) > 0:
                    return tpl_citation

            if len(entry["isbn"]) > 0:
                template_citation = '%s' % re.sub(r'[\D]', '', entry["isbn"])
            else:
                template_citation = '%s' % unicode_type(entry["id"])

            return bibtexclass.ValidateCitationKey(template_citation)

        self.fmt = path_to_output.rpartition('.')[2]
        self.notification = notification

        # Combobox options
        bibfile_enc = ['utf8', 'cp1252', 'ascii']
        bibfile_enctag = ['strict', 'replace', 'ignore', 'backslashreplace']
        bib_entry = ['mixed', 'misc', 'book']

        # Needed because the CLI returns a str while the widget returns an int
        try:
            bibfile_enc = bibfile_enc[opts.bibfile_enc]
            bibfile_enctag = bibfile_enctag[opts.bibfile_enctag]
            bib_entry = bib_entry[opts.bib_entry]
        except:
            if opts.bibfile_enc in bibfile_enc:
                bibfile_enc = opts.bibfile_enc
            else:
                log.warn("Incorrect --choose-encoding flag, revert to default")
                bibfile_enc = bibfile_enc[0]
            if opts.bibfile_enctag in bibfile_enctag:
                bibfile_enctag = opts.bibfile_enctag
            else:
                log.warn("Incorrect --choose-encoding-configuration flag, revert to default")
                bibfile_enctag = bibfile_enctag[0]
            if opts.bib_entry in bib_entry:
                bib_entry = opts.bib_entry
            else:
                log.warn("Incorrect --entry-type flag, revert to default")
                bib_entry = bib_entry[0]

        if opts.verbose:
            opts_dict = vars(opts)
            log("%s(): Generating %s" % (self.name, self.fmt))
            if opts.connected_device['is_device_connected']:
                log(" connected_device: %s" % opts.connected_device['name'])
            if opts_dict['search_text']:
                log(" --search='%s'" % opts_dict['search_text'])

            if opts_dict['ids']:
                log(" Book count: %d" % len(opts_dict['ids']))
                if opts_dict['search_text']:
                    log(" (--search ignored when a subset of the database is specified)")

            if opts_dict['fields']:
                if opts_dict['fields'] == 'all':
                    log(" Fields: %s" % ', '.join(FIELDS[1:]))
                else:
                    log(" Fields: %s" % opts_dict['fields'])

            log(" Output file will be encoded in %s with %s flag" % (bibfile_enc, bibfile_enctag))
            log(" BibTeX entry type is %s with a citation like '%s' flag" % (bib_entry, opts_dict['bib_cit']))

        # If a list of ids is provided, don't use search_text
        if opts.ids:
            opts.search_text = None

        data = self.search_sort_db(db, opts)

        if not len(data):
            log.error("\nNo matching database entries for search criteria '%s'" % opts.search_text)

        # Get the requested output fields as a list
        fields = self.get_output_fields(db, opts)

        if not len(data):
            log.error("\nNo matching database entries for search criteria '%s'" % opts.search_text)

        # Initialize the BibTeX class
        bibtexc = BibTeX()

        # Entries are written after BibTeX formatting (or not)
        if bibfile_enc != 'ascii':
            bibtexc.ascii_bibtex = False
        else:
            bibtexc.ascii_bibtex = True

        # Check citation choice and go to default in case of bad CLI
        if isinstance(opts.impcit, string_or_bytes):
            if opts.impcit == 'False':
                citation_bibtex = False
            elif opts.impcit == 'True':
                citation_bibtex = True
            else:
                log.warn("Incorrect --create-citation, revert to default")
                citation_bibtex = True
        else:
            citation_bibtex = opts.impcit

        # Check add file entry and go to default in case of bad CLI
        if isinstance(opts.addfiles, string_or_bytes):
            if opts.addfiles == 'False':
                addfiles_bibtex = False
            elif opts.addfiles == 'True':
                addfiles_bibtex = True
            else:
                log.warn("Incorrect --add-files-path, revert to default")
                addfiles_bibtex = True
        else:
            addfiles_bibtex = opts.addfiles

        # Preprocess for error and light correction
        template_citation = preprocess_template(opts.bib_cit)

        # Open output and write entries
        with codecs.open(path_to_output, 'w', bibfile_enc, bibfile_enctag) as outfile:
            # File header
            nb_entries = len(data)

            # In strict book mode, check that all entries are books and warn otherwise
            if bib_entry == 'book':
                nb_books = len(list(filter(check_entry_book_valid, data)))
                if nb_books < nb_entries:
                    log.warn("Only %d entries in %d are book compatible" % (nb_books, nb_entries))
                    nb_entries = nb_books

            # If a device is connected, add 'On Device' values to data
            if opts.connected_device['is_device_connected'] and 'ondevice' in fields:
                for entry in data:
                    entry['ondevice'] = db.catalog_plugin_on_device_temp_mapping[entry['id']]['ondevice']

            # outfile.write('%%%Calibre catalog\n%%%{0} entries in catalog\n\n'.format(nb_entries))
            outfile.write('@preamble{"This catalog of %d entries was generated by calibre on %s"}\n\n'
                          % (nb_entries, strftime("%A, %d. %B %Y %H:%M")))

            for entry in data:
                outfile.write(create_bibtex_entry(entry, fields, bib_entry, template_citation,
                                                  bibtexc, db, citation_bibtex, addfiles_bibtex))
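make_bibtex_citation() above drives citation-key generation with re.sub() and a replacement callback over a template of {field} placeholders. Here is a stripped-down, self-contained sketch of that technique in plain Python, with no calibre imports; the names ALLOWED and make_citation are hypothetical, and the final character filter is a simplified stand-in for BibTeX.ValidateCitationKey, not calibre's implementation.

import re

ALLOWED = {'authors', 'id', 'title', 'tags'}   # stand-in for TEMPLATE_ALLOWED_FIELDS

def make_citation(entry, template):
    # Replace each {field} with the entry's value; unknown fields become ''.
    def tpl_replace(match):
        field = match.group(0).strip('{}')
        if field not in ALLOWED:
            return ''
        value = entry.get(field, '')
        if isinstance(value, (list, tuple)):   # e.g. authors/tags: first item
            value = value[0] if value else ''
        return str(value)

    key = re.sub(r'\{[^{}]*\}', tpl_replace, template)
    # Simplified stand-in for ValidateCitationKey: keep BibTeX-safe chars only.
    return re.sub(r'[^A-Za-z0-9_:-]', '', key)

# The plugin's default template '{authors}{id}' would yield 'Doe42' here.
print(make_citation({'authors': ['Doe'], 'id': 42}, '{authors}{id}'))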
The report faults McCabe for leaking information about an August 2016 call to Wall Street Journal reporter Devlin Barrett for an Oct. 30, 2016, story titled "FBI in Internal Feud Over Hillary Clinton Probe." The story -- written just days before the presidential election -- focused on the FBI announcing the reopening of the Clinton investigation after finding thousands of her emails on a laptop belonging to former Democratic Rep. Anthony Weiner, who was married to Clinton aide Huma Abedin. The Journal's account of the call says a senior Justice Department official expressed displeasure to McCabe that FBI agents were still looking into the Clinton Foundation, and that McCabe had defended the agents' authority to pursue the issue.

"Among the purposes of the disclosure was to rebut a narrative that had been developing following a story in The WSJ on Oct. 23, 2016, that questioned McCabe's impartiality in overseeing FBI investigations involving [Clinton], and claimed that McCabe had ordered the termination of the [FBI's Clinton Foundation investigation] due to Department of Justice pressure," the report says.

That leak confirmed the existence of the probe, which then-FBI Director James Comey had up to that point refused to do.

The report says that McCabe "lacked candor" in a conversation with Comey when he said that he had not authorized the disclosure and didn't know who had done so. The IG also found that he lacked candor when questioned by FBI agents on multiple occasions since that conversation, in which he told agents that he had not authorized the disclosure and did not know who was responsible.

[...]

House Judiciary Committee Chairman Bob Goodlatte, R-Va., said that the report showed that the decision to fire McCabe "was the correct one."

"According to the inspector general report, Mr. McCabe repeatedly lied under oath about the disclosure of information to a reporter. In doing so, he not only violated FBI policy, but he may have committed a federal crime," he said in a statement.

I am starting to think that McCabe, Comey, etc. would make great Republican or Democrat politicians... they seem to know how to lie, mudsling, and leak sh*t to the press like the best of them. Of course, working for the FBI is the wrong place to do it.

Oh come on, Frank... you need to tell us how Drumpf is behind all of this. Connect the dots for us.

Libertarian politicians never do this! Because they never get elected.

Trump's anti-interventionist base, in this case Alex Jones, reacts badly to the Syria intervention: "Is no one pure in this world?.... F**k Trump!"

It's OK though, because you can't kill terrorists without breaking a few eggs... amirite? So while the former is unacceptable and requires an IMMEDIATE response unilaterally by the POTUS (acting as judge, jury, and executioner), our own bombing campaigns, which have over the last decade-plus killed or maimed thousands of women and children with no end in sight, barely register on the American public's give-a-shit-o-meter. But, goddammit, we will not tolerate a dozen or so kids killed by someone else in some remote country halfway around the world from us. Nope. Because THAT crosses a line.

You will forgive me for being so glib earlier in my responses to your posts, but I was busy talking with my financial planner for most of the day. This isn't my first rodeo. I was in the same position over a decade ago arguing against an Iraq invasion (as were Pat Buchanan and Bob Novak (RIP) and many other "isolationists"). I know that it is pointless for me to try and win over the opinion of jingoists like yourself, YOhio, and others. We will be in Damascus in two years' time toppling a statue of Assad; en route we will kill (and probably torture) the Syrian populace. It's just part of the process.

But this time around I'm going to put my money where Frank Ryan's mouth is and invest a good chunk of my kids' trust funds in Raytheon, General Dynamics, Northrop Grumman, Lockheed Martin, etc. You see, as much as I'd like my kids to live in the quiet cul-de-sac suburbia that I enjoy... living next to psychologists, college professors, optometrists, and other fake doctors; I'd rather have my kids living on waterfront property next to the *real* doctors, you know... the kind that put orthopedic limbs on the vets returning home from the Middle East. So if I can make a little extra cash on this misadventure in the stock market, hey... so much the better. Because like the parents of the children that the US bombing campaigns kill, I've got dreams for my children too.
Lockheed Martin (LMT)

As I wrote yesterday, Lockheed's older F-16 jet is considered a front runner in the Indian Air Force's potential $15 billion order for 110 fighter aircraft. Lockheed also produces the more modern (and stealth-capable) F-22 and F-35 fighters. Both of these fighters are professionally thought to be effective against Russia's S-400 long-range air defense missile system. That system is currently deployed in western Syria. Last week, Lockheed also won a $247 million contract from NASA to design and build an experimental aircraft that could operate without creating a traditional sonic boom. My price target: $375.

Raytheon (RTN)

First off, should the president decide to strike Syria without the use of American pilots, guess who produces the Tomahawk missile? That's right. These guys. On top of that, you might have noticed that two weeks ago, Poland agreed to spend $4.75 billion on RTN's Patriot missile defense system. By the way, this is the largest weapons deal in the history of Poland. Russia's annexation of the Crimean peninsula has not been lost on this former Warsaw Pact nation. In addition to increasing the dividend, Action Alerts PLUS holding Raytheon announced in late March that under the Department of Defense's DARPA program, it was developing technology that could control swarms of both air-based and ground-based drone vehicles that might be launched using a "drag and drop" visual interface. My price target: $245.

General Dynamics (GD)

This is one firm where we have already seen cash flows and margins improving. GD is also another defense name that increased its dividend in March. Think the Navy gets some love in the 2018 federal budget that earmarked $654 billion for the Pentagon? Me too. Know who runs the Virginia-class submarine program? General Dynamics. In fact, the Navy just awarded a $696 million modification to that program for 2019. One worry here is exposure to China. China is expected to be the hottest market for business jets over the next couple of decades, and General Dynamics' Gulfstream is the most popular business jet in that nation. Canada's Bombardier (BDRBF) is number two in that market, and eager. This will be a risk through the March 15 tariff hearing in Washington. My price target: $245.

Kratos Defense & Security Solutions (KTOS)

The stock has performed spectacularly since being impacted by the negative press regarding the Spruce Point analysis in mid-March. This calendar year, Kratos has landed at least $187.7 million in a series of awarded contracts, the details of which are at times murky due to the nature of the business. Though Spruce Point was correct in its assertion that the firm has gone through "multiple reinventions now hyping drones," it is just that, the elite-level unmanned drone business, that is poised only to grow at this point, in my opinion.

Michael Cohen's 3 clients are Trump, some corrupt RNC d-bag who needed help paying off mistresses, and Sean Hannity:

Michael Cohen Represented Sean Hannity, Lawyers Reveal

Michael Cohen represented Fox News host Sean Hannity, Cohen's lawyers were forced to reveal in federal court on Monday. "We have been friends a long time. I have sought legal advice from Michael," Hannity told the Wall Street Journal. Cohen was present at a hearing where his attorneys are challenging the FBI's seizure of documents he claims are protected by attorney-client privilege. Cohen's other two clients in recent years are President Donald Trump and Elliott Broidy, a Republican fundraiser. Cohen negotiated non-disclosure agreements with Trump's and Broidy's alleged mistresses. On his radio show following the news, Hannity didn't say why he worked with Cohen. "I think it's pretty funny. It's very strange to have my own television network have my name up on the lower third."

WTF, man. And there is the wife of Rod Rosenstein, whose clients are also d-bags. I wonder if she and Cohen knew each other in law school.

Apparently Sean Hannity doesn't know what attorney-client confidentiality is. He admits to never retaining Michael Cohen, was never represented by Cohen, and never received or paid an invoice from Cohen. Yet he expects any conversations he had with Cohen to be protected by attorney-client privilege.

Just wanted to mention how much I'm enjoying listening to Hannity tap dancing around the Michael Cohen issue.
Hannity's trying to distance himself from Cohen while maintaining he's entitled to the attorney-client privilege while also dismissing his journalistic obligation to disclose his Cohen relationship while reporting on him--an impressive juggling act. I'm also amused that Cohen has only three clients, two of whom have apparently never paid him anything. And finally, I wonder if Hannity had to explain to his wife that unlike Cohen's two other clients, Cohen's services were for a purpose other than paying off porn stars and Playmates. It's great how Trump, even indirectly, has elevated the level of public discourse.
Tuesday, February 9, 2016

How Long Before Hasbro Buys Out OneBookShelf?

How long before Hasbro buys out OneBookShelf? Sure, it's a question that can't be answered yet, but it is an interesting thought.

All of WotC's digital products are being sold through OBS (except its VTT products, which are on Fantasy Grounds - that's a whole 'nother post). At this point, WotC / Hasbro is the dominant publisher on OBS, and the Dungeon Master's Guild is going to lead to a HUGE amount of licensed D&D 5e products. That glut has the potential to seriously tilt the online market towards 5e if it hasn't already. Assuming a limited pool of money to be spent by consumers on RPG products, the Dungeon Master's Guild can seriously drain that pool.

When does it become more profitable for Hasbro to own its online distributor for its PDF releases? What does this mean for the smaller publishers?

When the Dungeon Master's Guild was announced, most of the fear I heard was "it's going to be the D20 era again, with a few gems hidden in piles of dung." Now, I think the fear should be for the market shift and the potential harm to publishers that aren't in the DM's Guild market (which has huge restrictions and lack of ownership - again, another future post).

The last event that had this much influence on online distribution of RPG products came with the RPGNow / DriveThruRPG merger. Was that merger good for consumers and small publishers? I truthfully don't know, but less competition rarely benefits those spending the cash.

Now, has the current trend benefited WotC / Hasbro, and has it boosted D&D 5e? Is it adding dollars to the coffers of OBS? Of course to all. The question is: what is the long-term cost to the hobby?

13 comments:

Alright, so I'm behind the times, but isn't DM's Guild primarily for publishing adventures? The part of the d20 glut that wrinkled my nose the most was the flood of source material and how much of it tripped over the rest. (I recall at least three different companies publishing rules for naval/maritime stuff, for instance.) As for OBS, I don't really care who owns it as long as the non-D&D stuff remains available and the small guys don't get screwed out of profit.

I have 6 products on the DMsGuild and NONE of them are adventures. Looking at the top products, very few adventures are in the Top 100.

Why is it important who owns it? Hasbro won't give a shit about anything but the revenue stream. For those that are familiar with Magic Online, it's a terrible, unreliable and old interface, but there is no reason for Hasbro to fix it because the money just pours in 24/7. Is Facebook bad for social networking? The problem is that numbers attract numbers, and OBS passed that threshold long ago. The only competition are places oriented more to general publishing, like Lulu and Amazon/CreateSpace.

As for the DM's Guild, I will say I think that it is just on the right side of the line of being a fair deal. In exchange for using all of Wizards' IP for 5e and the Forgotten Realms, you fork over 50% of revenues, and you have to share your IP with everybody else in the DM's Guild for use ONLY in the DM's Guild. And you agree not to release the same product outside of the Guild (if it was a generic 5e supplement). Now you still have full copyright to your original material.
Wizards (or anybody else) has to come to you in order to use it outside of the DM's Guild. If you want to use your IP outside of the Guild, you can, apart from the restriction that the same product can't be available both inside and outside of the Guild. My personal recommendation is this: if you have an idea that only makes sense if it uses Forgotten Realms IP, then use the DM's Guild. In the rare instance that an idea relies on closed content from Wizards (for example, Mind Flayers), again use the Guild. Otherwise, use the Open Game License and the 5e System Reference Document.

I think explaining to Hasbro execs what the DMsGuild/DNDClassics thing even is, let alone how it's tied to OBS, would take more effort than it's worth when Hasbro then weighs the remaining profit not received (piddling compared to actual official D&D PDF sales, I bet) against the cost and hassle of maintaining the site. If it were that valuable to them they could just set up their own shop and run it as the official D&D brand PDF store. Given they can't even devote resources to fan forums on their own home page, this seems unlikely.

Nicholas Bergquist, your mention of the fumbled fan forums might work against your point. Buying OneBookShelf could provide the manpower and expertise needed for WotC to rebuild and maintain a proper web presence, in addition to any benefits from controlling the distribution.

Well... actually, yeah, after I wrote that I was thinking that a wholesale purchase of OBS, in which staff was retained, could put them in a good position... so I concede that could be an advantage to such a purchase. Assuming OBS ownership wanted to sell, that is.

DMG stuff is pretty invisible if you stick to the regular OBS sites. I think the real problem is creators spending too much time on FR shovelware rather than writing interesting new stuff, out of concern that if it's not on the DMG it won't get noticed.

I hope not. The stock art products I am trying to push on OBS are designed with OSR buyers in mind. I am not sure how things would go for a lot of indies and non-Hasbro big game companies if they took over.

On the surface, it looks like a good idea, but when you look closer, it's a lot more complex. WotC has already demonstrated that it sucks at digital projects. Managing a software team is not at all like managing a publishing team. For them, it makes more sense to keep outsourcing to OBS because they are able to expand their revenue stream without the overhead of technology investment and employee headcount. Getting OBS employees to move from Georgia to Washington state could be a serious challenge, and starting up a new team from scratch to maintain the existing software, without existing employees to do knowledge transfer, would be a major challenge. Basically, with OBS, they have only a small amount of risk (similar to their "Morningstar" partnership). If the project fails, they look for a different partner. By bringing it in-house, they take on a huge amount of risk and headcount for a relatively small amount of revenue.

Why "Swords & Wizardry?"

Believe me when I say I have them all in dead tree format. I have OSRIC in full size, trade paperback and the Player's Guide. I have LL and the AEC (and somewhere the OEC, but I can't find it at the moment). Obviously I have Basic Fantasy RPG. Actually, I have the whole available line in print. Way too much Castles & Crusades.
We all know my love for the DCC RPG. I even have Dark Dungeons in print, the Delving Deeper boxed set, Astonishing Swordsmen & Sorcerers of Hyperborea (thank you, Kickstarter), (edit) BOTH editions of LotFP's Weird Fantasy, and will soon have dead tree copies of Greyhawk Grognard's Adventures Dark & Deep in my grubby hands, awaiting a review. I am so deep in the OSR that when I come up for breath it's for the OSR's cousin, Tunnels & Trolls (and still waiting on dT&T to ship).

So, out of all that, why Swords & Wizardry? Why, when I have been running an AD&D 1e / OSRIC campaign in Rappan Athuk, am I using Swords & Wizardry and its variant, Crypts & Things, for the second campaign? (Actually, now running a S&W Complete campaign, soon to be with multiple groups.)

Because the shit works. It's easy for lapsed gamers to pick up and feel like they haven't lost a step. I can house rule it and it doesn't break. It plays so close to the AD&D of my youth and college years (S&W Complete especially) that it continually surprises me. Just much less rules hopping than I remember. (My God, but I can run it nearly without the book.)

I grab and pick and steal from just about all OSR and original resources. They seem to fit into S&W with little fuss. It may be the same with LL and the rest, but for me the ease of use fits my expectations with S&W. Even the single saving throw. That took me longer to adjust to, but even that seems natural to me now. Don't ask me why, it just does. Maybe it's the simplicity of it.

At 48, simplicity and flexibility while remaining true to the feel of the original is an OSR hat trick for me ;)
Let's imagine that José Mourinho's eyes had never touched Barcelona's simple-football-that-is-not-that-simple. And, while we're at it, that TV and the Internet were playing truant. The coach therefore knew nothing of that culture of the touch, the art of geometry in attacking space, the no-man's-lands and the contingency plans. Continuing down the same road, let's believe that the Portuguese coach had already forgotten that episode, in his first match in charge of Benfica, when someone handed him a report on Boavista with only 10 players in the starting eleven. The only one missing was the Platini of Bolivia. Also known as Erwin Sánchez.

It is 2006, midway through the Portuguese coach's second season with the blues, with a Premier League title in his pocket. Chelsea had finished Group G of the Champions League in second place, behind Rafa Benítez's Liverpool, and had to resign themselves to the inevitability of facing a robust shark. They drew Frank Rijkaard's Barcelona, and Mourinho asked the young André Villas-Boas for a document that would uncover every secret of the blaugrana machinery. That document was dumped onto the internet on Monday.

João Nuno Fonseca, a 29-year-old coach with spells at Académica, Aspire and Nantes, had access to that report long before that, when he was at university. "At the time people talked about what the behind-the-scenes work might look like, but it was quite hard to grasp what AVB actually produced to help Mourinho's success," he tells Tribuna Expresso. "I have no doubt it inspired many young people, as it did me, to see match analysis as a way of understanding the dynamics of the game and, most important of all, how to interpret it and apply it in training. It became a reference report."

Now down to the bone. The diagram showed a 4-3-3. That season (2005/06), Barça would be more aggressive in breaking quickly forward after winning the ball, letting go of that blind faith in patient possession. In that vertigo phase, they look for the vertical pass to two lads who were not bad at all. "They depend a lot on the creativity of Messi and Ronaldinho," AVB warned at the time, appearing very worried about the Brazilian's inside movements, which opened an immense tunnel of opportunities for Gio van Bronckhorst's overlapping runs. "This moment can be stopped with fouls."

"They make a lot of mistakes in the first phase of build-up," AVB continues. That is, the artistic geometry and the quality of their control and passing left something to be desired, in the eyes of Mou's scout, when it came to sketching a move from the back. To that would be added the impeccable Stamford Bridge pitch (yes, this is ironic), which also earned a note in the document. Deco is key to the clean exit from the back with short passes. The advancing full-backs, who then show problems tracking back (they leave a lot of space in the central corridor), expose the defensive line, isolating the centre-backs. Error lurks around every corner. "Oleguer and Edmilson may be the ideal targets for a high press," Villas-Boas reports, like someone who smells blood from kilometres away.

Without the ball, AVB tips his hat to the instinct and game-reading of Samuel Eto'o and Deco, who knew when and how to press, detecting the opponent's hesitation. "When Deco and van Bommel press hard in midfield it is difficult to avoid," he also notes, recommending quick ball circulation. In other words, more vertical and less laboured, even betting on 2-v-1 situations against Edmilson, the Brazilian who did not impress André Villas-Boas one little bit.

It is here, when the web stretches, that the "ideal moment to kill them" arrives. In short, it is quite the instruction manual, and it even specifies how the culés behave at defensive and attacking set pieces.

Once again

This is not the first of AVB's reports for Mourinho to leap into the newspapers and social media. Many years ago another one, equally detailed, about Newcastle United had already been shared. What do coaches want from these reports? "It depends a lot on the character of each one," João Nuno Fonseca continues. "You can scout an opponent to understand what you want to stop them doing to you, or you can scout to learn how you want to impose yourself. And it is at this point that a coach's character reveals itself and leads him to develop a certain identity as a team. I have no doubt that, over the last 10 or 15 years, the way scouting and analysis reports are produced has changed, in other countries and contexts, largely because of the success José Mourinho had and because of AVB's career path from analyst to head coach. Go to England and they no longer analyse an opponent on the basis of statistical data as they did a few years ago. It always depends on who leads the process. Statistics are like teachers without knowledge: they teach everything except the most important thing. They show but do not demonstrate."

For anyone who has not yet put the pieces of the puzzle together, this is that match between Chelsea and Barcelona that boiled over and even dragged culture into it. "You know perfectly well what theatre is, and good theatre at that. It is quality," Mourinho would say at the press conference. Lionel Messi, an 18-year-old kid with the world hanging from the tip of his left boot, threatened to become what he eventually became: a genius. And geniuses, as we know, sometimes have problems with defenders. Asier del Horno welcomed him to elite football by kissing the Argentine's knee with the teeth of his boot. The Spanish defender even escaped a red card, but the two were destined to look each other in the eye again: Messi fooled Asier once more, and Robben too, near the corner flag, and the full-back lost his patience. Although the challenge was reckless, it was light-years from that first kick: straight red. AVB had not failed to note in his report that this was a team that hunted for fouls and had already drawn 11 penalties and four red cards from its rivals.

But what, after all, did AVB's report say about Messi? "He is very different from Giuly," he wrote, referring back to the previous season's tie, in the round of 16, in which the blues had beaten Barça. "Last year, Giuly gave more width and depth. Messi is the opposite. He has total freedom and even ends up on the opposite flank creating 2-v-1 situations with Ronaldinho. He wants to receive the ball as early as possible, linking phases of play by carrying it (mainly cutting inside onto his left foot). He brings creativity and risk to the game."

In the individual assessment section, AVB sums up: "Quality + speed but too left-footed. Exactly the same behaviours as Ronaldinho. Between the lines or on diagonals. Encourages the team to advance by carrying the ball. Incredible 1-v-1."

Repetition

How many matches does a scout need to feel comfortable with an opponent? "Normally, between three and five matches, before you can say that the opponent will probably behave in a certain way. You also have to be conscious of accepting the complexity that is inherent in the game. In other words, being aware that, as a coach, you will not have a solution for every situation."

And is this the standard sort of information? Fonseca says it depends on the coach. "You have coaches who want everything explained in a report with images, illustrations and text. Others consider it enough to have annotated video and key points. Above all, nowadays real footage of the opponent is used more and more, whether video or photography, rather than diagrams like the ones you see in AVB's report."

Chelsea even took the lead. After a millimetre-perfect long ball from Frank Lampard, Duff got in behind the Catalan defence and crossed into the box, where Thiago Motta turned the ball into his own net. John Terry, with another own goal, levelled things at 72'. With 10 minutes to go before the final whistle, Barça conjured a ruthless quick transition: Larsson played it to Ronaldinho, who accelerated, left Makélélé behind and gave it back to Larsson, who was exploiting the space opened by Eto'o's diagonal run; the Swede laid it back to Rafael Márquez, who whipped in a great left-footed cross onto Eto'o's head: 2-1. It took just 17 seconds. The Londoners would end up eliminated.

Can we go back to AVB's report just one more time? Just one line: "In [attacking] transition, they now want to kill the opponent quickly." Et voilà.

But, there it is, football is anything but an exact science and the repetition of what was imagined. "The important thing is that you actually understand what you want to demonstrate [in a report], because in the end everything is dynamic and practically unpredictable," Fonseca concludes.
/*
 * EFI GPT partition parsing code
 *
 * Copyright (C) 2009 Karel Zak <kzak@redhat.com>
 *
 * This file may be redistributed under the terms of the
 * GNU Lesser General Public License.
 *
 * This code is not a copy & paste of any other implementation.
 *
 * For more information about GPT start your study at:
 * http://en.wikipedia.org/wiki/GUID_Partition_Table
 * http://technet.microsoft.com/en-us/library/cc739412(WS.10).aspx
 */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stdint.h>
#include <stddef.h>
#include <limits.h>
#include <errno.h>

#include "partitions.h"
#include "crc32.h"

#define GPT_PRIMARY_LBA 1

/* Signature - "EFI PART" */
#define GPT_HEADER_SIGNATURE 0x5452415020494645ULL
#define GPT_HEADER_SIGNATURE_STR "EFI PART"

/* basic types */
typedef uint16_t efi_char16_t;

/* UUID */
typedef struct {
        uint32_t time_low;
        uint16_t time_mid;
        uint16_t time_hi_and_version;
        uint8_t clock_seq_hi;
        uint8_t clock_seq_low;
        uint8_t node[6];
} efi_guid_t;

#define GPT_UNUSED_ENTRY_GUID \
        ((efi_guid_t) { 0x00000000, 0x0000, 0x0000, 0x00, 0x00, \
                        { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }})

struct gpt_header {
        uint64_t     signature;              /* "EFI PART" */
        uint32_t     revision;
        uint32_t     header_size;            /* usually 92 bytes */
        uint32_t     header_crc32;           /* checksum of the header with this
                                              * field zeroed during calculation */
        uint32_t     reserved1;

        uint64_t     my_lba;                 /* location of this header copy */
        uint64_t     alternate_lba;          /* location of the other header copy */
        uint64_t     first_usable_lba;       /* first usable LBA for partitions */
        uint64_t     last_usable_lba;        /* last usable LBA for partitions */

        efi_guid_t   disk_guid;              /* disk UUID */

        uint64_t     partition_entries_lba;  /* always 2 in the primary header copy */
        uint32_t     num_partition_entries;
        uint32_t     sizeof_partition_entry;
        uint32_t     partition_entry_array_crc32;

        /*
         * The rest of the block is reserved by UEFI and must be zero. The EFI
         * standard handles this by:
         *
         *      uint8_t         reserved2[ BLKSSZGET - 92 ];
         *
         * This definition is useless in practice. It is necessary to read the
         * whole block from the device rather than sizeof(struct gpt_header)
         * only.
         */
} __attribute__ ((packed));

/*** not used
struct gpt_entry_attributes {
        uint64_t     required_to_function:1;
        uint64_t     reserved:47;
        uint64_t     type_guid_specific:16;
} __attribute__ ((packed));
***/

struct gpt_entry {
        efi_guid_t   partition_type_guid;    /* type UUID */
        efi_guid_t   unique_partition_guid;  /* partition UUID */
        uint64_t     starting_lba;
        uint64_t     ending_lba;

        /*struct gpt_entry_attributes attributes;*/
        uint64_t     attributes;

        efi_char16_t partition_name[72 / sizeof(efi_char16_t)]; /* UTF-16LE string */
} __attribute__ ((packed));


/*
 * EFI uses crc32 with a ~0 seed and xor's with ~0 at the end.
 */
static inline uint32_t count_crc32(const unsigned char *buf, size_t len)
{
        return (crc32(~0L, buf, len) ^ ~0L);
}

static inline unsigned char *get_lba_buffer(blkid_probe pr,
                                uint64_t lba, size_t bytes)
{
        return blkid_probe_get_buffer(pr,
                        blkid_probe_get_sectorsize(pr) * lba, bytes);
}

static inline int guidcmp(efi_guid_t left, efi_guid_t right)
{
        return memcmp(&left, &right, sizeof (efi_guid_t));
}

/*
 * A UUID is traditionally a 16-byte big-endian array, except in the Intel EFI
 * specification, where the UUID is a structure of little-endian fields.
 */
static void swap_efi_guid(efi_guid_t *uid)
{
        uid->time_low = swab32(uid->time_low);
        uid->time_mid = swab16(uid->time_mid);
        uid->time_hi_and_version = swab16(uid->time_hi_and_version);
}

static int last_lba(blkid_probe pr, uint64_t *lba)
{
        blkid_loff_t sz = blkid_probe_get_size(pr);
        unsigned int ssz = blkid_probe_get_sectorsize(pr);

        if (sz < ssz)
                return -1;

        *lba = (sz / ssz) - 1ULL;
        return 0;
}

/*
 * Protective (legacy) MBR.
 *
 * This MBR contains a standard DOS partition table with a single partition of
 * type 0xEE. The partition usually encompasses the entire GPT drive - or the
 * first 2 TiB for large disks.
 *
 * Note that Apple uses GPT/MBR hybrid disks, where the DOS partition table is
 * synchronized with the GPT. This synchronization has many restrictions of
 * course (due to DOS PT limitations).
 *
 * Note that the PMBR detection is optional (enabled by default) and could be
 * disabled by the BLKID_PARTS_FORCE_GPT flag (see also
 * blkid_partitions_set_flags()).
 */
static int is_pmbr_valid(blkid_probe pr, int *has)
{
        int flags = blkid_partitions_get_flags(pr);
        unsigned char *data;
        struct dos_partition *p;
        int i;

        if (has)
                *has = 0;

        if (flags & BLKID_PARTS_FORCE_GPT)
                goto ok;                        /* skip PMBR check */

        data = blkid_probe_get_sector(pr, 0);
        if (!data) {
                if (errno)
                        return -errno;
                goto failed;
        }

        if (!mbr_is_valid_magic(data))
                goto failed;

        for (i = 0, p = mbr_get_partition(data, 0); i < 4; i++, p++) {
                if (p->sys_ind == MBR_GPT_PARTITION)
                        goto ok;
        }
failed:
        return 0;
ok:
        if (has)
                *has = 1;
        return 1;
}

/*
 * Reads the GPT header to @hdr and returns a pointer to @hdr, or NULL in case
 * of error. The function also returns the GPT entries in @ents.
 *
 * Note, this function does not allocate any memory. The GPT header has a
 * fixed size so we use the stack, and @ents returns memory from the libblkid
 * buffer (so the next blkid_probe_get_buffer() will overwrite this buffer).
 *
 * This function checks the validity of the header and the entries array. A
 * corrupted header is not returned.
 */
static struct gpt_header *get_gpt_header(
                                blkid_probe pr, struct gpt_header *hdr,
                                struct gpt_entry **ents, uint64_t lba,
                                uint64_t lastlba)
{
        struct gpt_header *h;
        uint32_t crc, orgcrc;
        uint64_t lu, fu;
        size_t esz;
        uint32_t hsz, ssz;

        ssz = blkid_probe_get_sectorsize(pr);

        /* a whole sector is allocated for the GPT header */
        h = (struct gpt_header *) get_lba_buffer(pr, lba, ssz);
        if (!h)
                return NULL;

        if (le64_to_cpu(h->signature) != GPT_HEADER_SIGNATURE)
                return NULL;

        hsz = le32_to_cpu(h->header_size);

        /* EFI: the HeaderSize must be greater than or equal to 92 and must be
         * less than or equal to the logical block size.
         */
        if (hsz > ssz || hsz < sizeof(*h))
                return NULL;

        /* The header CRC32 is computed with the header_crc32 field zeroed
         * during the calculation */
        orgcrc = h->header_crc32;
        h->header_crc32 = 0;
        crc = count_crc32((unsigned char *) h, hsz);
        h->header_crc32 = orgcrc;

        if (crc != le32_to_cpu(orgcrc)) {
                DBG(LOWPROBE, ul_debug("GPT header corrupted"));
                return NULL;
        }

        /* Valid header has to be at MyLBA */
        if (le64_to_cpu(h->my_lba) != lba) {
                DBG(LOWPROBE, ul_debug(
                        "GPT->MyLBA mismatch with real position"));
                return NULL;
        }

        fu = le64_to_cpu(h->first_usable_lba);
        lu = le64_to_cpu(h->last_usable_lba);

        /* Check that the First and Last usable LBAs make sense */
        if (lu < fu || fu > lastlba || lu > lastlba) {
                DBG(LOWPROBE, ul_debug(
                        "GPT->{First,Last}UsableLBA out of range"));
                return NULL;
        }

        /* The header has to be outside the usable range */
        if (fu < lba && lba < lu) {
                DBG(LOWPROBE, ul_debug("GPT header is inside usable area"));
                return NULL;
        }

        /* Reject zero counts/sizes and multiplication overflow in the
         * description of the entries array */
        if (le32_to_cpu(h->num_partition_entries) == 0 ||
            le32_to_cpu(h->sizeof_partition_entry) == 0 ||
            ULONG_MAX / le32_to_cpu(h->num_partition_entries) <
                        le32_to_cpu(h->sizeof_partition_entry)) {
                DBG(LOWPROBE, ul_debug("GPT entries undefined"));
                return NULL;
        }

        /* Size of the blocks with GPT entries */
        esz = le32_to_cpu(h->num_partition_entries) *
                        le32_to_cpu(h->sizeof_partition_entry);

        /* The header seems valid, save it
         * (we don't care about zeros in the hdr->reserved2 area) */
        memcpy(hdr, h, sizeof(*h));
        h = hdr;

        /* Read GPT entries */
        *ents = (struct gpt_entry *) get_lba_buffer(pr,
                                le64_to_cpu(h->partition_entries_lba), esz);
        if (!*ents) {
                DBG(LOWPROBE, ul_debug("GPT entries unreadable"));
                return NULL;
        }

        /* Validate entries */
        crc = count_crc32((unsigned char *) *ents, esz);
        if (crc != le32_to_cpu(h->partition_entry_array_crc32)) {
                DBG(LOWPROBE, ul_debug("GPT entries corrupted"));
                return NULL;
        }

        return h;
}

static int probe_gpt_pt(blkid_probe pr,
                const struct blkid_idmag *mag __attribute__((__unused__)))
{
        uint64_t lastlba = 0, lba;
        struct gpt_header hdr, *h;
        struct gpt_entry *e;
        blkid_parttable tab = NULL;
        blkid_partlist ls;
        uint64_t fu, lu;
        uint32_t ssf, i;
        efi_guid_t guid;
        int ret;

        if (last_lba(pr, &lastlba))
                goto nothing;

        ret = is_pmbr_valid(pr, NULL);
        if (ret < 0)
                return ret;
        else if (ret == 0)
                goto nothing;

        errno = 0;
        h = get_gpt_header(pr, &hdr, &e, (lba = GPT_PRIMARY_LBA), lastlba);
        if (!h && !errno)
                h = get_gpt_header(pr, &hdr, &e, (lba = lastlba), lastlba);

        if (!h) {
                if (errno)
                        return -errno;
                goto nothing;
        }

        blkid_probe_use_wiper(pr, lba * blkid_probe_get_sectorsize(pr), 8);

        if (blkid_probe_set_magic(pr, blkid_probe_get_sectorsize(pr) * lba,
                        sizeof(GPT_HEADER_SIGNATURE_STR) - 1,
                        (unsigned char *) GPT_HEADER_SIGNATURE_STR))
                goto err;

        guid = h->disk_guid;
        swap_efi_guid(&guid);

        if (blkid_partitions_need_typeonly(pr)) {
                /* Non-binary interface -- the caller does not ask for details
                 * about partitions, just set the generic variables only.
                 */
                blkid_partitions_set_ptuuid(pr, (unsigned char *) &guid);
                return BLKID_PROBE_OK;
        }

        ls = blkid_probe_get_partlist(pr);
        if (!ls)
                goto nothing;

        tab = blkid_partlist_new_parttable(ls, "gpt",
                                blkid_probe_get_sectorsize(pr) * lba);
        if (!tab)
                goto err;

        blkid_parttable_set_uuid(tab, (const unsigned char *) &guid);

        ssf = blkid_probe_get_sectorsize(pr) / 512;

        fu = le64_to_cpu(h->first_usable_lba);
        lu = le64_to_cpu(h->last_usable_lba);

        for (i = 0; i < le32_to_cpu(h->num_partition_entries); i++, e++) {
                blkid_partition par;
                uint64_t start = le64_to_cpu(e->starting_lba);
                uint64_t size = le64_to_cpu(e->ending_lba) -
                                        le64_to_cpu(e->starting_lba) + 1ULL;

                /* skip the 00000000-0000-0000-0000-000000000000 entry */
                if (!guidcmp(e->partition_type_guid, GPT_UNUSED_ENTRY_GUID)) {
                        blkid_partlist_increment_partno(ls);
                        continue;
                }

                /* the partition has to be inside the usable range */
                if (start < fu || start + size - 1 > lu) {
                        DBG(LOWPROBE, ul_debug(
                                "GPT entry[%d] overflows usable area - ignore",
                                i));
                        blkid_partlist_increment_partno(ls);
                        continue;
                }

                par = blkid_partlist_add_partition(ls, tab,
                                        start * ssf, size * ssf);
                if (!par)
                        goto err;

                blkid_partition_set_utf8name(par,
                                (unsigned char *) e->partition_name,
                                sizeof(e->partition_name), BLKID_ENC_UTF16LE);

                guid = e->unique_partition_guid;
                swap_efi_guid(&guid);
                blkid_partition_set_uuid(par, (const unsigned char *) &guid);

                guid = e->partition_type_guid;
                swap_efi_guid(&guid);
                blkid_partition_set_type_uuid(par, (const unsigned char *) &guid);

                blkid_partition_set_flags(par, le64_to_cpu(e->attributes));
        }

        return BLKID_PROBE_OK;

nothing:
        return BLKID_PROBE_NONE;
err:
        return -ENOMEM;
}

const struct blkid_idinfo gpt_pt_idinfo =
{
        .name           = "gpt",
        .probefunc      = probe_gpt_pt,
        .minsz          = 1024 * 1440 + 1,      /* ignore floppies */

        /*
         * It would be possible to check for the DOS signature (0xAA55), but
         * unfortunately almost all EFI GPT implementations allow optionally
         * skipping the legacy MBR. We follow this behavior and the MBR is
         * optional; see is_pmbr_valid().
         *
         * It means we always have to call probe_gpt_pt().
         */
        .magics         = BLKID_NONE_MAGIC
};

/* probe for a *lone* protective MBR */
static int probe_pmbr_pt(blkid_probe pr,
                const struct blkid_idmag *mag __attribute__((__unused__)))
{
        int has = 0;
        struct gpt_entry *e;
        uint64_t lastlba = 0;
        struct gpt_header hdr;

        if (last_lba(pr, &lastlba))
                goto nothing;

        is_pmbr_valid(pr, &has);
        if (!has)
                goto nothing;

        /* the PMBR is "alone" only if there is no usable GPT header at the
         * primary or the backup position */
        if (!get_gpt_header(pr, &hdr, &e, GPT_PRIMARY_LBA, lastlba) &&
            !get_gpt_header(pr, &hdr, &e, lastlba, lastlba))
                return 0;
nothing:
        return 1;
}

const struct blkid_idinfo pmbr_pt_idinfo =
{
        .name           = "PMBR",
        .probefunc      = probe_pmbr_pt,
        .magics         =
        {
                { .magic = "\x55\xAA", .len = 2, .sboff = 510 },
                { NULL }
        }
};
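For orientation, here is a minimal sketch of how an application might drive this prober through the public libblkid API. blkid_new_probe_from_filename(), blkid_probe_enable_partitions() and the blkid_partlist accessors are the library's public entry points; the error handling, the file name and the output format below are illustrative assumptions, not part of this file.

/*
 * Minimal sketch: list the partitions (GPT or otherwise) of a device via the
 * public libblkid API. Error handling is deliberately simplified.
 */
#include <stdio.h>
#include <stdint.h>
#include <blkid.h>

int main(int argc, char **argv)
{
        blkid_probe pr;
        blkid_partlist ls;
        int i, n;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <device>\n", argv[0]);
                return 1;
        }

        pr = blkid_new_probe_from_filename(argv[1]);
        if (!pr)
                return 1;

        /* ask libblkid to run the partition-table probers; gpt_pt_idinfo
         * above is one of them */
        blkid_probe_enable_partitions(pr, 1);

        /* triggers the probing on demand */
        ls = blkid_probe_get_partitions(pr);
        n = ls ? blkid_partlist_numof_partitions(ls) : 0;

        for (i = 0; i < n; i++) {
                blkid_partition par = blkid_partlist_get_partition(ls, i);
                const char *uuid = blkid_partition_get_uuid(par);

                printf("#%d start=%jd size=%jd uuid=%s\n",
                       blkid_partition_get_partno(par),
                       (intmax_t) blkid_partition_get_start(par),
                       (intmax_t) blkid_partition_get_size(par),
                       uuid ? uuid : "-");
        }

        blkid_free_probe(pr);
        return n ? 0 : 1;
}

Something like cc lsparts.c -lblkid (the file name is ours) would build it against an installed libblkid.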
Every Pet Deserves A Good Home…

Todd Palin Excited to Bring Iditarod to TV

Todd Palin Hosts Iditarod Unleashed

Palin-Cruz 2016: Iditarod Unleashed got Todd Palin to host:

Sportsman Channel debuted its Iditarod Unleashed series on Tuesday, March 25, 2014, getting a little help from none other than Iron Dog champ Todd Palin. Todd, the husband of former Alaska governor and 2008 Republican vice-presidential candidate Sarah Palin, is hosting a one-hour special introducing the race to fans. The network, which is also set to premiere a show with Sarah in April, is using Todd to draw viewers into its 12 hours of Iditarod coverage. Palin cites his connections with Iditarod mushers like Martin Buser and Rick Swenson as cred for hosting the show, as well as his experience as a champion snowmachine racer. The Sportsman coverage is set to begin at 3 p.m. Alaska time Tuesday and airs throughout the rest of the week.

Sarah Palin isn’t the only one in the family who will be in front of the camera on the Sportsman Channel. While the former Alaska governor gets set to host Amazing America with Sarah Palin next month, it’s her husband Todd who will be showcasing the beautiful state of Alaska first. The Iditarod has not had a national television network partner since 2009. Until now, that is. In a groundbreaking agreement with Sportsman Channel, the event organizers will continue to produce the annual sled dog spectacle and provide extensive, in-depth coverage, video and updates through an online platform. As the Official Network of The Iditarod, Sportsman Channel will exclusively showcase the stories of The Iditarod. In a multi-week stunt entitled Iditarod Unleashed, Sportsman Channel will air 12 hours of programming and specials – including the national television premiere of shows from The Iditarod library – timed around the 2014 event.

That’s where Todd Palin comes in. Iditarod Unleashed programming begins March 25 at 7 p.m. ET/PT with a one-hour special hosted by Palin. Palin is usually a behind-the-scenes guy. The supportive spouse. But he felt compelled to get the word out about the Iditarod. “I’ll do whatever I can to promote this great race,” Palin told Breitbart Sports. “I know some of the mushers and I know how much work it is to take part in it.” While not a camera hog by any stretch of the imagination, Palin enjoyed filming the special programming. “I don’t like to watch myself on TV,” said Palin. “But this was a lot of fun.”

The Palin family is no stranger to the iconic Iditarod. “They used to have the restart in Wasilla before they moved it to Willow for the more consistent snow,” Palin said. “We watched for many years with the kids on snowmachines. It’s a big event for all Alaskans.” While Palin is not a musher, he is a champion Iron Dog racer. His success in Alaska’s other big race gives him a special appreciation for those who take part in the Iditarod. “Both are the ultimate,” Palin said. “In certain stretches, you can actually go faster than a snowmachine when mushing with a dog team. They’re so powerful, sometimes you’re just hanging on.”

The Iditarod is more sophisticated than ever. Palin had a chance to visit sled dog champion Martin Buser’s facilities recently and he was blown away by the latest technology. “The use of carbon fiber has made these dog sleds better than ever,” Palin said. “The sport has come so far. You think back to the old days and wonder how they did it.” There are personal connections to the Iditarod for Palin as well.
Buser teamed up with Palin during the 2008 campaign to help stump for the McCain-Palin ticket. Rick Swenson, “King of the Iditarod,” ran pro class when Palin started Iron Dog racing in 1993. Meanwhile, DeeDee Jonrowe serves as an inspiration to all. She beat cancer and got back to mushing. Palin speaks glowingly of John Baker and all he has done for the sport. “The people involved in this are just like the Iron Dog family,” Palin said. “A tight-knit group that will help anyone, any way they can.” Todd and Sarah Palin attended the Iditarod Mushers Banquet in Anchorage this year to show their support for the big race and all those who participate in it. “I’m just thankful that Sportsman Channel was excited to show the Iditarod and to come up here to share these ultimate races with the rest of the nation,” said Palin.

The conclusion of the 2014 Iditarod Sled Dog Race was evidence that this event is like no other. Now, viewers across the country will have an opportunity to reconnect with the race on Sportsman Channel. Plus, viewers will be introduced to the special people who live the lifestyle of the Iditarod musher. Sportsman Channel will look back at Dallas Seavey’s record-breaking win, and showcase the incredible stories of this year’s historic race, along with stories from previous years. Iditarod Unleashed will deliver dramatic stories of the dogs, mushers, volunteers, history, wildlife and rough terrain. The Iditarod is known as The Last Great Race on Earth. For Sportsman Channel viewers, though, the in-depth coverage of the race will be the first of its kind. “You don’t want to miss this,” Palin said. “It’s just incredible.”

Also, 2014 Akiak Dash winner: On Sunday evening, John George won the 2014 Akiak Dash, bringing home $3,400 as he pulled into Bethel with seven sled dogs. The Akiak Dash, one of the series of races held by the Kuskokwim 300 Race Committee, ran from the community of Bethel to Akiak and back to Bethel. George finished with a time of 6 hours, 39 minutes and 51 seconds. Coming in second was George Manutoli with a time of 6 hours, 45 minutes and 46 seconds. Herman Phillip took third with a time of 6 hours, 51 minutes and 24 seconds. Total purse for the race was $12,100, split between the top 10 finishers.

[…] of the Tesoro Iron Dog, the world’s longest snowmobile race, which traces the path of the Iditarod race with an extra journey of several hundred miles to Fairbanks added, has dedicated his 2015 Iron Dog […]

Save a Life…Adopt Just One More…Pet!

Every day we read or hear another story about pets and other animals being abandoned in record numbers, while at the same time we regularly hear about crazy new rules and laws being passed limiting the number of pets that people may have, even down to one or two… or worse yet, none. Nobody is promoting hoarding pets or animals, but at a time when more pets and animals of all types are being abandoned or taken to shelters already bursting at the seams, there is nothing crazier than legislating away the ability of willing adoptive families to take in just one more pet!! Our goal is to raise awareness and help find homes for all pets and animals that need one by helping to match them with loving families and positive situations. Our goal is also to help fight the trend of unfavorable legislation and rules in an attempt to stop unnecessary euthanasia!!
“All over the world, major universities are researching the therapeutic value of pets in our society and the number of hospitals, nursing homes, prisons and mental institutions which are employing full-time pet therapists and animals is increasing daily.” ~ Betty White, American Actress, Animal Activist, and Author of Pet Love

There is always room for Just One More Pet. So if you have room in your home and room in your heart… Adopt Just One More! If you live in an area that promotes unreasonable limitations on pets… fight the good fight and help change the rules and legislation… Save the Life of Just One More…Animal!

Recent and Seasonal Shots

As I have been fighting cancer… a battle I am gratefully winning, my furkids have not left my side. They have been a large part of my recovery!! Ask Marion

Photos by the UCLA Shutterbug are protected by copyright. Please email JustOneMorePet@gmail.com or find us on twitter @JustOneMorePet for permission to duplicate for commercial purposes or to purchase photos.

By JoAnn, Marion, and Tim Algier: This past week, we lost a dear family member, Rocky, who had just outlived his “human pet-dad”, Tom, by just a few months. It certainly would have been interesting to know what they thought and what experiences they had had in common!! Just this side of heaven is a place called Rainbow Bridge. When an animal dies that has been […]

Bristol Palin: Fellow SixSeeds blogger Zeke Pipher has a great question: If they were dead puppy parts, or parts from homosexual babies, or babies that self-identified as adults, it’d be a different story. Meaning, it would be a story. But as it is, the fact that these fetuses don’t look like puppies, and their sexual […]

Family and friends of G.R. Gordon-Ross watch his private fireworks show at the Youth Sports Complex in Lawrence, Kan., Friday, June 28, 2013. (AP Photo/Orlin Wagner) Mercury News – Originally posted on July 02, 2013: The Fourth of July is one of my favorite holidays. Hot dogs, potato salad and, of course, fireworks. But Independence […]

Very few dogs have the experience of being parents these days, and especially of seeing their litters through the process of weaning and then actually being able to remain part of a pack with at least part of their family. Apachi is our Doggie Dad. He is a Chiweenie and here he is, watching his […]

By Marion Algier – Just One More Pet (JOMP) – Cross-Posted at AskMarion: Anderson Cooper met Chaser, a dog who can identify over a thousand toys, and because of whom scientists are now studying the brain of man’s best friend. Chaser is also the subject of a book: Chaser: Unlocking the Genius of the Dog […]

By Tamara – Dog Heirs – Cross-Posted at JOMP: Quebec, Canada – Animals will be considered “sentient beings” instead of property in a bill tabled in the Canadian province of Quebec. The legislation states that “animals are not things. They are sentient beings and have biological needs.” Agriculture Minister Pierre Paradis proposed the bill and […]

Great Book for Children and Pet Lovers… And a Perfect Holiday Gift

One More Pet: Emily loves animals so much that she can’t resist bringing them home.
When a local farmer feels under the weather, she is only too eager to “feed the lambs, milk the cows and brush the rams.” The farmer is so grateful for Emily’s help that he gives her a giant egg... Can you guess what happens after that? The rhythmic verse begs to be read aloud, and the lively pictures will delight children as they watch Emily’s collection of pets get bigger and bigger.

If You Were Stranded On An Island…

A recent national survey revealed just how much Americans love their companion animals. When respondents were asked whether they’d like to spend life stranded on a deserted island with either their spouse or their pet, over 60% said they would prefer their dog or cat for companionship!
42 Cal.2d 129 (1954) COUNTY OF LOS ANGELES, Appellant, v. SOUTHERN COUNTIES GAS COMPANY OF CALIFORNIA (a Corporation), Respondent. L. A. No. 22570. Supreme Court of California. In Bank. Jan. 22, 1954. Harold W. Kennedy, County Counsel, A. Curtis Smith and Gerald G. Kelly, Assistant County Counsel, and John H. Larson, Deputy County Counsel, for Appellant. LeRoy M. Edwards, Oscar C. Sattinger and Frank P. Doherty for Respondent. Louis W. Myers and O'Melveny & Myers, as Amici Curiae on behalf of Respondent. TRAYNOR, J. This appeal involves the same basic problems as those presented in City of San Diego v. Southern Calif. Tel. Corp., ante, p. 110 [266 P.2d 14]. Defendant is a public utility engaged in purchasing and selling illuminating gas. It produces a small amount of the gas it sells. Its system is an integrated one and extends through six counties, including the County of Los Angeles. It holds franchises granted by these counties and many cities therein. By this action for declaratory relief and an accounting, plaintiff seeks a judgment establishing the basis on which *132 defendant must compute the amount due for four franchises granted it by plaintiff to lay its pipes in the public roads, streets, and highways in the county. Each franchise was granted by a separate ordinance pursuant to the Broughton Act. (Stats. 1905, p. 777, now Pub. Util. Code, 6001-6071.) Section 3 of that act fixes the amount that must be paid for the franchises at "two per cent (2%) of the gross annual receipts of the person, partnership or corporation to whom the franchise is awarded, arising from its use, operation or possession." Each ordinance contains substantially the same provision. [fn. 1] Defendant filed statements and made payments for the years 1936-1939, which plaintiff claims were incorrect. Although this case is based on statements and figures for 1939, it will control all payments due from 1936 to the termination of each franchise. There is no dispute as to the figures in the accounting processes or what they represent and no dispute as to the end result for the other years once it is determined which of the accounting methods is correct. The trial court made findings and entered judgment sustaining defendant's computations and return of the amount due. Plaintiff appeals, contending that the judgment is not in accord with section 3 of the Broughton Act as construed by this court in County of Tulare v. City of Dinuba (1922), 188 Cal. 664 [206 P. 983]. Defendant made the following computation of the amount due plaintiff for 1939, the year selected by the parties for presenting the issues: From its total capital, $31,216,087.13, defendant deducted its intangibles, $152,351.98, leaving $31,063,735.15 as its total investment in operative property, i. e., property used and useful in purchasing, producing, and distributing gas. It then segregated the amount invested in property not on rights of way, public or private, $9,955,707.06, and the amount invested in facilities on all rights of way, public and private, $21,108,028.09. Defendant then divided its total gross receipts, $9,620,838.45, by its total investment in operative property, $31,063,735.15, which gave $0.309713 of gross receipts per dollar invested. The amount invested in operative property on all rights of way, public and private, $21,108,028.09, *133 was then multiplied by $0.309713, which gave a total of $6,537,430.70 as the gross receipts arising from the use of rights of way. 
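Restated compactly, the first stage of the 1939 computation just described is a straight proportional allocation. The notation below is ours, with the figures exactly as quoted above:

\[ r = \frac{\text{total gross receipts}}{\text{total investment in operative property}} = \frac{\$9{,}620{,}838.45}{\$31{,}063{,}735.15} \approx \$0.309713 \text{ of receipts per dollar invested} \]

\[ \text{receipts attributed to all rights of way} = \$21{,}108{,}028.09 \times 0.309713 \approx \$6{,}537{,}430.70 \]

The mileage proration described next applies the same proportional logic, with miles of right of way taking the place of dollars invested.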
Defendant then prorated this amount between public and private rights of way on a mileage basis. Defendant uses 3,249.225 miles of rights of way; 2,969.673 miles thereof, or 91.3963 per cent, are public rights of way. The amount of gross receipts attributable to all rights of way, $6,537,430.70, was then multiplied by 91.3963 per cent, the percentage of miles of right of way subject to franchises, which gave $5,974,969.77 as the amount of gross receipts attributable to such rights of way. Of the 2,969.673 miles of such rights of way, 456.829 miles or 15.3831 per cent are public rights of way in Los Angeles County. Multiplying $5,974,969.77 by 15.3831 per cent gave $919,135.57 as the gross receipts arising from the use of the franchises granted by plaintiff. Two per cent of that amount is $18,382.60, the charge for 1939 for the use of such franchises. The foregoing computations were based on the following principles, which defendant maintains, and which we agree (see City of San Diego v. Southern Cal. Tel. Corp., ante, p. 110 [266 P.2d 14]), are in accord with the principles enunciated or implicit in the opinion of this court in the Tulare case: [1] 1. Defendant's gross receipts arise from all of its operative property, whether or not such property is located on rights of way, public or private, or on land owned or leased by it or on land owned by others. [2] 2. Defendant's operative property consists of various kinds of real and personal property, including land leased or owned, compressor stations and equipment, meter stations and equipment, regulator stations and equipment, gas production equipment, pipe lines, valves, general office buildings, warehouses, transportation equipment, laboratory equipment, etc. Pipe lines and appurtenances on public and private rights of way are but a component part of defendant's over-all system. [3] 3. Since the 2 per cent charge applies only to gross receipts arising from the use of the franchises, gross receipts arising from operative property other than franchises must be excluded from the base to which the 2 per cent charge applies. [4] 4. As in rate making, there is a relationship between the value of the property and the amount it earns; the dollars invested in the property produce the dollars that form the gross receipts. Since every dollar invested in operative property *134 earns an equal part of the gross receipts, gross receipts are attributed to a particular item or class of operative property according to the dollars invested in it. Moreover, the factors in the proration must be measured in the same terms, and since the gross receipts are measured in dollars, the property giving rise to them must be measured in dollars. (City of San Diego v. Southern Cal. Tel. Corp., ante, p. 110 [266 P.2d 14].) Although this court's opinion in the Tulare case did not specify how the gross receipts were to be apportioned between the property on various rights of way and other property, the method here described is the only feasible method of making that apportionment and was used on the retrial of the Tulare case (87 Cal.App. 744, 745-746). It is fair, practical, readily understood, and easily verified. [5] 5. Gross receipts that arise from the use of the franchises are the gross receipts attributable to that part of the property using the public rights of way pursuant to the franchises. [6] 6. 
Gross receipts attributable to the various rights of way are apportioned between public and private rights of way according to mileage, "not necessarily as an exclusive method," but as a practicable one, as suggested in the Tulare case. (188 Cal. 664, 681.) Defendant could have made this apportionment according to the amounts invested in rights of way as in (4) above (City of San Diego v. Southern Cal. Tel. Corp., ante, pp. 110, 122, 125-126 [266 P.2d 14]), but plaintiff raises no question as to this method of apportioning gross receipts between rights of way and, in fact, adopts it in its own computations. [7] Plaintiff contends that in arriving at the base to which the 2 per cent charge applies, defendant and the trial court erred in deducting all gross receipts attributable to (1) its office and other general facilities; (2) the part of its distribution system on private property owned by consumers and not under lease by defendant; (3) the part of its distribution system on private property owned or leased by defendant. Since this contention would not permit the allocation of any of defendant's gross receipts to the foregoing classes of property, it necessarily involves a repudiation of the principle that defendant's gross receipts arise from all of its operative property and that gross receipts arising from all operative property other than franchises must be excluded from the base on which the 2 per cent charge is computed. *135 Plaintiff would justify this repudiation on the grounds that the Tulare case decided that the total gross receipts of a public utility can only be divided into two categories: (1) that which is credited to its distribution system and (2) that which is credited to its production system; that the gross receipts attributable to its distribution system constitutes the fund from which the 2 per cent charge shall be ascertained; that the only gross receipts of defendant from its operative property that can be attributed to its production system and therefore excluded from the fund from which the 2 per cent charge is ascertained is the $134,111.96 investment in facilities for manufacturing the small amount of gas it produces and does not buy from others; and that the only gross receipts of defendant attributable to its distribution system that are not subject to the 2 per cent charge are the gross receipts attributable to the use of private rights of way. In support of this contention, plaintiff cites the following language from the Tulare case: "The gross receipts of this defendant accrue from two distinct agencies. One is the generating plants or powerhouses of the company, located in three separate counties; the other is the distributing system. ... The first step in this accounting should be to determine as a question of fact what proportion of the total annual gross receipts of the public utility should be justly credited to its distribution system over various rights of way, as distinguished from its power plants or other producing agencies." (188 Cal. 673, 681.) This language, however, must be read in the light of the conclusions this court had reached as a basis for the steps in the accounting. Among these conclusions were: "The corporation's gross receipts, to refer to the language of the Act arise from the 'use, operation or possession' not alone of these franchises over the streets and highways, but likewise from the use, operation, or possession of the powerhouses and private rights of way. 
The two last named are not subject to any franchise charges and the county or municipality is not entitled under the law to any part of the gross receipts attributable to these privately owned parts of the system." (188 Cal. 673-674.) It should be noted that the reason for the conclusion that the county or municipality was not entitled to any part of the gross receipts attributable to powerhouses and private rights of way, was that the company's gross receipts arise, not alone from the "use ..." *136 of the franchise, but from the use of powerhouses and private rights of way, which are not subject to any franchise charges. [8] It is clear from the opinion in the Tulare case that the principle that this court there enunciated was that the county was not entitled to any part of the gross receipts from utility property not subject to franchise charges. The gross receipts attributable to generating plants, powerhouses, and private rights of way were excluded, not because the court regarded them as the only source of gross receipts other than the use of franchises, but because they were privately owned parts of the system not subject to any franchise charges. (See, also, City of San Diego v. Southern Cal. Tel. Corp., ante; City of Monrovia v. Southern Counties Gas Co., 111 Cal.App. 659 [296 P. 117]; Ocean Park Pier Amusement Corp. v. Santa Monica, 40 Cal.App.2d 76 [104 P.2d 668, 879].) Since that reason applies with equal force to all operative property of the company not subject to any franchise charges, it cannot reasonably be implied that this court meant that only operative property of the kind mentioned contributes to gross receipts. That such an implication is absurd is apparent from the statement, "The absurdity of the position that any integral part of an electric distributing system like this is entitled to credit for the whole of the earnings from deliveries and sales in a given county or municipality when a large part of such service is over parts of the system not subject to such franchise permit may be shown by various illustrations." (188 Cal. 674.) [9] Operative property other than generating plants, powerhouses, and the distributing system consisting of poles and wires is just as much an integral part of an electric or gas system as generating plants, powerhouses and private rights of way. Office buildings to house engineers and executive and administrative staff, warehouses, transportation equipment, communication equipment, meter devices, laboratory equipment and other facilities are all essential to an electric or gas company's operations and all contribute to its gross receipts. If it is absurd to say that any integral part of such a system is entitled to credit for the whole of its gross receipts, it is equally absurd to say that any number less than the whole is so entitled. Plaintiff's contention is based on the erroneous conclusion that in the Tulare case this court regarded all property of a public utility other than generating plants and powerhouses as part of its distributing system. This court was there concerned, not with labels or a division of the *137 property into producing system and distributing system, but with property that was and property that was not subject to any franchise charge.
The arbitrary classification of land, office buildings, warehouses, garages, construction equipment, automotive equipment, laboratory and other equipment as entirely part of the distribution system rather than as part of the production system or as part of both production and distribution systems or as "other [revenue] producing agencies" (188 Cal. 664, 681), would not only be unreasonable but pointless. [10] Even if all of the property other than generating plants and powerhouses could reasonably be regarded as entirely part of the utility's distribution system, it would not follow that gross receipts attributable thereto should be included in the fund to which the 2 per cent charge applies. Thus, property in private rights of way is admittedly part of the distribution system. Yet this court in the Tulare case made it abundantly clear that gross receipts attributable to such property were not subject to the 2 per cent charge, since such property was "not subject to any franchise charges." For the same reason gross receipts from any other parts of the distribution system that are not subject to franchise charges are not subject to the 2 per cent charge. Plaintiff does not quarrel with the capital investment method as such for allocating gross receipts to a particular item or class of operative property in it. In fact, it uses that method itself in its own apportionment between production and distribution. Plaintiff contends that although this method is "plausible" and "entirely correct," there is no occasion to use it as defendant uses it and that unless it is limited to the use plaintiff makes of it to apportion gross receipts between production and distribution, defendant will get a double deduction for the same purpose: (1) the deduction taken by the proration on a mileage basis for gross receipts attributable to private rights of way and (2) the deduction taken, before the proration on a mileage basis, for operative property not located on rights of way. This contention assumes the validity of the distinction, discussed at length above, that plaintiff would make between production and distribution and the conclusions it would draw therefrom, and is simply another way of asserting that only gross receipts attributable to generating plants and private rights of way can be excluded from the base to which the 2 per cent charge applies. There is no double deduction for the same purpose. *138 [11] Gross receipts attributable to private rights of way and gross receipts attributable to private property not located on rights of way are separately excluded from the base to which the 2 per cent charge applies, without duplication or overlapping, and for the same reason--they arise from property not subject to any franchise charges. 
[12] Plaintiff would also justify its repudiation of the principle that defendant's gross receipts arise from all of its operative property and that gross receipts arising from all operative property other than franchises must be excluded from the base on which the 2 per cent charge is computed, on the following theory: The Broughton Act allows the utility to retain 98 per cent of its total gross receipts as the percentage applicable to its private property and requires it to pay to cities and counties 2 per cent of its gross receipts (less those attributable to private rights of way) for the use of public property; if it were allowed to take any more of its gross receipts as applicable to its private property, it would get a double deduction: (1) the amount so taken and (2) the 98 per cent it is allowed to retain. This theory ignores the limitation in the Broughton Act that the 2 per cent charge applies, not to defendant's total gross receipts, but only to its gross receipts "arising from the use" of the franchise. Thus, by its express terms the Broughton Act allows the utility to retain not only 98 per cent but 100 per cent of its gross receipts from its private property not subject to franchise charges, as well as 98 per cent of its gross receipts arising from the use of the franchises. It is not 2 per cent of its total gross receipts but only 2 per cent of its gross receipts "arising from the use" of the franchises that is exacted as a payment for the use of such franchises. The foregoing theory of plaintiff's is simply a slight modification, purportedly made in obedience to the Tulare case, of another contention suggested by it that the Tulare case should be disregarded and that there should be only a proration of the entire gross receipts between rights of way on a mileage basis. [fn. 2] As we have pointed out at some length above, and *139 in City of San Diego v. Southern Cal. Tel. Corp., ante, pp. 110, 124 [266 P.2d 14], that is not what the statute provides. There is no more justification for prorating the total gross receipts between rights of way than there would be for attributing the total gross receipts to each franchise used and requiring the utility to pay 2 per cent of its total gross receipts to each of the numerous cities and counties granting the franchises. The judgment is affirmed. Gibson, C.J., Shenk, J., Edmonds, J., Schauer, J., and Spence, J., concurred. CARTER, J. I dissent. It appears to me that the gas company formula, accepted by the majority, attempts to deduct every possible dollar of invested capital from the distribution system before they compute the value of the distribution system attributable to either public or private ways. In this manner they seek to base the county's share on little more than pipe in the ground. This is a complete misconception of the Broughton Act and its interpretation by this court in the Dinuba case. (County of Tulare v. City of Dinuba, 188 Cal. 664 [206 P. 983].) The Broughton Act provides that the utility "shall during the life of the franchise pay to the county or municipality two percent (2%) of the gross annual receipts of the grantee arising from the use, operation, or possession of the franchise." In giving an interpretation to the meaning of these words this court, in the Dinuba case, supra, stated (p. 
673) that the corporation's gross receipts "arise from the 'use, operation or possession,' not alone of these franchises over the streets and highways, but likewise from the use, operation, or possession of the powerhouses and private rights of way. The two last named are not subject to any franchise charges and the county or municipality is not entitled under the law to any part of the gross receipts attributable to these *140 privately owned parts of the system." This court then went on to say (p. 681) that "The first step in this accounting should be to determine as a question of fact what proportion of the total amount of gross receipts of the public utility should be justly accredited to its distributing system over various rights of way, as distinguished from its power plants or other producing agencies." "This will establish the fund from which the percentage of earnings 'arising from the use, operation or possession' of the various franchise easements shall be ascertained." "The percentage of this fund to be apportioned to the respective public franchises will not include the proportion of such gross receipts of the distributing system as are attributable to the use of private rights of way occupied by the utility, as such part of the system is not subject to franchise charge." (Emphasis added.) The clear import of this language is that we are first to deduct from gross revenue that amount attributable to the production system. This leaves us with the amount of gross revenue attributable to the entire distribution system. We then must determine what proportion of these earnings of the entire distribution system to attribute to the distribution system on public ways as contrasted to the distribution system located on private ways. Such was clearly this court's view in the Dinuba case when it said (p. 676): "The reasonable construction of the language used is that each county or municipality is entitled to its percentage of the gross earnings arising from the use of its highway, in the proportion that the receipts arising from the use of such highways bears to the receipts attributable to all the rights of way of the entire system." In determining what share of the distribution earnings to attribute to the public ways and what share to the private ways this court felt that the relative mileage of each was the most appropriate basis. This was illustrated by the following statement on page 681: "We have adopted this appropriation, to the various rights of way, according to mileage, not necessarily as an exclusive method of distribution of the gross receipts, but as a practicable one where the contribution of the various franchise easements to the gross earnings cannot be otherwise determined. ... There may be instances where the extent or value of the distributing system over a given right of way may indicate its earning capacity; or where the service of lateral lines may be differentiated from that of main conduits in the value of their use *141 of the easements. In such cases these conditions should be taken into account. But where, as will often happen, contribution to the earnings of the various rights of way is general and indistinguishable, we can see no reason why the proportionate mileage basis should not be used in apportioning the statutory percentage of gross receipts." (Emphasis added.)
Thus we see that once we have determined what proportion of the gross receipts is attributable to the entire distribution system, we must find some method of determining what proportion is attributable to private ways and what portion is attributable to public ways. The most practical method of so doing is by use of the relative mileage basis. An example of its application was given by this court in the Dinuba case (p. 676) where it said: "It may be assumed that the distributing system covers six hundred miles of easements. The proportion of the gross receipts derived from and chargeable to the use of the distributing system should be credited to this entire mileage. One-third of this mileage may extend over private rights of way which are not subject to any franchise liability. The remaining two-thirds of the mileage covered by county franchises is entitled to two-thirds of the two per cent of the gross amount, and each county is entitled to the percentage of this two-thirds in the proportion that the mileage of its franchises bears to the total mileage covered by all the franchises." (Emphasis added.) By its language this court, in the Dinuba case, made it extremely clear that the 2 per cent was to be taken from that portion of the gross receipts of the total distribution system attributable to the distribution system on public ways; that the exact earnings of each mile in the system cannot always be accurately determined; that the value of each portion of the distribution system is not necessarily indicative of its earning capacity and therefore the best method of prorating the earnings of the entire distribution system between public and private ways is to use the mileage basis. All of this makes it apparent that this court established a rather simple formula whereby we first determine what portion of the total gross receipts is attributable to the distribution system, and then, as the best practical method of prorating these total distribution receipts between public and private ways, we use the relative mileage basis. From the gross receipts attributable to public ways the governmental bodies granting the *142 franchises are entitled to 2 per cent of their proportionate interest. What could be clearer? As stated by the District Court of Appeal in its opinion in this case (see County of Los Angeles v. Southern Counties Gas Co., (Cal.App.), 259 P.2d 665): "The Broughton Act recognized the justice of allowing a public utility a credit for its private property by exempting 98 per cent of the gross receipts from the franchise charge. The Dinuba case went a step further in allowing an apportionment of the 2 per cent toll so as to eliminate any charge for that proportion of the mileage over private rights of way. The gas company is not satisfied to accept the benefits granted by both the Broughton Act and the Dinuba decision, but in addition thereto it takes the additional deduction for the facilities located on private property by the utilization of the so-called 'capital investment method' of accounting." The formula proposed by the county and accepted by the District Court of Appeal follows the pattern as established in the Dinuba case. The gas company, on the other hand, seeks to use a combination formula which includes some of the suggestions of the Dinuba case but which also includes several other calculations designed to reduce to a bare minimum the amount due the county. 
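The Dinuba illustration quoted above reduces to a one-line formula. This is our restatement of the opinion's own example, not language from the opinion: with \(D\) the gross receipts credited to the entire distribution system, \(M\) the total mileage of rights of way, \(M_p\) the mileage under public franchises, and \(m_i\) the mileage of county \(i\)'s franchises,

\[ \text{payment}_i = 0.02 \times D \times \frac{M_p}{M} \times \frac{m_i}{M_p} = 0.02 \times D \times \frac{m_i}{M}. \]

In the 600-mile example, \(M = 600\) and \(M_p = 400\), so the counties collectively receive two-thirds of 2 per cent of \(D\), divided among them in proportion to their franchise mileage.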
For a clearer understanding of the gas company's departure from the formula in the Dinuba case, it may be well at this time to compare the methods used by the county and by the gas company. To begin with it should be noted that both the county and the gas company are in accord as to certain calculations even though they are made at different stages of the respective formulae. As a starting point both the county and gas company agree that in 1939 total invested capital equalled $31,216,081.13. From this both deduct intangibles, capital in general facilities and office and capital invested in production facilities. This leaves a total of $28,548,380.17 as that portion of the total capital which is invested in the distribution system. Once the extent of the distribution system, as contrasted to the producing system, has been ascertained, the next step should (under the Dinuba case) be to determine what proportion of the gross receipts can be attributed to the total distribution system. This is required under the Dinuba formula and is done by the county. Thus the county calculates that the amount of capital invested in distribution is 99.1306 per cent as contrasted to .8694 per cent invested in production facilities. Since 99.1306 per cent of the production *143 and distribution capital is invested in distribution facilities it follows that 99.1306 per cent of the gross receipts should be credited to the entire distribution system. This is the logical approach, this is the reasoning of the Dinuba case and this is the formula used by the county, but the gas company seeks still another deduction. Rather than determine the amount of capital invested in the entire distribution system, they seek a figure which includes only the distribution capital invested in rights of way. To do this they deduct $7,556,603.15, which is the value of all distribution capital on consumers' property or on leased property. It is in this major respect that the gas company formula departs from the Dinuba case and differs from the county formula. By so doing the gas company deducts over 25 per cent of the value of the entire distribution system before computing the gross receipts attributable to the distribution system. This leaves only the capital invested in rights of way and has the effect of basing the gross receipts attributable to the distribution system on little more than the value of the pipe in the ground. It is a departure from the strict mileage formula established by the decision in the Dinuba case. The net result of the gas company formula is that it does not compute the gross receipts for the entire distribution system as required by the Dinuba case, but it tries to limit the fund to those receipts attributable only to rights of way. It attempts to exclude some of the distribution system which is located on private property in this preliminary calculation, when such exclusion should properly be made only on the mileage basis when the ratio of public to private system is determined. As has already been pointed out in the Dinuba case the value of an isolated portion of the distributing system is not necessarily indicative of its earning capacity. Certain portions which are new may have a greater value but far less earning capacity than some of the older sections which have little book value but a great deal of earning power.
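Using the 1939 figures just recited, the first-step difference between the two formulas can be made explicit. A minimal sketch follows; the dollar amounts are the opinion's, while the variable names are mine:

    # 1939 capital figures from the opinion (dollars).
    total_invested_capital = 31_216_081.13
    distribution_capital   = 28_548_380.17  # after deducting intangibles, general/office and production capital
    consumer_and_leased    = 7_556_603.15   # distribution plant on consumers' or leased property

    # County formula: credit gross receipts to the ENTIRE distribution system.
    county_base = distribution_capital

    # Gas company formula: also deduct consumer/leased plant, leaving rights-of-way capital only.
    company_base = distribution_capital - consumer_and_leased

    print(f"{county_base:,.2f} vs {company_base:,.2f}")          # 28,548,380.17 vs 20,991,777.02
    print(f"{consumer_and_leased / distribution_capital:.1%}")   # ~26.5% of the system deducted up front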
The terminus of a gas conduit may be one of the most extensive parts of the line but that does not mean that the meters and terminal equipment account for almost all the earnings and that the transporting conduit earns little or nothing. Thus we can see that while the amount of capital invested in an entire system may be some indication of its earnings, we cannot segregate isolated portions of a system and determine that their dollar value is a correct measurement of their earning *144 power. For this reason this court in the Dinuba case preferred to compute all the gross receipts attributable to the entire distribution system and then prorate them between the public and private ways on a mileage basis rather than deducting part of such system on a dollar value basis. This is necessary since as a practical matter the contributions of the various portions of the distribution system to the gross distribution receipts cannot otherwise be determined. The majority fails to recognize the fact that some portions of the distribution system which are low in dollar value may have an earning power as great or greater than other portions which have a high book value. Based on this misconception it states that "As in rate making there is a relationship between the value of the property and the amount it earns; the dollars invested in the property produce the dollars that form the gross receipts. Since every dollar invested in operative property earns an equal part of the gross receipts, gross receipts are attributed to a particular item or class of operative property according to the dollars invested in it." Granted that there is a relationship between the value of a corporation's property and the amount it earns, we must recognize the limitations of such a broad generalization. Thus it might be said that there is a relationship between the value of the entire production system and the extent of its earning power; or it might be said that there is a relationship between the value of the entire distribution system and the amount it earns; but such a general relationship between the value of the property and the degree of earning power cannot be carried too far. For example, assume that every building on Block "A" has a direct conduit connection with the main gas line; that each conduit has a book value of $100; that one of the buildings serviced is a restaurant using gas ranges; that one of the buildings is a bakery using gas ovens; that two of the buildings are unoccupied; that one of the buildings is occupied by a frozen food locker; and that one of the buildings is occupied by a meat market. From this type of factual situation it can clearly be seen that the amount of gas consumed by the various customers serviced will vary to a considerable extent even though the value of the conduit into each building has the same $100 book value. Thus we see that the earning power of the various conduits will vary in spite of the fact that the same number of dollars is invested in each; and therefore the generalization that earnings have a relationship to dollars invested has its limitations. *145 Granting that there is a relationship between the dollars invested in an entire system and its earnings, there is not always an accurate relationship between the value of a particular portion of the system and its earnings.
In view of this it is not correct to say (as the majority has) that "every dollar invested in operative property earns an equal part of the gross receipts," and that "gross receipts are attributed to a particular item or class of operative property according to the dollars invested in it." (Emphasis added.) By this reasoning the majority (following the theory advanced by the gas company) contends that the earning power of the public ways must be limited to the actual value of the investments in rights of ways after various other portions of the distribution system have been deducted. Thus they compute the dollars earned on a particular portion of the distribution system on a dollar investment basis even though such a method is only feasible when applied to an entire system as contrasted to an isolated part. These limitations were recognized by this court in the Dinuba case when it stated (p. 682): "There may be instances where the extent or value of the distributing system over a given right of way may indicate its earning capacity; ... But where, as will often happen, contribution to the earnings of the various rights of way is general and indistinguishable, we can see no reason why the proportionate mileage basis should not be used in apportioning the statutory percentage of gross receipts." Thus in order to compute the gross receipts arising from the use, operation or possession of the public franchise we must first determine the gross receipts of the entire distribution system and then on a mileage basis prorate these gross receipts between public and private ways. By seeking to deduct the $7,556,603.15 as part of the distribution system on consumers' property or on leased property and later seeking to deduct 8.603 per cent of the mileage as being located on private ways the gas company is attempting a form of double deduction. The portion of the distribution system located at the terminus of each line is high in value ($7,556,603.15) but low in mileage so the gas company seeks to deduct this portion on a dollar basis. The other portions of the distribution system on private ways do not account for as much value (approximately $2,400,000) so the gas company is willing to compute these portions on a *146 mileage basis. Thus the gas company attempts to divide the distribution system on private ways into two parts. The one part having a high value ($7,556,603.15) they seek to deduct on a dollar basis. The other portions having a lower dollar value (approximately $2,400,000) but a higher mileage value they seek to deduct on a mileage basis. Actually the gas company is only entitled to one deduction from the receipts of the distribution system and that is a single deduction for the proportion of the distribution system on private ways. This should include that portion of the system running over private ways owned by the company, private ways leased by the company, private ways merely used by the company and all other forms of private ways including the conduits and equipment running to each consumer. Why should there be a distinction between private ways on consumers' property and other private ways? It is all part of the distribution system and the gas company will be credited with that portion of the distribution system on all private ways on a mileage basis. By these calculations the gas company has reduced the total of distribution receipts to $6,537,430.70 rather than the total of $9,537,137.16 reached under the county formula. 
Since 91.3963 per cent of the distribution mileage is on public ways the gross receipts fund attributable to public ways, from which the 2 per cent is to be taken, should total $8,716,590.49 instead of $5,974,969.77 as computed by the company. The net result of the gas company's double deduction is that for 1939 the county of Los Angeles having 15.3831 per cent of the public ways would only be entitled to $18,382.60 rather than $26,817.63. There can be no doubt that the Broughton Act as well as the Dinuba case intended the 2 per cent to be taken from the gross receipts attributable to the distribution system after the proportion attributable to private ways had been deducted. However, the manner of deducting or excluding such items must be consistent. It is not proper to exclude the part of the distribution system located on private property on a dollar invested basis and the balance on a mileage basis. The term gross receipts was adequately defined by the District Court of Appeal (County of Los Angeles v. Southern Counties Gas Co., (Cal.App.) 259 P.2d 665) when it said: "No authority has been found to define the term 'gross receipts' to mean anything other than the total without deduction; it means 'all receipts on business beginning and ending *147 within this state.' (Pacific Gas & Elec. Co. v. Roberts, 176 Cal. 183, 189 [167 P. 845].) The phrase is 'plain language which requires no interpretation ... "perfectly plain, unequivocal language" ... it must be taken in its plain sense without limitation or deduction save as expressly modified by the Legislature.' (Bekins Van Lines, Inc. v. Johnson, 21 Cal. 2d 135, 140 [130 P.2d 421].) Gross receipts mean all receipts arising from or growing out of the employment of the corporation's capital in its designated business. (Robertson v. Johnson, 55 Cal.App.2d 610 [131 P.2d 388].) Is there any doubt then that the Legislature intended for the utility to pay as a toll for the use of public highways on which to lay its pipes, tracks or cables, 2 per cent of its gross receipts?" "These conclusions are fortified by the doctrine of strict construction. The basic franchise ordinance (No. 1107, New Series, 1924) provides that 'the franchise is granted upon each and every condition contained herein, and in the ordinance granting the same and shall ever be strictly construed against the grantee.' When a franchise provides for the protection of the public interest, it is a fair assumption that the board of supervisors endeavored to perform its duty as trustee for the public and that the provisions were inserted for the purpose of securing for the public all substantial advantages. (38 Am.Jur. 214.) It is a general principle of construction that franchises granted by the state to private persons or corporations must be construed most strongly in favor of the public. If a doubt arises, nothing is to be taken by implication as against public rights. (Clark v. City of Los Angeles, 160 Cal. 30, 38 [116 P. 722]; Sacramento v. Pacific Gas & Elec. Co., 173 Cal. 787, 791 [161 P. 978].)" "From all that is said above it is unavoidable that the franchise must be construed strictly in favor of the county and as so construed respondent should pay its full 2 per cent of its gross receipts each year of the life of its franchise with no deductions except those attributable to production capital and the proportion of the distribution system belonging to the utility." 
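The dollar comparison at the head of this passage follows mechanically from the two competing receipts bases. A quick check, as a sketch: the figures are the opinion's, the function and constant names are mine, and cent-level differences against the opinion's totals are rounding.

    # Distribution receipts under each formula (from the opinion).
    county_receipts  = 9_537_137.16
    company_receipts = 6_537_430.70

    PUBLIC_WAY_FRACTION = 0.913963  # 91.3963% of distribution mileage lies on public ways
    LA_COUNTY_SHARE     = 0.153831  # Los Angeles County holds 15.3831% of the public ways

    def franchise_payment(receipts):
        # 2% toll on the public-way share, prorated to the county's franchise mileage.
        return receipts * PUBLIC_WAY_FRACTION * 0.02 * LA_COUNTY_SHARE

    print(f"{franchise_payment(county_receipts):,.2f}")   # ~26,817.63 under the county formula
    print(f"{franchise_payment(company_receipts):,.2f}")  # ~18,382.60 under the company formula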
It would also appear that the cases cited by the majority and the gas company were adequately distinguished by the District Court of Appeal (County of Los Angeles v. Southern Counties Gas Co., supra, (Cal.App.) 259 P.2d 665) in the following discussion: "Ocean Park Pier Amusement Corp. v. City of Santa Monica, 40 Cal.App.2d 76 [104 P.2d 668, 879], *148 cited by the gas company in support of its position, is readily distinguishable. In that case the city exacted the full statutory toll for the use of its own property and in addition sought to exact a charge for the use of the corporation's property. It was therefore properly held that no franchise payment need be made for the use of private property with respect to which no public property was contributed or used. In the case at bar, however, the gas company has consistently utilized public property in its operations and of course could not operate for an instant without public franchises, but the record discloses no attempt by appellant 'to include in the grant, land over which it had no proprietary interest,' as was true of the City of Santa Monica in the last cited authority, page 86." "Respondent cites also City of Monrovia v. Southern Counties Gas Co., 111 Cal.App. 659 [296 P. 117], as authority for its contention. The court said at page 660, 'In accordance with this method [from the Dinuba decision] the defendant ... [eliminated] that portion of its earnings attributable to the use of its properties located on private property.' The context of the above sentence, a portion of which respondent quotes, makes it clear that the mileage allocation formula of the Dinuba decision, under no dispute in the instant case, is referred to. But in any event, the only issue involved in the Monrovia action was whether or not the city was entitled to 2 per cent of the gross receipts collected within the city, a point not at all involved in the instant controversy." If we are to abide by the decision of the Dinuba case, if we are to insist on a fair and consistent formula without double deductions, and if we are to construe the franchise most strongly in favor of the public (as is required by law), then we must reverse the judgment rendered by the trial court. For these reasons I would reverse the judgment. NOTES [fn. 1] 1. Ordinance 500 (New Series) is typical. It provides: "... the said grantee and his or its successors or assigns shall, during the life of said franchise, pay to the county of Los Angeles ... two per cent (2%) of the gross annual receipts of such grantee, and his or its successors or assigns, arising from the use, operation or possession of said franchise." [fn. 2] 2. In advancing this contention plaintiff makes the specious argument that gross receipts means all receipts without deduction and that there is no more justification for deducting a cent from gross receipts than there would be to deduct manufacturing costs from the retail price of gas appliances in computing a sales tax based on gross receipts. There is no deduction here of manufacturing costs, cost of gas, costs of operation or other costs. Gross receipts that are attributed to the use of franchises and to other operative property are still gross receipts. There is no deduction from gross receipts but a proration of gross receipts between property that is subject to franchise charges and property that is not, just as there is no deduction from gross receipts in plaintiff's proration of gross receipts between rights of way. 
Plaintiff's argument would necessarily lead to the conclusion that there can be no proration of gross receipts, even between rights of way, and that for every franchise granted by each county and city, defendant must pay 2 per cent of its total gross receipts.
Sexual harassment in tech: Women tell their stories

It was 2001 during the dotcom crash, and Cecilia Pagkalinawan had a tough choice: Raise more money for her startup or let 26 employees go. She set up a meeting with a powerful venture capitalist in New York City, who she hoped would want to invest. Pagkalinawan said the investor scheduled the meeting at an expensive restaurant. When she arrived, he ordered a $5,000 bottle of wine. She said he refused to accept no for an answer when she said she didn't drink. Pagkalinawan said she can't recall how many times her glass was refilled. She does, however, remember the VC touching her leg, leaning over to kiss her, and telling her that he wanted to take care of her. She excused herself to the restroom, vomited, then called a friend and fled the restaurant. Pagkalinawan had stayed silent about the encounter for more than a decade. But in recent weeks, a flood of all-too-similar stories about sexual harassment in Silicon Valley has reopened the old wounds — and inspired her to speak up. "I can't believe after all these years it still hurts, you know?" she told CNN Tech.

The tide is starting to turn. In the past three weeks, two powerful Silicon Valley investors — 500 Startups' Dave McClure and Binary Capital's Justin Caldbeck — have resigned over allegations of sexual harassment. Both men have issued broad apologies for their behavior. As a result, multiple women have come forward to share their own experiences about working in an industry rife with sexism and harassment. CNN Tech talked to six of these women about tech's systemic problems — and their hopes for the future of the industry.

Women tell their stories

Entrepreneur Bea Arthur was going over financial projections with an investor when he exposed himself to her. "He stood up, and he pulled out his erect penis," she said. "It was awkward. It was uncomfortable. It was unfair." Arthur said she tried to shrug off the awkwardness and never confronted him about it. "He probably thinks we're close friends," she said. There's a frequently touted ethos that investors fund people, not just ideas. Because of that, investors need to get to know founders, according to entrepreneur Susan Ho, one of the women who went public about Caldbeck's alleged harassment. "Much of that building of camaraderie happens in social settings," said Ho. "It happens over drinks. It happens over dinner." Ho said there's nothing wrong with drinks or dinner, but the undefined relationship between entrepreneurs and investors — coupled with the industry's power dynamics — can complicate those casual meetings. It's especially complex for female founders, as men control the vast majority of capital. 89% of those making investment decisions at the top 72 firms are male, according to one survey. And in 2016, VCs put $64.9 billion into male-founded startups, compared to $1.5 billion into female-founded startups, according to new data from PitchBook. This means female founders are primarily pitching men. And it's typical to meet investors in informal locations — restaurants, bars, coffee shops. It's also the norm to take meetings after working hours, especially for younger founders. Arthur said that when the investor flashed her, she felt a "very deep and sudden and overwhelming sense of shame. Like, 'I'm stupid. I should have known this was going to happen.
Why did I think he was taking me seriously?'" Lisa Wang, cofounder of female entrepreneur collective SheWorx, was caught off guard at the Consumer Electronics Show this year when a pitch meeting quickly went south. "We're sitting at the Starbucks, and he grabs my face and tries to make out with me, and I push him back in surprise, and just didn't know what to do, because he continued to try again, and was so aggressive." Wang said the investor tried to follow her to her hotel room. "I said, 'I'm not getting in this elevator until you leave.'" "If [the male investor] looks at another man, he sees them as an opportunity, a colleague, a peer, a mentor," said Arthur, who founded a mental health startup. But if you're a female founder, "he just sees you as a woman first."

The risk of speaking out

Three weeks ago, The Information published a story in which six women accused investor Justin Caldbeck of sexual harassment. Three of the women came forward anonymously, but three put their names to their accusations. That's a rarity in Silicon Valley, where the prevailing advice is to stay silent and avoid repercussions — both financial and emotional. There's fear of earning a reputation as someone who's difficult to work with, which could make it difficult to secure funding. Investors may avoid financing a company with a founder they don't "trust" if they're nervous she may speak out about their behavior, too. That's helped keep those who've behaved inappropriately in positions of power, according to Arthur. "People at the top stay at the top, and they understand each other," she said. "They have vouched and, more importantly, covered for each other." Even so, Susan Ho and Leiti Hsu, cofounders of travel startup Journy, came forward with their stories about Caldbeck. "When you talk about sexual harassment in tech or in any other industry, it's like dropping a nuclear bomb on your career," Ho told CNN Tech. "That fear of retaliation, of it impacting your business in some way, is so, so real. We have a financial responsibility to do what's best for our business, and if speaking out is going to harm our business, is that OK?" It was the decision to go public — and have their names attached to the accusations — that set off a firestorm in Silicon Valley. Countless other women have come forward since then, sharing their own stories of sexism in tech. "Ultimately, we spoke about it in hopes that at the very least, there would be an article high enough on Google that the next time [Caldbeck] met with a female founder, she would Google him and see this article and at least be extra on her guard or think twice before meeting him," Ho said. "[Caldbeck] preyed on a group of women who [he] felt were too afraid, or not in the position to speak out against [his] behavior, and [he] was wrong." The emotional repercussions of speaking out are something Gesche Haas, founder of Dreamers // Doers, is all too familiar with. Haas went public with sexual harassment allegations against investor Pavel Curda in 2014. While she was at a conference, he propositioned her in an email that read, "I will not leave Berlin without having sex with you. Deal?" Curda first tweeted that his email account had been hacked, but apologized a day later, claiming he was drunk.
She's been picked apart on the Internet and received death threats on social media. "I got people tweeting [at me], 'I will cut your throat you f***ing c***,'" she said. People have accused her of being "a damsel in distress," suggesting she spoke out for "attention." Despite it all, Haas said she stands by her decision. "The risk of not saying anything, living with this forever is way worse. I felt because I had so much evidence, it was so clear cut. I had a responsibility to say something," she said. Still, three of the women CNN Tech spoke to — Arthur, Pagkalinawan and Wang — declined to publicly name the men they say harassed them. This was partly because they didn't have any tangible evidence that could confirm the specific encounter. "The most difficult part of reporting is ... the fear of being in a compromised position if you are the only one to speak up about the offender," added Wang.

Real change requires more than just optics

When Ho and Hsu spoke out, they were concerned their stories wouldn't appropriately outrage people in the tech industry. They credit LinkedIn founder Reid Hoffman for focusing people's attention on the issue. One day after The Information's story came out, Hoffman published a post on LinkedIn that called on investors to sign a "decency pledge." He proposed that tech actively work on building an industry-wide HR function so venture capitalists who engage in inappropriate behavior face consequences. "It took [Hoffman's post] to really give the issue weight," said Ho. "It's my hope in the future that the accounts of countless women is going to be enough to give an issue like this weight." There are early signs that companies may start taking swifter action once alerted to harassment. This week, early-stage VC firm Ignition Partners said its managing partner, Frank Artale, had resigned over misconduct. The firm released a statement disclosing that it had investigated a report of "inappropriate conduct" by Artale in 2016. Artale has not publicly released a statement about his resignation. The firm did not reply to CNN Tech's request for further comment. According to Nathalie Molina Niño, cofounder of female-focused BRAVA Investments, pledges may be a start, but they certainly aren't a cure-all. "Women can't pay the rent with symbols and PR gestures. What's needed is real outcomes, and it starts by accepting we, in all corners of tech, have a systemic problem," she wrote. Niño says until the gender gap is actually closed at tech companies, the "institutional dysfunction" will still exist. She argues for outcomes over optics, which she told CNN Tech includes focusing not just on funding women but on funding all types of women from all economic backgrounds. "Women exist. We aren't the exception, we're the norm," she said. "Yet places that treat us as humans are, in fact, outliers." Sixteen years after Pagkalinawan was sexually harassed, she said she was disheartened to hear that little has changed. "It really, really saddened me that this is happening [so] prevalently," she said. "I really think sometimes that they don't look at us as if we're humans, let alone their equals. I want to look at them in the eye and say, 'How would you deal with this if it happened to your wife or your daughter and [yet] you did it yourself?'"
Vector measuring current meter A vector measuring current meter (VMCM) is an instrument used for obtaining measurements of horizontal velocity in the upper ocean, which uses two orthogonal cosine response propeller sensors that directly measure the components of horizontal velocity. The VMCM was developed in the late 1970s by Drs. Robert Weller and Russ Davis and commercially produced by EG&G Sealink System (currently EdgeTech). The instrument can be deployed for as long as a year at depths of up to 5,000 m. Both laboratory and field test results show that the VMCM is capable of making accurate measurements of horizontal velocity in the upper ocean. The VMCM is the current standard for making high quality velocity measurements in near-surface regions, and it has been used for benchmarking other current meters. Equipment The main components of a VMCM are its two orthogonal cosine response propeller sensors, which directly measure the components of horizontal velocity parallel to their axes. The orientation of the instrument with respect to magnetic north is sensed with a flux-gate compass, which provides the angle of the Y axis with respect to magnetic north. A microprocessor rotates the X-Y coordinates into the conventional East-West and North-South components of velocity. This is done once each sample interval and, at the end of the record interval, the conventional components of velocity are averaged and the averages are stored on a cassette magnetic tape. Other components of the system are a bearing retainer, an end cap, an outer bearing race, a ball retainer and bearing balls, an encoder and an epoxy or Noryl plastic disk with four magnets, a pressure window, an aluminum disk, two magnetodiodes mounted asymmetrically on a printed circuit ring, a hub, and a shaft with inner races machined in it. The function of the magnetodiodes is to detect the rotation of the propeller sensors. The system also incorporates the vector averaging electronics, which uses the pulses from the magnetodiodes and the instrument heading from the flux-gate compass to calculate and record the velocity components. In the 1990s, Way et al. upgraded the electronics by redesigning the vector measuring circuitry, data acquisition, and storage components while retaining the propeller sensor assembly, which had proved reliable in the several tests performed. A pressure case houses the electronics and the appendage on which the propellers are mounted. In its first design of the late 1970s, a VMCM was approximately 2.56 m high and had a mass of 34.5 kg in air. The original VMCM is no longer commercially available from EG&G (currently EdgeTech). The 1970s electronics components are outdated and difficult, if not impossible, to find; like many of the electronic components, the original flux-gate compass is no longer available. Propeller sensors The innovation of the VMCM over other current meters comes from the choice of biaxial propeller sensors, developed with an accurate cosine response, and from a design that minimizes flow interference from the instrument body. "Cosine response" refers to propellers that respond only to the component of flow parallel to their axis of rotation. Their revolution rate is then proportional to the magnitude of the flow times the cosine of the angle between the axle and the flow vector.
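To make that geometry concrete, here is a minimal Python sketch (purely illustrative; it is not drawn from the instrument's documentation, and the calibration constant is a placeholder). It models two ideal cosine response propellers mounted at right angles and shows that their rotation rates deliver the two horizontal velocity components directly.

```python
import math

def propeller_rate(flow_speed, flow_angle, axis_angle, calibration=1.0):
    """Rotation rate of an ideal cosine-response propeller.

    The rate is proportional to the flow component along the propeller
    axis, |V| * cos(angle between flow and axis), and is signed, so a
    reversed flow spins the propeller the other way.
    """
    return calibration * flow_speed * math.cos(flow_angle - axis_angle)

# A 0.30 m/s flow approaching 25 degrees from the instrument's X axis.
speed, angle = 0.30, math.radians(25.0)

# Two sensors at right angles: one along X (0 rad), one along Y (pi/2).
vx = propeller_rate(speed, angle, 0.0)
vy = propeller_rate(speed, angle, math.pi / 2)

# The pair of rates already is the component decomposition; the only
# remaining step on the real instrument is rotation into Earth frame.
assert abs(vx - speed * math.cos(angle)) < 1e-12
assert abs(vy - speed * math.sin(angle)) < 1e-12
print(f"vx = {vx:.4f} m/s, vy = {vy:.4f} m/s")
```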
If the angular response function of the propellers is cosinusoidal, then two such sensors at right angles with their axes in the horizontal plane measure orthogonal components of horizontal velocity directly. No computation of components is necessary (though they are rotated from the instrument reference frame into the conventional east-west and north-south components), and summing the components accomplishes the vector averaging. The advantages of a propeller with cosine response have been widely recognized. Weller and Davis designed the propeller sensors and their location within the pressure cage to obtain a response as close as possible to an ideal cosinusoidal angular response. After fabricating and testing several families of propellers, they found the best response in a dual-propeller sensor (two propellers fixed on one axle) with two five-bladed, 30-degree pitch propellers 22 cm in diameter. The propellers are hard anodized, epoxy coated on the exterior, and protected by zinc anodes. They have been made from polycarbonate plastic (LEXAN) and, more recently, from Noryl. The propeller sensors use a Cartesian coordinate system and provide orthogonal velocity components in the horizontal plane; the measured components need only be rotated into the conventional east-west and north-south directions. Pressure cage The pressure case houses the electronics and the appendage on which the propellers are mounted. It is fabricated from 6Al-4V titanium alloy rod (1.27 cm diameter), which has a higher yield strength than steel and superior resistance to corrosion and metal fatigue in seawater. Designed in this way, the pressure cage is capable of taking tensions of up to 10,000 pounds while holding the electronics and the propeller sensors in isolation from that tension. This permits safe operation down to 5,000 m depth. Early on, the propeller bearings were a source of failure. After considerable testing, the bearings were upgraded from polycarbonate plastic to silicon nitride and, as a result of this change, there have not been any bearing failures. Data logger/controller In the early 1990s, Brian S. Way et al. developed a new version of the VMCM and greatly improved the electronic system. The new version of the VMCM includes as primary subunits the vector measuring front-end (consisting of the rotor and compass hardware interface) and a low-power microcontroller to accomplish the sampling. The initial sampling setup (e.g., sample rate, averaging interval, calibration factors) is set by command from an Onset Computer (Tattletale 8, TT8). However, actual sampling and computation of vector averages are handled in the VMCM front-end subunit. A Microchip Technology PIC microcontroller handles all of these tasks, producing current vector North and East (Vn and Ve) readings at the desired interval. In standard operation with the new version of the VMCM, the PIC microcontroller in the VMCM front-end samples the rotors and compass at the rate initially set by the TT8. At each sample, rotor and compass readings are accumulated for vector-averaging and, at the chosen sample interval, the vector averages Vn and Ve are relayed to the TT8 for further processing and/or storage.
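That division of labor can be sketched as follows (a Python sketch under stated assumptions: the function names, stub sensors, and sign convention are invented for illustration, since the actual PIC firmware is not described here). The front-end accumulates heading-rotated rotor counts each sample and relays the averages to the logger once per record interval.

```python
import math

def front_end_loop(read_rotor_counts, read_compass, samples_per_interval):
    """One record interval of the front-end's accumulate-and-relay cycle.

    read_rotor_counts() -> (nx, ny): signed quarter-revolution counts
    accumulated since the last sample for the two propeller sensors.
    read_compass() -> instrument heading in radians.
    Names and sign conventions are illustrative, not the firmware's.
    """
    ve_sum = vn_sum = 0.0
    for _ in range(samples_per_interval):
        nx, ny = read_rotor_counts()
        theta = read_compass()
        # Rotate instrument-frame counts into East/North and accumulate.
        ve_sum += nx * math.cos(theta) + ny * math.sin(theta)
        vn_sum += -nx * math.sin(theta) + ny * math.cos(theta)
    # On the instrument, these averages (Ve, Vn) go to the TT8 logger.
    return ve_sum / samples_per_interval, vn_sum / samples_per_interval

# Stub sensors: a steady flow turning only the first sensor while the
# instrument holds a fixed 30-degree heading.
ve, vn = front_end_loop(lambda: (4, 0), lambda: math.radians(30.0), 600)
print(f"Ve = {ve:.3f}, Vn = {vn:.3f}")
```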
User interface / Setup software The main setup program gives the user the ability to choose the record interval; which parameters to log (it is possible to add measurement of other parameters such as temperature, conductivity, oxygen, a time word updated with each record, tilt, and battery voltage); the sample intervals for each selected parameter; the start time to begin logging; and the end time to stop logging. In the new version of the VMCM, the ease and flexibility of setting up and adding sensors have decreased the time needed for pre-deployment instrument preparation in port. How VMCM computes horizontal velocity The two orthogonal cosine response propeller sensors directly measure the components of horizontal velocity parallel to their axes. The flux-gate compass senses the orientation of the instrument with respect to magnetic north, providing the instrument heading. The microprocessor rotates the coordinates into the conventional East-West and North-South components of velocity. This is done once each sample interval and, at the end of the record interval, the conventional components of velocity are averaged and the averages are stored. The rotation of the propeller sensors is detected by the magnetodiodes. As a result of the asymmetry in placement of the magnetodiodes, a staggered pair of pulses is produced each quarter revolution; the phase relationship indicates the sense of direction of rotation and the pulse rate indicates the rate of rotation. In order to calculate and record the velocity components, the vector averaging circuitry is turned on by a rotor count, which is signaled by a proper sequence of changes in the levels of the magnetodiodes. The instrument heading (θ) is determined, stored in a register, and updated at a 1-Hz rate (once each second). If either propeller rotates sufficiently (the original version of the VMCM had a speed threshold of less than one centimeter per second), a pair of pulses is produced by the magnetodiodes of one hub and a count occurs from the rotor. Then, the cosine and sine of the heading (as currently stored in the heading register) are added to the registers that store the East and North velocity components. To accomplish this, at the end of each sampling interval over which the averaging is performed, the following sums are evaluated: V_E = \sum_{i=1}^{N} \cos\theta_i + \sum_{j=1}^{M} \sin\theta_j and V_N = -\sum_{i=1}^{N} \sin\theta_i + \sum_{j=1}^{M} \cos\theta_j, where N is the number of quarter revolutions by the sensor oriented east-west when θ = 0, M is the number of quarter revolutions by the other sensor, and θ_i and θ_j are the headings of the instrument in the heading register when the ith and jth pairs of pulses were supplied by the two propeller sensors. The velocity components are stored in 12-bit registers and, at the end of each sampling interval, they are written as 16-bit words (12 bits of data, 4 bits identifying the channel) to flash storage (the original design of the late 1970s used a cassette tape with more limited storage capacity). The instruments typically record average V_E and average V_N every sample interval and the time every hour. Two other channels of information, such as temperature and pressure, can be recorded. Various sample intervals can be selected. As the vector averaging circuitry is turned on only when a pair of magnetodiode pulses occurs, the current drain is proportional to the flow rate of the water.
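The accumulation scheme and the record format lend themselves to a compact restatement in Python. This is a sketch rather than the instrument's actual firmware: the sign conventions follow the sums given above, the calibration factor is a placeholder, and the bit layout in pack_word (channel ID in the high four bits) is an assumption, since the text only states the 12-plus-4-bit split.

```python
import math

def vector_average(x_pulse_headings, y_pulse_headings, scale=1.0):
    """Vector-averaged velocity from quarter-revolution pulse headings.

    x_pulse_headings: headings (radians) latched when the sensor that is
    oriented east-west at zero heading emitted each pulse pair (N values).
    y_pulse_headings: the same for the other sensor (M values).
    scale: placeholder calibration (displacement per quarter revolution
    divided by the averaging time).
    """
    ve = (sum(math.cos(t) for t in x_pulse_headings)
          + sum(math.sin(t) for t in y_pulse_headings))
    vn = (-sum(math.sin(t) for t in x_pulse_headings)
          + sum(math.cos(t) for t in y_pulse_headings))
    return scale * ve, scale * vn

def pack_word(value12, channel4):
    """Pack a 12-bit value and a 4-bit channel ID into a 16-bit word.

    The 12 + 4 split is from the text; putting the channel in the high
    bits is an assumption made for this example.
    """
    return ((channel4 & 0xF) << 12) | (value12 & 0xFFF)

# Four pulse pairs from the east-west sensor at near-zero headings and
# one stray pulse pair from the other sensor.
ve, vn = vector_average([0.00, 0.01, 0.02, 0.01], [0.00], scale=0.25)
print(f"V_E = {ve:.3f}, V_N = {vn:.3f}")
print(f"packed word: {pack_word(2048, 1):016b}")
```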
Comparison with other measuring instruments Intercomparison of test data obtained from the VMCM and from other measuring instruments such as the Aanderaa, the VACM, electromagnetic current meters, and the ACM has shown that the VMCM sensor introduces the least error in relatively small mean flows when high frequency oscillatory fluctuations (because of surface waves, mooring motion, or both) are also present. This quality, together with the accuracy the propeller sensors have demonstrated in steady flows, unsteady flows, and combinations of both, makes the VMCM well suited to making accurate measurements in the upper ocean.
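The advantage in oscillatory flow is easy to demonstrate numerically. The short simulation below is constructed for illustration (the flow values are invented, not data from the intercomparisons mentioned above): with a weak mean current beneath a strong wave-induced oscillation, a true vector average recovers the mean, while averaging speed alone, as a rotor-and-vane style meter effectively does, rectifies the oscillation into a large spurious mean.

```python
import math

# A 2 cm/s mean eastward flow plus a 20 cm/s wave-induced oscillation.
mean_u, wave_amp, wave_period = 0.02, 0.20, 8.0
dt, n = 0.05, 20000  # 0.05 s samples over 1000 s (125 wave periods)

vec_sum = speed_sum = 0.0
for k in range(n):
    u = mean_u + wave_amp * math.sin(2 * math.pi * k * dt / wave_period)
    vec_sum += u          # signed (vector) average: oscillation cancels
    speed_sum += abs(u)   # scalar speed average: oscillation rectifies

print(f"vector average: {vec_sum / n:.4f} m/s")   # close to the 0.02 mean
print(f"speed average:  {speed_sum / n:.4f} m/s") # several times larger
```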
Progressive frustration with the Democratic leadership is boiling over following the House's near-unanimous passage Thursday of an interim coronavirus relief package that provides no direct relief to vulnerable people and kicks life-or-death priorities to next month even as tens of millions of people and families don't know how they're going to afford rent and other basic necessities. House Speaker Nancy Pelosi (D-Calif.) and Senate Minority Leader Chuck Schumer (D-N.Y.) both promised that while they failed to secure funding for states and localities, an increase in nutrition assistance, or protections for frontline workers in the interim bill, they will be sure to push for the inclusion of those proposals in the next Covid-19 package. "That will be the centerpiece of our next legislation," Pelosi said in a speech Thursday. Outside advocacy groups and a handful of progressives in Congress are expressing outrage at the Democratic leadership's repeated "we'll do better next time" approach in the middle of a deadly pandemic. The House is not expected to return to Washington, D.C. again until May 4 at the earliest. "'Just wait until the next bill' is not good enough anymore," Morris Pearl, chair of the Patriotic Millionaires, said in a statement. "If these are truly legislative priorities for members of Congress, as they should be, they need to start fighting for them." "It is absurd that over a month into a national lockdown, we still do not have universal paid sick leave, forcing essential workers who feel ill to put themselves and everyone around them at risk just to pay their bills," said Pearl. "It is absurd that as bills continue to accumulate for millions without steady streams of income, the federal government is giving no more than a one-time payment of $1200." George Goehl, director of People's Action, urged Congress to quickly pass three bills that have already been introduced in the House: Rep. Ilhan Omar's (D-Minn.) plan to cancel all rent and mortgage payments for the duration of the coronavirus crisis; Reps. Pramila Jayapal (D-Wash.) and Rashida Tlaib's (D-Mich.) plan to provide all U.S. households with $2,000 monthly payments; and Jayapal's plan to provide no-cost emergency healthcare for all. Progressives are also demanding that Congress approve emergency funding for the U.S. Postal Service, money for nationwide vote-by-mail, funding for cities and states, hazard pay for frontline workers, an increase in federal nutrition assistance, and more. "Speaker Pelosi and Democrats have made big promises for the next rescue package," said Goehl. "The people of this country have shown incredible patience, but that patience is wearing thin. When we get to the next CARES Act, it better be one hell of a package, because [the interim bill] is a raw deal when we need the next New Deal... We need Congress to get a grip." Our favorite statement headline today definitely goes to our friends at @PplsAction: We need Congress to get a grip.
https://t.co/tgoOP6IKIs — Indivisible Guide (@IndivisibleTeam) April 24, 2020 In a letter (pdf) to Pelosi and Schumer on Friday, nearly 50 progressive advocacy groups said that as Republicans attempt to exploit the coronavirus crisis to "further enrich their already-wealthy donors, and undermine democracy," Democrats "must put forth and fight for a relief package that puts people first." "We need Democrats to be bold and fearless in fighting for our families and our communities, advancing solutions that are commensurate with the scale of the crisis we face and helping us build toward a better future for our people, our economy and our democracy," the groups wrote. Prior to the passage of the $480 billion interim coronavirus package, which President Donald Trump signed into law Friday, progressive activists warned that rubber-stamping Republican funding priorities without including money for vulnerable people would leave Democrats with little leverage in negotiations over the next spending package—assuming there is one. Pearl wrote in an op-ed in Common Dreams Friday that Republicans' demand for hundreds of billions more in small business funding was "the single biggest piece of leverage House Democrats had to negotiate a better deal for workers." "Instead of using that leverage," Pearl wrote, "Democrats told Americans who work for a living to just wait until the next bill for the changes they desperately need." After the interim bill passed the House Thursday, Senate Majority Leader Mitch McConnell (R-Ky.)—suddenly concerned about the growing national debt now that he has secured money for big corporations and the rich—made clear that he is in no rush to approve any additional relief spending. Congress should "press the pause button" on new coronavirus aid, McConnell said shortly after new Labor Department data showed that more than 26 million Americans have filed jobless claims since mid-March. In a speech on the House floor Thursday, Pelosi accused McConnell of "notion mongering to get attention" and promised that the next bill—which she dubbed the Heroes Act—is coming soon. "Let us be clear, the health and safety of our country will be endangered if we cannot pay the heroes who sacrifice to keep us safe," Pelosi said, referring to frontline workers. Reps. Alexandria Ocasio-Cortez (D-N.Y.), Ayanna Pressley (D-Mass.), Omar, and Tlaib—collectively known as "The Squad"—are pressing the Democratic leadership to provide a clear timeline on when the next relief package will come together and get a vote. Ocasio-Cortez was the only House Democrat to vote against the interim bill. "Congress just voted for the first time in a month on a bill that doesn't address the core issues facing working families. Then they adjourned again until further notice," Ocasio-Cortez tweeted Thursday. "'Someday' and 'next time' doesn't cut it. Struggling families need a timeline."
"Congress should be voting on immediate relief for families today. We need a clear timeline of when. People are demanding: recurring inclusive payments, water and utilities shutoff protection, and support of local governments. #PutPeopleFirst!" https://t.co/TYYmgguJC1 — Rashida Tlaib (@RashidaTlaib) April 23, 2020 Jayapal and Rep. Mark Pocan (D-Wis.), co-chairs of the Congressional Progressive Caucus, said in a joint statement Thursday that House Democrats "must lead with vision and urgency" by passing legislation that "meets the immense needs of this moment." Earlier this month, the Progressive Caucus unveiled a slate of demands for the next coronavirus package that includes $2,000 monthly stimulus payments to all U.S. households, opening Medicare to the unemployed and uninsured, and suspension of all consumer debt collection. "More than 47,000 Americans have died, 26 million people are unemployed, and there is no end in sight to this crisis," said Jayapal and Pocan. "Congress must do far more to direct relief to the everyday families who need help the most."
Career Fire Fighter Dies While Exiting Residential Basement Fire - New York Death in the Line of Duty...A summary of a NIOSH fire fighter fatality investigation F2005-04 Date Released: June 13, 2006 SUMMARY On January 23, 2005, a 37-year-old male career fire fighter (the victim) died while exiting a residential basement fire. At approximately 1337 hours, crews were dispatched to a reported residential structure fire. Crews began to arrive on the scene at approximately 1340 hours, and at approximately 1344 hours the victim, a fire fighter, and an officer made entry through the front door and proceeded down the basement stairwell to conduct a search for the seat of the fire using a thermal imaging camera (TIC). At approximately 1346 hours, the victim and officer began to exit the basement when they became separated on the lower section of the stairwell. The officer reached the front stoop and realized that the victim had failed to exit the building. He returned to the top of the basement stairs, heard a personal alert safety system (PASS) alarm sounding in the stairwell, and immediately transmitted a MAYDAY for the missing fire fighter. The victim was located at approximately 1349 hours, and numerous fire fighters spent the next twenty minutes working to remove him from the building. At approximately 1413 hours, the victim was transported to an area hospital, where he was later pronounced dead. INTRODUCTION On January 23, 2005, a 37-year-old male career fire fighter (the victim) died while exiting a residential basement fire. On January 25, 2005, the U.S. Fire Administration notified the National Institute for Occupational Safety and Health (NIOSH) of this incident. On March 23, 2005, three Safety and Occupational Health Specialists from the NIOSH Fire Fighter Fatality Investigation and Prevention Program investigated this incident. Meetings were conducted with the Chief Officers assigned by the department to investigate this incident and representatives from the Uniformed Firefighters Association and the Uniformed Fire Officers Association. Interviews were conducted with officers and fire fighters who were at the incident scene. The investigators reviewed the victim's training records, autopsy report, and death certificate. NIOSH investigators also reviewed the department's fireground standard operating procedures (SOPs)1, a transcription of the dispatch tapes, and the department's report of this incident. The incident site was visited and photographed. Fire Department This career department consists of approximately 11,500 uniformed fire fighters who serve a population of about 8,000,000 in a geographic area of approximately 321 square miles. Training and Experience The department requires all fire fighters to complete the fire department's 13-week Probationary Fire Fighter's School. Candidates must be Certified First Responders to become probationary fire fighters. Probationary fire fighters are instructed in hydraulics and learn the basics of fire suppression systems and fire-fighting tactics.
The victim had 10 years of experience with this department and had completed an extensive list of training courses which included: Fire Suppression and Control, Building Construction and Firefighter Safety, Tactical Roof Operations, Hazardous Material Operations, Ladder Company Chauffeur and Tactical Private Dwelling Fire. Additional units were dispatched; however, only those units directly involved in the operations preceding the fatal event are discussed in the investigation section of this report. Structure The incident site was a detached two-family, two-story, wood-frame structure measuring approximately 25 feet wide and 50 feet long. The entrances to the first- and second-floor residences were located at the front of the building at the top of a concrete staircase approximately 5 feet above grade level. There was an external below-grade entrance to the basement located in the rear (Side #3) of the structure (Diagram 1). There were security bars on the basement and first-floor windows, and security gates on both of the front door entrances (Photo 1) and the rear basement entrance. Door "A" allowed access only to the second-floor residence. Door "B" allowed access only to the basement and the first-floor residence. The interior stairwell to the basement was located approximately 3 feet from the first-floor entranceway (Door "B"). Weather A recent snow storm had deposited 12 to 18 inches of snow. The department's report stated that the snow had slowed, but did not significantly delay, response times. The approximate temperature at the time of the incident was 20 degrees Fahrenheit with an estimated wind chill of 0 degrees Fahrenheit. The average wind speed was 24 miles per hour (mph), with gusts reaching 48 mph from the north. INVESTIGATION On January 23, 2005, a 37-year-old male career fire fighter (the victim) died while exiting a residential basement fire. At approximately 1337 hours, crews were dispatched to a reported residential structure fire. At approximately 1340 and 1341 hours, Engine 290 and Ladder 103 arrived on the scene, respectively. The officer from Engine 290 was informed by the resident of the structure that the fire was in the basement. The officer verified the location of the fire as being in the basement when he opened the interior basement door. The Engine 290 crew began stretching a 1 ¾-inch handline toward the front of the building. Ladder 107, Engine 332, Engine 236, Ladder 175 and Battalion 44 arrived on the scene. At approximately 1343 hours, the officer, the victim, and Fire Fighter #1 from Ladder 103 donned their SCBA face masks at front door "B" as the Ladder 107 crew made entry through front door "A" to conduct a primary search for occupants on the second floor (Photo 1). Fire Fighter #2 and Fire Fighter #3 from Ladder 103 proceeded to the rear (Side #3) of the structure (Photo 2 and Diagram 1) as Engine 231 arrived on the scene. At approximately 1344 hours, the Ladder 103 officer, the victim, and Fire Fighter #1 made entry through front door "B" and proceeded down the basement interior stairwell to conduct a search for the seat of the fire (Diagram 2). The officer carried a thermal imaging camera (TIC). Fire Fighter #1 stopped on the stairwell's half-landing while the victim and the Ladder 103 officer continued toward the basement. The Engine 290 officer and Fire Fighter #4 advanced a 1 ¾-inch handline down the interior stairwell until they reached the half-landing.
Fire Fighter #5 remained at the top of the stairs and assisted in feeding the handline down the interior stairwell. Note: The Engine 290 officer's helmet was knocked off his head by a large loop of hose in the handline. His helmet fell down the stairs; he operated without it for the duration of the interior operations and received first- and second-degree burns to his forehead while working on the half-landing. Fire Fighter #2 and Fire Fighter #3 from Ladder 103 forced open the exterior basement door on Side #3. A second 1 ¾-inch handline was stretched from Engine 290 toward the front of the structure. Fire fighters from Ladder 107 began removing the window bars on the front basement window (Side #1). Note: Several fire fighters reported to NIOSH investigators that they had to don their SCBA face masks while operating on Side #1 of the structure due to the heavy, thick smoke pushing out the front door. At approximately 1345 hours, the officer from Engine 290 ordered Fire Fighter #4 to open the nozzle in an attempt to cool the stairwell area. Fire Fighter #4 hit the stairwell leading down into the basement with a short burst of water. The officer ordered the nozzle to be opened again as the heat increased. Fire Fighter #2 and Fire Fighter #3 from Ladder 103 made entry into the basement on Side #3 as two fire fighters from Ladder 107 vented the middle and rear basement windows on Side #4 while they attempted to remove the window bars (see Photo 2). At approximately 1346 hours, a heavy fire condition was observed in the basement by interior and exterior crews. The Battalion 44 Chief Officer (officer in charge) arrived on the scene and observed fire venting from the basement window on Side #2, and the officer from Ladder 107 observed fire and heavy smoke venting from the basement door and window on Side #3. The officer radioed the Battalion 44 Chief Officer and requested that a handline be brought to Side #3. The Battalion 44 Chief Officer ordered the Engine 332 crew to take their handline to Side #3. His plan was to utilize the basement entrance on Side #3 as the point of access for the attack line (Engine 332 handline). The Engine 290 officer ordered Fire Fighter #2 and Fire Fighter #3 to exit the stairwell. The Ladder 103 officer, standing near the victim in the basement, approximately 10 feet from the stairs, heard the crews on the half-landing operating their handline. Unable to see the screen on the TIC due to the heavy smoke conditions, the officer told the victim, "Let's go." The victim responded with an "Okay." The Engine 290 officer then ordered Fire Fighter #4 and Fire Fighter #5 to exit the stairwell. As the Ladder 103 officer and the victim reached the stairs, they heard the Engine 290 officer yell, "Get out." The officer and the victim began ascending the lower section of the interior stairwell. Fire Fighter #4 was knocked over while operating on the half-landing. His face mask and helmet were dislodged as the members attempted to ascend the stairwell. Fire Fighter #4 was then forced to place the nozzle on the stairwell to adjust his face mask and helmet, and then exited the building. The officer continued up toward the first floor, not knowing that the victim was not with him. Two fire fighters from Ladder 107 vented the basement window on Side #1 after removing the window bars. At approximately 1347 hours, the victim became separated from his officer while ascending the lower half of the interior stairwell.
The Ladder 103 officer exited the structure and found Fire Fighter #1 out on the front stoop. The officer quickly realized that the victim had failed to exit the building. At approximately 1348 hours, the Ladder 103 officer returned to the interior front basement stairs, where he heard a personal alert safety system (PASS) alarm sounding in the stairwell. The officer was unable to descend the stairs due to the extreme heat conditions. He immediately transmitted a MAYDAY for the missing fire fighter. Note: The Battalion 44 Chief Officer did not hear this transmission. Ladder 120, dispatched as the fire fighter assist and safety team (FAST), equivalent to a rapid intervention team (RIT), arrived on the scene and heard the MAYDAY transmission. The Battalion 58 Chief Officer also arrived on the scene at this time. The Engine 290 officer, standing next to the Ladder 103 officer at the front door, pulled the handline up the stairs and had members begin spraying water down the stairwell in order to protect the Ladder 103 officer and Fire Fighter #1 as they descended the stairs. The officer followed up with a second MAYDAY transmission at approximately 1349 hours when he found the victim. Note: Numerous crews on the fireground believed that the MAYDAY was made by the fire fighter in distress. The Battalion 44 Chief Officer heard this MAYDAY transmission and immediately radioed a request for a second alarm. The victim's upper body was lying on the half-landing; his face mask was dislodged, and the rest of his body was on the lower half of the stairs (Photo 3). The victim's PASS was in full alarm. The Division 15 Chief Officer arrived on the scene and assumed command (Incident Commander) after a brief exchange of information with the Battalion 44 Chief Officer. Numerous fire fighters spent the next twenty minutes working to remove the victim from the building. The narrow stairwell, objects on the half-landing, and extremely high heat and zero-visibility conditions hampered the rescue effort (see Photo 3 and Photo 4). The hook at the end of the life rescue rope was attached to the victim's SCBA harness, and the rope was stretched to the front lawn, where fire fighters were able to assist with getting the victim up the stairs. At approximately 1410 hours, the victim was removed from the building. At approximately 1413 hours, the victim was transported to an area hospital, where he was later pronounced dead. INJURIES Nine members involved in the rescue effort were injured. Two members suffered from smoke inhalation and seven members received burn injuries. Cause of Death The autopsy report listed the victim's cause of death as smoke inhalation (carboxyhemoglobin level of 24% saturation) and burns of the head, torso, and upper extremities (third-degree burns on approximately 63% of the body surface area). RECOMMENDATIONS/DISCUSSIONS Recommendation #1: Fire departments should ensure that the first arriving officer or incident commander (IC) conducts a complete size-up of the incident scene. Discussion: The initial size-up conducted by the first arriving officer or incident commander (IC) allows the officer to make an assessment of the conditions and to assist in planning the suppression strategy.
The following general factors are important considerations during a size-up: occupancy type involved, potential for civilians in the structure, smoke conditions, type of construction, age of structure, exposures, and time considerations such as time of incident, time fire was burning before arrival, and time fire was burning after arrival.2 The evaluation of risk is an assignment that the first arriving officer or Incident Commander is designated to conduct. The Incident Commander or Officer in Charge must perform a risk analysis to determine what hazards are present, what the risks to personnel are, how the risks can be eliminated or reduced, the chances that something may go wrong, and the benefits to be gained.3 The fire department involved in this incident has an established standard operating procedure (SOP) on the requirements and purpose of providing a preliminary report. The SOP defines a preliminary report as: The report of the Incident Commander at a fire or emergency. The preliminary report shall include a brief description of the situation, the identity of the units at work and the status of the balance of the assignment.1 The preliminary report is transmitted to the dispatcher and provides Chief Officers and fire department officials with a clear and accurate sense of the conditions existing at the scene of the fire or emergency. The first arriving officer conducted a partial size-up in terms of evaluating the conditions and type of building, the location of the fire (basement), and exposures. A complete size-up would have involved a walk-around of the entire building, allowing the officer to evaluate all four sides of the building. The partial size-up only allowed the officer to see Sides #1, #2 and #4, and not Side #3, which had basement-level access. Entering and attacking the fire on the basement level provides fire fighters with better access and less exposure to high heat conditions and products of combustion. In contrast, an interior stairwell usually provides the only vent to the below-grade fire, exposing fire fighters to smoke, heat and flame venting up the stairwell. Taking a hose line down a burning basement stairway makes this type of incident one of the most dangerous jobs a fire fighter must perform.4 A size-up report was not provided to Central Dispatch or responding units. There were no reports of civilians inside the structure, nor were any civilians located in the building at any time during or after the incident. Recommendation #2: Fire departments should ensure that interior crews provide frequent progress reports to the incident commander. Discussion: Frequent progress reports are essential to the Incident Commander's (IC's) or Officer in Charge's continuous assessment and size-up of the incident and are required as per the fire department's standard operating procedures.1 Interior crews and crews working in areas not visible to the IC are the eyes and ears of the IC. Progress reports also provide everyone on the fireground with information on other aspects of the fire that relate to their own particular operations (e.g., ventilation, suppression, primary search, etc.).2 The interior crews experienced high heat conditions with zero visibility. The crew advancing the handline down the interior stairwell had difficulty descending the narrow stairwell and never reached the basement level where the seat of the fire was located. Progress reports were not provided to the IC by the interior crews. This information is needed by the IC in order to establish a plan of action and continually assess the risk versus gain.
Recommendation #3: Fire departments should develop standard operating procedures on the use of thermal imaging cameras and provide training on their proper use and limitations. Discussion: The fire department involved in this incident did not have an established standard operating procedure (SOP) regarding thermal imaging camera (TIC) use at structure fires. The fire department had posted a training bulletin (October 26, 2000) regarding thermal imaging camera use and maintenance prior to the incident. The training bulletin addressed the camera's operating features (e.g., how temperature variations appear on the screen) and when the camera is to be used to augment existing department procedures for search and rescue. The training bulletin listed some possible applications such as whenever a search rope is used, at high-rise fires, etc. There is no mention in the training bulletin of how the officer utilizing the TIC will coordinate their assignment (e.g., size-up, primary search, etc.) with other crews operating in their vicinity. SOPs would provide a basis for operations involving the use of a TIC in conjunction with other crews operating on the fireground. For example, if the TIC is to be utilized in conjunction with the initial attack line, the user of the TIC must be within the vicinity of the nozzle operator. This serves two purposes: 1) the handline would be in a position to provide protection for the TIC operator and crew members operating in the vicinity of the nozzleman, and 2) the operator of the TIC could guide the nozzleman in stream placement after pointing out the hot spots, the seat of the fire, and any high heat conditions that may pose a hazard to crews operating in the vicinity. Fire departments should also provide training on the proper use and the limitations of TICs. This would help fire fighters understand how the TIC can best be utilized to support and enhance basic fire-fighting tactics. The Ladder 103 officer utilized a TIC as part of the interior size-up as he entered the structure with the victim. The officer and victim entered the structure ahead of the crew advancing the handline and reached the basement level, but were unable to see the screen on the TIC due to the zero-visibility environment. Recommendation #4: Fire departments should ensure that MAYDAY procedures are followed and refresher training is provided annually or as needed. Discussion: As soon as fire fighters become lost, disoriented, trapped, or unsuccessful at finding their way out of a hazardous situation (e.g., the interior of a structure fire), they must recognize that fact and initiate emergency traffic.5 They should manually activate their personal alert safety system (PASS) device and announce a "MAYDAY" over the radio. A "MAYDAY" call will receive the highest communications priority from Central Dispatch, Incident Command, and all other units. Information regarding last known location, crew assignments, and identity of the lost fire fighter provides the RIT with important clues in locating the missing/lost member. The sooner Incident Command is notified and the RIT is activated, the greater the chance of the fire fighter being rescued.6 The steps included in the department's standard operating procedures require that "If possible, the officer will immediately press his/her emergency alert button, and then contact the Incident Commander in the following format: 'MAYDAY-MAYDAY-MAYDAY. Ladder 103 to Battalion 44, MAYDAY.'"1 The SOPs also require that the person transmitting the MAYDAY identify who they are, what the MAYDAY is for, and the victim's location.
Investigators were unable to determine, through interviews, whether the victim had manually activated his PASS device or whether the device had gone into alarm mode. Investigators were also unable to determine if the victim had attempted at any time to transmit a "MAYDAY." The victim's officer radioed "MAYDAY" when he heard a PASS alarm sounding in the stairwell where he believed the victim was located. The victim's location and his identity were not provided in the first "MAYDAY" transmission, and the "MAYDAY" was not received or acknowledged by the IC, the FAST team, or Central Dispatch. The victim's officer transmitted a second "MAYDAY" upon finding the victim (approximately 1 minute after the initial "MAYDAY") that was heard by the IC and the FAST team staged on the front lawn. Recommendation #5: Fire departments should ensure that a rapid intervention team (RIT) is on the scene and in position to provide immediate assistance prior to crews entering a hazardous environment. Discussion: Fire departments should have a rapid intervention team (RIT) standing by during any fire to rescue a trapped, injured, or missing fire fighter.5 NFPA 1500, 8.5.5 states "In the early stages of an incident, which includes the deployment of the fire department's initial attack assignment, the rapid intervention crew/company shall be in compliance with 8.4.11 and 8.4.12 and be either one of the following: 1) On-scene members designated and dedicated as rapid intervention crew/company, or 2) On-scene members performing other functions but ready to re-deploy to perform rapid intervention crew/company functions."7 NFPA 1500, 8.5.7 states "At least one dedicated rapid intervention crew/company shall be standing by with equipment to provide for the rescue of members that are performing special operations or for members that are in positions that present an immediate danger of injury in the event of equipment failure or collapse."7 A fire fighter assist and safety team (FAST), equivalent to a rapid intervention team (RIT) or rapid intervention crew (RIC), was assigned and en route to this incident. Ladder 120 was the designated FAST and arrived on the scene as the initial "MAYDAY" was transmitted. Fire fighters standing by on the front lawn were the first to assist the Ladder 103 officer and Fire Fighter #1 with the victim. The narrow stairwell, the high-heat/low-visibility environment, and objects on the stairwell landing made it difficult to move the victim up the stairwell. Numerous fire fighters, in an attempt to assist with the rescue effort, blocked the area on the landing at the front door, making it difficult for fire fighters to enter and exit the front door during the rescue attempts. The RIT must have an unobstructed entry/egress point in order to facilitate the rescue effort. Assigning a Chief Officer to monitor the entry/egress point would ensure that the area would remain clear and unobstructed and that only those members assigned to the rescue assignment are working in the area. Recommendation #6: Fire departments should educate homeowners on the importance of installing and maintaining smoke detectors on every level of their home and keeping combustible materials away from heat sources. Discussion: When fire breaks out, the smoke alarm, functioning as an early warning system, reduces the risk of dying by nearly 50 percent.8 In the event of a fire, properly installed and maintained smoke alarms will provide an early warning signal to occupants.
This allows for early reporting to emergency services and a quicker response by fire department personnel, allowing fire fighters to reach and attack the fire at an earlier growth stage. Homeowners should follow the manufacturer's installation instructions.8 Witness statements provided to investigators from the Fire Marshal's Office mention that there were two smoke detectors and one fire extinguisher located in the basement. However, there were no statements regarding whether the smoke detectors were operational at the time of the fire. There were no reports of anyone hearing a smoke detector alarming at any time. The homeowners were in the kitchen and dining room area of the first floor when they first noticed the smell of smoke. One of the residents opened the door leading down to the basement and observed smoke in the stairwell. He got the fire extinguisher from the kitchen and attempted to descend the stairs but was turned back by the high volume of smoke. He closed the basement stairwell door and evacuated his family from the house while calling 911. The fire had been burning for an undetermined time prior to the family discovering and reporting it. This delayed report may have allowed the fire to grow to a more advanced stage, making it more difficult and dangerous for the fire fighters to establish an initial attack. The fire was listed by the fire investigators as accidental in nature, the result of combustibles in close proximity to a portable electric heater. Fire departments can provide public service announcements educating the residents of their communities on the hazards of storing flammable materials close to ignition sources (e.g., portable electric heaters). Recommendation #7: Although there is no evidence that the following recommendation could have specifically prevented this fatality, NIOSH investigators recommend that fire departments ensure that fire-fighting teams check each other's personal protective equipment (PPE) for complete donning. Discussion: The key to proper and effective use of PPE is the development of good habits that include fast, proper and complete donning of the appropriate PPE ensemble. Fire-fighting teams should check each other's PPE to help ensure that the equipment is fully and completely donned. This team check will help prevent burns or injury. To minimize the risk of burn injuries to the head region, it is important to ensure that the hood is donned correctly to provide maximum protection to the ears, neck and face (not protected by the SCBA face mask). Care must be taken to ensure that the hood does not interfere with the facepiece-to-face seal. Collars must be turned up to protect the wearer's neck and throat (the front of the collar must be fastened to protect the throat area). The ear flaps on the helmet must be pulled down to protect the back of the neck and the ears. The chin strap on the helmet must be fastened around the chin without obstructing the SCBA's regulator hose to ensure that the helmet stays in place upon impact.6 INVESTIGATOR INFORMATION This incident was investigated by Mark McFall, Virginia Lutz and Steve Berardinelli, Safety and Occupational Health Specialists, Surveillance and Field Investigations Branch, Division of Safety Research, NIOSH. The report was written by Mark McFall.