Martin Shkreli, the guy who was once called "the most hated man in America" and "a morally bankrupt sociopath" after obtaining the manufacturing license for the anti-parasitic drug Daraprim and raising its price from $13 to $750, got booted off Twitter on Sunday for the "targeted harassment" of Teen Vogue writer Lauren Duca, who scored points with progressives when she complained straight to Twitter CEO Jack Dorsey about his antics. Shkreli had facetiously tweeted her that he wanted to take her to the presidential inauguration, and had posted a collage of photos of her on his Twitter profile. One of them was a photo of Duca with her husband that had Shkreli's face where Duca's husband's was supposed to be. It was a juvenile prank, but in the Internet troll community—of which Shkreli is a member—that sort of thing is considered harmless good fun. The purpose is to get a rise out of the target, and Duca predictably took the bait, going straight to the principal's office. Dorsey—also predictably—came through for her, booting Pharma Bro off his social media platform, probably without offering him the simple, reasonable option of deleting the offensive material from his profile and maintaining his account. Duca had options in dealing with this "high school" issue. She could've simply ignored his puerile attempts to provoke her, blocked him, or turned the tables and mocked him mercilessly. Instead, she chose to have a straight white male protect her from her tormentor, tweeting to Dorsey, "Why is harassment an automatic career hazard for a woman receiving any amount of professional attention? Question for @jack & also society!" This got her sympathy, but she didn't mention that she'd taken the first personal shot at Shkreli in August, tweeting a photo of him drinking a beer, along with, "Martin Shkreli is literally at Guy Fieri's Flavor Town right now. I don't even know." The writer followed that with, "I left right away. I feel sick," and later, "I felt so gross. I still feel so gross." A feminist warrior in New York City is so sensitive she has to leave a restaurant because she can't share the same air with someone she hates, and over three hours later still feels "gross"? Maybe writing for Teen Vogue about no-bake unicorn cheesecake and Kim Kardashian's lip ring at the Christmas party leads one to take on the persona of the targeted readership and induces Valley Girl talk. Regardless of your opinion of Shkreli, is it shocking that he might want to take a swipe back at her after she'd obtained a degree of fame from her contentious appearance on Tucker Carlson's Fox show? Isn't it targeted harassment to use Twitter to single out someone who's minding his own business in a restaurant and make him even more of an object of scorn? Duca didn't mention that part to Dorsey because she's skilled at playing the victim card, as so many modern feminists are. Self-proclaimed victimization by the patriarchy comes in handy in immunizing Duca from criticism for tweets like, "Friendly reminder that there's an uneven playing field, and straight, white men are generally trash," and then explaining that with, "Look, I can make fun of white men because I have slept with several of them! Also, the white supremacist patriarchy." Imagine her outrage at a man referring to white women as trash, regardless of the circumstances.
On the same day that Ivanka Trump, who in many ways is a feminist exemplar, got harassed by an angry guy on an airplane while accompanied by her children, Duca tweeted, "Ivanka Trump is poised to become the most powerful woman in the world. Don't let her off the hook because she looks like she smells good." Tucker Carlson called her out on this, and she disingenuously replied that she hadn't even mentioned the plane situation, though the reference was undeniable. During their spirited exchanges, Duca complained to Carlson that he was talking over her, which he wasn't, and also called him a partisan hack simply because he challenged her fairly. At the end of the segment, an annoyed Carlson shot back with, "You should stick to the thigh-high boots," a reference to one of her Teen Vogue articles, and she called him "a sexist pig." Carlson makes cutting remarks like that all the time, regardless of gender. That's his signature move. Calling him a sexist makes Duca seem the victim, though, which plenty of media accounts of their battle confirmed. Feminist Duca wants to be the hard-edged woman (sample tweet: "I'M GOING TO KILL THE NEXT RANDOM MAN THAT TALKS TO ME"), but as soon as anyone pushes back she plays her victimized little girl role. She's had to turn to the patriarchy now—Jack Dorsey, another straight white male—to defend her against a little online trolling, so it appears she still needs men to fight her battles for her.
Gamers are a funny bunch. Board gamers, doubly so. We treasure our board games. We put plastic sleeves on our cards to keep our games immaculate and pristine. We do not tolerate spilled drinks. So writing on the board (in permanent marker!), placing stickers, and ripping up cards is both incredibly disturbing, and therapeutically cathartic. Welcome to Seafall.
A Lesson In Object Permanence
The first ever "legacy" game, Risk Legacy, had this sticker sealing the box: "what is done can never be undone". It is the most dramatic piece of board-game packaging I've encountered, and it proclaims boldly: here is something new. Risk Legacy is the greatest version of Risk you've never played, and a high watermark in Hasbro's extensive catalogue. Designed by Rob Daviau, it is avant-garde performance art masquerading as a respectable, mass-market board game (alongside Monopoly, Risk is one of the most recognisable board game titles in the world). Nothing like this had ever been done before. Normal board games are like The Simpsons. No matter what craziness ensues in an episode, the next episode starts the same way with the family on the TV couch, and there are never any lasting changes. Bart is trapped in pre-adolescent rebellion, and Homer will never age enough to lose another hair. The central conceit of legacy games is that the game doesn't reset. Not precisely. Now we're watching The Wire, or Arrested Development, or Archer, and changes stay changed. Games accumulate history. Games accumulate scars. Like the North, The Game remembers. The reality is that Seafall, and legacy games in general, are the beautiful lovechild of board games and Dungeons and Dragons. Playing a board game is now a season-long campaign, and both board and players will have scars aplenty by the end. What is done can never be undone, and the "legacy format" genie can never be put back into the bottle. Pandemic Legacy followed, lodging itself firmly at the top of the BoardGameGeek charts. Seafall is the third legacy game, and the first wholly designed by Rob Daviau.
Sailing Towards The New World
Seafall's a 4X game, channeling the spirit of Civilization or, more appropriately, Sid Meier's Colonization. We set our soundtrack accordingly, and prepared ourselves to follow in the footsteps of Messrs Columbus and Cook, da Gama and Cortez. For there is a sea to be sailed, islands to explore, goods to trade, and natives to mistreat. Doing all of the above gains you glory, and glory wins you the game. Not unlike real life, glory is fickle and fleeting, because glory also resets each game. However, in due course, the game will reveal how lasting glory can be gained and kept, and the most glorious player will be ultimately victorious at the end of the campaign. Seafall itself is sumptuous. There are miniature ships (one powerful warship, one fast clipper), custom dice, the usual plethora of cardboard, and pre-assembled treasure chests. Each player gets a chest to store their loot, but there are other treasure chests. Sealed ones, and multiple warnings to not open them. Intriguing. The mechanics are complex enough to present multiple strategies (should I play the short term game, raid the island villages, and build lasting enmity, or play a longer, gentler game of trading and building?). Future games promise possession of islands (there's a place to write my name, but no instruction to do so just yet), and growing warfare between players. But the first game starts as a gentle race to explore.
Every time a new island is explored, we open the Captain's Booke. The permanent marker comes out to mark off the areas we've explored, and like a Choose Your Own Adventure book, you are presented with an option: steal from the natives or spend time honouring their ways? Let your sailors rest, or encourage them to gather wood quickly? In so doing, we permanently unlock spaces on the board: spice farms, markets and harbours and other intriguing mysteries. One person at the table has been playing Civ 6 hard — he told me he clocked up 22 hours over the weekend. By the end he proclaimed, satisfied: "this scratches my Civ urges." That's the game — the prologue game, at least. But the real game, the real joy, is the name game. You get to name your leader, your port, your province, and your ships. When you hire advisors, they are yours to name. When you reach the target glory points, you get to name an island. Naming things in perpetuity is a daunting prospect. Should your names be whimsical or referential, geeky or mythical or serious? And the glory of our evolving game is that now it is ours. Your copy of Seafall will be different to mine. Only here, with these cards, can Captain Fluffy hire Francis Drake the explorer. Only here can Lara (Croft) guide you to buried treasure, and Dupre (from Ultima) lead your military raids, or risk your lot with that madman, Ham Sandwich. I own a lot of games — a lot of games — but this game is now truly mine.
How The Beginning Ends
Minor spoilers follow. The great preacher, CH Spurgeon, was exceedingly fond of cigars. One vigilant congregation member accused him of turning his cigars into an idol, a false god. The quick-witted Spurgeon replied, "Madam, I burn my idols every day." The end of the first game directs you to rip up cards. Not just any old card, but your character card. My character card, which I'd painstakingly named Athena. But as it turns out, gods can die after all. Other cards had been ripped up in the course of this game, but this one was the hardest. One player flat-out refused to destroy his card, and left it at the bottom of his chest for next week. As you can see, when I tried to tear up the card, I could only get halfway. Actually burning the card was excruciating. Turns out, destroying what you love is hard. I love this game already.
Updated: 2012-11-02 13:59 By Lan Lan (China Daily)
A shopper inspects energy-efficient refrigerators at a department store in Zaozhuang, Shandong province. According to a survey, 87 percent of Chinese respondents said they would be willing to pay more for greener products. [Photo/China Daily]
Majority of Chinese willing to pay more for greener goods, poll shows
The vast majority of Chinese believe climate change is taking place and most consumers are willing to pay more for eco-friendly products to reduce its effects, a survey has found. Some 93 percent of respondents said climate change is under way, while about three out of five respondents feel they have been directly affected by it. The study of 4,169 Chinese adults was carried out from July to September by the Center for China Climate Change Communication, jointly established by Renmin University of China and non-governmental organization Oxfam. About 68.4 percent of respondents said they thought China has already suffered from the effects of climate change, while about half of respondents said it will affect people in rural areas more. About 90 percent of respondents said the government should have prime responsibility for dealing with climate change, followed by the public, media, companies and NGOs. Zheng Baowei, director of Renmin University's Research Center of Journalism and Social Development, said the government should play a dominant role in adopting measures and designing policies in line with the public's expectations and interests. However, implementation of the policies will eventually rely on public participation. Some 93.4 percent of respondents felt they have knowledge of climate change, while just 6.6 percent said they had never heard of it. About 60 percent thought climate change is mainly caused by human activities, while 33 percent considered it to be mainly caused by the environment. Sun Zhen, deputy director of the Department of Climate Change at the National Development and Reform Commission, said the data might sound satisfactory, but climate change is placing increasing pressure on China. As the world is facing more extreme weather-related events, such as hurricanes, drought and floods, the government has an obligation to clarify to what extent climate change has contributed to these, Sun said. Wang Binbin, executive director of the Center for China Climate Change Communication, said addressing climate change also calls for the public to practise low-carbon ways of living and consumption, while the good news is that more Chinese consumers are willing to pay more for a greener life. Some 87 percent of those surveyed said they were willing to pay more for greener products, while more than 34 percent said they would accept a 30 percent price rise to buy such products. The survey also showed that people aged between 18 and 24 were willing to pay more for environmentally-friendly products. More than four out of five respondents said they supported the government in setting standards for mandatory garbage separation and waste recycling, adopting greener materials for construction, and producing greener cars, even if it means higher costs. Only 34 percent of respondents said they separated their garbage. lanlan@chinadaily.com.cn
Toto Wolff thinks drivers would "lose every single race" if they were allowed to call their own strategies from the cockpit. After the Brazilian Grand Prix, Lewis Hamilton said he would have liked to try a different strategy to Mercedes team-mate Nico Rosberg to try to unlock some of the pace he felt he was losing in the dirty air behind. Mercedes did not let Hamilton deviate from the three-stop strategy it put both its drivers on once Sebastian Vettel had pitted earlier than expected in the race. Mercedes boss Wolff has no problem with Hamilton questioning the team's strategy during the heat of a grand prix. "The driver in the car being emotional is understandable," Wolff said. "We hired guard dogs and we don't want to have any puppies - and we want them to be guard dogs. Sometimes it is a bit more intense but it is okay." However, he thinks the emotional state drivers find themselves in during a race is one reason strategy must always be dictated from the pit wall. "We are going to keep one strategist. If the driver in the car starts to determine strategy then he is going to lose every single race because that is not an instinct-driven decision. Your instinct might be right sometimes but if you are not having the full set of data you are going to get the majority of your races wrong, so that is why we are going to keep it the way it is." Not switching Hamilton's strategy nullified his challenge for victory, meaning the only real action came lower down the order. Responding to the charge that Mercedes' strategy made for a boring spectacle for fans, Wolff pointed out how the team has resisted implementing team orders. "As a fan I can understand that absolutely. But there are various escalations. We could have done it like some teams have in the past, having a clear number one and a clear number two, and the number two wouldn't come close to the other one. So we have changed that and sometimes it is difficult for us to manage letting the two fight each other, and now you can even say, let's take it one step further and let the strategists play against each other, but this is not where we want to go to. "Controversy within the team is detrimental and we have kept the team together, not only the drivers, the travelling team in Brackley and Brixworth, because the team comes first. From a fan standpoint, I can understand, and from a team standpoint I will give a boring answer that we are not going to change it."
MARK KARLIN, EDITOR OF BUZZFLASH AT TRUTHOUT
As Think Progress noted, Elizabeth Warren, Democratic Party senatorial candidate in Massachusetts (for the seat once held by Ted Kennedy), got to the heart of the matter about the difference between people and corporations. Warren admonished the one-time governor of her state, Mitt Romney, that: No, Governor Romney, corporations are not people. People have hearts. They have kids. They get jobs. They get sick. They thrive. They dance. They live. They love. And they die. And that matters. That matters. That matters because we don't run this country for corporations, we run it for people. That's cutting to the chase. It's not easy sharing the one prime time hour of a convention night with the master of crowd seduction, Bill Clinton, but Warren wooed the delegates and guests with words that spoke to the importance of individual lives over corporate institutions. She rebutted the deification of companies -- the bestowal of a corporate divinity that is at the epicenter of the Republican ticket -- over the value of hearts that beat in human souls. Ironically, the only lives that merit protection in the GOP platform are the unborn, not those who come into this world and are in need. It has become a cliché to appeal to the travails of the working family, but Warren lit a spark of truth to the reality of living on the "ragged edge" of surviving: I'm here tonight to talk about hard-working people: people who get up early, stay up late, cook dinner and help out with homework; people who can be counted on to help their kids, their parents, their neighbors, and the lady down the street whose car broke down; people who work their hearts out but are up against a hard truth--the game is rigged against them. The game is rigged by institutions that profit off the work of those who are barely able to get by. When life is monetized, when an individual with a beating heart is valued as nothing more than dollar signs in the eyes of vulture capitalism, then we have lost our values. Not leaving the playing field of God to the GOP, Warren recalled: I grew up in the Methodist Church and taught Sunday school. One of my favorite passages of scripture is: "Inasmuch as ye have done it unto one of the least of these my brethren, ye have done it unto me." Matthew 25:40. The passage teaches about God in each of us, that we are bound to each other and called to act. Not to sit, not to wait, but to act--all of us together. No, corporations don't have hearts. They should be at the service of people who do. Call it a divine spark or just the radiant human soul, Elizabeth Warren spoke to the heart of the matter: the blood and spirit of life run through the veins of people, not corporations.
Intel became the world's largest semiconductor company by revenue in 1992 when it surpassed NEC. It has held the top spot ever since, but 25 years later, as predicted, Intel is now just like NEC: being replaced. The new world's largest semiconductor company is Samsung. According to the Associated Press, for the April-June quarter, Samsung earned $7.2 billion on sales of $15.8 billion. Intel earned $2.8 billion on sales of $14.8 billion, pushing it into second place overall. However, Patrick Moorhead, principal analyst with Moor Insights & Strategy, said we will likely see the two companies swapping positions a few times. For example, Intel could claim back the top spot in six months as it ramps up memory production to full capacity. Samsung's rise to the top is thanks to mobile devices and memory. Consumers can't get enough of mobile gadgets, which are typically full of Samsung parts. We also all want more and more memory, so Samsung's flash memory business is looking very healthy. And it doesn't look like that demand is going to drop off anytime soon. So while we all continue to buy lots of mobile gadgets, the PC market continues its five-year slump. That certainly poses a problem for Intel going forward. Even so, the former world's largest semiconductor company is still on course to post $60 billion in annual sales for the year.
Scarcely 24 hours after being fired as manager of Toronto FC, Preki insists he has no regrets about the job he did in Toronto, and that he doesn't intend to alter his highly demanding style of coaching. With Toronto languishing five points outside of the playoffs, the former U.S. international star was relieved of his duties Tuesday, along with team director of soccer Mo Johnston. Former assistant and one-time Canadian U-23 head coach Nick Dasovic was tabbed to replace Preki on an interim basis. Preki was hired last November after a successful three-year stint with Chivas USA that saw him reach the playoffs every year, yet was unable to replicate that success with Toronto. "I'm [leaving] the job with my head held high," Preki said in a telephone interview. "I had all the best of intentions and the job that I did over the past six, seven months, I really have no regrets." When asked what went wrong during his tenure, Preki hinted at a lack of support from owners Maple Leaf Sports & Entertainment, although he refused to get specific. "The last two to three months there were some really challenging and difficult circumstances," he said. "At some point down the line, I'll say more." Preki long has had a reputation as a strict disciplinarian, and with the team struggling through a 1-6-3 stretch, it was widely reported that a player mutiny had ensued. Toronto FC captain Dwayne De Rosario admitted that Preki's style grated on many players, and midfielder Julian De Guzman even went so far as to question Preki's tactical acumen. "I couldn't put my finger on the type of system or my role, what we were trying to do, it was like freestyle," de Guzman told The Toronto Sun. "We play against Cruz Azul and we're amazing, and then against D.C. [United] you're the worst thing out there. It's like a gamble going into a lot of the games." When asked to respond to the comments of his former charges, Preki said: "I don't need to elaborate on the things that De Rosario and De Guzman said. You can see the work I've done with Chivas USA. It's a couple of Canadian guys making those comments. That's all I have to say about that." There were widespread reports that Preki also had a falling out with Dasovic, with the assistant conspicuously absent from the bench the last few games. When asked if he believed he was undermined by Dasovic, Preki said: "He's in charge now. It's up to you to guess." Preki added that he plans to spend the coming weeks trying to collect his thoughts. But he also indicated that he sees no reason to change his approach to coaching, one that worked well while he was with Chivas USA. "I just think that if you come in every day and you ask for commitment and hard work, that's not too much to ask for," he said. "That's the bottom line." Jeff Carlisle covers MLS and the U.S. national team for ESPNsoccernet.
At long last, AMD has launched the second of its so-called Fusion "APUs," where APU stands for "accelerated processing unit" and refers to a single chip that hosts both a Central Processing Unit (CPU) and a Graphics Processing Unit (GPU). Anandtech is first out of the gate with benchmarks for AMD's Llano testbed notebook, and the results show that the new chip is a win for AMD in two departments. Llano's battery life is excellent, besting nearly all comers in terms of efficiency—only AMD's "Brazos" platform, with its simple Bobcat core, beats Llano in this department. This is the first time in a long time that AMD has been competitive with Intel in mobile power draw. Llano's other big win is in graphics, about which we'll talk more in a moment. But before jumping into that discussion, we should briefly mention where Llano lags behind: the CPU. The 32nm "Stars" cores that form the CPU side of Llano are a straightforward shrink of AMD's existing and venerable Phenom architecture, with a few minor updates. As such, these cores are significantly underpowered compared to Intel's Sandy Bridge cores, and the benchmarks show it. On CPU-bound workloads, Llano gets a sound drubbing from Intel's Sandy Bridge. So the CPU side of Llano is its Achilles heel, a fact that will keep Llano confined to the role of a budget alternative to Sandy Bridge, and not a competitor.
Surprises from Llano's GPU
The GPU is where Llano gets interesting. In both its mobile and desktop incarnations, Llano's GPU is a DX11-class GPU with 400+ shaders that takes up almost half of the processor die. Given that this GPU is a real, general-purpose graphics coprocessor, this makes Llano equal parts GPU and CPU. (Compare to Intel's Sandy Bridge, where the GPU is quite a bit smaller than the CPU region.) The fact that Llano's GPU is so beefy is both a blessing and a curse. Unlike Sandy Bridge's relatively anemic GPU (which gets a huge performance boost from moving onto the processor die), Llano's GPU has so much horsepower that it's severely memory-bottlenecked in the on-die configuration. If the desktop Llano's 6550D GPU were put on a discrete graphics card with its own pool of fast GDDR, it would actually out-perform the integrated configuration. This is not something that I expected or predicted, but it makes sense. To see what I mean, check out Anand's benchmarks of Llano in a desktop configuration. When used with an overclocked memory bus (DDR3-1866), Llano leaps up in the rankings and lands squarely in budget discrete GPU territory. Clearly Llano's GPU is massively memory-bound, and CPU and GPU together are suffering from a lack of memory bandwidth in the APU socket. This fact has a few interesting implications. First, it means that AMD can boost Llano's performance significantly in future versions simply by adding another dual-channel DDR3 controller and introducing a new socket. Or AMD could also push the more widespread use of higher-clocked DDR. By far the most interesting implication of the Llano results, however, is that this is a chip that would benefit massively from the addition of some kind of IBM-style pool of on-die DRAM (i.e., PS2-style scratchpad RAM). Right now, Llano's CPU and GPU are connected via an internal bus, which is great as far as it goes. But if Llano had a large enough pool of on-die memory on the same bus, it would put it into a whole new performance league. (See below.)
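To make the bandwidth argument concrete, here is a quick back-of-the-envelope sketch of why the shared memory interface is the choke point. Peak DRAM bandwidth is simply transfers per second times bytes per transfer times the number of channels; the DDR3-1333 baseline and the 128-bit GDDR5 card running at 4 GT/s are illustrative assumptions of mine, not figures taken from Anand's benchmarks.

```python
# Rough peak-bandwidth comparison for the memory pools discussed above.
# The specific speed grades (DDR3-1333/1866, a 128-bit GDDR5 card at 4 GT/s)
# are illustrative assumptions, not numbers quoted in the article.

def peak_bandwidth_gbs(transfer_rate_mts: float, bus_width_bits: int = 64,
                       channels: int = 1) -> float:
    """Theoretical peak bandwidth in GB/s: MT/s * bytes per transfer * channels."""
    return transfer_rate_mts * 1e6 * (bus_width_bits / 8) * channels / 1e9

# System DRAM feeding Llano's integrated GPU -- shared with the CPU cores.
print(f"Dual-channel DDR3-1333: {peak_bandwidth_gbs(1333, channels=2):.1f} GB/s")
print(f"Dual-channel DDR3-1866: {peak_bandwidth_gbs(1866, channels=2):.1f} GB/s")

# A budget discrete card with its own 128-bit GDDR5 pool (assumed 4 GT/s),
# none of which has to be shared with a CPU.
print(f"128-bit GDDR5 @ 4 GT/s: {peak_bandwidth_gbs(4000, bus_width_bits=128):.1f} GB/s")
```

Even the overclocked DDR3-1866 configuration tops out around 30 GB/s shared between the CPU and GPU, versus roughly double that dedicated to the GPU on even a modest discrete card, which is consistent with the bandwidth-bound behavior described above.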
Ultimately, Llano as it currently exists is in fine shape in the notebook segment, where it handily beats Intel's Sandy Bridge in almost every gaming benchmark (the exceptions are one or two CPU-bound titles). Despite the memory bottleneck, the GPU in Llano really delivers the gaming goods, and on a price/performance basis it sets a new standard for budget mobile gaming. Llano is so capable that it's able to compete with midrange discrete mobile graphics solutions, a fact that reinforces just how much trouble NVIDIA is in with this particular market. In Anand's desktop preview, the desktop version of Llano puts up a solid showing against Intel's Sandy Bridge, where, again, it easily bests its opponent in all but a few CPU-bound benchmarks. All told, Llano is a solid entry into the mobile market that will make a worthy budget alternative to Intel's mobile Sandy Bridge. On the desktop, Llano looks to be similarly positioned, but given that testing at most sites is still ongoing, we'll have to reserve judgment until the official launch.
Postscript: possible futures for Fusion
The Llano + eDRAM idea mentioned above is obviously not going to happen with Llano—the current design is transitional, and will live out its life as a budget part. But who's to say that AMD won't do something like this in a future APU? AMD has made it clear that Llano is just a bridge design—halfway between the integrated graphics of the previous generation and the type of true heterogeneous multiprocessing that will characterize future Fusion efforts. AMD hasn't really let on what the ultimate Fusion part will look like architecturally, but the endgame for Fusion is probably a giant pool of shader cores, a small number of CPU cores, and a big enough pool of shared, on-die memory that the CPU and GPU can cut way down on the amount of memory traffic that goes off-die. This pool of shared memory could be a wired-down section of L3 cache (IBM does this with its game console chips), or it could be a separate pool of "scratchpad memory" that the CPU and GPU have access to (Sony and Toshiba did this with the PS2's Emotion Engine). From the perspective of boosting graphics performance, the latter seems preferable to me, but my knowledge in this area is spotty, so more informed readers should feel free to correct me in the comments. About four years ago, when IBM first began making waves with its eDRAM efforts, the idea that Big Blue's eDRAM cells might show up on an AMD processor was commonly floated. IBM and AMD have collaborated in the past on fab technology (prior to the Globalfoundries spinoff), so the idea is by no means out of the question. IBM's POWER7 chip, a derivative of which will probably show up under the hood of the Wii U, sports a giant 32MB pool of on-die eDRAM in the form of an L3 cache. Even a fraction of that amount of memory added to Llano could cut back on a lot of off-die memory traffic and give a major leap in graphics performance.
Effect of time of eCG administration on the fate of ovarian follicle in Holstein heifers. The objective of this study was to investigate the effect of equine chorionic gonadotropin (eCG) on ovarian follicles at three stages of development (emergence, dominance and early static phases) during the first follicular wave (FFW) in Holstein heifers. Heifers (n=20) were randomly assigned into four experimental groups (n=5 in each group). Heifers received eCG (500 IU; Folligon®; Intervet, Holland; i.m.) a) on the day of follicle emergence (day of ovulation; group 1), b) in the dominance phase (dominant follicle (DF): the first day on which the follicle was observed at ≥10 mm; group 2), and c) in the early static phase (group 3) of the FFW. Control group heifers did not receive any treatment. Daily ultrasonography was conducted to monitor ovarian structures throughout the estrous cycle. All treatment group heifers, regardless of the stage of follicle development, displayed follicle growth after eCG injection. Administration of eCG in group 1 hastened DF detection and induced co-dominant follicles, whereas in groups 2 and 3 it delayed DF regression and increased cycle length compared to control. In all treatment group heifers, DF was present 84 h after eCG injection. Maximum diameter of the corpus luteum was larger in eCG-treated groups compared to control (P<0.05). In conclusion, depending on the time of eCG administration throughout the FFW (emergence, dominance and early static phases), co-dominance, maintenance of DF, enhancement of follicle and corpus luteum growth and an increase in estrous cycle length could be observed in Holstein heifers.
Neuroprotective Effects of Calmodulin Peptide 76-121aa: Disruption of Calmodulin Binding to Mutant Huntingtin
Abstract
Huntington's disease (HD) is a neurodegenerative disease caused by mutant huntingtin protein containing an expanded polyglutamine tract, which may cause abnormal protein–protein interactions such as increased association with calmodulin (CaM). We previously demonstrated in HEK293 cells that a peptide containing amino acids 76-121 of CaM (CaM-peptide) interrupted the interaction between CaM and mutant huntingtin, reduced mutant huntingtin-induced cytotoxicity and reduced transglutaminase (TG)-modified mutant huntingtin. We now report that adeno-associated virus (AAV)-mediated expression of CaM-peptide in differentiated neuroblastoma SH-SY5Y cells, stably expressing an N-terminal fragment of huntingtin containing 148 glutamine repeats, significantly decreases the amount of TG-modified huntingtin and attenuates cytotoxicity. Importantly, the effect of the CaM-peptide shows selectivity, such that total TG activity is not significantly altered by expression of CaM-peptide, nor is the activity of another CaM-dependent enzyme, CaM kinase II. In vitro, recombinant exon 1 of huntingtin with 44 glutamines (htt-exon1-44Q) binds to CaM-agarose; the addition of 10 µM of CaM-peptide significantly decreases the interaction of htt-exon1-44Q and CaM but not the binding between CaM and calcineurin, another CaM-binding protein. These data support the hypothesis that CaM regulates TG-catalyzed modifications of mutant huntingtin and that specific and selective disruption of the CaM-huntingtin interaction is potentially a new target for therapeutic intervention in HD.
INTRODUCTION
Huntington's disease (HD) is an autosomal dominant neurodegenerative disease characterized by chorea and cognitive disturbances (27). The gene involved in HD, interesting transcript 15 (IT15), encodes for the protein huntingtin (1). In HD, IT15 has an expanded CAG trinucleotide repeat in its first exon, resulting in a mutated form of the huntingtin protein containing an expanded polyglutamine tract in its N-terminus (34). Medium spiny neurons in the striatum are selectively vulnerable to neurodegeneration in HD (47). In HD brains, insoluble deposits that contain amino-terminal fragments of huntingtin with an expanded glutamine repeat are found in neurons (12). Huntingtin is a substrate of transglutaminase (TG), and, as the length of the glutamine repeat is increased, the protein becomes a better substrate (7,16,21,23). It has been hypothesized that TG modifies huntingtin, thereby aiding in the stabilization of monomeric huntingtin (48), and the formation and stabilization of huntingtin-containing aggregates (7,17). Huntingtin has been shown to interact with many proteins, including cyclic adenosine monophosphate (cAMP) response element-binding protein (CREB)-binding protein (CBP) (26), huntingtin-interacting protein 1 (HIP1), the spliceosome protein HYPA (18), and calmodulin (CaM) (3). It has also been reported that mutant huntingtin containing an expanded glutamine repeat interacts with CaM with a higher affinity than wild-type huntingtin (3). Interestingly, CaM, a calcium (Ca²⁺)-binding protein that activates many enzymes, also interacts with TG and increases TG activity (36). We previously demonstrated that CaM colocalizes with TG2 and huntingtin protein in intranuclear inclusions in the HD cortex (49).
Furthermore, inhibition of CaM results in decreased TG-catalyzed modifications of huntingtin in cells expressing mutant huntingtin and TG2 (49). TGs (EC 2.3.2.13) are a family of enzymes that catalyze a calcium-dependent acyl-transfer reaction between the γ-carboxy group of a protein-bound glutamine and either the ε-amino group of a protein-bound lysine or a primary amine, resulting in the formation of an ε-(γ-glutamyl)lysine bond (15). TG mRNA, protein levels and activity are all elevated in an HD brain (23,25,50). TG-catalyzed ε-(γ-glutamyl)lysine bonds and TG2 both colocalize with huntingtin in intranuclear inclusions in HD brain (48). Treatment of HD-transgenic mice with cystamine, a TG inhibitor, or knocking out TG2 increases survival (2,11,28,46). Similarly, treating cells that express N-terminal mutant huntingtin and TG2 with cystamine increases cell survival and decreases the amount of TG-catalyzed modifications of huntingtin (50). In HD, mutant huntingtin may alter the normal interaction between CaM and TG. One result could be an increased interaction between CaM and huntingtin resulting in a subsequent increase in TG activity. Another possible result could be sequestration of CaM, thereby inhibiting its biochemical functions such as activation of nitric oxide synthase (NOS), an enzyme which has been shown to have decreased activity in an HD mouse model (8). Previously, in HEK-293 cells transiently expressing N-terminal huntingtin with an expanded polyglutamine repeat and TG2, we found that a peptide containing amino acids 76-121 of CaM (CaM-peptide) and encoded by a fragment of exons 4 and 5 of the CaM gene was able to decrease the TG-catalyzed modifications of mutant huntingtin, the cytotoxicity induced by mutant huntingtin and the binding of mutant huntingtin to CaM (13). The goal of the current study was to examine the effect of adeno-associated viral (AAV)-mediated expression of that peptide (CaM-peptide) in differentiated neuroblastoma SH-SY5Y cells that stably express an N-terminal (63 amino acids in length) fragment of huntingtin containing 148 glutamines (SH-SY5Y-htt-N63-148Q cells). Previous studies demonstrated that a fragment of CaM from amino acid 78 to 148 was able to inhibit CaM-induced stimulation of phosphodiesterase and myosin light chain kinase (MLCK) (29,30). Therefore, we hypothesized that CaM-peptide would compete with endogenous full-length CaM for binding to mutant huntingtin, resulting in inhibition of the endogenous CaM-mutant huntingtin interaction. We examined the effects that CaM-peptide had on TG-catalyzed modifications of mutant huntingtin, cytotoxicity associated with mutant huntingtin, total TG activity and binding of CaM to exon 1 of mutant huntingtin.
AAV vector construction
A fragment of exons 4 and 5 of the CaM gene, encoding amino acids 76-121 of CaM.
Cell lysates were mixed with [γ-32P]adenosine triphosphate (ATP) (3000 Ci/mmol, 10 mCi/mL), activation buffer containing 5 mM CaCl2 and 5 µM calmodulin or control buffer containing 5 mM EGTA, with and without biotinylated CaM kinase II substrate. After incubation at 30°C for 2 minutes, the reaction was terminated by adding 7.5 M guanidine hydrochloride and spotted to a streptavidin-impregnated membrane. Membranes were washed and retained radioactivity was quantitated by liquid scintillation counting (Beckman, Fullerton, CA, USA).
Radioactive counts were converted to endogenous CaM kinase II activity in the sample (expressed as pmol/min/µg of protein).
After addition of the lysate supernatant, the column was then washed with 12 column volumes of PBS pH 7.4, followed by elution of the fusion protein with two column volumes of PBS with 10 mM maltose. The fusion protein was further purified by incubating with Ni-NTA agarose (Qiagen, Valencia, CA, USA) in PBS supplemented with 10 mM imidazole with rocking for 2 h at 4°C. The resin was then washed three times with PBS supplemented with 20 mM imidazole, and fusion protein was eluted with six resin volumes of PBS supplemented with 250 mM imidazole. Purified fusion protein was analyzed by SDS-polyacrylamide gel electrophoresis (PAGE) and visualized by Coomassie staining.
In vitro mutant huntingtin-CaM binding
Binding of purified mutant huntingtin to calmodulin was studied.
Statistics
All statistical analyses were performed using GB Stat software (Dynamic Microsystems, Inc., Silver Spring, MD, USA). Data are expressed as means ± standard error of the mean (SEM). Analysis of Variance (ANOVA) or a two-tailed Student's t-test was used to analyze the data. Post-hoc comparisons were conducted using a Newman-Keuls test.
(Figure 2B). However, these values were lower than previously measured for this MOI, perhaps caused by conditions associated with neuronal differentiation.
AAV-mediated expression of CaM-peptide decreases TG-catalyzed modifications of mutant huntingtin in SH-SY5Y-htt-N63-148Q cells
With the cell and viral system now defined, we determined the effect of viral-mediated expression of CaM-peptide in differentiated SHSY5Y-htt-N63-148Q cells on TG-catalyzed modifications of N-terminal mutant huntingtin. Cells were differentiated, infected with either AAV-CaM-peptide + GFP, AAV-scram-CaM-peptide + GFP or AAV-GFP (MOI = 50), and forty-eight hours post-infection cells were assayed for TG-modified N-terminal mutant huntingtin. SHSY5Y-htt-N63-148Q cells expressing CaM-peptide had a significantly lower amount of TG-modified N-terminal huntingtin compared with cells expressing scram-CaM-peptide or only GFP (Figure 3).
Effect of AAV-mediated expression of CaM-peptide on total TG activity in neuroblastoma cell lines that stably express N-terminal mutant huntingtin
Next, we examined the effect of expression of the CaM-peptide on total TG activity in differentiated non-htt-SHSY5Y and SHSY5Y-htt-N63-148Q cells. Cells were infected with either AAV-CaM-peptide + GFP, AAV-scram-CaM-peptide + GFP or AAV-GFP (MOI = 50), and harvested forty-eight hours post-infection. TG activity was measured ex vivo based on the incorporation of 5-(biotinamido)pentylamine into the N,N′-dimethylcasein substrate coated onto micro-plates. There were no significant differences in total TG activity between the different infections in either non-htt-SHSY5Y cells or SHSY5Y-htt-N63-148Q cells (Figure 5A). There were small but insignificant increases in total TG activity in SHSY5Y-htt-N63-148Q cells expressing scram-CaM-peptide or only GFP, compared with non-htt-SHSY5Y cells expressing either GFP only, scram-CaM-peptide or CaM-peptide (Figure 5A). To determine whether cell lysis had an effect on enzyme activity, we measured total TG activity in situ. Non-htt-SHSY5Y and SHSY5Y-htt-N63-148Q cells were infected with one of the various AAVs and, 42 h post-infection, were treated with 5-(biotinamido)pentylamine.
Six hours later, cells were harvested and the cell lysates were applied to micro-plates coated with anti-ε-(γ-glutamyl)lysine (81D4) antibody in order to capture proteins that were modified by TG in situ. TG-catalyzed incorporation of 5-(biotinamido)pentylamine into cellular proteins was then determined by incubation with streptavidin-HRP. Similar to the ex vivo assay, there was no significant difference in total TG activity between the various infections in SHSY5Y-htt-N63-148Q cells (Figure 5B). Similarly, there also was no difference in total TG activity in the various infections in non-htt-SHSY5Y cells. However, there was a significant decrease in total TG activity in non-htt-SHSY5Y cells compared with SHSY5Y-htt-N63-148Q cells expressing scram-CaM-peptide or only GFP. Total TG activity in non-htt-SHSY5Y cells was not significantly different from total TG activity in SHSY5Y-htt-N63-148Q cells expressing CaM-peptide (Figure 5B).
[Figure 5: Total TG activity measured ex vivo (A) and in situ (B) in non-htt-SHSY5Y and SHSY5Y-htt-N63-148Q cells expressing CaM-peptide, scram-CaM-peptide or GFP.]
CaM-peptide does not significantly affect CaM kinase II activity
To determine if expression of CaM-peptide has effects on the activity of other CaM-dependent enzymes, we examined whether expression of CaM-peptide affects CaM-dependent protein kinase II (CaM kinase II) activity. SH-SY5Y cells were transfected with vector, htt-N63-148Q, CaM-peptide or a combination of htt-N63-148Q and CaM-peptide. We found no significant differences in CaM kinase II activity among the various transfections (Figure 6). Interestingly, expression of CaM-peptide alone resulted in a small but insignificant increase in CaM kinase II activity compared with all other transfections. We also examined the activity of CaM kinase II without the addition of exogenous CaM (addition of exogenous CaM is recommended, as described in the manufacturer's protocol). There was no significant effect of CaM-peptide under these assay conditions either (Figure 6).
CaM-peptide inhibits binding of N-terminal mutant huntingtin with an expanded polyglutamine repeat and CaM
Thus far, all the experiments performed to examine the effects of CaM-peptide were done in cells where other endogenous proteins could play a role in mediating the action of CaM-peptide. Therefore, an in vitro assay was used to determine if CaM-peptide could directly interfere with the interaction of N-terminal mutant huntingtin and CaM. CaM-agarose was used to immunoprecipitate recombinant purified huntingtin exon 1 with an expanded polyglutamine repeat (htt-exon1-44Q) in the absence and presence of varying concentrations of CaM-peptide. The amount of CaM-bound htt-exon1-44Q was significantly lower when 10 µM of CaM-peptide was present than when CaM-peptide was absent (Figure 7A,B).
To determine if CaM-peptide would nonspecifically inhibit the interaction of CaM with any CaM-binding protein, we incubated calcineurin with CaM-agarose in the absence and presence of 10 µM of CaM-peptide. The presence of 10 µM of CaM-peptide did not affect the binding of CaM with calcineurin (Figure 7C,D). Next, to investigate the potential site of interaction of CaM-peptide, the CaM antagonist, W-5, was used along with CaM-peptide. W-5 alone did not affect the amount of CaM-bound htt-exon1-44Q, but once again the amount of CaM-bound htt-exon1-44Q was significantly lower when 10 µM of CaM-peptide was present. However, the amount of CaM-bound htt-exon1-44Q was significantly increased when 664 µM W-5 was present along with 10 µM of CaM-peptide (Figure 8).
DISCUSSION
The disease-causing mutation in HD is an expanded polyglutamine repeat in huntingtin protein. This mutant form of huntingtin becomes a better substrate for TG (6,16,21,23,50) and has increased interaction with CaM (3,49). Interestingly, TG is positively regulated by CaM (5,36,49). Therefore, we hypothesized that the interactions between mutant huntingtin, CaM and TG may be deleterious. Previously, we found in HEK-293 cells transiently expressing N-terminal mutant huntingtin, TG2 and a peptide containing amino acids 76-121 of CaM (CaM-peptide) that TG-catalyzed modifications of mutant huntingtin were reduced, cytotoxicity associated with mutant huntingtin protein was reduced and carbachol-stimulated calcium release was normalized (13). In order to test the effects of CaM-peptide in an HD transgenic mouse model, we created an AAV which mediates the expression of CaM-peptide. Before proceeding to an HD animal model, we first tested the effects of AAV-mediated expression of CaM-peptide in a neuronal HD cell model as neurons are affected in HD and the effects of CaM-peptide have only been tested in kidney cells. We created neuroblastoma SH-SY5Y cells which stably express N-terminal mutant huntingtin and endogenously express TG2 (SHSY5Y-htt-N63-148Q cells; Figure 2A) and examined the effect of AAV-mediated expression of CaM-peptide on TG activity and mutant huntingtin-associated neurotoxicity. Furthermore, we induced differentiation of SHSY5Y cells prior to testing the effects of the CaM-peptide to increase the levels of TG expressed in the cells and to acquire a neuronal phenotype, both effects producing a model more closely resembling cells that degenerate in HD. AAV-mediated expression of CaM-peptide was sufficient to result in effects in SHSY5Y-htt-N63-148Q cells similar to effects previously seen in HEK-293 cells. TG-catalyzed modifications of mutant huntingtin and cytotoxicity were attenuated in SHSY5Y-htt-N63-148Q cells expressing CaM-peptide compared with SHSY5Y-htt-N63-148Q cells expressing scram-CaM-peptide or GFP. These findings make it plausible that AAV-mediated expression of CaM-peptide could potentially have beneficial effects in an HD transgenic mouse model. CaM-peptide was able to attenuate TG-catalyzed modifications to mutant huntingtin, but it was unclear whether this effect was specific to TG-catalyzed modifications of mutant huntingtin or if it was a non-specific effect on TG-catalyzed modifications of all TG substrates. We used two different approaches to measure total TG activity, an ex vivo approach in which total TG activity is measured in lysed cells, and an in situ approach in which an exogenous substrate is added to the media, taken up by cells and then modified by TG in situ.
Using both the ex vivo and the in situ approach, TG activity was not significantly affected by expression of CaM-peptide in either SHSY5Y cells not expressing mutant huntingtin (non-htt-SHSY5Y) or in SHSY5Y-htt-N63-148Q cells. TG activity was significantly increased in SHSY5Y-htt-N63-148Q cells expressing scram-CaM-peptide or GFP compared with non-htt-SHSY5Y cells as measured with the in situ assay. This is similar to the disease state in which there is elevated TG activity in the brains of HD patients compared with control subjects (23,25). However, TG activity in SHSY5Y-htt-N63-148Q cells expressing CaM-peptide was not significantly different from non-htt-SHSY5Y cells or from SHSY5Y-htt-N63-148Q cells expressing scram-CaM-peptide or GFP. The results suggest that CaM-peptide does not affect TG activity in cells without mutant huntingtin expression, further indicating that the effects of CaM-peptide are restricted to the site of interaction between TG and mutant huntingtin and therefore do not significantly affect total TG activity. Based on the total TG activity assays, it appeared as though CaM-peptide specifically alters TG activity associated with mutant huntingtin but not total TG activity. However, it was unclear whether CaM-peptide also alters the activity of other CaM-dependent enzymes. This was important to investigate as not all CaM-dependent enzymes may have increased activity in HD like TG, or may not even be altered or involved in HD. TG protein levels and activity are elevated in human HD brains (24,25) and there is increased TG activity as well as immunoreactivity in R6/2 HD transgenic mouse brains (11). In contrast to TG, CaM kinase II protein levels are decreased in R6/2 HD transgenic mouse brains (10), suggesting that there may be a subsequent decrease in CaM kinase II activity. However, we found that in SHSY5Y cells transfected with mutant huntingtin, there were no significant differences in CaM kinase II activity compared with control cells transfected with vector. This suggests that CaM kinase II activity may not be altered in HD as TG activity is. It is also important to determine whether expression of CaM-peptide would alter CaM kinase II activity. We found that expression of CaM-peptide did not significantly alter CaM kinase II activity in the HD-model cells or in control cells compared with cells not expressing CaM-peptide, suggesting that CaM-peptide does not alter the activity of other CaM-dependent enzymes. Collectively, these data suggest that CaM-peptide may specifically affect the CaM-mutant huntingtin interaction, thereby primarily altering TG-catalyzed modifications of mutant huntingtin. As all of the previously observed effects of CaM-peptide have been in HD cell models, we used a cell-free in vitro assay to determine if there is a direct interaction between mutant huntingtin and CaM, and whether the CaM-peptide could interrupt the interaction. When purified recombinant huntingtin exon 1 with an expanded polyglutamine repeat was incubated with CaM-agarose, we were able to detect a direct interaction between the two proteins. This result differs from previous work in which a ¹²⁵I-CaM overlay experiment was performed with purified rat huntingtin and ¹²⁵I-CaM failed to bind huntingtin, suggesting that huntingtin and CaM did not bind directly (3). Differences in experimental design could account for the different outcomes.
In comparison to the previous study that used full-length, wild-type huntingtin purified from rat brain, we used recombinant huntingtin consisting of exon 1 and having an expanded polyglutamine repeat containing 44 glutamines (htt-exon1-44Q). One possibility is that the polyglutamine expansion in the mutant form of huntingtin increases its affinity for CaM, allowing huntingtin to interact with CaM directly. Another possibility is that the exon 1 fragment of huntingtin has an altered conformation in which the CaM-binding region may be exposed, where otherwise it is masked in the full-length huntingtin structure. Furthermore, the wild-type huntingtin protein sample was resolved by SDS-PAGE and transferred to a membrane which was incubated with ¹²⁵I-CaM. It has been shown that some proteins are unable to renature upon transfer to a membrane (4). Possibly, the CaM-binding domains of the 350 kDa huntingtin protein failed to renature on the membrane. We performed our CaM-binding experiments by incubating the htt-exon1-44Q protein sample with CaM-agarose, allowing huntingtin to be in a native state. Lastly, the relative protein concentrations used in the previous experiments may be problematic. In the previous ¹²⁵I-CaM overlay experiments, ¹²⁵I-CaM failed to bind purified rat huntingtin but bound the positive control, calcineurin. However, only 150 ng (0.43 pmoles) of purified rat huntingtin was used but 500 ng (6.25 pmoles) of calcineurin was used. Calcineurin is one of the highest affinity CaM-binding proteins examined to date (having a Kd of 28 pM for binding of calcineurin to calcium-saturated CaM) (37). Therefore, it is likely that huntingtin has a lower affinity for CaM compared with calcineurin. In the previous study, an interaction between huntingtin and CaM may not have been detected simply because of the problem that there was not enough huntingtin present. In this study, after a direct interaction between htt-exon1-44Q and CaM was established, we found that 10 µM of CaM-peptide was able to significantly inhibit the interaction between CaM and htt-exon1-44Q. The peptide concentration of 10 µM that was needed to inhibit CaM-mutant huntingtin binding is similar to concentrations required for other synthetic peptides to inhibit protein-protein interactions (14). The presence of W-5 did not affect the interaction between htt-exon1-44Q and CaM. These data suggest that in our previous study, the decreased TG-catalyzed modifications of huntingtin we observed when inhibiting CaM with W-5 (49) were most likely caused by inhibition of CaM activity and not caused by inhibiting the binding of CaM to mutant huntingtin. Furthermore, the presence of 664 µM W-5, but not lower concentrations (66 µM, 210 µM), attenuated the effect of 10 µM CaM-peptide, i.e., its ability to inhibit CaM-mutant huntingtin binding. However, the amount of CaM-bound htt-exon1-44Q was still significantly less than when CaM-peptide was not present. This suggests that CaM-peptide may bind CaM at the site of interaction of W-5, but that CaM-peptide has a higher affinity, necessitating a high concentration of W-5 (half a log more than the IC50 of 210 µM) (20,40) to inhibit binding of CaM-peptide to CaM. Lastly, the lack of an effect of CaM-peptide on the interaction of CaM and calcineurin suggests that CaM-peptide may act directly on CaM and mutant huntingtin and not on other proteins that interact with CaM.
As calcineurin has such a high affinity for CaM, the effect of CaM-peptide on the interactions of CaM and other CaM-binding proteins should be examined. However, the expression of the CaM-peptide reduced cytotoxicity in both the neuronal cell model and in a kidney cell model, which suggests that the CaM-peptide is not likely to interfere with crucial interactions of CaM with other proteins. We previously demonstrated that CaM is associated with the aggregates of mutant huntingtin protein (49). Other lines of evidence also support a role for CaM in HD. The CaM-binding protein, PEP-19, which inhibits calcium-CaM signaling (38,39), has been shown to be decreased in HD brain regions that are vulnerable to degeneration (45). This decreased inhibition of CaM, combined with the close proximity of TG and CaM (caused by the interaction of mutant huntingtin with both proteins), may contribute to CaM-mediated over-stimulation of TG. It has been shown that TG activity is increased in HD brains (23,25). Increased TG activity can result in increased modifications and stabilization of aggregated and monomeric mutant huntingtin as well as oligomers of mutant huntingtin fragments. The binding of CaM to exon 1 of mutant huntingtin may result in the inhibition of the normal biochemical functions of CaM such as activating NOS. It has been shown that in R6/2 HD transgenic mice, NOS activity is reduced and NOS inhibition accelerates behavioral deficits (9). There are abnormally high levels of cytosolic calcium in HD transgenic mice (43), and mitochondria from HD patients display abnormal calcium homeostasis (33). Furthermore, huntingtin with an expanded glutamine repeat increases inositol 1,4,5-trisphosphate receptor-mediated calcium release (41)(42)(43). These deleterious effects on calcium homeostasis suggest that returning the functioning of CaM toward normal would be beneficial. The ability of the CaM-peptide to inhibit the binding of CaM and N-terminal mutant huntingtin could result in increased functioning of CaM, assuaging the disturbed calcium homeostasis as previously shown in HEK-293 cells (13). Our current findings illustrate that the expression of CaM-peptide in a neuronal HD cell model results in decreased TG modifications of mutant huntingtin and decreased cytotoxicity. Our in vitro findings support the hypothesis that expression of a small fragment of CaM can regulate the binding of CaM and mutant huntingtin, and taken together with the cell culture data, suggest that interrupting this binding provides neuroprotective effects against the detrimental consequences associated with mutant huntingtin expression. Importantly, the CaM-peptide shows selective effects on TG-mediated modifications to mutant huntingtin and does not affect any other CaM-regulated enzymes or TG activity in general. The AAV-expressed CaM-peptide will next be tested in an HD transgenic mouse model. Inhibiting the interaction of CaM with mutant huntingtin protein is a potential target for therapeutic intervention in HD.
The president of Checker Yellow Cabs says he’s pleased police appear to be paying closer attention to incidents of abuse against cabbies after a woman was charged with assault for allegedly throwing a bag of vomit on one of his drivers. Kurt Enders said he believes drivers are now also bringing these issues “to the forefront, because things are starting to get done and charges are being laid.” He added passengers do get sick in cars, probably every weekend, but this is the first time he’s heard of a bag of vomit being thrown at one of his drivers, calling this “an extreme case.” The assault charge was laid after the driver came forward at an open house attended by taxi drivers and police officers. Police learned the incident took place around 3:20 a.m. on Aug. 23, when the taxi went to pick up a woman near 1st Street and 4th Avenue S.E. in downtown Calgary, and drove her to a home in the northwest Sandstone Valley area. On the way there, the passenger threw up in the vehicle. The cabbie gave her a bag, then continued driving. Once they arrived at the home, the passenger continued vomiting on the exterior of the vehicle. When the driver requested a cleanup fee, the passenger became abusive and threw the bag of vomit on him, covering his clothes, phone, car seat and floor mats, police said. “She became upset and agitated,” said Det. Matt Baker. “She was intoxicated by alcohol and wasn’t prepared to pay the cleanup fee.” The passenger’s family later paid both the cleanup fee and fare. Selena Narayan-Lachapelle, 33, of Calgary, was charged with one count of common assault, and is expected to appear in court on Oct. 14. Baker said the driver reported the incident to police at an Aug. 25 open house, which was held in response to incidents of abuse against taxi drivers that have recently been made public, including one in which a passenger was reportedly caught on camera screaming racial slurs and profanities at a driver, and another in which a woman was charged with uttering threats to cause death or bodily harm against a driver. “The difference with this (most recent) one and the others is the assault is minor in nature and not racially motivated,” he added. Police also recently charged a man in connection with two taxi carjacking incidents and charged another man after he allegedly slapped a taxi driver’s face and tore off the vehicle’s dashboard camera. Baker said he hopes to hold another similar open house where cabbies can open up about their concerns, adding he encourages drivers to report all incidents of abuse or assault to police. Enders said sometimes it comes down to the public being respectful of drivers and their workplaces. “I think people have to realize this is the driver’s office,” he said. “You wouldn’t want me coming to your office disrespecting you in your office.”
DALLAS -- Dallas Mavericks owner Mark Cuban lambasted the NBA on Wednesday for allowing the league-owned New Orleans Hornets to complete a trade with the Sacramento Kings in which the financially troubled franchise will absorb salary and send an undisclosed amount of cash to the Kings. New Orleans sent guard Marcus Thornton, who is earning $762,195, plus cash to Sacramento for forward Carl Landry, who is earning $3 million. The Hornets, who are over the salary cap, were able to fit Landry into a trade exception. The difference in salary is $2.24 million, of which the Hornets will be responsible for the prorated amount for the remainder of the season. "If New Orleans is taking back $2 million and the team is losing money and I own 1/29th of it, I'm going to go against the grain and say that's just wrong," Cuban said prior to the Mavericks' home game against the Utah Jazz. "There's no way, with their payroll, having to dump salary before they were sold to us [NBA owners]; now they can take on more salary while they're losing money. That's just wrong every which way." The NBA -- Cuban and 28 other owners -- took over ownership of the Hornets from George Shinn on Dec. 6. The league funds the organization and set an operating budget. Cuban said he never anticipated the Hornets to be in a position of taking on salary. "I don't have a problem if they go dollar-for-dollar, great, more power to them," Cuban said. "You could see if it was like a marquee guy and he's going to bring in lots of dollars. No disrespect to Carl Landry, but I don't see that's the way it works. It's just wrong. I'm one of the owners. The league is supposed to just give them a budget and it never dawned on me that the budget would say you can spend more money to bring in players." The Mavs and Hornets are Southwest Division rivals and split two heated games earlier in the season. It is possible they could meet in the first round of the playoffs. Cuban wouldn't acknowledge if the Mavs had interest in acquiring the 6-foot-8 power forward, who is in the final year of his contract. Cuban said plenty of teams did have interest in Landry, though few are willing to take back salary, making the Hornets' deal the sweetest for a struggling Sacramento franchise. "That's the whole point," Cuban said. "Other teams have interest in him, but not a lot of teams are going to take back money. If it's a better deal, great. But the dollars should add up." Cuban launched the most pointed criticism toward the league since it took over ownership. Los Angeles Lakers coach Phil Jackson has publicly questioned the wisdom of the league's owning the Hornets, but has not criticized the league as firmly. "All I know is if most of the owners in this league can't take back salary in a deal," Cuban said, "the Hornets shouldn't be allowed to either." Jeff Caplan covers the Mavericks for ESPNDallas.com.
CANBERRA (Reuters) - It seemed unbelievable when bids to buy a heartbroken man’s life in Australia reached A$2.2 million (US$2.1 million) — and it was, with the bemused seller aware his life was only worth a quarter of that amount. Ian Usher, 44, announced in March he was auctioning his life on eBay with the package including his A$420,000 three-bedroom house in Perth, Western Australia, a trial for his job at a rug store, his car, motorbike, clothes and even friends. His decision to sell his life followed the break-up of his five-year marriage and 12-year relationship with Laura with whom he had built the house. Usher, originally from County Durham in Britain before moving to Perth in 2001, said he hoped to raise up to A$500,000 to fund a new life but on the first day of the week-long auction, bids skyrocketed to A$2.2 million. But Usher knew his life was not worth that and was quick to realize there was a glitch in the system with auction Web site eBay allowing offers from non-registered bidders which took a day to sort out. “Apologies to all, but I guess there are a lot of bored idiots out there,” Usher said in a statement e-mailed to Reuters that was to be posted on his website www.alife4sale.com. “Anyway after a long day on the computer, I have decided to pull all bids back as far as the first registered bidder, and the price is back to A$155,000 as I write this ... we are back in the land of common sense and reality, so it’s over to you.” After 21 bids the amount had risen to A$245,100. A spokeswoman for eBay, Sian Kennedy, said Usher had to verify all the bidders before the auction to check they were genuine buyers and he could delete any he believed were hoaxes. She said this was his responsibility as the bids were not binding. Usher’s life has come under the real estate section on eBay as his house is the main asset in the sale. “The real estate category on eBay is a non-binding section because of the real estate laws in Australia. You need a special license to sell real estate,” said Kennedy. “You need to get in contact with him and he has to verify you are a genuine bidder before you can bid. If he doesn’t think you are genuine he can remove your bid.” Kennedy said Usher is not the first person to put his life up for sale but could be the first to offer it in this package. Australian philosophy student Nicael Holt, 24, offered his life to the highest bidder last year in a protest about mass consumerism. American John Freyer started All My Life For Sale (www.allmylifeforsale.com) in 2001 and sold everything he owned on eBay, later visiting the people who bought his things. Adam Burtle, a 20-year-old U.S. university student, offered his soul for sale on eBay in 2001, with bidding hitting $400 before eBay called it off, saying there had to be something tangible to sell. Burtle later admitted he was a bored geek. Usher’s auction closes at noon on June 29.
Distribution, diversity and functional dissociation of the mac genes in marine biofilms Abstract Bacteria produce metamorphosis-associated contractile (MAC) structures to induce larval metamorphosis in Hydroides elegans. The distribution and diversity of mac gene homologs in marine environments are largely unexplored. In the present study, mac genes were examined in marine environments by analyzing 101 biofilm and 91 seawater metagenomes. There were more mac genes in biofilms than in seawater, and substratum type, location, and sampling time did not affect the mac genes in biofilms. The mac gene clusters were highly diverse and often incomplete, while the three MAC components co-occurred with other genes of different functions. Genomic analysis of four Pseudoalteromonas and two Streptomyces strains revealed transfers of mac genes among different microbial taxa. It is proposed that mac genes are more specific to biofilms; that gene transfer among different microbial taxa has led to highly diverse mac gene clusters; and that, in most cases, the three MAC components function individually rather than forming a complex.
One question more than a few people have asked me is if Luke Kuechly could possibly play OLB, specifically SAM. Initially I wasn’t so sure. His Combine workout opened a lot of eyes so I went back to the tape to see if my take on this had changed. I popped in the BC/Miami game. Here are some notes I took: Kuechly — Read end around to Miller and got him for loss. Took a good angle on the play. When Miller tried to put on a move, Luke was able to react and still make the stop. Smart, patient player. Moves well laterally. Attacks upfield when he sees the ball and can get to it. Sometimes was lined up toward the outside. Will fight to shed blocks. Tackled RB after short catch over the middle. Great vision. Plays under control so that when he finds the football he can go get it. Flew out to the flat and slammed the FB down hard after a short catch. Almost picked off pass to Streeter on a deep seam route. PBU. Slammed down TE after short catch over the middle. Showed good closing speed when flying over to slam down a RB after short catch. Deflected a pass over the middle by skying for the ball. Deflected pass over the middle going to Streeter. Tommy was able to catch the deflection and got 8 yds on the play. The Streeter PBU was really impressive. Kuechly ran downfield 30 or so yards with the WR and was in perfect position. The ball was underthrown and he made a good stab at picking it off, but just couldn’t hold on. Streeter ran well at the Combine so this isn’t a case of Luke staying with a lumbering player. He had to go all out to be in the right position on that play. I do think Kuechly could play SAM in our system. The key to that is alignment. The Wide 9 has the DEs out wide and the OLBs off the ball and more to the middle of the field. The SAM is in the RG/RT area, but a couple of yards back. In the old system, the SAM would be up on the LOS and over the TE. He would be more of an edge player. The x-factor here is what Juan Castillo wants to do with SAM. Last year he hoped Jamar Chaney would really be a coverage LB who could handle RBs and TEs out in space. When Chaney moved to the middle, Juan switched up what he wanted out of Moise Fokou and Akeem Jordan. They were more focused on run defense. If Juan wants to go back to a coverage guy, Kuechly would not fit as well. He’s good in zone and when sitting back and reading plays in front of him. Kuechly isn’t nearly as effective at running with a RB or TE out in the flat and reacting to the moves put on by the player. I would certainly prefer Luke to stay at MLB, but it is interesting to at least consider the option of playing him at SAM. There is no prospect in this draft or in FA that is a perfect SAM target for us. The most intriguing scenario is for the Eagles to sign David Hawthorne and then draft Luke Kuechly and put one at MLB and the other at SAM. You could figure out which combination worked best and then go with that. Hawthorne played WLB in Seattle in the 2010 season. He’s got starting experience at OLB. Kuechly has the better SAM frame at 6’3, 242. He was mostly a MLB in college, but did play WLB for part of his freshman season. Kuechly played safety back in HS. I truly doubt the Eagles go this way at LB. Adding a key FA LB and then using a Top 15 pick on a LB would be a huge change for them. Expect one or the other. If they did want to be aggressive, the team could add both guys and I think they could make it work. * * * * * Today the Eagles added LB Monte Simmons, a Practice Squad player from SF. 
This isn’t a move to get excited about, but Simmons is an interesting guy. At Kent State he racked up 21.5 sacks and 6 FFs. He is only 6’1, 234 so that sure doesn’t sound like DE size, even in the Wide 9. Most likely he’s here to compete at SAM. Simmons had a strong showing at his Pro Day last spring and has the athleticism to play in the NFL. I’ll have to go back and find some Kent State tape to watch and get a feel for his game. Not a big deal, but with our LB situation, all competition is welcome. * * * * * NFL Gimpy has up his new MAQB column. The Saints could be in for a big tumble. They have several key FAs who could leave and will likely lose at least one draft pick this year. They don’t have the cap space to just sign replacements. And losing Sean Payton for a month or two (or more?) could really hurt them. Ouch. For those interested in hearing Adam Caplan’s show from Saturday, WCHE has a podcast up. Adam moves at a quick pace and covers a lot of NFL ground. * * * * * Tags and new deals flying around the NFL today. I’ll take all of this in and then address it after coming up with a genius conclusion.
A Hybrid Asymmetric Integer-Only Quantization Method of Neural Networks for Efficient Inference The demand for adopting neural networks in resource-constrained embedded devices is continuously increasing. Quantization is one of the most promising solutions to reduce computational cost and memory storage on embedded devices. In order to reduce the complexity of deploying neural networks on integer-only hardware, most current quantization methods use a symmetric quantization mapping strategy. However, the robustness and generalization of this mapping strategy are poor, and it is difficult for the quantized model to meet accuracy requirements, especially for complicated tasks with higher accuracy demands. In this paper, an efficient hybrid asymmetric integer-only quantization method for different types of neural network layers is proposed. The proposed method resolves the contradiction between quantization accuracy and ease of implementation, balances the trade-off between clipping range and quantization resolution, and thus improves the accuracy of the quantized neural network. The results show that, compared with the traditional symmetric quantization method, the accuracy of the proposed method is improved by up to 2.02% for classification models and up to 5.52% for the Yolo-v3 tiny target detection model, enabling the neural network to be easily deployed and implemented on embedded devices.
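As a rough, self-contained illustration of the difference between the two mapping strategies discussed in the abstract, the Python sketch below quantizes a tensor to 8 bits with both a symmetric mapping (zero-point fixed at zero) and an asymmetric mapping (scale plus integer zero-point) and compares the reconstruction error on a skewed, post-ReLU-like distribution. It is a toy example, not the authors' method; the function names and the test distribution are invented for illustration.

```python
import numpy as np

def quantize_symmetric(x, num_bits=8):
    # Symmetric mapping: zero-point fixed at 0, clipping range [-max|x|, +max|x|].
    qmax = 2 ** (num_bits - 1) - 1                     # 127 for int8
    scale = float(np.max(np.abs(x))) / qmax
    scale = scale if scale > 0 else 1e-12              # guard against all-zero input
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def quantize_asymmetric(x, num_bits=8):
    # Asymmetric mapping: clipping range [min(x), max(x)], shifted by an integer zero-point.
    qmin, qmax = 0, 2 ** num_bits - 1                  # 0..255 for uint8
    x_min, x_max = float(np.min(x)), float(np.max(x))
    scale = (x_max - x_min) / (qmax - qmin)
    scale = scale if scale > 0 else 1e-12
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point=0):
    return (q.astype(np.float32) - zero_point) * scale

if __name__ == "__main__":
    # A skewed, non-negative distribution (roughly what post-ReLU activations look like).
    rng = np.random.default_rng(0)
    x = np.maximum(rng.normal(0.5, 1.0, size=10_000), 0.0).astype(np.float32)

    q_s, s_s = quantize_symmetric(x)
    q_a, s_a, zp = quantize_asymmetric(x)

    print("symmetric  MSE:", float(np.mean((x - dequantize(q_s, s_s)) ** 2)))
    print("asymmetric MSE:", float(np.mean((x - dequantize(q_a, s_a, zp)) ** 2)))
```

On a non-negative distribution like this one, the symmetric mapping wastes half of the int8 range on values that never occur, so its step size is roughly twice as coarse; the hybrid scheme in the paper is aimed at exactly this kind of per-layer mismatch while keeping the arithmetic integer-only.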
PHILADELPHIA (Reuters) - A U.S. judge on Monday sentenced an American woman who called herself Jihad Jane to 10 years in prison - at least a decade less than prosecutors had sought for her role in a failed plot to kill a Swedish artist who had depicted the head of the Muslim Prophet Mohammad on a dog. Colleen R. LaRose, 50, who converted to Islam online and has maintained her faith, was given credit for the four years she has already served. LaRose, who pleaded guilty to following orders in 2009 from alleged al Qaeda operatives, could have received a life sentence. “It’s a just and reasonable sentence,” her attorney, Mark Wilson, told reporters after the hearing. “She’s pleased. Ten years is about what we were hoping for all along.” U.S. District Judge Petrese Tucker called LaRose’s crimes “gravely serious,” adding: “The court has no doubt that, given the opportunity, Ms. LaRose would have completed the mission.” Tucker also cited the significant cooperation LaRose has given the Federal Bureau of Investigation in other terrorism cases since her 2009 arrest, as well as the sexual and other abuse she suffered as a child. That abuse was chronicled in a 2011 Reuters investigative series. LaRose, who used the name Jihad Jane as she became involved in the Muslim online community, traveled to Europe in 2009 intending to participate in a militant plot to shoot artist Lars Vilks in the chest six times. But LaRose became impatient with the men who lured her to Europe and she gave up after six weeks and returned to Philadelphia, where she was arrested. At Monday’s hearing, LaRose apologized for blindly following the instructions of her handlers. “I was in a trance and I couldn’t see anything else,” she said. “I don’t want to be in jihad no more.” Assistant U.S. Attorney Jennifer Arbittier Williams had sought “decades behind bars” for LaRose, arguing that despite her extensive cooperation, she still was a danger to society. Prosecutors also had pointed out that LaRose - a blond, green-eyed, white American - did not fit the stereotype of an Islamic militant. “This is a sentencing that people are watching,” Williams said on Monday. “Ms. LaRose had such a big impact in the public and press because she really did change the face of what the world thought of as a violent jihadist. It was scary for people to hear that Ms. LaRose could have been radicalized simply online in the U.S.” CHILDHOOD SCARS Wilson told the court that the plot to kill Vilks was “more aspirational than operational” and that LaRose had never even fired a gun. He had described LaRose as a lonely and vulnerable woman easily manipulated by others online. Her behavior, while not excusable, can be explained in part by deep psychological scars from her childhood, he said. LaRose’s biological father repeatedly raped her from about age 7 to 13, when she ran away and became a prostitute, according to court documents. At age 16, LaRose married a man twice her age and later became a heavy drug user. “I survived a lot of things that should have rightfully have killed me,” LaRose told Reuters in a 2012 interview. While LaRose was in contact with an al Qaeda operative in Pakistan, her conspirators repeatedly bungled a plot that never moved much past the planning stages. Vilks, the artist, had told Reuters that he believes LaRose has spent enough time in prison and should be freed. “That’s a pretty tough sentence,” Vilks told the Swedish news agency TT on Monday. Under U.S. 
sentencing rules, LaRose likely will serve 90 percent of her sentence, which means she will be eligible for release around 2020. She has requested imprisonment near her sister, Pam LaRose, in the Fort Worth, Texas, area, but a final decision will be up to the Bureau of Prisons. LaRose was in solitary confinement for four years but recently moved to the general population at a Philadelphia jail. Ali Damache, LaRose’s alleged handler in Ireland, remains jailed there, fighting extradition to the United States on terrorism charges. Jamie Paulin Ramirez, who flew from Colorado to marry Damache in Ireland, has pleaded guilty to related terrorism charges and is scheduled to be sentenced on Wednesday. The sentencing for another co-conspirator who has pleaded guilty, Mohammad Hassan Khalid, has been delayed in order to complete psychological evaluations. Khalid, who grew up in Pakistan and was an honor student in suburban Baltimore, committed his crimes when he was 15 and 16. He is the youngest person ever charged with terrorism inside the United States. According to a November report in the Guardian newspaper, documents leaked by Edward Snowden, a former contractor for the U.S. National Security Agency, to the British newspaper show that the FBI became involved in the Jihad Jane case after the NSA intercepted communications.
Accurate Analytical Solution for Free Vibration of the Simply Supported Triangular Plate Exploiting the superposition method developed earlier by the author, a highly accurate analytical-type solution is obtained for the free vibration of the general simply supported triangular plate. A new technique is introduced for the orderly storage of eigenvalues. It is shown how advantage can be taken of the more easily obtained isosceles triangular plate solutions. Accurate eigenvalues are tabulated for a wide range of plate geometries. This represents the first accurate and comprehensive treatment of this problem to appear in the literature.
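For readers outside this literature, a minimal statement of the eigenvalue problem that such tabulations address, written here under the standard assumptions of classical (Kirchhoff) thin-plate theory rather than taken from the paper itself (symbols follow common usage: W is the lateral deflection, D the flexural rigidity, E Young's modulus, ν Poisson's ratio, h the thickness, ρ the density, ω the natural frequency, and a a characteristic planform dimension):

\[
  D\,\nabla^{4} W(x,y) = \rho\, h\, \omega^{2}\, W(x,y),
  \qquad
  D = \frac{E h^{3}}{12\,(1-\nu^{2})},
\]

with W = 0 and zero normal bending moment along each simply supported edge; results in this area are usually reported through a dimensionless frequency parameter such as \( \lambda = \omega a^{2}\sqrt{\rho h / D} \) (conventions vary). The contribution of the paper is an accurate way of solving this problem on general triangular planforms by superposition, not the governing equation itself.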
President Obama campaigned on behalf of Democratic presidential nominee Hillary Clinton in Nevada on Sunday, also pushing those gathered to vote for Democrats running down ballot. “Nevada’s always close,” he told the crowd in North Las Vegas, “but that’s what makes it exciting.” After thanking the state for helping to elect him, praising retiring Sen. Harry Reid and touting the progress made during his eight years in office, Obama launched into an attack on GOP nominee Donald Trump. “You’ve got a guy who proves himself unfit for this office every single day in every single way,” he said. He also criticized Trump for claiming the election process is rigged, saying: “If this was rigged, boy it would be a really big conspiracy." “The Republican governor is not going to rig an election for Hillary Clinton or rig an election for Catherine [Cortez Masto],” he added, referring to the Democrat running for Reid’s seat. “We’ve got to have a Congress that is willing to make progress on the issues Americans care about,” he said, before launching an extended attack on Rep. Joe Heck, the Republican facing off against Cortez Masto. He said Heck supported Trump when it was “politically convenient” and asked “What the heck took you so long?” to denounce the nominee. Heck dropped his support for Trump earlier this month after The Washington Post uncovered a 2005 “Access Hollywood” tape in which Trump boasts of kissing and groping women without consent. “Too late,” Obama said. “You don’t get credit for that.” As he criticized Heck, he asked “Nevada, what the heck?” and then led the crowd in chants of “Heck no!” Polls show a tight race between Heck and Cortez Masto, the former Nevada attorney general, in a race that Heck had been narrowly leading for months. Clinton is ahead of Trump by nearly 5 points in the RealClearPolitics average, and the latest average for the Senate race shows Cortez Masto up by 2 points. Heck’s revoking of his support for Trump has set off a backlash from Trump supporters, and he’s privately acknowledged he’s in a “very difficult situation” for no longer supporting his party’s standard-bearer.
JOSEPHINE COUNTY, Ore. - The human remains found by hikers on February 7 have been identified as those of Chase Cook, 22. News10 first reported on this story. The remains were found in a wooded area near Williams Highway. Josephine County authorities responded to the scene and determined that the remains were believed to have been there for an extended period of time. OSP was requested to assist with the investigation. Cook had been reported missing in 2012 and was believed to be driving in the area of Williams Highway when he went missing. Cook's remains were discovered less than a mile from where the vehicle he was reportedly driving at the time of his disappearance was found. Evidence located at the scene supports the conclusion that he died of a self-inflicted gunshot wound.
Both the Yvan Muller Racing Norma M30 and the #26 United Autosports Ligier JSP3 will take no further part in this week’s ELMS Prologue test at Monza after sizeable incidents in this afternoon’s on-track action. The YMR Norma M30 came off the worse of the two after Bronze driver Gwenaël Delomier rolled the car at Ascari, leaving the team in a race against time to get the car to the season opener at Silverstone. "What a scare at Ascari for one of the @EuropeanLMS LMP3s during today's tests at Monza! 😱 #ELMS #officialtest2017 photo @marcofossen pic.twitter.com/bYDUclPpXr" — Autodromo Naz. Monza (@Autodromo_Monza) March 28, 2017 Team principal Yvan Muller has confirmed to DSC that, after looking at the onboard footage of the incident, no other cars were involved. Since the crash, Delomier has gone for mandatory medical checks after complaining of pain in his knees. At United Autosports meanwhile, the Gulf-liveried JSP3 of Richard Meins and Shaun Lynn will not take to the track again after an off for Meins significantly damaged the front end of the car, including the tub. "Unfortunately a crash at the end of the day for Richard Meins in the Gulf Ligier means it won’t take any further part in testing…." — United Autosports (@UnitedAutosport) March 28, 2017 Meins has been reported as OK by a United team spokesperson.
The Legislature in its closing hours voted to legalize medical marijuana and to deliver a second round of tax cuts to Minnesotans, with property refund checks going to nearly a million homeowners, renters and farmers. Finishing ahead of the mandated Monday adjournment, lawmakers wrapped up Friday night by passing a handful of major bills. “History might look at it as the most productive legislative biennium in a generation,” said Senate Majority Leader Tom Bakk, DFL-Cook. The medical marijuana bill was one of the final votes, passing the Senate and House 89-40 with significant bipartisan support. “It is nice when Republicans and Democrats work together to help people by expanding their personal freedoms, rather than limiting them,” said Rep. Pat Garofalo, R-Farmington. The House and Senate also gave final votes to adopt $1 billion in state-backed construction projects. Lawmakers also voted by wide margins to ban vaping of e-cigarettes in some public places and to prohibit the sale of such devices to minors. The House passed the tax bill unanimously, and the medical marijuana bill and construction package both drew DFL and GOP support. Earlier this year the Legislature had passed a minimum-wage increase, income tax cuts, a long-sought antibullying bill for schoolchildren and a Women’s Economic Security Act designed to improve pay and conditions for female workers. “I think we did have an incredibly productive two years, I think there’s no doubt about that,” said House Speaker Paul Thissen, DFL-Minneapolis. DFL Gov. Mark Dayton praised the results. “Two years ago, when I was asked what Minnesotans could expect from a DFL governor and a DFL Legislature, I said: Progress,” he said in a statement. “That is exactly what we delivered again this session.” Senate Minority Leader David Hann, R-Eden Prairie, decried the emphasis on spending. “Spending money and having good intentions is not enough,” he said. “There are too many kids in this state who are left behind. Spending money hasn’t helped them.” Deals yield bipartisanship While this year’s session ended with a number of bipartisan coalitions on major bills, it required a flurry of last-minute deal-making and vote-trading to make it happen. That came into play most visibly with the construction bill, which the House passed about 3 a.m. Friday. The Senate followed suit about nine hours later, sending Dayton a package of bond- and cash-backed projects that includes the State Capitol restoration, affordable housing upgrades, campus buildings, roads and bridges, local economic development projects, the Lewis and Clark water pipeline, a new Senate office building and other ventures. “It’s going to be a really bright star in terms of helping our economic development,” said Sen. LeRoy Stumpf, DFL-Plummer, who helped assemble the package. The construction measure required Republican votes in order to authorize the required bond sales, which gave the minority party rare leverage. They exercised it by demanding House Democrats scrap a DFL bill, the so-called Toxic Free Kids Act, which would have required manufacturers to notify consumers of the presence of toxic chemicals in toys, school supplies and personal care products. “We were able to stop that from proceeding and we think that’s good,” House Minority Leader Kurt Daudt said. Republicans called it an overly burdensome regulation for businesses. 
Dayton also jumped into the deal-making fray. Legislative leaders trying to hold together the fragile coalition around the construction bill asked the governor to promise he wouldn’t use his veto pen to delete specific projects from the final package. Dayton responded with his own printed list of demands, which administration staffers circulated to reporters in the wee hours of Friday morning. Lawmakers delivered most of the governor’s demands, which included oil pipeline safety requirements, a veterans hiring measure, money for a sober schools program, tougher regulations for commercial breeding facilities, and a demand that lawmakers give up their attempts to repeal sprinkler requirements for new larger homes. Dayton also demanded the Toxic Free Kids proposal, which he did not get. The governor declined to make any promises about construction bill vetoes; he has 14 days from the end of the session to sign or veto bills passed on the last day. Democrats and Republicans also agreed on $103 million in new tax breaks, with more than half of that devoted to direct relief for about 940,000 homeowners, farmers and renters. The measure passed unanimously in the House on Friday, and with just one vote against it in the Senate. Under the measure, the average homeowner would see direct property tax relief of more than $800, with more than $600 for renters and an average of $410 for farmers. Individual amounts will vary. That was the second tax relief measure of the session. In March, Dayton signed a bill providing $444 million in relief that reached about a million taxpayers. “Between this bill and the last one, we will have delivered $550 million in tax cuts for Minnesotans this year,” said Rep. Ann Lenczewski, DFL-Bloomington. Limits for medical marijuana The medical marijuana proposal prompted the most high profile policy debate of the session. Lawmakers and the Dayton administration had struck a final deal on the proposal just a day earlier. They agreed to a limited system of production and distribution that is considered the most restrictive among the 21 states that currently authorize access to medical marijuana. The new medical marijuana law, which Dayton has promised to sign, authorizes access to the drug for about 5,000 Minnesotans with conditions including cancer, epilepsy, HIV/AIDS and a handful of others. With a health care provider’s permission, those patients will enroll in a patient registry that will allow the state Department of Health to monitor their progress. The drug will be available only in pill or oil forms, with smoking not allowed and access to the drug in its original plant form forbidden. That was not enough to mollify some skeptics. “It will change the face of Minnesota, folks, and don’t think it won’t,” said Sen. Carrie Ruud, R-Breezy Point. “We’re legalizing a drug.” Others said the move was premature. “We don’t have any studies, or proven methods of knowing what works for who, and at what level,” said Rep. Kathy Lohmer, R-Stillwater. “We’re basically just saying, we’re going to try this and see how this works. I think that is the opposite of compassion.” Bakk noted the proposal was stalled for much of the session and revived only with the persistent lobbying of a small group of families of children with epilepsy who want to treat their kids’ seizures with a marijuana-based oil. “This was not on the legislative agenda of most of us in this room,” Bakk said. 
“What that tells me is this is a wonderful example of how representative democracy works. A small group of families with their hurting children came to the Capitol, and they changed the law.”
Melting behavior of H2O at high pressures and temperatures Water plays an important role in the physics and chemistry of planetary interiors. In situ high pressure‐temperature Raman spectroscopy and synchrotron x‐ray diffraction have been used to examine the phase diagram of H2O. A discontinuous change in the melting curve of H2O is observed at approximately 35 GPa and 1040 K, indicating a triple point on the melting line. The melting curve of H2O increases significantly above the triple point and may intersect the isentropes of Neptune and Uranus. Solid ice could therefore form in stratified layers at depth within these icy planets. The extrapolated melting curve may also intersect with the geotherm of Earth's lower mantle above 60 GPa. The presence of solid H2O would result in a jump in the viscosity of the mid‐lower mantle and provides an additional explanation for the observed higher viscosity of the mid‐lower mantle.
Instinct and the Origins of Mind and Behavior Instinct has been one of the more contentious concepts throughout the history of psychology and social psychology. Broadly defined, instinct is considered innate, patterned behavior for living organisms that does not require learning or experience. Almost all early psychologists engaged in the study of instincts, and many attempted to classify them. One of the debates that emerged was whether there is a simple dichotomy between instinct and reason, with animals endowed with instinct for survival but only humans with the ability to rely on reason. With more influence from Darwin’s evolutionary theory, however, the idea that instincts were modifiable and a common trait for humans and animals became accepted. This also led to the idea that human instincts could be understood by examining the instincts of animals and the mental development of children. With the arrival of behaviorism, the concept of instinct began to fall out of favor altogether, and all behaviors were attributed to learning or conditioning. More recently, evolutionary psychologists have reclaimed the notion of instinct, although the understanding of this concept still varies and has an uncertain fate in the discipline.
Species differences in cardiac energetics. The energy flux of rat, guinea pig, and cat papillary muscles was measured myothermically under resting, isometric, and isotonic conditions at 27 degrees C. Resting heat rate was highest in the smallest species and declined with body size. The slope of the isometric heat-stress relationship was constant across species, whereas the stress-independent heat component was least for rat muscles. The shape of the load-enthalpy relationship was similar across species. Maximum mechanical efficiency (work/enthalpy) occurred with lighter loads than for skeletal muscle (approximately 0.2 Po). Rat muscle had the smallest enthalpy per beat and the highest active mechanical efficiency, but this advantage was nullified by the higher basal heat rate. The myothermic data are compared with cardiac oxygen consumption values in the literature, and it is concluded, contrary to the deductions of common dimensional arguments, that cardiac energy expenditure across species is not directly proportional to heart rate. Reasons for this discrepancy are considered, together with the likely contribution of cardiac metabolism (EH) to total body metabolism (EB). It seems likely that smaller species have a lower EH/EB.
Aging in enumerated spin glass state spaces Aging phenomena are observed in many spin glass experiments. Heuristic state space models have been presented in the past to reproduce these effects. Here we start the investigation by considering the real state space of an Ising spin glass Hamiltonian. A branch-and-bound algorithm is used to find the low-energy part of the state space. We address the still huge size of this state space by employing a special coarse-graining algorithm, which reduces the system to a computationally treatable size. We demonstrate that these systems still contain all the properties necessary for aging effects.
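The abstract gives no implementation details, so purely as an illustration of what "enumerating the low-energy part of the state space" can mean, here is a small Python sketch: a depth-first branch-and-bound that lists every configuration of a toy ±J Ising instance whose energy lies within a fixed window above the ground state. The complete-graph couplings, the window, and all names are invented for the example; a real study would work with the actual lattice Hamiltonian, far more spins, and would follow the enumeration with the coarse-graining step mentioned above.

```python
import numpy as np

def make_couplings(n, seed=0):
    # Random +/-1 couplings on the complete graph (a toy spin glass instance).
    rng = np.random.default_rng(seed)
    J = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), k=1)
    return J + J.T

def energy(spins, J):
    s = np.asarray(spins, dtype=float)
    return -0.5 * s @ J @ s          # H = -sum_{i<j} J_ij s_i s_j

def low_energy_states(J, window):
    """Enumerate all configurations with E <= E_ground + window via branch and bound."""
    n = J.shape[0]
    states, best = [], [np.inf]

    def bound(partial):
        # Exact energy of bonds among assigned spins plus the most optimistic
        # (-|J|) contribution of every bond touching an unassigned spin.
        k = len(partial)
        s = np.array(partial, dtype=float)
        e_fixed = -0.5 * s @ J[:k, :k] @ s
        e_rest = -np.abs(J[:, k:]).sum() + 0.5 * np.abs(J[k:, k:]).sum()
        return e_fixed + e_rest

    def dfs(partial):
        if bound(partial) > best[0] + window:
            return                    # prune: no completion can be low enough
        if len(partial) == n:
            e = energy(partial, J)
            best[0] = min(best[0], e)
            states.append((e, tuple(partial)))
            return
        for spin in (+1, -1):
            dfs(partial + [spin])

    dfs([+1])                         # fix one spin to remove the global flip symmetry
    e0 = best[0]
    return e0, sorted((e, s) for e, s in states if e <= e0 + window)

if __name__ == "__main__":
    J = make_couplings(14)
    e0, low = low_energy_states(J, window=4.0)
    print(f"ground-state energy: {e0}, states within 4.0 of it: {len(low)}")
```

The pruning rule only ever discards branches whose best possible completion already lies above the energy window, so the enumeration is exact; the price is that reachable system sizes stay small, which is exactly why a coarse-graining step is needed afterwards.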
COLUMBUS – After years of stonewalling efforts to legalize medical marijuana in Ohio, state lawmakers announced a plan to provide patients with medical marijuana by 2018. The proposal, which will be introduced this week, would allow Ohioans older than 18 to buy edible marijuana, patches, plant material and oils with their doctors' recommendation. Who would grow it? That's to be determined. Within a year, a new commission would create rules on how to grow, distribute and sell medical marijuana. That means patients could have access to medical marijuana within two years, maybe less, said Rep. Kirk Schuring, R-Canton, who led a several-week task force looking into the benefits of medical marijuana. Lawmakers hope to have the bill on Gov. John Kasich's desk by the summer. Kasich, who is running for the GOP presidential nomination, has said he would be open to "something" on medical marijuana, especially if it helps patients. If approved, Ohio would join 24 states, and likely Pennsylvania soon, that allow medical marijuana. Among the proposed changes:
- Children younger than 18 could use medical marijuana with their parent's permission and doctor's recommendation.
- Patients would not be able to grow marijuana at home, and it's not clear whether they could smoke it.
- A nine-member medical marijuana control commission would be created under the Ohio Department of Health. Members would include representatives of physicians, pharmacists, law enforcement, mental health, alcohol and drug addiction treatment, employers, labor unions, marijuana proponents and the general public. The commission would have one year to write rules to regulate those who grow, sell and recommend medical marijuana.
- Only physicians licensed by the state medical board and certified by the medical marijuana commission could recommend medical marijuana to their patients. However, lawmakers would not limit the list of conditions. Physicians would need to report the number of medical marijuana patients, list their patients' conditions and explain why they recommended medical marijuana over other medication every 90 days.
- Lawmakers would determine how medical marijuana would be taxed before passing a law.
- Dispensaries that sell medical marijuana would be regulated much like liquor shops, and local communities could vote to ban medical marijuana dispensaries from their cities and villages.
- Financial institutions that collect money from medical marijuana sales would be granted safe harbor from prosecution.
- State lawmakers would recommend federal authorities reduce marijuana from a Schedule I drug, the most dangerous classification, to Schedule II.
- Lawmakers would encourage research into medical marijuana with funding.
- Employers could still ban employees from using medical marijuana in their employee handbooks, even if employees get approval from doctors. "The workplace can still be drug-free," Schuring said. "Employers do not have to make accommodations for employees being recommended medical marijuana."
How we got here
After Ohioans soundly defeated a ballot initiative to legalize all marijuana, House and Senate lawmakers took different approaches to investigating the benefits of medical marijuana, which polls show is more popular among Ohioans than recreational marijuana. Sen. Dave Burke, R-Marysville, and Sen. Kenny Yuko, D-Richmond Heights, visited three cities, including Cincinnati, to listen to marijuana activists and opponents. 
House lawmakers created a task force, led by Schuring, which held multiple meetings in Columbus. The task force included former ResponsibleOhio leaders Jimmy Gould and Chris Stock, who authored Issue 3 to legalize all marijuana. Some in the Senate questioned whether the group's motives were pure with Issue 3 proponents on the panel. Senators, who have been working on their own medical marijuana bill, don't plan to introduce it. Instead, they will follow the House version, Yuko said. Senate President Keith Faber, who had not seen the House version, wants the final version to address his list of concerns. "Does it include smokeable or not? Are we creating a database, just like we do with opiates? How are we going to allow the prescription? How many dispensaries are we going to have? Are we going to limit the production side? How are we going to award the licenses? Is it going to be a monopoly? All of those are questions that we have concerns about," Faber said. The Ohio State Medical Association already opposes the House proposal, saying "it draws conclusions about the medicinal benefits of marijuana absent conclusive clinical research." Still, Yuko, a longtime proponent of medical marijuana, said he's confident patients will be able to access medical marijuana sooner once lawmakers hear stories about patients who need help. "I think it'll be a lot shorter than two years," Yuko said. 'Irresponsible' ballot initiatives? Lawmakers are under a time crunch because two groups are working to place medical marijuana before voters in November. But House Speaker Cliff Rosenberger, R-Clarksville, essentially told them to knock it off. Lawmakers haven't been stalling, and they are taking medical marijuana seriously, he said Wednesday. "To those that are operating outside the scope of this process, it is extremely irresponsible to continue without coming forward and participating with us in this process," Rosenberger said Wednesday. Rep. Wes Retherford, R-Hamilton, who introduced a bill last year to allow children with seizures to access cannabis oil, agreed that lawmakers, not outside groups, should drive changes on medical marijuana. The state constitution is much harder to amend than laws. "Anytime things need to be changed, it has to go back on the ballot again," Retherford said. The two ballot initiatives are from Marijuana Policy Project and its Ohio operation, Ohioans for Medical Marijuana, and Grassroots Ohio, a group of Ohio marijuana activists not thrilled about the Marijuana Policy Project's plan. - Marijuana Policy Project's proposal would allow adults older than 21 to grow up to six marijuana plants with a recommendation from their doctor. Those younger than 18 could use marijuana with a parent's permission and physician's recommendation. - Grassroots Ohio plans to legalize medical marijuana for those older than 18 and allow farmers to grow industrial hemp through a constitutional amendment. Then, they would send lawmakers a proposal to regulate the industry and determine how medical marijuana is distributed to patients. 
Marijuana Policy Project spokesman Mason Tvert wasn't impressed by Rosenberger's language about "irresponsible" ballot initiatives, calling the speaker's speech "arrogant." "Lawmakers often dislike initiatives and assume they could draft better laws. With all due respect, they have spent a couple months looking at this issue and we’ve been working on it for a couple decades," Tvert said. Marijuana Policy Project won't halt its ballot initiative unless lawmakers pass a quality law, said Tvert, adding that he was concerned that the proposal would take two years to implement and physicians would need to report patient information every 90 days.
Unlike commercial airliners, modern military aircraft are subjected to ever-changing flying conditions—from high-thrust takeoffs to flying at altitude to combat maneuvers. So why are they outfitted with engines that perform optimally in only one of those flight envelopes? For the next iteration of the F-35 Lightning II, Pratt & Whitney is developing an engine that performs at its best no matter what's required of it. Turbofan technology is the backbone of modern aviation, using a pair of air streams to propel everything from commercial airliners to jet fighters far faster than any propeller could. The problem is that the dual air stream design limits the engine's efficiency to a single speed point. That's why commuter jets can't go supersonic, and fighter jets are terrible at low-speed cruising. Turbofans are air-breathing jet engines that use a large fan at the front of the engine with a smaller gas turbine engine core behind. The fan pushes air both into the core as well as a bypass duct that surrounds it. There are some variations to the basic design, of course, as military jets use more and smaller fans while commercial airliners use a single larger fan. "You can design turbofan engines off of that single design point but you are not operating at your best performance and typically what you end up giving up is efficiency," Jimmy Kenyon, Pratt & Whitney's Next Generation Fighter Engine General Manager, explained to Gizmodo. He continued: If you look at the evolution of the turbine engine over time, we started out with turbo jets, and they had what we call a single stream. So the air flowed into the compressor, then went into a combustor, burned, and then exited out the turbine. The turbine actually drives the compressor so it’s something that can keep itself sustained and going. And at the time it was considered a very efficient way of doing business. Later on, we introduced what we called the turbofan. And what the turbofan did was it added an extra turbine on the back end of that turbojet (now called the core), and added a big fan up in front. That extra turbine drives that extra fan, but what that allows you to do is, part of the air that comes in the front goes through the core just like it did before, but a part of the air goes through the fan and then passes by the rest of the engine. The super-hot compressed exhaust coming from the core then pushes this cooler, denser bypass air to generate the thrust. The ratio of the two streams is called the bypass ratio and it's this ratio that determines the engine's efficiency envelope. For high-performance engines like the Pratt & Whitney F135 powering the F-35 Lightning, the bypass ratio is very low—that is, it uses mostly jet thrust from the core in relation to the bypass stream—hence the term, low-bypass turbofan. Long-haul engines, such as the GE90 that powers the Boeing 777 (or the engines on military cargo jets), instead utilize more fan thrust from the bypass than jet thrust from the core and are referred to as high-bypass turbofans. But what if you could make an engine that performs equally well at both low- and high-bypass ratios? That's exactly what Pratt & Whitney is attempting with its Adaptive Engine Development Program for the upcoming sixth-generation F135 turbofan. 
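To put a number on the trade-off described above, here is a deliberately simplified Python sketch: a loss-free, single-design-point model in which the gas generator's useful power is spread over the whole (core plus bypass) stream, which then leaves at one common exhaust velocity. The 20 MW core power, 60 kg/s core flow, and 250 m/s flight speed are made-up, order-of-magnitude numbers, and none of this reflects Pratt & Whitney's actual cycle models.

```python
import math

def ideal_mixed_exhaust(core_power_w, mdot_core, bpr, v0):
    """Toy estimate: all useful core power accelerates the full (core + bypass)
    stream to a single common exhaust velocity. Returns (thrust N, propulsive eff.)."""
    mdot_total = mdot_core * (1.0 + bpr)
    v_exhaust = math.sqrt(v0 ** 2 + 2.0 * core_power_w / mdot_total)
    thrust = mdot_total * (v_exhaust - v0)
    eta_prop = 2.0 * v0 / (v0 + v_exhaust)   # classic propulsive-efficiency relation
    return thrust, eta_prop

if __name__ == "__main__":
    for bpr in (0.3, 1.0, 5.0, 10.0):
        f, eta = ideal_mixed_exhaust(20e6, 60.0, bpr, 250.0)
        print(f"BPR {bpr:4.1f}: thrust {f/1e3:6.1f} kN, propulsive efficiency {eta:.2f}")
```

The numbers are arbitrary, but the trend is the real one: at fixed core power and flight speed, raising the bypass ratio lowers the exhaust velocity, which buys propulsive efficiency at cruise but gives up the high exhaust speed and specific thrust a fighter needs. An adaptive third stream is, in effect, a way to move along that curve in flight rather than fixing a single point at the design stage.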
"What we are looking at with adaptive engines are engines that can operate at multiple design points across a range of flight envelopes while maintaining optimal operating efficiency," Kenyon said. Advertisement This adaptive cycle engine will utilize a secondary bypass stream (three air streams in total) to act much like the gearing on a car's transmission, allowing the F135 engine to adjust and match its bypass ratio at will, whether it's high-thrust takeoffs or high-efficiency cruising at altitude. "That third stream is something that we have the ability to modulate, to change the conditions of that flow," Kenyon told Gizmodo. "How much flow, and flow characteristics so that we can kind of optimize the bypass ratio over the flight envelope." Kenyon further explained: On top of that adaptive fan we’re also making improvements, tremendous improvements in the core system as well. We’re putting in a higher pressure ratio, higher efficiency compressor, leveraging a lot of our advanced commercially-derived, 3D aerodynamic design capability...We’re looking at increasing the temperature capability and the efficiency of the turbine stages, and then we’re also looking at the exhaust system. Having that adaptive third stream allows us to work with that stuff as well...we’re making improvements to the efficiency of the core engine, but we’re also using the adaptive architecture to give us a lot more design options in terms of how we can manage the engine over the flight envelope. Advertisement These improvements should translate into marked improvements in the engine's overall fuel efficiency on both sides of the sound barrier. What's more, the new system is expected to offer superior heat sinking abilities that will reduce the plane's thermal signature while further improving its stealth capabilities. You know, just in case the F-35 wasn't deadly enough as it is. [Wikipedia - Pratt & Whitney 1, 2, 3]
Deep in the Antelope Valley in northern Los Angeles County, directly off State Route 138, are the forlorn ruins of a sunbaked, desert ghost town. Large stone chimneys, once the focal point of a cozy hotel, rise into the dry, clear blue sky, almost seeming to touch the snow-capped San Gabriel Mountains in the distance. Beige dirt roads lead to crumbling beige walls, cisterns full of trash, and the rough stone foundations of long-gone homes and workshops. There are no people, only the occasional scurrying lizard, whose movements are amplified in the still, silent air. These ruins serve as the collective tombstone of Llano del Rio, the briefly bustling socialist colony founded nearly a hundred years ago. Job Harriman—handsome, earnest and charismatic—was the face of socialism in California around the turn of the 20th century. A perpetual candidate, he mounted unsuccessful campaigns for the governorship of California in 1898 and the vice presidency of the United States in 1900. He ran twice for mayor of Los Angeles, and almost certainly would have won in 1911, if the men accused of bombing the Los Angeles Times—whom he had supported and represented—had not pleaded guilty days before the election. After another failed mayoral bid in 1913, the 55-year-old Harriman, wearied by politics and the constant harassment he suffered at the hands of powerful enemies (including Harrison Gray Otis, the bombastic and conservative owner of the Los Angeles Times), began to look to a future outside of LA. He dreamed of a socialist colony, where cooperative living could thrive and serve as an example to others while still staying within the bounds of capitalist norms. “It became apparent to me,” he recalled, “that people would never abandon their means of livelihood, good or bad, capitalistic or otherwise, until other methods were developed which would promise advantages at least as good as those by which they were living.” Harriman formed a plan for the colony centered around these beliefs. He would form a corporation with which to buy land for a cooperative settlement. Each new resident of the colony would be required to buy $2,000 worth of shares, and then would be assigned a job that would pay her or him a good living wage. Looking for money, he took this idea to Gentry P. McCorkle, a socialist banker from Corona. “If you will join me and a few other of my friends,” he told McCorkle, “we will build a city and make homes for many a homeless family. We will show the world a trick they do not know, which is how to live without war or interest on money or rent on land or profiteering in any manner.” McCorkle thought Harriman’s vision made sound economic sense and agreed to join him in his venture. The new corporation began to search for an ideal place for the colony. According to McCorkle, they were presented with the perfect location one day at their offices in the Higgins Building in Downtown Los Angeles: Mr. James L. Stanley, the old man of the mountain, came into the office one morning and asked to see the boss. He had gray hair and wore long whiskers. He looked like a ‘possum’ more than a man, but he had a good story for us. We loaded ourselves into an automobile, and, in three hours we were standing beside the Big Rock Creek where the water from it came rushing along by us … The Antelope Valley looked to be as large as the Pacific Ocean … some of this adjacent land was set out to Bartlett pear orchards and a few acres were set out to alfalfa. There were but three dwelling houses in sight. 
Ten thousand acres had nothing but jack rabbits and stink weeds and could be bought for one dollar an acre … We paid Mr. Stanley $25 for his day’s work and his information and proceeded to buy Llano Del Rio Ranchero. The land, much of which was owned by the Mescal Land and Water Company, had once been the site of a failed temperance colony. It was virtually worthless, and the “starry-eyed” socialist grossly overpaid for the honor of owning around 9,000 acres of boulder-strewn desert landscape. However, included in the purchase were the essential water rights to Big Rock Creek. The Mescal Land and Water Company was officially reorganized into the Llano del Rio Company on October 10, 1913. For good or bad, Llano del Rio was theirs. Harriman and his cohorts, including holdovers from his mayoral campaigns, set about finding settlers for their new colony. Ads were placed in the Western Comrade, LA’s new socialist magazine, which would eventually be printed at Llano. One ad touted a booklet written by Harriman: Are you tired of the competitive world? Do you want to get into a position where every hour’s work will be for yourself and your family? Do you want assurances of employment and provisions for the future? Ask for the booklet entitled Gateway to Freedom. Subscribe to the Western Comrade … and keep posted on the progress of the colony. The widely disseminated Gateway to Freedom promised a wage of $4 a day to new settlers, an impressive salary at the time. Harriman’s adversaries at the Los Angeles Times derided the booklet, “which in flowery language pictures the beauties and blessings on the contemplated colony in Antelope Valley.” They mocked the full-length picture of Harriman on the title page and questioned the socialist bona fides of the colony, since the booklet stated confusingly that “this is not a co-operative colony, but it is a corporation.” Despite the Times’s derision, the colony soon found many willing to buy into the Llano dream. Millie Miller, Harriman’s stenographer, was the first to buy a colony stock certificate, partially with shares earned through her work with the Llano del Rio Company. Others quickly followed, and the hard work of clearing land and laying irrigation ditches began. The work was done by future residents, many exchanging their labor for shares in the corporation. By May 1914, excited colonists began to move to the isolated new settlement, 90 miles from Los Angeles over rough, treacherous roads. From all over the country and all walks of life, they were tied together by the dream of a better, more equal society. “We felt happy, exhilarated, and confident that the Llano del Rio Co-operative Colony would, indeed, become a paradise on earth,” Miller remembered. The colony grew rapidly. By October 1915, there were 500 people living at Llano, mostly in makeshift tents (in which most would reside until the colony’s demise). When at Llano, Harriman also stayed in a tent, which he scandalously shared with colonist Mildred Buxton, while his wife Theo stayed in Los Angeles. Construction of more permanent structures began in earnest during the summer of 1915. The colonists often used local resources: The local adobe clay formed the basic building block of Llano’s earliest residential architecture. 
A lime kiln was built on the side of a bluff in a canyon south of the colony, and utilized native rock to make cement for construction purposes … The Llano site was remarkably stony. This detriment was turned around by the colonists who built many foundations of stone, since it could be used at no further cost on the site. Circumstance also aided in the construction needs. One day a man was accepted into the colony despite his lack of cash. But he did have a complete sawmill outfit, which was pulled by four yokes of oxen. His equipment, set up in the San Gabriel Mountains above Llano, started producing lumber for the colony’s construction. In early 1915, the hotel, which would become the focal point of social life at Llano, was completed. “The first community building, the hotel, combined cobblestone foundations with native boulders and frame walls,” the authors of Bread and Hyacinths: The Rise and Fall of Utopian Los Angeles write. “This structure, in addition to living quarters for bachelors and arriving members, contained a large dining room assembly hall with fireplaces. Colonists gathered around these hearths on cool winter evenings before blazing juniper fires.” The assembly room became a beehive of activity and intrigue. Curious tourists and weekend visitors from progressive organizations like the Young People’s Socialist League would file in for meals and lectures. The General Assembly, Llano’s nominal governing body, would meet in the large room, practicing “democracy rampant, belligerent, unrestricted.” Life was proving to be hard at Llano. Fresh fruits and vegetables were often scarce, and the long work hours were punishing in the relentless desert sun. A rival faction—known as the “brush gang” for their clandestine outdoor meetings—began to call for the ouster of Harriman as de facto leader. Lawsuits against the colony were filed by some early defectors, and dissatisfied colonists began to go to the press. Neighboring ranchers also began to sue the upstart colony over their alleged water rights to Big Rock Creek. All this noise caught the attention of the state Commissioner of Corporations. As early as the spring of 1915, representatives of the anti-socialist commission, including Deputy Commissioner H.W. Bowman, began to visit Llano. In December 1915, Bowman issued a scathing report on the less-than-year-old settlement. He claimed that there was not enough food at the colony, and that “although it is assumed that supplies are furnished to the colonists at cost, such is not always the fact.” He derided Llano’s hygiene, stating, “the only bathtubs or sanitary toilet appliances in the colony are those which still lie crated among the debris in the scrap pile behind the carpenter and machine shop.” He reserved his most biting commentary for Harriman himself, which the Los Angeles Times reported with glee: The general statements usually made by the colony’s promoters cannot safely be accepted without qualifications and explanations. The same is true of much of the company’s literature … There is a studied effort to induce the belief that the influence of each stockholding colonist in the control of the colony’s affairs is equal to that of any other. The fact is that the colony is almost autocratically dominated and controlled by one man, Mr. Harriman. 
However, the commission gave the Llano del Rio Company the right to keep functioning, provided “a true copy of the permit shall be exhibited and delivered to each prospective subscriber for or purchaser of said securities.” Harriman’s enemies didn’t care. “The Modern Moses,” the “Oligarch of Misrule” was continually taunted in the Times, with headlines celebrating every defection and lawsuit. But to the nearly 800 colonists living at Llano, all of this turmoil didn’t really matter. Over 1916 and early 1917, through backbreaking work and sacrifice, the colony began to coalesce into a seemingly fully functioning village. According to Bread and Hyacinths: By 1917, over 60 departments functioned under division managers. A representative list of economic activities included: agriculture, architecture and surveying, art studio, bakery, barber shop, bee-keeping, cabinet shop, cannery, cleaning and pressing, clearing, fencing and grading land, dairy, fish hatchery, general store, hay and grain, hogs, horses and teaming, the hotel, irrigation, laundry, lime kiln, library, machine shop, medical department, poultry, printing, post office, rabbits, rugs, sawmill, sanitation, shoe shop, soap factory, tannery, tractors, transportation, tin shop, wood and fuel. The inventive elementary school was led by Prudence Stokes Brown, who had studied under education pioneer Maria Montessori. Secondary education was supplied in the form of an industrial school, where teenage girls and boys were taught skills seen at the time as appropriate to their sex. “The boys have their managers of departments, make their own laws, try their own culprits and acquire a sense of responsibility,” an in-house publication reported. “Boys, who have seemed to be incorrigible, have transformed into loveable, tractable, good natured workers.” Adults also partook in continuing education classes and joined social and craft clubs. Writer Aldous Huxley, who lived near the colony in the 1940s, wrote that former colonists “had often talked to me nostalgically of that brass band, those mandolins and barber-shop ensembles” that made Llano life unique. There were also plenty of eccentric personalities to entertain, like the Zorne brothers, who built an airplane around an old Model T motor to the fascination of their fellow colonists—only for it to mysteriously burn to the ground after a failed test run. Llano chronicler Robert Hine described one of the colony’s largest-scale celebrations, presided over by a seemingly confident and jubilant Harriman: The May Day festivities of 1917 commenced at nine o'clock in the morning with intra-community athletic events, including a Fat Women's Race. The entire group of colonists then formed a Grand Parade and marched to the hotel where the Literary Program followed. The band played from a bunting-draped grandstand, the choral society sang appropriate revolutionary anthems like the ‘Marseillaise’, then moved into the Almond Grove for a barbecue dinner. After supper, a group of young girls injected the English into the radical tradition by dancing about the May Pole. At 7:30 the dramatic club presented ‘Mishaps of Minerva’ with newly decorated scenery in the Assembly Hall. Dancing consumed the remainder of the evening. Sadly, by that May 1917—though most residents didn’t know it—Llano had been a dead colony walking for almost a year.
Although Harriman was often accused of being the colony’s virtual “czar,” the rigidly democratic nature of the General Assembly also led to dysfunction. The 1917 crop of alfalfa was lost, because the General Assembly had failed to authorize its harvest. The continuing lawsuits from disgruntled shareholders and alleged mismanagement by McCorkle also took a toll on the colony’s finances. But Llano’s real death knell had actually come in July 1916, when the colony’s application to secure their water rights and build a dam to help irrigate their fields was denied by California Commissioner of Corporations H.L. Carnahan. “Your people do not seem to have the necessary amount of experience and maybe the sums of money it will involve,” he wrote. “The application is denied.” After the ruling, Harriman and his remaining partners began quietly looking for a new home for the colony. By fall of 1917, they had found a suitable new homestead—an old mill town in Louisiana they named New Llano. By early 1918, Llano del Rio had been involuntarily forced into bankruptcy and most of its colonists had begun to leave, some for New Llano. On March 30, 1918, the Los Angeles Times, hypocritically waxing poetic about its failed nemesis, reported that the last community meal was being served in the hotel that evening: At sundown tonight, Llano del Rio, which in its heyday had nearly 1000 souls, will simmer down to a deserted village containing just sixteen men … The town itself begins with a pretentious cobblestone hotel and men’s dormitory, a large bath-house for men and the administration building and post office. It moves on, across the street, to the industrial center, where there is a machine shop, a blacksmith shop and a sawmill … All over this portion of the townsite are the remains of what were homes, wrecked buildings, and the frames that once supported canvass that provided shelters for colonists. At the upper end of the town stands the temple of weaving arts. A weather-beaten loom stands outside in the burning sun. The temple was to send Llano weaves to the socialist world far and wide. Just now it houses an old bed, a couple of broken chairs and a pine table. Job Harriman moved to New Llano (which lingered until the 1930s) before returning to Los Angeles in ill health. He died virtually penniless in 1925, his dream for a better socialist future still improbably intact. Aldous Huxley, speaking of the Llano colonists he knew who had remained in the Antelope Valley decades later, poignantly described them as:
April 23--CHESAPEAKE -- When it comes to marijuana, the nose knows. Even in a moving car. Even with the windows up. Police officers in Chesapeake have been pulling over cars on the grounds that they smelled marijuana while cruising down local roadways, defense attorneys say. And according to the testimony of one officer, it's become common practice to try to sniff out pot from behind the wheel. "We drive our patrol car with the vents on, pulling air from the outside in, directly into our faces," Officer Barrett C. Ring said late last year in court during a preliminary hearing, according to a transcript of the proceedings. "Commonly, we'll be behind vehicles that somebody in the vehicle is smoking marijuana, and we can smell it clear as day." Before officers pull over a car to search it, he said, they will follow it until there are no other cars in the area and they are certain about the source of the odor. Assistant Public Defender Matthew Taylor and several other defense attorneys question the officers' "supernatural" sense of smell. "The idea that police can drive behind a car and smell marijuana is preposterous," said Taylor, who tried unsuccessfully last week to get Ring's search of his client's car thrown out of court. "What do we need drug dogs for if (people) can drive behind cars and smell marijuana?" Kent Willis, executive director of the ACLU in Virginia, agreed with Taylor, saying, "It stretches the imagination that the police can drive down the road and home in on a car." He predicted that traffic stops based only on an officer's sense of smell will draw more legal challenges in the future. "Experts will have to tangle over this and decide," he said. Officials with the Chesapeake Police Department declined to comment. So far, the officers' behavior appears to have withstood legal review. No defense attorney contacted by The Virginian-Pilot had seen any such searches overturned. Attorney Robert L. Wegman said he has handled two cases involving pot-sniffing police on patrol but did not challenge the searches because the officers had other reasons to conduct the traffic stops. On Thursday, Judge V. Thomas Forehand Jr. ruled in Chesapeake Circuit Court that the search of Deon Crudup's car was lawful, but he didn't specifically address the officers' ability to detect pot while in their moving car. Rather, he noted that Ring and his partner, Officer James H. Rich, never initiated a traffic stop based on smelling pot. After smelling the marijuana on April 10, 2011, while driving about 35 miles per hour on Battlefield Boulevard, they followed Crudup's car into the parking lot of Blakely's night club, walked up to the parked vehicle and searched it without needing permission, according to court testimony. The officers found some dried marijuana in a bag along with what authorities said was heroin. "None of this argument about what the officer could smell in his car... has any import," said Forehand, adding that the only thing that mattered was that the police smelled marijuana as they approached the parked car on foot. Crudup, 29, was convicted in October on one count of misdemeanor possession of marijuana and is scheduled to stand trial May 8 on one count of felony possession of heroin. The air-vent technique hasn't been adopted on any large scale in South Hampton Roads outside Chesapeake, according to officials in Portsmouth, Suffolk and Virginia Beach. Suffolk Commonwealth's Attorney C. 
Phillips Ferguson said he hadn't heard of the practice but expects it to catch on as more officers learn about it. "It's very creative policing," he said. Ferguson saw no problem with police officers using their noses to identify suspicious vehicles, follow them and then find another reason to pull them over -- such as a broken taillight. He said officers are allowed to search a vehicle when they smell marijuana during a routine traffic stop. If an officer used the odor of marijuana he sensed while driving down a highway as the sole basis to justify a traffic stop, Ferguson could see a defense attorney having more success persuading a judge to throw out the vehicle search. "I'm not saying they wouldn't have been justified (to stop the car), but it's pushing the line," Ferguson said. Taylor said he decided to challenge the search of Crudup's car partly with the hope that he could prevent the technique from becoming common. "If cops can get away with this, they will have total authority," he said. Scott Daugherty, 757-222-5221, scott.daugherty@pilotonline.com Copyright 2012 - The Virginian-Pilot, Norfolk, Va.
Dr. Marshall A. Lichtman: The Wit and Wisdom of One of Stem Cells' Founding Fathers. It is a pleasure to introduce the STEM CELLS audience once more to the remarkable wit and wisdom of Dr. Marshall A. Lichtman. As Curt Civin and I recently reminded our readers, Marshall was one of the “Founding Fathers” of STEM CELLS. It was his idea, among others, to include a “Concise Reviews” section in the journal, and it has proven to be one of its most successful features as judged by download activity from our Web site. Marshall is truly one of the giants of modern hematology. A prodigious intellect with a wide variety of interests, he has authored well in excess of 230 research articles dealing with virtually every hematopoietic lineage (so we have to love him at STEM CELLS!), as well as the origins of clonal myeloid malignancies. Career benchmarks include serving as Dean of the University of Rochester School of Medicine and Dentistry (where he is still Professor of Medicine, Biochemistry and Biophysics), serving on the Board of Governors of the American Red Cross, and receiving one of the highest honors in the world of hematology: election as President of the American Society of Hematology. I came to know Marshall personally as a result of my association with the Leukemia and Lymphoma Society, where Marshall also serves as Executive Vice President for Research and Medical Programs. During my tenure as Chair of the Society’s Medical and Scientific Affairs Committee, Marshall was a constant source of support, inspiration, and the most helpful commodity of all, good advice served up with a sense of humor. The little ditty we publish here is an excellent example of this. So what does one call a scholar who is a real gentleman in the truest sense of the word? If one is lucky, as I am, “Friend” is the word that rises to the top. We at STEM CELLS are proud of this friendship and of Marshall’s many contributions to the field and to the journal.
Identification of transcription factor binding sites from ChIP-seq data at high resolution MOTIVATION Chromatin immunoprecipitation coupled to next-generation sequencing (ChIP-seq) is widely used to study the in vivo binding sites of transcription factors (TFs) and their regulatory targets. Recent improvements to ChIP-seq, such as increased resolution, promise deeper insights into transcriptional regulation, yet require novel computational tools to fully leverage their advantages. RESULTS To this aim, we have developed peakzilla, which can identify closely spaced TF binding sites at high resolution (i.e. resolves individual binding sites even if spaced closely), as we demonstrate using semisynthetic datasets, performing ChIP-seq for the TF Twist in Drosophila embryos with different experimental fragment sizes, and analyzing ChIP-exo datasets. We show that the increased resolution reached by peakzilla is highly relevant, as closely spaced Twist binding sites are strongly enriched in transcriptional enhancers, suggesting a signature to discriminate functional from abundant non-functional or neutral TF binding. Peakzilla is easy to use, as it estimates all the necessary parameters from the data and is freely available. AVAILABILITY AND IMPLEMENTATION The peakzilla program is available from https://github.com/steinmann/peakzilla or http://www.starklab.org/data/peakzilla/. CONTACT stark@starklab.org. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
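The resolution problem peakzilla targets — two binding sites whose read pileups merge into one broad peak — can be illustrated with a toy sketch. The Python snippet below is not peakzilla's algorithm (which, per the abstract, estimates the necessary parameters from the data itself); it is a minimal, hypothetical illustration of separating nearby summits by local-maximum detection on a smoothed coverage track, assuming reads have already been piled into a per-base coverage array and that NumPy/SciPy are available.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

def resolve_close_summits(coverage, smooth_sigma=10, min_height=20, min_distance=100):
    """Toy sub-peak resolution: report local maxima of a smoothed coverage track.

    coverage     : hypothetical 1D array of per-base read coverage
    smooth_sigma : Gaussian smoothing width (bp)
    min_height   : minimum smoothed coverage at a summit
    min_distance : minimum spacing (bp) between reported summits
    """
    smoothed = gaussian_filter1d(np.asarray(coverage, dtype=float), smooth_sigma)
    summits, props = find_peaks(smoothed, height=min_height, distance=min_distance)
    return summits, props["peak_heights"]

# Two simulated binding sites 200 bp apart, blurred by ~60 bp fragment spread:
rng = np.random.default_rng(0)
x = np.arange(2000)
profile = 80 * np.exp(-((x - 900) ** 2) / (2 * 60 ** 2)) \
        + 60 * np.exp(-((x - 1100) ** 2) / (2 * 60 ** 2))
coverage = rng.poisson(profile)

summits, heights = resolve_close_summits(coverage)
print(summits)  # expected: two summits, near positions 900 and 1100
```

With broader fragments (larger effective blur) the two pileups merge into a single maximum, which is why shorter fragments and higher-resolution peak callers matter for closely spaced sites.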
For too long, anarchist projects have been mismanaged by arrogant fantasies of mass. We have unconsciously adopted the Statist, Capitalist and Authoritarian belief that “bigger equals better” and that we must tailor our actions and groups towards this end. Despite our intuitive understandings that large organizations rarely accomplish more than small, tight groups working together, the desire for mass remains strong. We must re-examine how we organize projects in order to awake from the nightmare of over-structure that inevitably leads to bureaucracy, centralization and ineffective anarchist work. This article suggests a few ideas on how anarchists can reject the trap of mass and reinvent ourselves, our groups and our work: from local community activities to large revolutionary mobilizations. The rejection of mass organizations as the be-all, end-all of organizing is vital for the creation and rediscovery of possibilities for empowerment and effective anarchist work.

The Tyranny of Structure

Most mass structures are a result of habit, inertia and the lack of creative critique. Desire for mass is accepted as common sense in the same way it is ‘common sense’ that groups must have leaders, or that they must make decisions by voting. Even anarchists have been tricked into accepting the necessity of super-structures and large organizations for the sake of efficiency, mass, or unity. These super-structures have become a badge of legitimacy and they are often the only conduits by which outsiders, whether the media, the police or other leftists, can understand us. The result is an alphabet soup of mega-groups which largely exist to propagate themselves and, sadly, do little else. Unfortunately, we haven’t just been tricked into accepting superstructures as the overriding venue of our work: many of us have gone along willingly, because the promise of mass is a seductive one. Large coalitions and super-structures have become the coin of the realm not only for leftist groups in general but also for anarchist enterprises. They appeal to activists’ arrogant fantasies of mass: the authoritarian impulse to be leading (or at least be part of) a large group of people that reinforce and legitimize our deeply held ideologies and beliefs. Even our best intentions and wildest dreams are often crowded out by visions of the black-clad mob storming the Bastille or the IMF headquarters. The price of the arrogant dream of mass is appallingly high and the promised returns never come. Super-structures, which include federations, centralized networks and mass organizations, demand energy and resources to survive. They are not perpetual motion machines which produce more energy than what is poured into them. In a community of limited resources and energy like ours, a super-structure can consume most of these available resources and energies, rendering the group ineffective. Mainstream non-profits have recently illustrated this tendency. Large organizations like the Salvation Army commonly spend 2/3 of their monies (and even larger amounts of their labor) on simply maintaining their existence: officers, outreach, meetings and public appearances. At best, only 1/3 of their output actually goes to their stated goals. The same trend is replicated in our political organizations. We all know that most large coalitions and super-structures have exceedingly long meetings. Here’s a valuable exercise: The next time you find yourself bored by an overlong meeting, count the number of people in attendance.
Then multiply that number by how long the meeting lasts: this will give you the number of person-hours devoted to keeping the organization alive. Factor in travel time, outreach time and the propaganda involved in promoting the meeting, and that will give you a rough estimate of the amount of activist hours consumed by the greedy maw of the superstructure. After that nightmarish vision, stop and visualize how much actual work could be accomplished if this immense amount of time and energy were actually spent on the project at hand instead of what is so innocently referred to as ‘organizing’.

Affinity or Bust

Not only are super-structures wasteful and inefficient, but they also require that we mortgage our ideals and affinities. By definition, coalitions seek to create and enforce agendas. These are not merely agendas for a particular meeting but larger priorities for what type of work is important. Within non-anarchist groups, this prioritization often leads to an organizational hierarchy to ensure that all members of the group promote the overall agenda. A common example is the role of the media person or ‘spokesman’ (and it is almost always a man) whose comments are accepted as the opinion of dozens, hundreds or sometimes thousands of people. In groups without a party line or platform, we certainly shouldn’t accept any other person speaking for us — as individuals, affinity groups or collectives. While the delusions of media stars and spokespeople are merely annoying, superstructures can lead to scenarios with much graver consequences. In mass mobilizations or actions, the tactics of an entire coalition are often decided by a handful of people. Many of the disasters of particular recent mobilizations can be squarely blamed on the centralization of information and tactical decisions in a tiny cadre of individuals within the larger coalition/organization (which might include dozens of collectives and affinity groups). For anarchists, such a concentration of influence and power in the hands of a few is simply unacceptable. It has long been a guiding principle of anarchist philosophy that people should engage in activities based on their affinities and that our work should be meaningful, productive and enjoyable. This is the hidden benefit of voluntary association. It is arrogant to believe that members in a large structure, which again can number in the hundreds or thousands of people, should all have identical affinities and ideals. It is arrogant to believe that through discussion and debate, any one group should convince all the others that their particular agenda will be meaningful, productive and enjoyable for all. Due to this nearly impossible situation, organizations rely on coercion to get their agendas accepted by their membership. The coercion is not necessarily physical (like the State) or based on deprivation (like Capitalism) but based on some sense of loyalty or solidarity or unity. This type of coercion is the stock in trade of the vanguard. Organizations spend a significant amount of their time at meetings trying to convince you that your affinities are disloyal to the greater organization and that your desires and interests obstruct or remove you from solidarity with some group or another. When these appeals fail, the organization will label your differences as obstructionist or as breaking ‘unity’ — the hobgoblin of efficiency. Unity is an arrogant ideal which is too often used against groups who refuse to cede their autonomy to a larger super-structure.
Many anarchists whose primary work is done in large organizations often never develop their own affinities or skills and instead do work based on the needs of super-structures. Without affinity groups or collective work of their own, activists become tied to the mass abstract political goals of the organization, which leads to even greater inefficiency and the ever-present “burn-out” that is so epidemic in large coalitions and super-structures.

Liberty, Trust and True Solidarity

“All Liberty is based on Mutual Trust” — Sam Adams

If we seek a truly liberated society in which to flourish, we must also create a trusting society. Cops, armies, laws, governments, religious specialists and all other hierarchies are essentially based on mistrust. Super-structures and coalitions mimic this basic distrust that is so rampant and detrimental in the wider society. In the grand tradition of the Left, large organizations today feel that due to their size or mission, they have a right to micromanage the decisions and actions of all their members. For many activists, this feeling of being part of something larger than themselves fosters an allegiance to the organization above all. These are the same principles that foster nationalism and patriotism. Instead of working through and building initiatives and groups that we ourselves have created and that are based in our own communities, we work for a larger organization with diluted goals, hoping to convince others to join us. This is the trap of the Party, the three-letter-acronym group and the large coalition. In large groups, power is centralized, controlled by officers (or certain working groups) and divvied out, as it would be done by any bureaucratic organization. In fact, a great deal of the group’s energy is devoted to guarding this power from others in the coalition. In groups which attempt to attract anarchists (such as anti-globalization coalitions) this centralization of power is transferred to certain high-profile working groups such as ‘media’ or ‘tactical’. Regardless of how it appears on the outside, superstructures foster a climate in which tiny minorities have disproportionate influence over others in the organization. As anarchists, we should reject all notions of centralized power and power hoarding. We should be critical of anything that demands the realignment of our affinities and passions for the good of an organization or abstract principle. We should guard our autonomy with the same ferocity with which the super-structure wishes to strip us of it. Mutual aid has long been the guiding principle by which anarchists work together. The paradox of mutual aid is that we can only protect our own autonomy by trusting others to be autonomous. Super-structures do the opposite and seek to limit autonomy and work based on affinity in exchange for playing on our arrogant fantasies and the doling out of power. Decentralization is the basis not only of autonomy (which is the hallmark of liberty), but also of trust. To have genuine freedom, we have to allow others to engage in their work based on their desires and skills while we do the same. We can hold no power over them or try to coerce them into accepting our agenda. The successes that we have in the streets and in our local communities almost always come from groups working together: not because they are coerced and feel duty-bound, but out of genuine mutual aid and solidarity. We should continue to encourage others to do their work in coordination with ours.
In our anarchist work, we should come together as equals: deciding for ourselves with whom we wish to form affinity groups or collectives. In accordance with that principle, each affinity group would be able to work individually with other groups. These alliances might last for weeks or for years, for a single action or for a sustained campaign, with two groups or two hundred. Our downfall is when the larger organization becomes our focus, not the work it was created for. We should work together, but only with equal status and with no outside force, neither the state, god nor some coalition, determining the direction or shape of the work we do. Mutual trust allows us to be generous with mutual aid. Trust promotes relationships where bureaucracies, formal procedures and large meetings promote alienation and atomization. We can afford to be generous with our limited energies and resources while working with others because these relationships are voluntary and based on a principle of equality. No group should sacrifice its affinity, autonomy or passions for the privilege of working with others. Just as we are very careful about whom we would work with in an affinity group, we should not offer to join in coalition with groups with whom we do not share mutual trust. We can and should work with other groups and collectives, but only on the basis of autonomy and trust. It is unwise and undesirable to demand that any particular group must agree with the decisions of every other group. During demonstrations, this principle is the foundation of the philosophy of “diversity of tactics”. It is bizarre that anarchists demand diversity of tactics in the streets but then are coerced by calls for ‘unity’ in these large coalitions. Can’t we do better? Fortunately, we can.

Radical Decentralization: A New Beginning

So let us begin our work not in large coalitions and super-structures but in small affinity groups. Within the context of our communities, the radical decentralization of work, projects and responsibility strengthens the ability of anarchist groups to thrive and do work which best suits them. We must reject the default of ineffective, tyrannical super-structures as the only means to get work done and must strengthen and support existing affinity groups and collectives. Let us be as critical of the need for large federations, coalitions and other super-structures as we are of the State, religion, bureaucracies and corporations. Our recent successes have defied the belief that we must be part of some giant organization “to get anything done”. We should take to heart the thousands of anarchist DIY projects being done around the world outside super-structures. Let us come to meetings as equals and work based on our passions and ideals, and then find others with whom we share these ideals. Let us protect our autonomy and continue to fight for liberty, trust and true solidarity.
A Simple Method For Real-Time Detection Of Voltage Sags and Swells in Practical Loads Abstract This paper presents an algorithm for real-time detection of short-duration voltage disturbances that occur in single-phase power supply systems. The method involves a sliding, overlapping window of fixed size for measuring the RMS (root mean square) value of the voltage signal. The RMS value of a fixed number of voltage samples is computed over one cycle, and this value is updated every sample period by the overlapping window. A trigger signal is generated in response to the comparison of the RMS values between two consecutive cycles of the voltage signal. The trigger thus obtained can be used to initiate the operation of the main control system of a Dynamic Voltage Restorer (DVR) used in conjunction with the detection unit to mitigate the sag or swell, or as a triggering function for a digital recorder to record the occurrence of the disturbance. The modeling of the algorithm and the testing of its functioning were carried out in the Matlab/Simulink environment. The reliability of the algorithm under noise and under changes in the nature of the sag or swell in the voltage signal was tested on real data acquired during events such as the starting of an induction motor, intermittent loading of a welding transformer, and dynamic loading of a captive diesel generator. The TMS320C6713 digital signal processor used for the implementation of the algorithm was programmed through the Matlab embedded link for Code Composer Studio. The experimental setup consists of a TMS320C6713 DSP for trigger generation and a personal computer for recording the parameters, and it was successfully tested for real-time disturbance detection.
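As a rough illustration of the windowed-RMS idea described in the abstract, the sketch below computes a one-cycle RMS that is refreshed every sample and raises a trigger when the RMS leaves a nominal band and differs from the value one cycle earlier. This is a minimal sketch, not the paper's Matlab/Simulink or DSP implementation; the sampling rate, thresholds, and function names are assumptions chosen for illustration.

```python
import numpy as np

FS = 10_000          # sampling rate (Hz) -- assumed for illustration
F_LINE = 50          # nominal line frequency (Hz)
N = FS // F_LINE     # samples per cycle (window length)

def sliding_rms(v, n=N):
    """RMS over a one-cycle window, updated every sample (overlapping windows)."""
    v2 = np.asarray(v, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(v2)))
    return np.sqrt((csum[n:] - csum[:-n]) / n)

def detect_sag_swell(v, v_nom_rms, sag=0.9, swell=1.1):
    """Return sample indices where the one-cycle RMS leaves the normal band.

    A trigger is raised when the current RMS falls below `sag` or rises above
    `swell` times nominal AND differs from the RMS one cycle earlier.
    """
    rms = sliding_rms(v)
    trigger = np.zeros(rms.size, dtype=bool)
    trigger[N:] = ((rms[N:] < sag * v_nom_rms) | (rms[N:] > swell * v_nom_rms)) \
                  & (np.abs(rms[N:] - rms[:-N]) > 0.02 * v_nom_rms)
    return np.flatnonzero(trigger)

# Example: 230 V RMS supply with a 40% sag between 0.2 s and 0.3 s
t = np.arange(0, 0.5, 1 / FS)
amp = np.where((t > 0.2) & (t < 0.3), 0.6, 1.0) * 230 * np.sqrt(2)
v = amp * np.sin(2 * np.pi * F_LINE * t)
print(detect_sag_swell(v, 230)[:5])  # first samples at which the sag onset is flagged
```

The cumulative-sum trick keeps the per-sample update cost constant, which is the property that makes a one-cycle overlapping window practical on a fixed-point or floating-point DSP.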
Interstitial Lung Disease in Undifferentiated Forms of Connective Tissue Disease The intersection of interstitial lung disease (ILD) and connective tissue disease (CTD) is complex and commonly includes scenarios in which ILD is identified in patients with pre-existing, well-characterized forms of CTD, is the presenting manifestation of a well-characterized form of CTD, or arises within the context of a poorly defined, “undifferentiated” form of CTD. Determining that an ILD is CTD-associated is important because this knowledge often impacts management and prognosis. Identifying occult CTD in patients with an “idiopathic” ILD can be challenging and requires a comprehensive, often multidisciplinary, evaluation. There is much uncertainty and controversy surrounding undifferentiated forms of CTD-associated ILD, and prospective studies are needed to provide a better understanding of the natural history of these cohorts, to establish how best to manage them, and to determine whether they behave similarly to definite forms of CTD-associated ILD.
Electrocardiographic tests were carried out in 281 metallurgical industry workers exposed for 3-20 years to mechanical and acoustic vibration exceeding allowable standards. The control group consisted of 120 subjects unexposed to physical or chemical hazards. In 82 subjects exposed to vibration and noise, 43 workers exposed to noise alone and 30 subjects of the control group, the amount of 3-methoxy-4-hydroxymandelic acid (3M4HM) before and after work was determined. Repolarization disturbances of a neurovegetative nature (V-3) were significantly more frequent, particularly in those simultaneously exposed to general and local vibration. In those exposed exclusively to noise, no ECG changes were found. In addition, increased 3M4HM secretion, due to both local and whole-body transmitted vibration, was found. Isolated noise does not induce an increase in 3M4HM secretion.
Like the way it expands the cops’ ability to surreptitiously record citizens during the course of an investigation. Instead of having to go to a judge for a warrant, in many cases all the police now need is the permission of a state’s attorney. The prospect of increased eavesdropping without a warrant is serious enough that the ACLU, which initially helped craft the legislation, now opposes the new law. It’s “too great an expansion of police power,” says ACLU spokesman Edwin C. Yohnka. As for conversations between ordinary citizens, the new law makes it illegal to surreptitiously record any communication when one or more parties has a “reasonable expectation” of privacy. What’s a “reasonable” expectation? “Any expectation recognized by law.” If you don’t know what that might cover (and who but a legal expert would?), you’re back to requiring express permission before you hit record if you want to be safe from potential prosecution. And what about the situation that got Chris Drew in trouble? In the era of Ferguson and “I can’t breathe,” can Illinois citizens now record police officers in action? The ACLU says yes: the new law “respects” an appellate court ruling that cops on duty have “no reasonable expectation of privacy in their conversations in public places.” You won’t find that language in the law itself, however. State representative Elaine Nekritz, who sponsored the bill in the house, says that’s no accident. We made a decision “not to specifically state that citizens can record cops,” Nekritz says. “I thought if we tried to describe every instance in which you either were or were not committing eavesdropping, we would run into more trouble than we’ve created by having this more general standard. We just can’t write every circumstance in which someone has a reasonable expectation of privacy.” Like the definition of guilt beyond a reasonable doubt, she says, “we know it when we see it.”
CP Violation in B → π+π− and the Unitarity Triangle We analyze the extraction of weak phases from CP violation in B → π+π− decays. We propose to determine the unitarity triangle (ρ̄, η̄) by combining the information on mixing-induced CP violation in B → π+π−, S, with the precision observable sin 2β obtained from the CP asymmetry in B → ψKS. It is then possible to write down exact analytical expressions for ρ̄ and η̄ as simple functions of the observables S and sin 2β, and of the penguin parameters r and φ. As an application, clean lower bounds on η̄ and 1 − ρ̄ can be derived as functions of S and sin 2β, essentially without hadronic uncertainty. Computing r and φ within QCD factorization yields precise determinations of ρ̄ and η̄, since the dependence on r and φ is rather weak. It is emphasized that the sensitivity to the phase φ enters only at second order and is extremely small for moderate values of this phase, as predicted in the heavy-quark limit. Transparent analytical formulas are further given and discussed for the parameter C of direct CP violation in B → π+π−. We also discuss alternative ways to analyze S and C that can be useful if new physics affects Bd–B̄d mixing. Predictions and uncertainties for r and φ in QCD factorization are examined in detail. It is pointed out that a simultaneous expansion in 1/mb and 1/N leads to interesting simplifications. At first order infrared divergences are absent, while the most important effects are retained. Independent experimental tests of the factorization framework are briefly discussed. PACS numbers: 11.30.Er, 12.15.Hh, 13.25.Hw buchalla@theorie.physik.uni-muenchen.de safir@theorie.physik.uni-muenchen.de
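For orientation, the observables referred to in the abstract are conventionally defined through the time-dependent CP asymmetry, and the apex of the unitarity triangle is related to the angles as follows. Sign conventions vary between papers; the expressions below are the standard textbook ones, not necessarily the exact conventions of this work:

```latex
a_{CP}(t) \;=\;
\frac{\Gamma(\bar B^0(t)\to\pi^+\pi^-)-\Gamma(B^0(t)\to\pi^+\pi^-)}
     {\Gamma(\bar B^0(t)\to\pi^+\pi^-)+\Gamma(B^0(t)\to\pi^+\pi^-)}
\;=\; S\,\sin(\Delta m_d\,t)\;-\;C\,\cos(\Delta m_d\,t),
\qquad
\sin 2\beta \simeq S_{\psi K_S},
\qquad
\tan\beta=\frac{\bar\eta}{1-\bar\rho},
\qquad
\tan\gamma=\frac{\bar\eta}{\bar\rho}.
```

Once the penguin parameters r and φ are fixed (for instance from QCD factorization), S becomes a known function of the weak phases, so the two measurements S and sin 2β suffice to determine (ρ̄, η̄).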
Children's Original Thinking: An Empirical Examination of Alternative Measures Derived From Divergent Thinking Tasks Abstract Children's creative potential is often assessed using cognitive tests that require divergent thinking, such as the Torrance Tests of Creative Thinking (TTCT; E. P. Torrance, 1974, 1976, 1990). In this study the authors investigated the effect of various scoring systems on the originality index, evaluating the high intercorrelation of fluency and originality measures found in the TTCT scoring system and the applicability of TTCT scoring norms over time and across age groups. In 3 studies, the originality of elementary school children was measured using TTCT norms and various sample-specific scoring methods with the TTCT Unusual Uses of a Box test as well as social-problem-solving tasks. Results revealed an effect of scoring technique on creativity indices as well as on the reliability of originality scores and the relationship between originality and other ability measures. The usefulness of the various measures for understanding children's original thinking are discussed.
Challenges to Preparing and Conducting Christian Worship in Nursing Homes This essay explores and details the various challenges that are faced in the design of Christian worship within the nursing home environment. People in this setting have special needs-physical, mental, social, and spiritual. The physical plant itself demands special considerations when preparing for worship. Difficulties are defined and solutions presented for those who would provide intentional and accessible Christian worship that allows for action and response from the community in these settings. Sources include Christian liturgical reference materials and on-site interviews with chaplains and activity directors at three care facilities in southeast Michigan.
Ned Ryun and Becca Parker Ryun are a telegenic couple, who star in a heart-wrenching 65-second advertisement that targets North Carolina’s incumbent senator, Democrat Kay Hagan. The Ryuns tell the story of their daughter, Charlotte, who was born severely premature—at 24 weeks gestation—but survived and thrived. “I didn’t think, at 24 weeks, you could have a viable baby,” Becca tells the interviewer. “It’s a human being. It wants to live. It has a soul. It has a will. It has a desire to live,” says her husband, Ned. The emotive video then shows images of the couple’s smiling daughter, as Ned says, “For those that are advocating late-term abortions, look at my daughter.” The ad finishes with the message that Kay Hagan is “too extreme for North Carolina,” due to her support for later abortions. It’s a slick production, and a moving story, paid for by the Susan B. Anthony List, a leading anti-choice group, which announced last month that it was going on another ad-buying spree of up to $100,000, targeting Hagan, who is facing a tough battle to retain her seat in this year’s midterm elections. The Susan B. Anthony List is known for misleading ads. In fact, earlier this year, it went to the U.S. Supreme Court to defend its right to lie in political advertisements. So it may come as a small surprise that the ad tells only part of the story of the Ryuns, presented as an all-American couple, who could well be from North Carolina. In reality, Ned Ryun has a long history as a Republican operative with close links to the Tea Party and the Koch brothers—context that may well change how viewers see the conclusions he and Becca drew from what was undoubtedly a deeply emotional, personal experience. Neither Ned Ryun nor the Susan B. Anthony List returned Rewire’s requests for comment. Ned and Becca Ryun don’t live in North Carolina. The couple lives in Purcellville, Virginia, with their four children. Ned’s father is Jim Ryun, the former Republican U.S. Representative from Kansas who served ten years in Congress. Jim Ryun is best known for his achievements as an Olympic athlete (he was a Silver Medalist in the 1,500-meter race in the 1968 Mexico City games), and for his consistently conservative views. For instance, Jim Ryun voted against No Child Left Behind, the Bush administration’s marquee education law that was intended to boost poor-performing schools. People of all political persuasions objected to the law, but not for Ryun’s reasons: He voted “no” on the basis that states should have more control over education policy and rejected the need for additional funds. This, despite the fact that Kansas has some of the nation’s lowest performing public schools, and the greatest race-based inequality in educational opportunity.
He also voted to ban adoptions by same-sex couples, to ban family planning as part of US foreign aid, and against an array of reproductive rights measures. His voting record earned him a zero rating from NARAL. Jim Ryun now runs Christian running camps, where attendees “learn how to apply racing, training strategies, and as well as hear from top Christian athletes who will share how their faith has helped them reach their fullest potential.” It’s not just Papa Ryun who is immersed in conservative politics. Ned himself is a former speechwriter for George W. Bush, while his twin brother, Drew, was a deputy director at the Republican National Committee. Along with their dad, the Ryun brothers have turned Tea Party politics into a family business. Drew and Jim Ryun are leaders of the ultra-conservative Madison Project, a group whose views of an array of things, including Europe, read more like the satirical news site The Onion. Referring to many European countries’ policies on abortion, the Madison Project’s website says: In Europe, the duly elected representatives in parliament decided the issue. Being that Europe is a morally decedent leftist utopia, they elected politicians who reflect their values. Ned heads up American Majority Inc., a 501(c)(3) nonprofit entity whose goal is to “create a national political training institute dedicated to recruiting, identifying, training and mentoring potential political leaders.” While it claims to be non-partisan, American Majority Inc. says it is committed to promoting “individual freedom through limited government and the free market.” In reality, that has mostly meant the Tea Party. Ned even wrote a monthly column in The Spectator called “With the Tea Partiers.” Like the brothers, American Majority Inc. has a twin—a 501(c)(4) called American Majority Action, which is led by Drew. Together, the American Majority organizations have donated to numerous Tea Party groups across the country, according to the entities’ tax filings. In 2010 the American Majority apparatus gave $520,000 to radical groups, including $22,500 to the St. Louis Tea Party in Missouri; $5,000 to the Jefferson County Tea Party in Missouri; and $275,000 to Grassroots Outreach, a Tempe, Arizona-based firm that has been linked to voter fraud. They have also made multiple donations to so-called 9/12 Project associations. The 9/12 Project is linked to Glenn Beck, and its goals include “tak[ing] over the Republican Party.” The American Majority nonprofits are licensed to do business in at least 34 states, and have drawn controversy for tactics such as paying field staffers in Ohio up to $10 an hour to get out the vote during Mitt Romney’s 2012 campaign. Traditionally, field canvassers have been volunteers. Just as interesting as who gets money from American Majority is who has donated to the Ryuns’ political operations. An analysis by Rewire, based on numbers collected by the Center for Media and Democracy, shows that American Majority received $3.9 million from DonorsTrust and its affiliated entity, the Donors Capital Fund, between 2010 and 2012. That puts American Majority among the top 15 recipients of DonorsTrust funds. DonorsTrust is one of the largest pass-through entities for conservative giving. Essentially a legal form of money laundering, DonorsTrust facilitates contributions from anonymous donors to be channeled toward conservative groups they specify. The Center for Media and Democracy names DonorsTrust as a key component of the Koch brothers’ political web. 
Between 2010 and 2012, the Donors entities distributed $252 million to a wide range of groups, including the Koch brothers-affiliated Americans for Prosperity Foundation, the Mercatus Center (a bastion of libertarianism, partly founded by the Koch brothers, according to Daniel Schulman’s recently published history of the Koch family, Sons of Wichita), and the right-wing Franklin Center for Government and Public Integrity. And the Ryuns’ connections to the “Kochtopus” don’t end there. According to The American Spectator, the idea for the American Majority groups “was conceived” by the Sam Adams Alliance, an organization that was active from 2007 through 2011, whose mission was to encourage “citizen engagement in politics, with specialties in studying and training citizen activists and bloggers.” The alliance was headed by long-time ultra-conservative and libertarian Eric O’Keefe, who has been close to the Koch brothers for decades. According to his online biography, O’Keefe worked on the Libertarian Party presidential campaign in 1980, in which David Koch was drafted by his older brother, Charles, into running as the vice presidential candidate. O’Keefe became close with another member of the Kochs’ inner circle, Ed Crane, who ran at the top of the Libertarian Party ticket and then spent the next few decades leading the Cato Institute, the extreme “free market” think tank that was almost entirely funded by the Kochs. O’Keefe joined Cato’s board in 1988. According to the Center for Media and Democracy, O’Keefe also worked for a group called Citizens for a Sound Economy, which was the predecessor to the Koch’s new funding vehicle, Americans for Prosperity. Thanks to O’Keefe’s ideas about training citizen activists, the Ryuns are now emerging as potential rivals to Karl Rove, and his enormous political machine, as masters of the “shadow” conservative movement, where power is held not by elected representatives, or even by the Republican National Committee, but by a cadre of highly paid consultants and deep-pocketed donors. In addition to American Majority, the brothers have established at least two other entities that feed into the extreme right’s political infrastructure. In 2012, American Majority reported using nearly $900,000 from its nonprofits to support a new outfit, called Media Trackers, a site that says it is “dedicated to media accountability, government transparency, and quality fact-based journalism.” In reality, Media Trackers has made claims about voter harassment in Wisconsin that PolitiFact later found were “mostly false,” and the group was active in attempting to undermine the Wisconsin effort to recall Gov. Scott Walker. Media Trackers is “a project” of another nonprofit entity with the Orwellian name Greenhouse Solutions. Tax filings show that Ned’s brother, Drew, and their father, Jim, are on the board. (It’s noteworthy that of the three bills noted by American Majority Action in its 2011 tax filings as particular lobbying targets, one was the NATGAS Act, a bipartisan measure intended to support natural gas. The name “Greenhouse Solutions” appears to literally be the opposite of what the Ryuns work toward.) However, perhaps the Ryuns’ most promising new entity is a voter database company known as Gravity. (It goes by iterations of that name—sometimes called Political Gravity, and sometimes, Voter Gravity.) 
Political parties increasingly rely on sophisticated voter databases to win elections, and they’re willing to pay high premiums for the best data and for those who know how to wield it. In the aftermath of Mitt Romney’s loss in the 2012 presidential race, the relative inferiority of his team’s database—known as Orca—became a key sore point for Republicans. Since then, competing teams in the shadow conservative world have been racing to build new systems to match up with the Democratic Party’s data tools—and with each other. Politico last year reported that the Koch brothers have established a political data company called i360, while Karl Rove’s group, Liberty Works, is also putting together a platform—each attempting to build the dominant conservative data tool. And then there are the Ryun twins, whose Gravity platform was expected to pass 1 million voter contacts by late 2012, propelling them into the financial center of right-wing politics. The company’s website boasts that “while Romney’s ‘Orca’ was going belly-up on Election Day, another group of conservatives were enjoying the fruits of labor that began long before voters headed to the polls.” Increasingly, they are being taken seriously as highly connected conservative heavyweights. While none of this detracts from Ned and Becca Ryun’s experience with the premature birth of their daughter, it does change the way viewers might see the conclusions that the couple drew from that experience. The Ryuns are far from being “everyday” North Carolinians. They are ensconced in the ultra-conservative movement, and their income derives from convincing the public of their very particular worldview. It would be fair to say that, if North Carolina voters knew the reality of who the Ryuns are, they’d be less inclined to see Kay Hagan as an “extremist,” and more likely to look closely at what the Ryuns believe. Moreover, Ned Ryun’s failure to disclose his conflicts of interest raises questions about how much trust can be placed in the views he expresses. Not only did he and the Susan B. Anthony List neglect to mention Ned’s extensive Koch brothers connections, but neither group mentioned that they had worked together in the past, when they both helped to launch Ohio Life and Liberty in October 2012. Nor did Ned disclose that the Susan B. Anthony List had contributed $28,000 to his father’s political campaign. (Koch Industries contributed more than $86,000.) But Ned Ryun’s failure to disclose even extends to his interest in Voter Gravity. On the company’s Facebook page, a reviewer purrs about the quality of Gravity’s service: “It was a bit of a no brainer for me to use Voter Contact: They saved me lots of money and got me a better product.” The reviewer gave Voter Gravity a five-star rating. Political Gravity’s account then replies, “Thank you Mr. Ryun.” That’s right. That reviewer was Ned Ryun, who replies—possibly to himself—“You bet. This is good stuff.” If the Ryuns’ entity, Media Trackers, is intended to police truth in the media, perhaps they should take a look at themselves. Surely they’d see no conflict of interest with that, either. Sofia Resnick contributed research to this report.
Bangkok: The dangerous stand-off over an oil rig in a disputed area of the South China Sea has pushed Vietnamese authorities to crack down on fierce anti-China protests and forcibly break up small demonstrations in two cities. Witnesses in southern Ho Chi Minh City said police dragged away several demonstrators from a park in the city centre. Vietnam’s Prime Minister Nguyen Tan Dung ordered an end to all “illegal protests” amid expectations of more anti-China unrest in Ho Chi Minh City and Hanoi, as Vietnam and China feuded over the placement over a $US1 billion ($1.06 billion) Chinese oil rig in the South China Sea. China, meanwhile, had evacuated more than 3000 of its nationals via flights and was sending five ships to evacuate more, the state-run Xinhua news service reported on Sunday.
ON THE ACCELERATION AND JERK IN MOTION ALONG A SPACE CURVE WITH QUASI-FRAME IN EUCLIDEAN 3-SPACE In this paper, we consider a particle moving on a space curve in the Euclidean 3-space and resolve its acceleration and jerk vectors according to the quasi-frame. In this resolution, by applying Siacci's theorem, we state the acceleration vector as the sum of its tangential and radial components, and obtain the jerk vector along the tangential direction and the radial directions in the osculating and rectifying planes. On the basis of the jerk vector formula, we give the maximum admissible speed on a space curve at all trajectory points. Furthermore, we present illustrative examples to explain how our results work.
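For reference, in the classical Frenet frame {T, N, B} (rather than the quasi-frame used in the paper), a particle with speed v = ds/dt, curvature κ, and torsion τ has the standard decompositions below; the quasi-frame resolution plays the analogous role, and Siacci's theorem further splits these components into radial and tangential parts.

```latex
\mathbf{a} \;=\; \dot{v}\,\mathbf{T} \;+\; \kappa v^{2}\,\mathbf{N},
\qquad
\mathbf{j} \;=\; \frac{d\mathbf{a}}{dt}
\;=\; \bigl(\ddot{v}-\kappa^{2}v^{3}\bigr)\mathbf{T}
\;+\; \bigl(3\kappa v\dot{v}+\dot{\kappa}\,v^{2}\bigr)\mathbf{N}
\;+\; \kappa\tau v^{3}\,\mathbf{B}.
```

Because the curvature and torsion are fixed by the curve's geometry, bounding the magnitude of the jerk at a trajectory point constrains the admissible speed there, which is the kind of maximum-speed statement the abstract refers to.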
Book Review: Governing Narratives: Symbolic Politics and Policy Change How is public policy implemented, and what informs its implementation by public administrators? Debate in the academic literature over policy development and implementation issues is long-standing; a major consensus is that policy making is accomplished rationally, by calculating benefits and risks. In spite of challenges to this consensus, developing a coherent counterapproach has proven extremely difficult. The persistent scholarly search for new explanations of the differences between policy making and implementation processes has recently yielded what is described in the academic literature as the narrative approach (Lindquist, 2009; Pepper & Wildy, 2009). New books attempt to explain how it can help policy makers, scholars, and students of public policy and administration understand options in the process of policy development and implementation. Sandford Borins, of the University of Toronto, not long ago published his renowned book—Governing Fables: Learning from Public Sector Narratives and Innovating with Integrity—which explains the repercussions of storytelling for the development and implementation of public policy. Partly through expanding the role of narratives, or storytelling, on policy analysis and implementation, Hugh T. Miller’s new book, Governing Narratives: Symbolic Politics and Policy Change, has enhanced our understanding of the importance of policy discourse from a narrative perspective. It has, further, added a significant voice to those speaking for the limitations of rationalism in policy making and implementation. According to Miller, the narrative approach seeks to “get beneath, around, and behind the way the media, society, lobbyists, public relations consultants, and people in government discuss public policy” (p. 18). As Miller himself rightly points out, the essence of the approach, and the purpose of the book, is to cast doubt on the “modern image of a rational, autonomous, intentional actor”: the scientific approach, in other words, to policy analysis that has dominated the field since the era of scientific management government and the emergence of the economic idea propounded by classical policy analysis scholars. Miller replaces that actor with “a decentred subject whose personage is inscribed by childhood experiences, family practices, and educational background, and many other cultural influences” (p. xi). It is a way to understand what and how policy discourses are shaped, and it attends to what may be described as the idiosyncrasies of the personalities involved in advancing any course of action. Such a decentered subject, he notes, is not a product of the rational school of thought, but is, rather, “a product of the inscriptions accumulated through socialization and experience, by cultural symbolization, and genetic inheritance” (p. 14). The preface sets the tone of the book. Thereafter, Miller proceeds to demonstrate the importance of narrative’s governing of the policy making and implementation spheres by examining some critical concepts: he describes them in Chapter 1 as Words/Action. Here he draws the attention of the reader to critical issues in narratives, such as symbolic communication (p. 3).
On Episode 28, I bag with one clean shot, a perfectly healthy free-range Honnold. Alex sits down in a clean, well-lit place to expound on his life in the glare of the klieg lights. He tells us what it was like to lock eyes with the beautiful and mysterious Lara Logan, what it was like to lock eyes with the happy-go-lucky Steve Denny on a lonely night on El Cap, and what it's like to lock eyes with you, as he blows past on that hanging belay in Yosemite. From shy boy to pro-sesh hero, Alex lets us sit in on his world for an hour. Turns out, it's pretty fun and chill in the Republic of Honnoldlandia.
Alex on 60 Minutes (if you're lucky, with a Viagra commercial, or two)
Alex on No Way Jose, North Wash, UT vs. our friend JP Ouellet on No Way Jose
Want more Alex Honnold, you freak? Just google him and say goodbye to the afternoon!
Single-shot non-invasive three-dimensional imaging through scattering media We present a method for single-shot three-dimensional imaging through scattering media with a three-dimensional memory effect. In the proposed computational process, a captured speckle image is two-dimensionally correlated at different scales, and the object is three-dimensionally recovered with three-dimensional phase retrieval. Our method was experimentally demonstrated with a lensless setup and was compared with the multi-shot approach used in our previous work.
INTRODUCTION
Imaging through scattering media has long been studied for biomedical imaging, astronomical imaging, and other applications. Recently, optical sensing and control through strongly scattering media, which are difficult to handle with conventional approaches that rely on the existence of non-scattered light, have attracted interest in the field of optics and photonics. The rapidly growing computational power and the improved performance of optical elements for light control drive this area, and various methods have been reported. These methods are categorized into three types: feedback-based, inversion-based, and correlation-based. The feedback-based approach utilizes wavefront shaping behind or inside scattering media with an iterative feedback process based on an optimization algorithm. Issues with the feedback-based approach are the large number of feedback loops and the need for a probing process to measure the focusing state in every loop. The inversion-based approach, including time reversal and phase conjugation, senses and controls the optical distribution through scattering media by taking the inverse of a transmission matrix expressing the scattering process. These methods realize single-shot imaging and focusing without any feedback process. However, they need to probe the whole or part of the transmission matrix before the imaging and focusing stage. The correlation-based approach exploits the shift invariance of speckles, which is called the memory effect. In the correlation-based methods, an autocorrelation process is used to remove speckles and expose object signals in captured images. An advantage of the correlation-based approach is that it does not need the probing process, which is a drawback of the previous two approaches. As a result, the correlation-based approach has realized non-invasive imaging through scattering media. This approach has recently been extended to the three-dimensional case. Drawbacks of the existing methods for correlation-based three-dimensional imaging through scattering media include the need to capture multiple images and/or invasive optical processes. Thus, they are difficult to apply to imaging of dynamic and practical scenes. Here we present a method for tomographic reconstruction of a three-dimensional object from a single speckle image captured through scattering media without any invasive or probing process. We demonstrated the method experimentally with a lensless setup. Our method enhances the possibility and practicability of imaging through scattering media for a wide range of applications, including biomedicine, security, and industry.
METHOD
The optical setup in the proposed method is shown in Fig. 1. A three-dimensional object o is illuminated with spatially incoherent illumination through a first diffuser, and the light scattered by a second diffuser is captured by a lensless image sensor as i.
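For orientation, a hedged sketch of the shift-invariant imaging model and speckle-correlation identity that memory-effect methods of this kind rely on, written in the three-dimensional notation defined in the next passage. This is only the generic form; the paper's own equations also account for the axial scaling of the impulse response and may differ in detail:

```latex
% Generic memory-effect imaging model and speckle-correlation identity (a sketch,
% not necessarily the paper's exact equations)
\[
\begin{aligned}
i(\mathbf{r}_i) &= \int o(\mathbf{r}_o)\, h(\mathbf{r}_i - \mathbf{r}_o)\, d\mathbf{r}_o
                 = (o \otimes_{3\mathrm{D}} h)(\mathbf{r}_i),\\
i \star_{3\mathrm{D}} i &= (o \star_{3\mathrm{D}} o) \otimes_{3\mathrm{D}} (h \star_{3\mathrm{D}} h)
                 \approx o \star_{3\mathrm{D}} o,
\qquad
\mathcal{F}_{3\mathrm{D}}\!\left[\, o \star_{3\mathrm{D}} o \,\right] = |O|^{2},
\end{aligned}
\]
```

where the approximation uses the fact that the autocorrelation of the random impulse response h is sharply peaked, effectively a delta function.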
The relationship between the object and the captured speckle is three-dimensionally shift-invariant, based on the three-dimensional memory effect. The imaging process can then be written in terms of a scattering impulse response h, where r_i = (x_i, y_i, z_i) are the spatial coordinates in the sensor space and r_o = (x_o, y_o, z_o) are the spatial coordinates in the object space, respectively. The x- and y-axes are lateral to the image sensor, and the z-axis is axial to the image sensor. The origin of these coordinates is the center of the second diffuser. The impulse response h is laterally random and axially scaled with a scaling factor s, as expressed in Eq. (2). In our previous work on multi-shot three-dimensional imaging through scattering media, the image sensor is axially scanned to observe the three-dimensional speckle distribution i, which is three-dimensionally autocorrelated to remove the impact of the scattering process h. Here, ⋆_3D is the three-dimensional correlation, ⊗_3D is the three-dimensional convolution, F_3D denotes the three-dimensional Fourier transform, and O is the three-dimensional Fourier transform of o. The autocorrelation h ⋆_3D h is approximated by a delta function due to the laterally random and axially scaled distribution of h. The object signal o is recovered by a phase retrieval process for |O|^2. In this study, as shown in Fig. 2, a three-dimensional object is tomographically reconstructed from a single captured speckle image i_2D, which contains differently scaled impulse responses h depending on the object distances z_o, using the optical setup in Fig. 1. The captured speckle image is computationally and laterally scaled multiple times, instead of making multiple measurements while axially scanning the image sensor as in the previous work. This computational scaling process mimics the optical scaling process of Eq. (2) under axial scanning, where the scaling factor is monotonically increased or decreased. This process is exploited to search for the relative scales of the impulse responses in the original speckle image through the following correlation process, in which high correlations appear when the original and scaled impulse responses coincide, and vice versa. The scaled speckle images are computationally and laterally correlated as in Eq. (7), where ⋆_2D is the two-dimensional correlation and the superscript m = 0, 1, ..., M − 1 denotes the index of the scaling factor. In this study, o ⋆_3D o in Eq. (5) is approximated by c_k. The object o is three-dimensionally reconstructed from the correlation result c_k by three-dimensional phase retrieval. The phase retrieval process in this study follows the previous work on speckle-correlation-based imaging through scattering media. Two algorithms, called the error reduction algorithm and the hybrid input-output algorithm, are performed sequentially, as shown in Fig. 2. The second algorithm is a modified version of the first, and both use the iterative loop shown in Fig. 3. The difference between them is how the constraints are applied. First, the common aspects are described, and then the differences are explained. The iterative process of the error reduction algorithm and the hybrid input-output algorithm is shown in Fig. 3 and is described as follows:
1. The object's Fourier spectrum is initially given a three-dimensional random phase θ_n by O_n = (|O|^2)^(1/2) exp(jθ_n), where j is the imaginary unit, |O|^2 is the three-dimensional Fourier transform of c_k in Eq. (7), and the subscript n is the counter of the iteration, which is set to one.
2. O_n is three-dimensionally inverse Fourier transformed, and the result is set as the intermediately estimated object o'_n.
3. o'_n is rectified with constraints, which are described in the following paragraphs, and the result is the estimated object o_n at the n-th iteration.
4. o_n is three-dimensionally Fourier transformed, and the result is set as the intermediately estimated object's Fourier spectrum O'_n.
5. The argument of O'_n is extracted and used to replace the argument θ_n of the estimated object's Fourier spectrum O_n, and the counter n is incremented by one.
The rectifying process with constraints at Step 3 is where the error reduction algorithm and the hybrid input-output algorithm differ. In the error reduction algorithm, the intermediately estimated object o'_n at the n-th iteration is updated with the standard rule o_n(r_o) = o'_n(r_o) for r_o ∉ Γ and o_n(r_o) = 0 for r_o ∈ Γ, where Γ is the set of all spatial positions r_o that violate the constraints. In the hybrid input-output algorithm, the standard updating rule is o_n(r_o) = o'_n(r_o) for r_o ∉ Γ and o_n(r_o) = o_{n−1}(r_o) − β o'_n(r_o) for r_o ∈ Γ, where β is a feedback parameter. In the reconstruction according to the proposed scheme, as shown in Fig. 2, the hybrid input-output algorithm is performed first. In this algorithm, the feedback parameter β is decreased from 2.0 to 0.0 in intervals of 0.05, and the loop of Fig. 3 is iterated ten times for each β. Then, the error reduction algorithm is performed, using the result of the hybrid input-output algorithm as the initial estimate at Step 1, and the loop of Fig. 3 is iterated five hundred times. The constraints used here are realness, non-negativity, and the range of pixel intensities. Realness and non-negativity are introduced by using spatially incoherent illumination, such as a light-emitting diode (LED) or fluorescence. The range of pixel intensities suppresses some reconstruction artifacts. This phase retrieval has trivial ambiguities of spatial shift and conjugate inversion, which have been studied in the literature on phase retrieval.
EXPERIMENTAL DEMONSTRATION
The proposed method was demonstrated with three-dimensionally arranged point sources fabricated by a 3D printer (M2030TP manufactured by L-DEVO). The object had three levels, where the step height was 0.5 cm, and holes with a diameter of 0.4 mm were located at different positions on each level, as shown in Fig. 4(a). One hole was made on the front level, two diagonally arranged holes were made on the middle level, and two vertically arranged holes were made on the back level. As shown in Fig. 1, the object was located between two diffusers and was illuminated with an incoherent LED (M565L3 manufactured by Thorlabs, nominal wavelength: 565 nm, full width at half maximum of spectrum: 103 nm) through a bandpass filter (578NM X 16NM 25MM manufactured by Edmund Optics, central wavelength: 578 nm, full width at half maximum of spectrum: 22 nm) and a diffuser (LSD5PC10-5 manufactured by Luminit). The distance between the first diffuser and the object was 75 mm. Light passing through the object was scattered by another diffuser (LSD20PC10-5 manufactured by Luminit) and captured by a monochrome image sensor (hr29050MFLGEA manufactured by SVS-Vistek, pixel count: 4384 × 6576, pixel pitch: 5.5 × 5.5 µm) without any imaging optics.
The distance (z_o) between the object and the second diffuser was 92 mm, and the distance (z_i) between the second diffuser and the image sensor was 25 mm, as shown in Fig. 1. The captured speckle image is shown in Fig. 4(b), where the central 600 × 600 pixel area was clipped for visualization purposes. After background compensation, the captured image was scaled with the computational scaling factors s_com^m of Eq. (10), which are based on Eq. (2), where Δz_o is the axial resolution of the object space; this was set to 0.5 mm, corresponding to the step height of the object. The number of scaled speckle images (M) was set to six. The scaled speckle images were laterally correlated as in Eq. (7). The central 800 × 800 pixel areas of the correlations were clipped and laterally down-sampled by a factor of four to reduce the computational cost of the subsequent phase retrieval process. Biases of the correlations were equalized with the average values outside the central areas. The result of this correlation process is shown in Fig. 4(c), where the central 150 × 150 pixel areas were clipped and contrast enhancement was applied for visualization purposes. The three-dimensional phase retrieval process was applied to the correlations, and the results are shown in Fig. 4(d), where the ambiguities of the spatial shift and the conjugate inversion were manually compensated, and the central 150 × 150 pixel areas were clipped for visualization purposes. The holes were three-dimensionally recovered. The reconstruction result of the multi-shot approach with axial scanning of the image sensor in our previous work is shown in Fig. 4(e); the scale bar is 2 mm at the object plane. These results were comparable, although some interference from other planes was stronger in the single-shot approach than in the multi-shot approach. An issue with the single-shot approach compared with the multi-shot one is distortion of the autocorrelation through the scaling process. The distortion should be smaller than the lateral resolution of the speckle correlations. The resulting limitation on the scaling factor in Eq. (10) involves d, the object size on the image sensor, estimated as half of the correlative area, and δ, the resolution of the speckle correlations, which is √2 times larger than the grain size of the speckles, assuming a Gaussian distribution. In the case of the experiment, d = 300 pixels and δ = √2 × 17 pixels, so this limitation was satisfied.
CONCLUSION
We proposed single-shot three-dimensional imaging through scattering media based on speckle correlation. An object is three-dimensionally reconstructed from a single speckle image with a scaling process, a correlation process, and a phase retrieval process. The proposed method was experimentally demonstrated with three-dimensionally arranged point sources between diffusers. The result was comparable to the multi-shot case reported previously. Our method simplifies the optical setup for three-dimensional imaging through scattering media. It is useful in a wide range of modalities, including lensless imaging, microscope imaging, and telescope imaging, and in applications such as looking around corners and reflection-mode scattered imaging. This method may also provide interesting insights for three-dimensional imaging without the need to calibrate optical hardware or probe optical phenomena before the imaging stage.
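The scale-correlate-retrieve pipeline described above can be summarized in a short sketch. The code below is an illustrative approximation, not the authors' implementation: the helper names (rescale_lateral, correlation_stack, phase_retrieval), the nearest-neighbour rescaling, the crude bias removal, the iteration counts, and the synthetic input are all assumptions, and the paper's additional pixel-intensity-range constraint is omitted.

```python
# Hedged sketch of the single-shot speckle-correlation pipeline: rescale the
# captured speckle for each assumed depth, correlate it with the original image,
# then run hybrid input-output followed by error reduction on the stack.
import numpy as np
from numpy.fft import fft2, ifft2, fftn, ifftn, fftshift

def xcorr2d(a, b):
    """Circular 2D cross-correlation via FFT (adequate for a sketch)."""
    return np.real(fftshift(ifft2(fft2(a) * np.conj(fft2(b)))))

def rescale_lateral(img, s):
    """Nearest-neighbour lateral rescaling about the image centre by factor s."""
    n0, n1 = img.shape
    c0, c1 = (n0 - 1) / 2.0, (n1 - 1) / 2.0
    y, x = np.indices(img.shape)
    ys = np.clip(np.round((y - c0) / s + c0).astype(int), 0, n0 - 1)
    xs = np.clip(np.round((x - c1) / s + c1).astype(int), 0, n1 - 1)
    return img[ys, xs]

def correlation_stack(speckle, scales):
    """Approximate the 3D autocorrelation o*o: one 2D correlation per scale."""
    c = np.stack([xcorr2d(speckle, rescale_lateral(speckle, s)) for s in scales])
    return np.maximum(c - np.median(c), 0.0)   # crude bias removal (assumption)

def phase_retrieval(corr, n_hio=100, n_er=100, beta=0.9, seed=0):
    """Recover o from c ~ o*o: |O| = sqrt(F[c]), then HIO followed by ER with
    realness and non-negativity constraints only (a simplification)."""
    mag = np.sqrt(np.abs(fftn(corr)))                  # Fourier magnitude of the object
    rng = np.random.default_rng(seed)
    O = mag * np.exp(2j * np.pi * rng.random(mag.shape))
    o_prev = np.real(ifftn(O))
    for k in range(n_hio + n_er):
        o_est = np.real(ifftn(O))                      # realness constraint
        bad = o_est < 0                                # non-negativity violations
        o_new = o_est.copy()
        if k < n_hio:                                  # hybrid input-output step
            o_new[bad] = o_prev[bad] - beta * o_est[bad]
        else:                                          # error-reduction step
            o_new[bad] = 0.0
        o_prev = o_new
        O = mag * np.exp(1j * np.angle(fftn(o_new)))   # keep measured magnitude
    return np.maximum(o_prev, 0.0)

# Illustrative usage with synthetic data (not the experimental parameters):
speckle = np.random.rand(256, 256)
scales = 1.0 + 0.01 * np.arange(6)                     # six computational scaling factors
volume = phase_retrieval(correlation_stack(speckle, scales))
```

The design mirrors the text: the computational scaling replaces the axial scan of the multi-shot approach, and the stacked 2D correlations stand in for the three-dimensional autocorrelation fed to phase retrieval.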
Charlie Hebdo — Paul Craig Roberts
The Charlie Hebdo affair has many of the characteristics of a false flag operation. The attack on the cartoonists' office was a disciplined professional attack of the kind associated with highly trained special forces; yet the suspects who were later corralled and killed seemed bumbling and unprofessional. It is like two different sets of people. Usually Muslim terrorists are prepared to die in the attack; yet the two professionals who hit Charlie Hebdo were determined to escape and succeeded, an amazing feat. Their identity was allegedly established by the claim that they conveniently left their ID in the getaway car for the authorities to find. Such a mistake is inconsistent with the professionalism of the attack and reminds me of the undamaged passport found miraculously among the ruins of the two WTC towers that served to establish the identity of the alleged 9/11 hijackers. It is a plausible inference that the ID left behind in the getaway car was the ID of the two Kouachi brothers, convenient patsies, later killed by police, and from whom we will never hear anything, and not the ID of the professionals who attacked Charlie Hebdo. An important fact that supports this inference is the report that the third suspect in the attack, Hamyd Mourad, the alleged driver of the getaway car, on seeing his name circulating on social media as a suspect, realized the danger he was in and quickly turned himself in to the police for protection against being murdered by security forces as a terrorist. Hamyd Mourad says he has an ironclad alibi. If so, this makes him the spoiler of a false flag attack. Authorities will have to say that despite being wrong about Mourad, they were right about the Kouachi brothers. Alternatively, Mourad could be coerced or tortured into some sort of confession that supports the official story. https://www.intellihub.com/18-year-old-charlie-hebdo-suspect-surrenders-police-claims-alibi/ The American and European media have ignored the fact that Mourad turned himself in for protection from being killed as a terrorist because he has an alibi. I googled Hamyd Mourad and all I found (January 12) was the main US and European media reporting that the third suspect had turned himself in. The reason for his surrender was left out of the reports. The news was reported in a way that gave credence to the accusation that the suspect who turned himself in was part of the attack on Charlie Hebdo. Not a single US mainstream media source reported that the alleged suspect turned himself in because he has an ironclad alibi. Some media merely reported Mourad's surrender in a headline with no coverage in the report. The list that I googled includes the Washington Post (January 7 by Griff Witte and Anthony Faiola); Die Welt (Germany) "One suspect has turned himself in to police in connection with Wednesday's massacre at the offices of Parisian satirical magazine, Charlie Hebdo;" ABC News (January 7) "Youngest suspect in Charlie Hebdo Attack turns himself in;" CNN (January 8) "Citing sources, the Agence France Presse news agency reported that an 18-year-old suspect in the attack had surrendered to police." Another puzzle in the official story that remains unreported by the presstitute media is the alleged suicide of a high-ranking member of the French Judicial Police who had an important role in the Charlie Hebdo investigation.
For unknown reasons, Helric Fredou, a police official involved in the most important investigation of a lifetime, decided to kill himself in his police office on January 7 or January 8 (both dates are reported in the foreign media) in the middle of the night while writing his report on his investigation. A Google search as of 6pm EST January 13 turns up no mainstream US media report of this event. The alternative media report it, as do some UK newspapers, but without suspicion or any mention of whether his report has disappeared. The official story is that Fredou was suffering from "depression" and "burnout," but no evidence is provided. Depression and burnout are the standard explanations of mysterious deaths that have unsettling implications. Once again we see the US print and TV media serving as a ministry of propaganda for Washington. In place of investigation, the media repeats the government's implausible story. It behoves us all to think. Why would Muslims be more outraged by cartoons in a Paris magazine than by hundreds of thousands of Muslims killed by Washington and its French and NATO vassals in seven countries during the past 14 years? If Muslims wanted to make a point of the cartoons, why not bring a hate crime charge or lawsuit? Imagine what would happen to a European magazine that dared to satirize Jews in the way Charlie Hebdo satirized Muslims. Indeed, in Europe people are imprisoned for investigating the holocaust without entirely confirming every aspect of it. If a Muslim lawsuit was deep-sixed by French authorities, the Muslims would have made their point. Killing people merely contributes to the demonization of Muslims, a result that only serves Washington's wars against Muslim countries. If Muslims are responsible for the attack on Charlie Hebdo, what Muslim goal did they achieve? None whatsoever. Indeed, the attack attributed to Muslims has ended French and European sympathy and support for Palestine and European opposition to more US wars against Muslims. Just recently, France had voted in the UN with Palestine against the US-Israeli position. This assertion of an independent French foreign policy was reinforced by the recent statement by the President of France that the economic sanctions against Russia should be terminated. Clearly, France was showing too much foreign policy independence. The attack on Charlie Hebdo serves to cow France and place it back under Washington's thumb. Some will contend that Muslims are sufficiently stupid to shoot themselves in the head in this way. But how do we reconcile such alleged stupidity with the alleged Muslim 9/11 and Charlie Hebdo professional attacks? If we believe the official story, the 9/11 attack on the US shows that 19 Muslims, largely Saudis, without any government or intelligence service support, outwitted not only all 16 US intelligence agencies, the National Security Council, Dick Cheney and all the neoconservatives in high positions throughout the US government, and airport security, but also the intelligence services of NATO and Israel's Mossad. How can such intelligent and capable people, who delivered the most humiliating blow in world history to an alleged superpower with no difficulty whatsoever despite giving every indication of their intentions, possibly be so stupid as to shoot themselves in the head when they could have thrown France into turmoil with a mere lawsuit? The Charlie Hebdo story simply doesn't wash. If you believe it, you are no match for a Muslim.
Some who think that they are experts will say that a false flag attack in France would be impossible without the cooperation of French intelligence. To this I say that it is practically a certainty that the CIA has more control over French intelligence than does the President of France. Operation Gladio proves this. The largest part of the government of Italy was ignorant of the bombings conducted by the CIA and Italian Intelligence against European women and children and blamed on communists in order to diminish the communist vote in elections. Americans are a pitifully misinformed people. All of history is a history of false flag operations. Yet Americans dismiss such proven operations as “conspiracy theories,” which merely proves that government has successfully brainwashed insouciant Americans and deprived them of the ability to recognize the truth. Americans are the foremost among the captive nations. Who will liberate them?
Awareness of ovarian cancer symptoms in the general population Objectives Ovarian cancer accounts for 3% of all female cancers and has a high mortality rate among gynecological malignancies. Early diagnosis carries a high survival rate of 93%. Therefore, this study was carried out to assess the knowledge and awareness of Jordanian women about ovarian cancer symptoms and risk factors. Methods A cross-sectional survey design was used. Women were randomly selected to complete the survey, and 896 women completed it. Results The mean number of symptoms recognized was low, at 3.2 (SD = 2.7) out of 10. The three most commonly recognized symptoms were extreme fatigue (43.2%), back pain (42.4%), and persistent pain in the pelvic area (40.7%). The most commonly known risk factor was smoking (68.4%), followed by having ovarian cyst(s) (59.7%). Conclusions Poor awareness of ovarian cancer risk factors and symptoms was noted. Awareness needs to be raised through education and social media. In the absence of an effective screening program, a national awareness campaign is urgently needed to improve the public's understanding of symptoms and risk factors and to increase women's confidence in symptom recognition.
Assessment of ion-selective optical nanosensors for drug screening applications Ion channels represent an important category of drug targets. They play a significant role in numerous physiological functions, from membrane excitation and signaling to fluid absorption and secretion. An ion-channel assay system using optical nanosensors has recently been developed. This high-throughput, high-content system improves on the existing patch clamp and fluorescent dye technologies that presently dominate the ion-channel screening market. This paper introduces the nanosensor technology, reviews the current market for ion-channel assays, assesses the costs associated with the nanosensors, and evaluates their commercialization potential.
Thesis Supervisor: Heather A. Clark
Title: Senior Member of the Technical Staff, The Charles Stark Draper Laboratory
Thesis Advisor: Francesco Stellacci
Title: Finmeccanica Associate Professor of Materials Science and Engineering, Massachusetts Institute of Technology
Lyft is giving riders a way to make a multi-stop trip, letting them use the Lyft app to add an extra destination for two-stop trips. The logic is simple: with more users, especially city-dwellers, tapping ride-sharing and on-demand services in place of owning a vehicle, it's more likely they'll want some flexibility in how those trips are routed, since a multi-errand excursion is a pretty common occurrence for most. Adding a side-trip is easy enough: you put your first stop in as the destination when requesting your ride, and then tap a new "+" icon next to the destination to add a final drop-off location. The driver will be able to see both stops on their end, and you won't need to awkwardly mumble that it's "5 o'clock somewhere" when explaining why you want to hit the liquor store before heading home at 11 in the morning. Of course, you can also do other, more socially acceptable things like dropping off a friend when sharing a trip, or picking up some groceries on the way home. Lyft will also let you change your mind and remove the stop if you decide against whatever it was you were going to do. All of which sounds very convenient, but Lyft is also acting on actual data in adding this feature, which isn't currently offered by its largest competitor, Uber. Most riders do input a destination, Lyft has found: it happens in 90 percent of Classic, Plus and Premier rides booked. Five percent of those end up being manually updated during the ride, which translates to hundreds of thousands of instances of manual multi-stop tripping, so it's definitely a non-trivial user need. And for fare-splitters, this marries up nicely with the existing ability to divvy up ride costs. But it's also a service for drivers: Lyft allows drivers nearing the end of a trip to be matched up with passengers nearby before the trip actually concludes, to help them earn more by chaining fares. A last-minute destination change can be frustrating for both drivers and passengers, but multi-destination trips now help with driver planning and avoid the downsides of queuing up close-proximity rides. The ability to add a stop will roll out to Lyft's mobile apps soon. It's a small but smart addition, and one that could also easily translate into a core component of a more automated ride-sharing interaction model in the future.
Feasibility of a Low-Fidelity Pediatric Simulation-Based Continuing Education Curriculum in Rural Alaska Introduction Simulation-based continuing education (SBCE) is a widely used tool to improve healthcare workforce performance. Healthcare providers working in geographically remote and resource-limited settings face many challenges, including the development and application of SBCE. Here, we describe the development, trial, and evaluation of an SBCE curriculum in an Alaska Native healthcare system with the aim of understanding SBCE feasibility and its specific limitations. Methods The perceived feasibility and efficacy of incorporating a low-fidelity medical simulation curriculum into this Alaska Native healthcare system were evaluated by analyzing semi-structured interviews, focus groups, and surveys over a 15-month period (August 2018 - October 2019). Subjects were identified via both convenience and purposive sampling. Included were 40 healthcare workers who participated in the simulation curriculum, three local educators who were trained in and subsequently facilitated simulations, and seven institutional leaders identified as "key informants." Data included Likert-scale and dichotomous positive/negative survey responses, as well as a thematic analysis of the qualitative portion of participant survey responses, focus group interviews of educators, and semi-structured interviews of key informants. Based on these data, feasibility was assessed in four domains: acceptability, demand, practicality, and implementation. Results Stakeholders and participants had positive buy-in for SBCE, recognizing the potential to improve provider confidence, standardize medical care, and improve teamwork and communication, all factors identified as optimizing patient safety. The strengths listed support feasibility in terms of acceptability and demand. A number of challenges in the realms of practicality and implementation were identified, including institutional buy-in, the need for a program champion in a setting of high staff turnover, and the practicalities of scheduling and accessing participants working in one system across a vast and remote geographic region. Participants perceived the simulations to be effective and feasible. Conclusion While simulation participants valued an SBCE program, institutional leaders and educators identified veritable obstacles to the practical implementation of a structured program. Given the inherent challenges of this setting, a traditional simulation curriculum is unlikely to be feasibly integrated in full. However, due to the overall demand and social acceptability expressed by the participants, innovative ways to deliver simulation should be developed, trialed, and evaluated in the future. Introduction Simulation as a training tool is increasingly utilized in medical education, as it has been demonstrated to be effective in improving patient safety and reducing healthcare costs. The degree to which the simulation training reflects reality is described along a continuum from low to high fidelity, which can be adjusted according to the learning objectives of the simulation. High fidelity is not necessarily superior to low fidelity for achieving educational learning objectives. Barriers to incorporating high-fidelity simulation include insufficient availability, cost, lack of access and trained faculty, and time constraints.
While a low-fidelity simulation program is inexpensive and requires little to no additional equipment to implement, it does require at least the eight critical factors identified by the Joint Commission Journal on Quality and Patient Safety: science, staff, supplies, space, support, systems, success, and sustainability. Geographic and ethnic disparities in healthcare service more heavily impact vulnerable populations of infants and children than adults, and death rates for American Indian/Alaska Native (AI/AN) infants and children, the majority of whom reside in rural areas, are nearly three times higher than for Caucasians. Overall, the pediatric death rate for AI/AN youths up to 19 years of age was 73.2, as compared with 29.1 for Caucasian youths, from 1999-2009, with the AI/AN pediatric death rates highest for the Alaska region across all age categories (P<0.1). Influenza and pneumonia rank as the highest causes of mortality in Alaska. The backbone of the healthcare delivery workforce in the Southwest Alaska Native villages is the Community Health Aide/Practitioner (CHA/P), with hospital personnel, including nurses, advanced practitioners, and physicians, staffing the sub-regional centers and regional hospital. They serve 47 Alaska Native villages surrounding the regional city hub, with approximately 25,000 residents, spanning a river delta region more than 50,000 square miles in size. The land consists of coastal wetlands, tundra, and mountains and is accessible off the traditional road system via plane, boat, dog sled, or snow machine; the region is colloquially known as the "bush." The approximately 170 CHA/Ps working in this region are often the sole source of healthcare in the village. They report to a supervising provider located in the sub-regional clinic or main hospital hub and must be prepared to handle any emergency medical situation that arises. The CHA/Ps, nurses, and providers are required to complete continuing education (CE) as part of the maintenance of certification. Given the geographically remote and resource-limited setting of the hospital, this usually means traveling far, which is time- and cost-intensive. CHA/Ps have access to CE boot camps in the regional hospital and, until now, their simulation experience has been limited to simple procedural task trainers and standardized certificate courses (e.g., Acute Life Support). The inspiration for this project arose from a direct request for simulation-based education from experienced CHA/Ps working in a sub-regional clinic setting in Southwest Alaska. Given the unique job description of these providers, including the specific medical knowledge, skills, and teamwork/communication capabilities required for success on the job, simulation-based education seems like a potentially suitable and beneficial endeavor. The authors subsequently conducted an assessment to capture the specific needs of this community. Providers reported the lack of a formal simulation program and a desire to have one. Specifically, those surveyed requested scenarios for pediatric respiratory distress, likely in response to the disproportionately high rates of respiratory infections and chronic pulmonary disease in the region.
Given the declared need for an SBCE program for these providers, who work with a population facing significant health disparities, the authors developed and piloted a low-fidelity simulation curriculum for pediatric respiratory distress scenarios common in the region. The simulations were launched as part of this study, which was developed with the aim of assessing the feasibility of the curriculum in the domains of acceptability, demand, practicality, and implementation, in addition to its perceived efficacy. The feasibility study methodology was selected because community partnerships needed to be established and because few prior studies relevant to the target population exist. This study was evaluated in a real-world setting, recognizing the difficulties of conducting an internally validated, highly controlled efficacy trial in a community site. Local practitioners and community members were actively involved in meaningful ways, from program inception through design and execution.
Study design, participants, sample size, and sampling
Two CE simulation-based boot camps were held for CHA/Ps, and one day of simulations was held for hospital-based nurses and emergency room (ER) technicians, facilitated by outside simulation educators (ES, SR). The instructors were one pediatrician who worked at the local institution (ES) and one traveling pediatric critical care nurse practitioner (SR), each with over three years of simulation facilitation experience. A convenience sample of 11 CHA/Ps, 15 nurses, and two ER technicians participated in the sessions and were invited to submit survey responses evaluating their perception of the effectiveness and feasibility of the activity in the domains of acceptability, demand, practicality, and implementation. These sessions led by outside facilitators are grouped together as Survey Group #1. In an effort to implement a sustainable program, we trained local educators of the hospital nursing staff and the CHA/Ps in simulation facilitation and debriefing with a one-day workshop, and subsequently asked them to facilitate two simulation boot camp days with a total of 12 CHA/P participants. All participants received official CE credit for their participation. These sessions facilitated by trained local educators are grouped together as Survey Group #2. Of the five local educators who participated in the training, only three were able to subsequently facilitate the two simulation days due to competing work demands, with one of them facilitating both sessions. Following these simulation days, we conducted focus groups of the facilitators to explore their perceptions of the tool. These are captured as Focus Groups #1 and #2. In addition, seven local "key informant" hospital leaders were identified via purposeful sampling to participate in semi-structured interviews designed to explore their perception of the feasibility of implementing a simulation program at their institution. These data collection points are summarized in Figure 1.
Data collection & analysis
Simulation participants were surveyed on their perception of the effectiveness of the exercise on the basis of realism, usefulness in reviewing resuscitation and medical management skills, and whether the debrief felt safe and relevant, using a 5-point Likert scale (1 = strongly disagree, 3 = neutral, 5 = strongly agree); the mean, standard deviation, median, and range were calculated. See Table 1.
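As an illustration of that summary step, a short sketch of how 5-point Likert responses from two survey groups might be tabulated; the group labels mirror the study, but the response values and variable names are invented placeholders, not the study's data:

```python
# Hedged sketch: summarizing 5-point Likert responses by survey group
# (mean, standard deviation, median, and range), as described above.
# The ratings below are placeholders, not the study's responses.
import numpy as np

responses = {
    "Survey Group #1": [5, 4, 5, 5, 3, 4, 5, 5],   # hypothetical ratings, 1-5
    "Survey Group #2": [4, 4, 5, 4, 5, 4, 4, 5],
}

for group, scores in responses.items():
    a = np.asarray(scores, dtype=float)
    print(f"{group}: mean={a.mean():.2f}, sd={a.std(ddof=1):.2f}, "
          f"median={np.median(a):.1f}, range={int(a.min())}-{int(a.max())}")
```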
Open-ended feasibility questions were also posed to the participants (Table 2), to key informants in semi-structured interviews (Table 3), and to educators in focus group interviews (Table 4). The questions were framed to assess the four feasibility dimensions of acceptability, demand, practicality, and implementation. The content of the qualitative responses, interviews, and focus groups was analyzed. A realist thematic analysis approach was used to establish categories of concepts and themes, focusing on reporting the experiences, meanings, and realities across respondents. The codebook, which contained mostly semantic themes, was developed directly from the text. In the final round of coding, excerpts from each major theme were used to refine and define sub-themes in the context of the four feasibility domains listed above.
TABLE 4: Focus-group interview guide
1. Do you think you'll use simulation as a teaching modality more now after the April workshop and this experience facilitating the simulations?
2. What would you improve about the April simulation workshop (content, logistics, or anything)? Or, what did not seem to "work" or resonate with you as an element that you could incorporate into your teaching practice?
3. What would you improve about the simulations that you ran today (content, logistics, or anything)? Or, what did not seem to "work" or resonate with you as an element that you could incorporate into your teaching practice?
4. Identify and explain any barriers you may face to incorporating simulation into your teaching practice, now and in the future.
5. How do you think simulation affects the quality of your teaching?
6. How do you think using simulation as a teaching modality differs, if at all, from other teaching modalities you typically use?
7. Do you have any thoughts to share about how you foresee a simulation program fitting in with the direction or vision of how your institution is evolving?
8. Assuming you found the work compelling, what structure would be helpful for sustainment?
9. Anything else to add?
Acceptability was translated as the perception that the SBCE would be both desirable and culturally appropriate, given the mostly Alaska Native population that makes up the CHA/P workforce, the nursing staff, and the educators in a hospital known to have a high staff turnover rate. Demand was operationalized as the perceived need for the SBCE. Practicality was linked to the costs (financial and temporal) associated with the program. Implementation was evaluated as the likelihood that the intervention could be fully integrated into the local CE coursework without needing ongoing outside resources. This investigation was approved by the Seattle Children's Hospital Institutional Review Board and the Yukon-Kuskokwim Health Corporation (YKHC) Human Subjects Committee.
Participants
Five local educators participated in a simulation facilitation workshop, and three of the five subsequently led two simulation days. These three educators train CHA/Ps or nurses, average over 25 years of clinical nursing experience each, and have simulation experience in facilitating the Advanced Cardiac Life Support (ACLS) course. Of the 40 simulation participants, 23 were CHA/Ps, 15 were nurses, and two were ER technicians. Twelve of the CHA/Ps participated in simulations facilitated by the newly trained local educators (Survey Group #2), while the other 28 participants were led by external simulation facilitators (Survey Group #1).
The seven key informants who participated in the semi-structured interviews included three physicians, two doctorates, and two nurse administrators; all informants were selectively identified because of their leadership roles in the hospital system, the Community Health Aide Program, and nursing/provider education.
Effectiveness outcomes
All 40 simulation participants completed the post-simulation questions assessing feasibility and effectiveness. When asked to evaluate the effectiveness of the simulations on the Likert scale across six questions, the curriculum received a median of 5, a mean of 4.59, a standard deviation of 0.23, and a range of 3-5. When comparing the responses of participants with outside facilitators (Survey Group #1) versus newly trained local simulation facilitators (Survey Group #2), the mean (standard deviation) was only slightly higher (4.75 (0.19) vs 4.42 (0.14)) and the range of responses was wider (3-5 vs 4-5) (Table 5).
Feasibility outcomes
When asked dichotomous positive/negative feasibility questions, the CHA/Ps responded 100% positively to the acceptability, demand, and practicality questions and 91% positively to the implementation question. The nurses and ER technicians responded 98% positively to acceptability, 100% to demand, 79% to practicality, and 94% to implementation. The key informants responded 100% positively to acceptability, practicality, and implementation, and 95% positively to demand. The following themes emerged upon qualitative analysis. In the acceptability domain, themes emerged across all respondents that simulation-based active learning is a useful and well-received way to prepare for responding to emergencies. Educators also noted that collaborating on simulation facilitation enhanced cross-institutional collegiality. In the demand realm, themes expressed across all respondents were that this type of education is needed to standardize medical care and to improve teamwork/communication skills and healthcare worker confidence, which has the potential to improve patient outcomes and safety. In the practicality realm, themes emerged from the key informants and educators that funding and physical space would be necessary to realize a program and that scheduling would be a challenge for a workforce spanning a large rural geographic region. Participants expressed that they would participate if it was supported and expected by their managers, if continuing education credits were allotted, and if it fit into the standard workday. In the implementation realm, themes emerged from educators and some key informants that there would need to be broader institutional buy-in to enforce a program, with an allotted local champion responsible for it, especially given the reality of high staff turnover in the region. While educators felt comfortable running simple task-training simulations and leading ACLS courses as they had done prior to this project, they expressed discomfort with facilitating and debriefing more complex medical scenarios after just one workshop and two days of direct observation and feedback. All respondents suggested the possibility of utilizing the robust telecommunication network already in place in the region to overcome barriers to accessing high-quality, sustainable SBCE. CHA/Ps overwhelmingly asked for simulations "in situ" in the village and sub-regional clinics, where the bulk of their medical practice takes place.
They note their daily experience with telemedicine, tele-education, and teleconferencing, and suggest the potential to offer this training via this established telecommunication system to overcome the acknowledged barriers of time, space, and resources. See Tables 7-8 for sample quotations from the participant surveys, educator focus groups, and key informant semi-structured interviews.
"Helps a CHA/P when she deals with a real emergency. We don't have emergencies every day in our clinics. It will also help a CHA/P to think fast or know what to do in an emergency especially with babies." (05)
"I think it helps them learn better when they can actually practice hands-on. I think they can develop more confidence from doing hands-on learning, and I think they are bored to tears when we just talk at them." (48)
"This is a project that I have wanted to put in place for some time, it will give great benefit to both new and seasoned Health Aides in their practice, in a safe environment that is conducive to learning and retention skills." (32)
"Simulation helps teach in a non-punitive, non-grading method and builds confidence in skills and communication. This overall will improve patient outcomes." (12)
Discussion
We were able to conduct a simulation program that served a unique and disparate group of participants who practice broad-spectrum "bush" medicine. Despite widespread acceptability and demand, the full integration of a practical and implementable SBCE program would require larger institutional buy-in, established champion(s), funding, and creativity in overcoming scheduling and access issues. Our team prioritized the training of local educators in simulation facilitation and created opportunities to practice these newly acquired skills with on-the-ground mentorship in an effort to integrate the didactic method into the existing CHA/P training program. Integration is another feasibility domain, operationalized as the level of system change needed to integrate a new program or process into an existing infrastructure or program. The local educators expressed insecurity in their facilitation skills after just one day of simulation training and only two subsequent mentored practice sessions. Interestingly, the simulation participants gave overwhelmingly positive feedback on these sessions, reflected in their Likert-scale perceived effectiveness scores and open-ended survey responses. The discrepancy between local educator confidence and positive participant feedback should be further explored in order to create a feasible program. Participants, key informants, and educators alike overwhelmingly brought up the possibility of harnessing the power of the established telecommunication network unique to this setting to overcome the perceived barriers to integrating an SBCE curriculum. One possible solution to enhancing the feasibility of an SBCE program for providers in this remote and resource-limited setting is leveraging the telecommunication system to introduce telesimulation. Telesimulation is a replicable, low-cost, and robust tool that optimizes learning and instructor training with simulation in resource-limited settings. Tele-co-debriefing has been utilized in transcontinental settings for simulation faculty development, and remotely facilitated simulation has been shown to be as effective as traditional, locally facilitated simulation. Future projects might consider developing a telesimulation program.
The program could utilize a network of simulationists from more resourced areas to help co-facilitate remotely, with the intention of improving access to high-quality CE while continuing to build capacity in the local simulation facilitators themselves. Remote co-facilitation may be one way to provide ongoing mentorship to the newly trained local educators in an effort to enhance their comfort with simulation facilitation. Future studies can focus on demonstrating impacts on learners, facilitators, clinical outcomes, and other quality improvement metrics. This study is limited by a small convenience sample size. Given that we chose to conduct a feasibility study in a real-life, busy hospital with the actual participants who would ultimately be involved in the proposed SBCE, the authors felt this was necessary. However, we acknowledge the disadvantage that the convenience sample does not accurately represent the entire population of providers seeking CE in this broader geographic region. Thus, the feasibility results might be skewed, which may itself represent a limitation. We limited the evaluation of the simulation to learner perceptions of utility and did not directly evaluate the impact on knowledge acquisition or communication skills. Pre- and post-tests and/or videotaping with subsequent review are ways to enhance the evaluation piece of simulation in the future.
Conclusions
SBCE was found to be acceptable and in demand. Local educators were trained and facilitated simulations that were well received by participants. While we were able to implement this research project, we uncovered barriers to the true integration of this SBCE program moving forward. A suggested, innovative solution to overcome some feasibility barriers was telesimulation. Future initiatives might consider developing a telesimulation tool to integrate into the established telecommunication system in the region. Studies can build upon this assessment and consider ways to innovate simulation delivery while studying the impact on cost, provider competence, and, ultimately, patient outcomes. The authors herein conclude that a formal low-fidelity SBCE program may be feasible within this rural Alaska Native healthcare system if the acknowledged barriers are addressed creatively.
Chemical characterization of neuroendocrine targets for progesterone in the female rat brain and pituitary. The secretory products of some of the cell types which respond directly to actions of progesterone in the female rat brain and pituitary were determined by combining immunocytochemistry with autoradiography following systemic administration of the synthetic progestin ligand R5020. Four major findings are reported: (1) Approximately 90% of the tyrosine hydroxylase (TH)-immunoreactive neurons in the hypothalamic arcuate nucleus have progesterone receptors, while TH-immunoreactive neurons in other portions of the hypothalamus (e.g. the periventricular region and the zona incerta) do not. (2) Approximately 30% of the beta-endorphin neurons in the hypothalamus have progesterone receptors. (3) None of the luteinizing hormone-releasing hormone neurons examined have progesterone receptors. (4) Approximately 98% of the cells in the anterior pituitary that have progesterone receptors contain luteinizing hormone. Lactotrophs do not contain progesterone receptors. Many progestin targets in the brain remain to be characterized chemically. The implications for progesterone-inducible genes and neuroendocrine control systems are discussed.
The effects of phenylephrine on pupil diameter and accommodation in rhesus monkeys. PURPOSE Phenylephrine is used to dilate the iris through alpha-adrenergic stimulation of the iris dilator muscle. Sympathetic stimulation of the ciliary muscle is believed to be inhibitory, decreasing accommodative amplitude. Investigations in humans have suggested some loss of functional accommodation after phenylephrine. It is unclear whether this loss is due to direct action of phenylephrine on the ciliary muscle or to secondary optical factors associated with mydriasis. The purpose of this study was to determine whether phenylephrine affects Edinger-Westphal (EW)-stimulated accommodation in rhesus monkeys. METHODS The time course for maximum mydriasis was determined by videographic pupillography after phenylephrine instillation in 10 normal rhesus monkeys. Static and dynamic EW-stimulated accommodative responses were studied in five iridectomized rhesus monkeys before and after phenylephrine instillation. Accommodative amplitude was measured with a Hartinger coincidence refractometer. Dynamic accommodative responses were measured with infrared photorefraction, and functions were fitted to the data to determine peak velocity versus accommodative response relationships. RESULTS The maximum dilated pupil diameter of 8.39 +/- 0.23 mm occurred 15 minutes after administration of phenylephrine. In iridectomized monkeys, postphenylephrine accommodative amplitudes were similar to prephenylephrine amplitudes. Dynamic analysis of the accommodative responses showed linear peak velocity versus accommodative amplitude relationships that were not statistically different before and after phenylephrine. CONCLUSIONS alpha-Adrenergic stimulation causes a strong pupil dilation in noniridectomized monkey eyes but does not affect EW-stimulated accommodative amplitude or dynamics in anesthetized, iridectomized rhesus monkeys.
T cell receptors in autoimmune disease as targets for immune intervention. The optimal form of treatment for an autoimmune disease should be highly specific, have few side effects, and allow treatment of clinically apparent disease. One target that could fulfill these requirements is the T cell receptor. To answer the question whether treatment of autoimmune disease is possible with anti-T cell receptor antibodies, the heterogeneity of T cell receptor elements utilized in the T cell mediated autoimmune disease experimental allergic encephalomyelitis was analyzed. The limited heterogeneity of these elements allowed prevention and treatment of clinical autoimmune disease with anti-T cell receptor monoclonal antibodies. These results and their potential value for other autoimmune diseases are discussed.
Cervical Adenopathy of Children in a Tropical Context: Clinico-Pathological Study Objective The aim of the study was to describe the epidemiological and etiological characteristics of cervical lymphadenopathy in children over a 15-year period (2003-2017) at the pathology laboratory of Lomé, Togo. A total of 221 cervical adenopathies in children were collected. The sex ratio (M/F) of the patients was 1.1 and the average age was 9.8 ± 0.3 years. A history of HIV was found in 69 children. Histologically, the etiologies were infectious (n = 128 cases, 57.9%), tumoral (n = 96 cases), and other (n = 8 cases, 1.6%). The main infectious etiology was tuberculosis (n = 84 cases). Tumoral etiologies were primary in 82 cases and secondary in 3 cases. Primary tumors were dominated by lymphomas (n = 74 cases), mainly in the Burkitt form (n = 44 cases). The etiologies of cervical lymphadenopathy in tropical settings are still dominated by infectious agents. Introduction Cervical adenopathy in children is a frequent reason for consultation and poses a real diagnostic problem because of the range of differential diagnoses, which can be resolved if the diagnostic procedure is rigorous. This approach requires a careful clinical examination, which sometimes makes it possible to suggest the origin of these adenopathies before histological analysis. Knowledge of the etiological pattern is crucial for therapy. In Africa in general, and in Togo in particular, few studies have been carried out on cervical adenopathies in children. The aim of our study was to describe the epidemiological and etiological features of cervical adenopathies in Togolese children. Methods In this retrospective and descriptive study, data were retrieved from the files of patients aged less than 15 years. Results We collected 221 cases of cervical adenopathies, representing 41.9% of all adenopathies in children. The annual frequency was 14.7 cases on average. The study population consisted of 118 boys (53.4%) and 103 girls (46.6%), with a sex ratio (M/F) of 1.1. The mean age of the patients was 9.8 ± 0.3 years; the extremes were 2 months and 14 years. Cervical adenopathies were unilateral in 112 cases, bilateral in 95 cases, and not specified in 14 cases. A history of HIV was found in 69 children. Histologically, these adenopathies were infectious (n = 128 cases, 57.9%), tumoral (n = 85 cases, 38.5%), and of other etiologies (n = 8 cases, 1.6%). Infectious etiologies were specific (n = 96 cases) and non-specific (n = 32 cases). Among the specific etiologies, tuberculosis was the main one (n = 84 cases), of caseo-follicular type in 78 cases and purely caseous in 6 cases. Tuberculosis was diagnosed in 46 males and 38 females. The other specific etiologies are presented in Table I. Tumoral adenopathies were primary in 82 cases and secondary in 3 cases. Primary tumors were lymphomas (n = 74 cases) and leukemias (n = 8 cases). All cases of leukemia were chronic lymphocytic leukemia. Lymphomas were observed at an average age of 10.5 years, in 40 boys (54.1%) and 34 girls (45.9%). Lymphomas were non-Hodgkin in 58 cases (78.4%) and Hodgkin in 16 cases (21.6%). Non-Hodgkin lymphomas were dominated by the Burkitt type (n = 44 cases). The other types of non-Hodgkin lymphoma are shown in Table II. Clinical examination often leads to a diagnosis, but cervical adenopathy often remains a diagnostic problem requiring histological or cytological examination.
Cervical adenopathies were observed in both boys and girls, but the slight male predominance reported in this series has also been reported by other authors. The mean age of the children in our series is comparable to the mean ages reported in the literature. Many histological studies have shown a very high frequency of inflammatory cervical disease, dominated by tuberculosis. Tuberculosis occurs in older children. A positive intradermal reaction to tuberculin is in no way specific for active tuberculosis; it merely indicates sensitization by prior contact with Koch's bacillus. Fine-needle aspiration cytology may not detect the pathogen, but it can show epithelioid cell granulomas and necrosis, leading to a definitive diagnosis in 73% of cases. Polymerase chain reaction (PCR), although rarely used, is recommended in cases with negative culture results. Histological examination establishes the diagnosis when a tuberculoid or giant-cell granuloma is associated with caseous necrosis, or when Koch's bacillus is found on the histological sections. An isolated giant-cell granuloma is non-specific, since it can be observed in conditions as diverse as foreign-body reactions, sarcoidosis and connective tissue diseases. Other inflammatory causes were rare, as reported in the literature. Our study showed a predominance of lymphomas, chiefly in the Burkitt form (44%). This prevalence is also reported by numerous studies in Africa, where Burkitt's lymphoma accounts for 20-50% of childhood cancers. For example, Burkitt's lymphoma is the most common cancer identified before age 15 in Côte d'Ivoire, Nigeria and Malawi, accounting for 73%, 31%, and 55% of cases, respectively. The high incidence of Burkitt's lymphoma in some parts of Africa is related to malaria endemicity in these areas. Indeed, malaria in association with the Epstein-Barr virus is thought to favor the occurrence of Burkitt's lymphoma. Burkitt's lymphoma is also more common in immunocompromised children, whether the immunodeficiency is congenital or acquired (HIV infection, immunosuppressive therapy after marrow or organ transplantation). In Western countries, however, Hodgkin's disease is more common than non-Hodgkin's lymphoma and occurs in older children, around 10 years of age. The first manifestation is often a cervical adenopathy that quickly becomes very bulky, of firm, elastic consistency. Aspiration cytology sometimes allows the diagnosis by showing Sternberg cells. Chronic lymphocytic leukemia was rare in our series but is very common in Western series, where it is the second most common cervical cancer of children. This difference could be explained by a lack of means of early diagnosis in African countries. Leukemia frequently presents in the neck, producing regular and symmetrical macroadenopathy. Diagnosis is based on the blood count and on sternal puncture, which shows massive malignant proliferation of the lymphoid line. Other rare etiologies, particularly sarcoidosis and lupus, are only exceptionally revealed by isolated cervical adenopathy; other cutaneous, pulmonary, mediastinal and hepatosplenic localizations need to be sought. The diagnosis rests in particular on a negative tuberculin intradermal reaction and on histology, which reveals an epithelioid and giant-cell granuloma without caseation or Koch's bacillus. We observed only 3 cases of cervical metastases, in keeping with the literature, which indicates that head and neck metastases are in general very rare in children.
This study was approved by the head of the laboratory department of the Sylvanus Olympio University Hospital (Ref N° 14/2019/LAP/CHUSO). Since the study was based on a review of records, patient consent was not required; nevertheless, patient names were not collected during data extraction, in order to preserve confidentiality. Consent for publication: The Laboratory of Pathological Anatomy of Sylvanus Olympio Teaching Hospital, University of Lomé, authorized the publication of this manuscript. Availability of data and materials: Extracted data are held by the authors and available for sharing on request. Competing interests: The authors declare that they have no competing interests. Authors' contributions: TD and BD were responsible for the design of the study, undertook the field study, performed data collection, analysis, and interpretation, and wrote the manuscript. TDj, BS, AM, BB, SA, EP, and SD participated in the design of the study, supervised the data collection, and participated in the data analysis. GN is responsible for the overall scientific management of the study, the analysis and interpretation, and preparation of the final manuscript. All authors have read and approved the final manuscript for submission for publication.
San Francisco - A federal appeals court today ruled that the NSA's bulk collection of phone records is illegal, saying Congress didn't authorize collection of a "staggering" amount of information on Americans. The decision by a three-judge panel of the U.S. Court of Appeals for the 2nd Circuit overturns a judge's ruling dismissing the ACLU's challenge to Section 215 of the Patriot Act, ACLU v. Clapper. "This is a great and welcome decision and ought to make Congress pause to consider whether the small changes contained in the USA Freedom Act are enough," said Cindy Cohn, executive director of the Electronic Frontier Foundation (EFF). "The 2nd Circuit rejected on multiple grounds the government's radical reinterpretation of Section 215 that underpinned its secret shift to mass seizure and search of Americans' telephone records. While the court did not reach the constitutional issues, it certainly noted the serious problems with blindly embracing the third-party doctrine—the claim that you lose all constitutional privacy protections whenever a third party, like your phone company, has sensitive information about your actions." "Now that a court of appeals has rejected the government's arguments supporting its secret shift to mass surveillance, we look forward to other courts—including the Ninth Circuit in EFF's Smith v. Obama case—rejecting mass surveillance as well," said EFF Legislative Analyst Mark Jaycox. "With the deadline to reauthorize Section 215 looming, we also call on Congress to both expressly adopt the interpretation of the law given by the court and to take further steps to rein in the NSA and reform the Foreign Intelligence Surveillance Court."
Orsatti’s Contribution to Module Theory. I met Adalberto for the first time in November 1972: I was a first-year student in Mathematics and he was my Algebra professor. I do remember those tough days pretty well: the word “set” was completely new to me, and every time, at the end of Adalberto’s lesson, I felt as if I had attended a class in Arabic rather than one in Algebra. Then suddenly, around Christmas time, I felt I could understand everything, and everything was so beautiful that, in the end, I gave up a rather fancy interest in cybernetics (!) and decided to take more Algebra courses. Thus, in the following years, I took two more courses with Adalberto and also had the honour of having him as my thesis advisor. Since then a collaboration began that has never ended. Adalberto is not only a very clear teacher; he has the special gift of capturing your attention and leading you inside the subject. For me he is the most fascinating person I have ever met in my mathematical experience. And not only this. He taught me the most difficult thing: how to do research. It is nice to remember here that every time I got depressed because we were not able to prove something, he would start telling me one of his incredible jokes. And if we found out that something we would have liked to be true was in fact not, he would tell me, “This is the truth, Claudia, what do you want more than the truth?” Orsatti’s first work in module theory is his first paper on Duality. The astonishing idea he had was that Pontryagin duality between discrete and compact abelian groups (see and ) and Lefschetz duality (see ) between discrete and linearly compact vector spaces, as well as Kaplansky’s (see ) and Macdonald’s (see ) generalizations of the latter, could all be thought of as particular cases of a much more general situation. Namely, let A be a commutative ring and K a Hausdorff complete topological module over the ring A, with A endowed with its discrete topology. Assume that: P1) The mapping a ↦ μ_a, where μ_a is multiplication by a in K, defines an isomorphism A ≅ Chom_A(K, K). P2) K has no small submodules, i.e., there is a neighbourhood U of 0 in K such that the only submodule of K contained in U is 0. Let A-TM denote the category of Hausdorff topological modules over the ring A endowed with its discrete topology, and let C(K_A) be the subcategory of A-TM consisting of those topological modules which are topologically isomorphic to closed submodules of topological products of copies of K. Let D(K_A) be the subcategory of Mod-A cogenerated by K_A. For every M ∈ Mod-A let M* be the A-module Hom_A(M, K) endowed with the topology of pointwise convergence. Clearly M* ∈ C(K_A). For every M ∈ A-TM, let M* be the abstract module Chom_A(M, K);
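To keep the two contravariant functors straight, here is a compact restatement in the notation above. This is only a sketch: the concluding claim that the functors restrict to a duality between D(K_A) and C(K_A) is stated here as the expected conclusion of the construction, following the standard formulation of this kind of duality, since the excerpt breaks off before reaching it.

```latex
% Sketch of the two dualizing functors in the setting above.
% Hom_A(-,K) carries the topology of pointwise convergence;
% Chom_A(-,K) denotes continuous A-homomorphisms.
\[
  (-)^{*}\colon \mathrm{Mod}\text{-}A \longrightarrow C(K_A),
  \qquad M^{*} = \operatorname{Hom}_A(M,K),
\]
\[
  (-)^{*}\colon A\text{-}\mathrm{TM} \longrightarrow \mathrm{Mod}\text{-}A,
  \qquad M^{*} = \operatorname{Chom}_A(M,K),
\]
\[
  \omega_M\colon M \longrightarrow M^{**},
  \qquad \omega_M(x)(f) = f(x)
  \quad\text{(evaluation map).}
\]
% Under P1) and P2), the expected statement is that these functors
% restrict to a duality between D(K_A) and C(K_A), with \omega_M an
% isomorphism precisely for the modules in these subcategories.
```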
Thirty percent of U.S. electricity goes to power homes. The energy we use is usually measured in kilowatt-hours (kWh); 1 kWh is equal to 1,000 watts working for one hour. In 2001, the entire world consumed 13.9 trillion kilowatt-hours of electricity [source: Clean-Energy]. Of that global 13.9 trillion kWh, 25 percent (3.45 trillion kWh) powered electrical devices in the United States [source: IndexMundi]. And of that 3.45 trillion kWh, 1.14 trillion kWh were used in households [source: EIA]. That's more than 30 percent of U.S. electricity going to power homes, which is more than either the commercial or the industrial sector uses [source: EIA]. Why does the residential sector consume so much? Simple: home appliances draw large amounts of energy. An appliance rated at 1,000 watts, left on for one hour, will use 1 kWh of electricity. Now think about all the appliances -- large and small -- you have in your home. Over the last 30 years, the efficiency of many appliances has increased dramatically. A refrigerator manufactured in 1979 consumed between 120 and 300 kWh per month; in a post-2001 unit, that monthly range is down to 31 to 64 kWh [source: Hawaiian Electric]. But refrigerators are still a big draw on the energy supply. And they're not alone. Small appliances like toasters, hair dryers, coffee makers, vacuum cleaners and curling irons all use more watts than refrigerators do. Ranges and dishwashers do, too (you've probably noticed a trend -- producing heat takes lots of watts). But these big-watt items are only on for short periods of time, so they don't use as much energy as an appliance that draws fewer watts but runs indefinitely -- like a fridge/freezer or a water heater. So for the biggest energy hogs in the home, we're left with the household appliances that we leave running for hours -- or days -- at a time. In this article, we'll take a look at five of the most energy-hungry appliances in our homes. No. 5 on the list is the refrigerator/freezer. Despite its huge efficiency jump in the last few decades, it still ranks high in energy use.
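As a rough illustration of the arithmetic above (watts × hours ÷ 1,000 = kWh), here is a minimal sketch; the wattages and daily run times below are illustrative assumptions, not measured figures from the article.

```python
# Sketch of the kWh arithmetic described above.
# Wattages and daily run times are illustrative assumptions, not measurements.

def monthly_kwh(watts, hours_per_day, days=30):
    """Energy used in a month: watts * hours / 1000 = kWh."""
    return watts * hours_per_day * days / 1000

appliances = {
    "refrigerator (compressor ~8 h/day)": (150, 8),
    "hair dryer (10 min/day)": (1500, 10 / 60),
    "toaster (5 min/day)": (1200, 5 / 60),
}

for name, (watts, hours) in appliances.items():
    print(f"{name}: {monthly_kwh(watts, hours):.1f} kWh/month")
```

Even with these made-up numbers, the pattern the article describes shows up: the high-wattage, short-duration appliances land in the single digits of kWh per month, while the always-running refrigerator dominates.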
Development of a surface plasmon resonance-based immunoassay for aflatoxin B1. Aflatoxins are a group of highly toxic fungal secondary metabolites produced by Aspergillus species that may contaminate foodstuffs and feeds. Two different anti-aflatoxin B1 antibodies were examined to develop a surface plasmon resonance (SPR)-based immunoassay for aflatoxin B1. A conjugate consisting of aflatoxin B1-bovine serum albumin (BSA) was immobilized on the dextran gel surface. Competition between the immobilized aflatoxin B1 conjugate and free aflatoxin B1 in solution for binding to antibody injected over the surface formed the basis of the assay. Regeneration of the antibody from the immobilized conjugate surface is essential for the development of such an inhibition immunoassay. Problems were encountered with the regeneration of the sensor surface, owing to the high-affinity binding of the antibodies. Conventional regeneration solutions consisting of low concentrations of NaOH and HCl worked to a degree, but regeneration came at the expense of the integrity of the immobilized conjugate. A polyclonal anti-aflatoxin B1 antibody was produced and was found to be regenerable using an organic solution consisting of 1 M ethanolamine with 20% (v/v) acetonitrile, pH 12.0. This solution combined high ionic strength, extreme pH and chaotropic properties, and it allowed the development of an inhibition immunoassay. The assay had a linear range of 3.0-98.0 ng/mL with good reproducibility.
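Purely for illustration, the sketch below shows how a standard curve over the reported linear range (3.0-98.0 ng/mL) might be fit and used to interpolate an unknown sample. The response values are hypothetical, and a real inhibition-format SPR assay would more likely use a four-parameter logistic fit than a straight line; none of this comes from the paper itself.

```python
# Hypothetical calibration sketch for an inhibition-format assay:
# the SPR response decreases as free aflatoxin B1 concentration increases.
import numpy as np

standards = np.array([3.0, 12.0, 25.0, 50.0, 98.0])        # ng/mL (assumed standards)
responses = np.array([950.0, 820.0, 690.0, 510.0, 260.0])  # arbitrary response units

# Linear fit within the stated linear range (an illustrative simplification).
slope, intercept = np.polyfit(standards, responses, 1)

def estimate_concentration(response):
    """Invert the fitted line to estimate concentration from a response."""
    return (response - intercept) / slope

print(f"Estimated concentration at response 600: {estimate_concentration(600):.1f} ng/mL")
```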
Effects of saccharin intake on hippocampal and cortical plasticity in juvenile and adolescent rats. The sensory system is developed and optimized by experiences in the early phase of life, in association with other regions of the nervous system. To date, many studies have revealed that deprivation of specific sensory experiences can modify the structure and function of the central nervous system; however, the effects of sensory overload remain unclear. Here we studied the effect of overloading the sense of taste in the early period of life on the synaptic plasticity of the rat hippocampus and somatosensory cortex. We gave male and female Sprague Dawley rats ad libitum access to a 0.1% saccharin solution for 2 hours per day for three weeks after weaning on postnatal day 22. Saccharin consumption was slightly higher in males than in females; however, saccharin intake did not affect chow intake or weight gain in either male or female rats. We examined the effect of saccharin intake on long-term potentiation (LTP) in the hippocampal Schaffer collateral pathway and the somatosensory cortex layer IV to II/III pathway in the 6-week-old saccharin-fed rats. There was no significant difference in LTP formation in the hippocampus between the control group and the saccharin-treated group in either male or female rats. Likewise, in the somatosensory cortex we did not see a significant difference in LTP among the groups. We therefore conclude that saccharin intake during weeks 3-6 of life may not affect the development of the physiological function of cortical and hippocampal synapses in rats.
Brendan Sinclair, North American Editor, Thursday 27th June 2013. Consumers weren't the only ones happy with Microsoft's recent reversal of Xbox One policies that would have enabled extra fees for used games and required a daily check-in with the company's servers. Speaking with GamesIndustry International, Ubisoft CEO Yves Guillemot said he was pleased with the move. "I think consumers didn't like their approach, so the fact they went back and listened to the consumers and gave them something different is a good move to ensure the new consoles [achieve] their potential," Guillemot said. While the reversal calls to mind Ubisoft's abandoned foray into always-online DRM with its PC games, Guillemot said the two situations aren't entirely similar. The problem for Ubisoft, he said, was that the publisher didn't offer enough content to justify the constant online connection. With Xbox One, Guillemot said, Microsoft had lots of content planned to support its connection requirements. As for the pricing of the next-gen systems, Guillemot again expressed his approval. Like Activision's Eric Hirshberg, Guillemot noted that the $499 Xbox One asks a premium over the $399 PlayStation 4. But where Hirshberg thought Microsoft still needs to work to justify that discrepancy, Guillemot said the inclusion of the new Kinect camera in Microsoft's offering looks to do just that. "For the hardware they're putting on the market, the price is right," Guillemot said. "They are offering machines that will be exceptional, and for those prices, they're good deals." That was a better appraisal than Guillemot gave the Wii U prior to its launch. Last November, Guillemot said he always prefers lower pricing, so he wasn't happy when Nintendo announced plans to sell the Wii U in $299 and $349 bundles. Still, Ubisoft has supported the Wii U, and even with its slow start, Guillemot said he hasn't given up on it just yet. "We like the machine itself, with its possibility to have different types of gameplay on the TV screen and on the tablet," Guillemot said. "It's something that's really new for the industry, and we'll continue to see more of that on the PS4 and Xbox One." Now the big question is what Nintendo will do to push sales of the system. Ubisoft has already said it would welcome a Wii U price cut, but Nintendo president Satoru Iwata dismissed the idea in an interview earlier this month. "We will continue to support the Wii U this Christmas, and we're expecting it to take off in terms of sales," Guillemot said. "And we'll review what happened again at the beginning of next year." While its biggest games of the year are yet to come, Ubisoft has found some success in 2013 with the release of Call of Juarez: Gunslinger and Far Cry 3: Blood Dragon, $15 downloadable offerings based on the company's established retail brands. The latter title in particular surpassed Ubisoft's expectations, topping 500,000 copies sold in under two months and even boosting sales of the full Far Cry 3 release, especially on PC. Guillemot said he was "very happy" with both games' performances, adding that their success has given the company the opportunity to further explore that formula of downloadable brand extensions in the future.
That Ubisoft would detail Blood Dragon's performance like that is noteworthy in itself, as reliable sales figures for downloadable titles are rarely divulged in the industry. However, Guillemot said he supported the Entertainment Software Association's call for greater digital sales transparency, saying it was important to release that information. "It gives all the industry better visibility on what's happening," Guillemot explained. "And not just the industry, but all the supporters of the industry and the financial community, as well. So we need to have more information to help the industry continue to grow." Despite agreement on the value of greater sales reporting, Guillemot said there's still reluctance on the part of companies to participate in it. "When you can, you keep all your secrets," Guillemot said. "Everybody feels that if the other guys knew where they were getting their money from, competitors would jump on that. Which is true. But what we've seen is that very often, if you receive as much information as you give, it's not negative."
At the end of 1989, David Foster Wallace was admitted to McLean Hospital, the psychiatric hospital associated with Harvard University, for substance addiction. He was twenty-seven years old and increasingly desperate for help. He had already experienced literary fame with his college novel, “The Broom of the System,” and sunk into obscurity with his postmodern short-story cabinet of wonders, “Girl with Curious Hair” (twenty-two hundred copies sold in hardcover). His most recent stop, as a graduate student in philosophy at Harvard, had lasted only a few weeks. His private life was hardly less uneven. He had attempted suicide the year before, in his family home, and had also gone from being a marijuana addict to an alcoholic, mostly drinking alone and in front of the television. Most dreadfully, he felt that he could no longer write well. He was unsure whether the problem was lack of focus, lack of material, or a lack of ambition. Granada House was to be the improbable solution to this problem, altering his approach to his work and putting him on the road to producing, in remarkably short order, his masterpiece, “Infinite Jest.” The four weeks Wallace spent at McLean in November 1989 changed his life. This was not his first or most serious crisis, but he felt now as if he had hit a new bottom or a different kind of bottom. From the ashes to which he had reduced postmodernism a new sort of fiction was meant to arise, as he’d recently laid out in the essay “Fictional Futures and the Conspicuously Young.” How else to understand the love note to the reader at the end of “Westward,” the last story he’d successfully written? But instead of rebirth, a prolonged dying had followed, and for the past year the corpse had moldered. Wallace hadn’t even been able to finish a nonfiction piece without help since 1987. Never before had he worked so hard with so little to show for it. Wallace was placed in a facility for alcoholics and depressives, with a large room for twelve-step meetings. The medical staff interviewed Wallace and told him that he was a hard-core alcohol and drug user and that if he didn’t stop abusing both he would be dead by thirty. Wallace in turn reported the news to his college roommate and close friend Mark Costello, who came the next day. “I’m a depressive, and guess what?” Wallace said. “Alcohol is a depressant!” He smiled through his tears, as if, Costello remembers, he “was unveiling a fun surprise to a five-year-old.” It was of course information Wallace knew already. The program was meant to shake up the addict, and, with Wallace, it succeeded. Pulling him out of his old life and keeping him away from its temptations and habits helped. In the end, though, what mattered most was probably that the intoxicated Wallace was no longer writing successfully, which left open the hope that a sober one might. Wallace saw a therapist and went to meetings. He detoxed from the alcohol. Bonnie Nadell, his longtime literary agent, who was back in the Northeast to be with her family for Thanksgiving, came by to see her author a few weeks after his admission. Wallace was already calmer by then. He met Nadell and a couple of other friends in a brightly lit room full of other patients, all smoking and drinking black coffee. Wallace looked so ragged that Nadell borrowed a pair of scissors from the staff and cut his hair. But she was happy to see he was writing in a notebook. 
McLean was the storied holding tank for many literary depressives, from Sylvia Plath to Robert Lowell, and it occurred to Wallace’s friends that this gave him at least some comfort, that he thought of himself as at a mental-health Yaddo. It was Wallace’s expectation that he would go back to Harvard after his stay at McLean. He was, after all, still enrolled in the graduate program. But the psychiatric staff kept advising him against it. He did not recognize himself in their phrase “hard-core recidivist,” but as the weeks went by he felt farther and farther away from his old self and must have begun, amid his anxiety about writing, to concede the point that survival had to come first. In any event, he chose to go to a halfway house in Brighton run by a woman who had worked in a psychology lab funded by NASA before she herself went into rehab. He hoped she would understand what he saw as the particular problems of a person as intelligent and educated as himself and provide support. It would be the next best thing to McLean, which Wallace was—Costello noted—sorry to have to leave. He had gotten used to the routines—the meetings, the therapy, the order, the prepared meals—not entirely unlike home. Brighton was a world away from Cambridge, and he did not know what to expect. Despite having written a book on rap, his knowledge of anything other than middle-class academic life was minimal. He wrote Nadell at the end of November, “I am getting booted out of here and transferred to a halfway house…. It is a grim place, and I am grimly resolved to go there.” Granada House was on the grounds of the Brighton Marine hospital near the Massachusetts Turnpike. Wallace found it funny that a “marine hospital” should be nowhere near water. He gives a good picture of its fictional counterpart in “Infinite Jest”: Unit #6, right up against the ravine on the end of the rutted road’s east side, is Ennet House Drug and Alcohol Recovery House, three stories of whitewashed New England brick with the brick showing in patches through the whitewash, a mansard roof that sheds green shingles, a scabrous fire escape at each upper window and a back door no resident is allowed to use and a front office around on the south side with huge protruding bay windows that yield a view of ravine-weeds and the unpleasant stretch of Commonwealth Ave. The compound consisted of seven buildings—“seven moons orbiting a dead planet,” as it is described in “Infinite Jest”—all leased to various substance-abuse and mental-health assistance groups. Wallace met Deb Larson, the director, at his new temporary home. Tall and blonde, she walked with a limp: drunk, she had fallen down in her kitchen, hitting her head, causing a partial paralysis. Even then she hadn’t stopped drinking. Wallace respected her. She was pretty and smart and gave him a link to an old life that was still his present—you could almost see Harvard from the top floor of the building. Recovery facilities tried to control the stress levels of their participants, and one activity they generally prohibited was school. Wallace had no choice but to call the philosophy department at Harvard and ask for a leave of absence. He was too humiliated to go back to get the vegetable juicer, a gift from his mother, that he had left behind in the graduate office. Wallace was expected to find low-level work. 
The writer, whose only real skill was teaching and writing, cast around and was able—probably thanks to the presence on his resume of the head of Amherst College security as a reference—to get hired as a guard at Lotus Development, a large software company. Granada House rules stipulated a forty-hour workweek, so Wallace got up at 4:30 in the morning to take the Green Line subway and worked until 2 P.M., walking to a vast disk-packaging plant in Lechmere, clocking in his whereabouts every ten minutes and twirling his baton (or so he later said). He would tear pages out of his notebook and send letters to his friends, maintaining contact with the small group of editors and writers who were vital to him. The Lotus experience, he recalled in a later interview, reminded him of “every bad ’60s novel about meaningless authority,” but at the time he bore it well. “Give me a little time to get used to no recreational materials and wearing a polyester uniform and living with 4 tatooed ex-cons and I’ll be right as rain,” he wrote the editor and literary critic Steven Moore with ironic brio shortly after starting. Even inside Granada House, he managed to attend to the business of being a writer—following up on submissions to magazines and reading pages of stories he had coming out. He could see the strange side of his situation. When the galleys of his story “Order and Flux in Northampton” arrived from Conjunctions with a page missing, he told his editor Brad Morrow he could send it at his convenience. “I’m not going anywhere for Xmas,” he wrote. But in his heart he was stunned with what had happened to him. “I am,” he wrote his former professor at Amherst Dale Peterson, “OK, though very humiliated and confused.” He was sharing a barracks-like room in Granada House with four men, one of whom, he wrote Rich C., who had been his twelve-step program sponsor the year before, had had a stroke while on cocaine and had a withered right side. “Mr. Howard,” he told his Norton editor, Gerry Howard, “everyone here has a tattoo or a criminal record or both!” To Peterson he reported, “Most of the guys in the house are inmates on release, and while they’re basically decent folk it’s just not a crowd I’m much at home with—Heavy metal music, black t-shirts & Harleys, vivid tattoos, discussion of hard-vs.-soft-time, parole boards, gunshot wounds and Walpole—” Massachusetts’s toughest prison. Wallace continued at his security job for more than two months, and then, unable to bear getting up so early, he quit. He went to work as a front desk attendant at the Mount Auburn Club, a health club in Watertown. His job was to check members in—he called himself a glorified towel boy—but one day Michael Ryan, a poet who had received a Whiting Award alongside him two years before, came to exercise. Wallace dove below the reception desk and quit that day. Wallace’s friends were accustomed to his exaggerations and inventions over the years—they came with his clownish, hyperbolic persona—but when they visited him at the halfway house, they found that what he said was true: he had stepped through the looking glass. His friend Debra Spark, a fiction writer, remembers sitting in on a group therapy session with Wallace one day and being amazed to hear someone recount killing someone while drunk. All the same, Wallace found his place; order, no matter how foreign the context, was always easier for him than the unstructured world. 
He met with a counselor, as required, and nearly every evening he drove to different parts of the city with other Granada House members for substance-abuse meetings. His sponsor was named Jimmy, “a motorhead from the South Shore,” as he called him to the novelist David Markson, with whom he had begun a correspondence. Wallace read the Big Book and enjoyed making fun of its cheesy 1930s adman vocabulary to his friends: “tosspot,” “Dave Sheen heels,” “boiled as an owl.” “He laughed at them, but he also knew he needed them or he would die,” Mark Costello, who visited him at Granada House, remembers. If Wallace found himself in unfamiliar territory, the residents didn’t know what to make of him either. One remembers wondering, “This guy can probably go to Betty Ford. Why’s he here with us welfare babies?” No one really cared for his cleverness. He was to them a type they’d seen before, someone who, like the character Geoffrey Day in “Infinite Jest,” tries to “erect Denial-type fortifications with some kind of intellectualish showing-off.” Wallace was back in high school, trying to figure out his place in the pack. “It’s a rough crowd,” he wrote Rich C., “and sometimes I’m scared or feel superior or both.” Yet a piece of him was beginning to adjust to the new situation. He remembered his last failed attempt to get sober and how he was no longer writing and asked himself what he had to lose. He came to understand that the key this time was modesty. “My best thinking got me here” was a recovery adage that hit home, or, as he translated it in “Infinite Jest,” “logical validity is not a guarantee of truth.” He knew it was imperative to abandon the sense of himself as the smartest person in the room, a person too smart to be like one of the people in the room, because he was one of the people in the room. “I try hard to listen and do what [they say],” he wrote Rich C., “I’m trying to do it easy … this time,” not “get an A+…. I just don’t have enough gas right now to do anything fast or well. I’m trying to accept this.” Not that things came easily. The simple aphorisms of the program seemed ridiculous to him. And if he objected to them, someone inevitably told him to do what was in front of him to do, driving him even crazier. He was astonished to find people talking about “a higher power” without any evidence beyond their wish that there were one. They got down on their knees and said the Thankfulness prayer. Wallace tried once at Granada House, he told Costello, but it felt hypocritical. (All the same, Wallace liked to quote one of the veteran recovery members, the group known in “Infinite Jest” as “the crocodiles,” who told him, “It’s not about whether or not you believe, asshole, it’s about getting down and asking.”) There were many times when he was sure he would start drinking again. “I’m scared,” he wrote Rich C. “I still don’t know what’s going to happen.” He asked his friend for some words of encouragement, and just when he thought he would give up, a letter arrived in which his former sponsor recounted the last time he had been in detox. “They gave me Librium,” he wrote Wallace, “and I threw them over my left shoulder for luck, and I’ve had good luck ever since.” The image, Wallace told his sponsor years later, was just the “good MFA-caliber trope” he’d needed. Stunned as he was, Wallace understood from the beginning that his fall from grace was a literary opportunity. So in the midst of his misery, he was alive to the new information he was getting.
The communal house, he would later write, “reeks of passing time. It is the humidity of early sobriety, hanging and palpable.” Wallace was known for sitting quietly, listening as residents talked for hours about their lives and their addictions. (Later, residents would often be surprised to find that though he had heard their stories they had not heard his.) The explanations people gave for their behavior startled him with their simplicity, but their voices—always his way in to composition—were unforgettable, and their stories had a clarity his lacked. This was the sort of access to interior lives a novelist could not get elsewhere. He was finding, as he later told an interviewer, that “nobody is as gregarious as someone who has recently stopped using drugs.” Where else could a writer find, as Wallace wrote in “Infinite Jest,” in a passage that sounds as if Lester Bangs had written it, twenty-one other newly detoxed housebreakers, hoods, whores, fired execs, Avon ladies, subway musicians, beer-bloated construction workers, vagrants, indignant car salesmen, bulimic trauma-mamas, bunko artists, mincing pillow-biters, North End hard guys, pimply kids with electric nose-rings, denial-ridden housewives and etc., all jonesing and head-gaming and mokus and grieving and basically whacked out and producing nonstopping output 24-7-365. Wallace and his notebook were a familiar sight in the communal rooms and recovery meetings, trapping little inspirations before they could get away. Within a few months of arriving, Wallace had already drafted a scene centered on one of the most intriguing residents at Granada House, Big Craig. Big Craig—Don Gately in the novel—was one of the Granada House supervisors and sometimes the house cook. He had first met Wallace when he found the new resident’s stuff on his bunk and threw Wallace’s bag on the ground. Craig was in his mid-twenties, “sober and just huge,” as Wallace would later write in “Infinite Jest,” looking “less built than poured, the smooth immovability of an Easter Island statue.” Wallace quickly chose a different bed. (Craig didn’t trust Wallace when he first met him. “My suspicions were that he was just looking for material for a book,” he remembers.) Craig had grown up on the North Shore and been a burglar and Demerol addict. Friends closed elevator doors on his head for fun when he was a teenager, a detail Wallace would put into “Infinite Jest” too. But he turned out not only to come from a different world but also to be quite sensitive. And it did not take Wallace long to see the possibilities in a lug with an interior life. There was a sort of Dostoevskian gloss to him, the redeemed criminal, and Dostoevsky was on Wallace’s mind. He wrote to Dale Peterson shortly after arriving that “going from Harvard to here” was like “House of the Dead… with my weeks in drug treatment composing the staged execution and last minute reprieve from same.” The reprieve, he hoped, would spur the same creative surge it did in the Russian. This excerpt is drawn from “Every Love Story is a Ghost Story: A Life of David Foster Wallace,” by D. T. Max, a staff writer for this magazine, published this week. It was reprinted by arrangement with Viking Penguin, a member of Penguin Group (USA) Inc.
Curiosity feted its first Martian year on the red planet (687 Earth days) with a stitched-up selfie while NASA reflected on the Mars rover's triumphs and setbacks. So far, it has achieved most of its mission goals, particularly its quest for evidence that Mars could have supported life. Drilling samples revealed traces of all the elements needed for life, and it spotted a streambed that once had "vigorous" water flow. The rover also found that moisture could be extracted from the soil, and that radiation levels were safe for humans -- all important details for planned space travel. Unfortunately, due to sharper-than-expected rocks, Curiosity now has a gaping hole in one of its wheels, which forced the team to change its driving methods and routes. It's not expected to have much impact on the mission, though -- after grabbing samples at a site called Windjana, Curiosity's now headed to Mount Sharp, some 2.4 miles away. With its main goals accomplished, any new science is gravy.
Assessment of laboratory performance with Streptococcus pneumoniae antimicrobial susceptibility testing in the United States: a report from the College of American Pathologists Microbiology Proficiency Survey Program. OBJECTIVE To assess the performance of clinical microbiology laboratories in the United States when conducting in vitro susceptibility tests with Streptococcus pneumoniae. METHODS The results of a nationwide College of American Pathologists Proficiency Survey test sample, in which susceptibility testing of an isolate of S. pneumoniae was performed, were assessed with respect to precision and accuracy. RESULTS Wide variability was noted among participating laboratories with both minimum inhibitory concentration procedures and disk diffusion susceptibility tests when both methods were applied to S. pneumoniae. Despite this high degree of variation, categorical interpretive errors were uncommon. Numerous laboratories reported results for antimicrobial agents that are not recommended by the National Committee for Clinical Laboratory Standards for tests with S. pneumoniae. CONCLUSIONS Current susceptibility testing practices with S. pneumoniae in the United States indicate limited precision and a tendency for laboratories to test and report results obtained with antimicrobial agents of questionable therapeutic value against this organism. Continued efforts to standardize susceptibility testing of S. pneumoniae in the United States are warranted. In addition, modifications of existing interpretive criteria may be necessary.
We have already seen quite a few great smartphones announced this year, including the Samsung Galaxy S8 and S8 Plus, LG G6, and the Huawei P10, but there are plenty more high-end devices coming this year that will go head-to-head with the current flagships on the market. If you’re wondering which smartphones we’re expecting to see in 2017 and when they will be available, keep reading. You’ll find all the details regarding the Xiaomi Mi 6, OnePlus 5, Samsung Galaxy Note 8, and others down below. Note: We’ve now moved Q1’s releases to the bottom of this post with hands-on and review videos; we’ll rotate and refresh this page at the start of each new quarter. April – June (Q2) Samsung Galaxy S8 and S8 Plus Two of the most anticipated smartphones of the year, the Galaxy S8 and S8 Plus, were announced at a press event on March 29 and will be hitting stores on April 21. Both devices offer a large Infinity display (5.8 and 6.2 inches) that’s curved on both sides and has small bezels at the top and bottom. As there’s not enough room on the front, Samsung moved the iconic fingerprint scanner to the back of the devices, next to the camera. The flagships also come with Samsung’s own digital assistant called Bixby, which is an alternative to Google’s Assistant. They are slated for a global release on Friday, April 21, but you can already pre-order them in the US and a bunch of other countries. Take a look at the Galaxy S8 hands-on video below to learn more. Xiaomi Mi 6 and Mi 6 Plus The official reveal of the Xiaomi Mi 6 looks to be just around the corner, with the company confirming the smartphone will be launched on April 19 in Beijing. However, it didn’t say anything regarding the Mi 6 Plus, so there’s a chance that the two devices won’t be announced on the same day. The flagships are expected to feature high-end specs and come in a few different variants. As with all of Xiaomi’s smartphones, they will offer a great price-performance ratio, as the pricing is rumored to start at just $290. If you want to learn more about the two devices, feel free to check out the Xiaomi Mi 6 and Mi 6 Plus rumor roundup post. ZTE Axon 8 Along with the OnePlus 3T, the ZTE Axon 7 was one of the best budget smartphones of 2016. We’re excited to see what its successor, the Axon 8, will bring to the table. If the company sticks to the traditional one-year release cycle, it will announce its upcoming flagship in May, with sales starting about a month later. In order to compete with more established brands on the market, the ZTE Axon 8 will try to spark consumer interest with great specs and a relatively low price. Asus Zenfone 4, Zenfone 4 Deluxe, and Zenfone 4 Ultra The new Zenfone 4 series of smartphones is expected to be announced at Computex in Taiwan, which kicks off at the end of May. We’re still a little while away, so we don’t really know what to expect from the devices, but leaks will emerge in due course. As the Asus brand isn’t really that well established in the smartphone world, the company will have to innovate and take some risks if it wants to compete with the big boys like Samsung, Huawei, and others. HTC U HTC has already confirmed that the U Ultra isn’t the only high-end device it will release this year. The company said that we can expect a Snapdragon 835-powered HTC phone in 2017 — probably called the HTC U — which is expected to be officially announced in the second quarter.
According to Evan Blass, HTC will take the wraps off the smartphone in mid-to-late April, while sales are expected to kick off in early May. The latest rumors claim that one of the biggest features of the device will be the sensors placed in the sides of the phone (Edge Sense technology). These will allow users to perform certain tasks, like launching the camera, by simply squeezing the edges of the device. OnePlus 5 The release date of the OnePlus 5 is kind of hard to pinpoint. The OnePlus 3 was announced in June 2016, but just five months later the OnePlus 3T was unveiled. Nevertheless, our guess is that the OnePlus 5 will make an appearance at the end of the second quarter — June. As always, we expect it to feature high-end specs and an affordable price tag, which is the combination that made the company and its products famous in the first place. You’re probably wondering what happened to the OnePlus 4, right? It looks like the upcoming OnePlus flagship will skip the number four due to its association with death in Chinese culture. Moto Z and Moto Z Force (2017) Last year’s Moto Z and Moto Z Force took the modular approach to the next level. Some people loved it, while others weren’t so impressed. Nevertheless, both are great devices and we are looking forward to seeing their successors this year. Not much is known about them, except that they will probably still boast a modular design that will make them different from most other smartphones on the market. Our best guess at this point is that they will be announced by the end of the second quarter (June) at the latest and, perhaps not surprisingly, they might be known as the Moto Z2. July – September (Q3) Honor 9 Last year’s Honor 8 is a great device, especially when considering its price. It offers solid specs along with a glass body that is really easy on the eyes. Although we don’t know much regarding its successor, the Honor 9, we expect that the company will opt for the same strategy that made the Honor 8 successful. It should go on sale in July, assuming that the company sticks to the standard one-year release cycle, and it’ll likely have a pretty similar spec sheet to the Honor 8 Pro, just with a smaller screen and Leica-branded cameras. Samsung Galaxy Note 8 Despite the Galaxy Note 7 fiasco, Samsung will still release a new version of its popular phablet under the Note brand this year. The Samsung Galaxy Note 8, as it will likely be called, should be announced sometime in the summer. Probably in August, just like the last two Note smartphones, though the delay of the Galaxy S8/S8 Plus could change that, pushing it slightly later in the year. The device will presumably look somewhat similar to the Galaxy S8 and will also feature the popular S Pen, which will hopefully come with a bunch of new tricks. Let’s just hope that the upcoming Note 8 won’t experience the same problems as the Note 7, which was recalled by the company because of the whole “exploding in your hands” issue. The Note 8 will also likely get the under-glass fingerprint scanner and may also get dual cameras, both of which were rumored for the S8. October – December (Q4) LG V30 Those of you who like phablets are probably looking forward to the upcoming LG V30. You will, however, have to wait quite a few months before you’ll be able to get it, as the device will probably be announced in October. But it should be worth the wait. The V20 is one of the best phones on the market, and we expect that the V30 won’t be any different in that regard.
Its biggest feature, in addition to the large screen, is the secondary display on top, which is great for notifications and shortcuts, a feature recently copied in the HTC U Ultra. Google Pixel 2 and Pixel XL 2 Google’s phones have impressed us with their excellent camera and a great software experience. However, they do have their faults, which Google will hopefully address with the Pixel 2 and Pixel XL 2. Despite the glass window on the back, the overall design of the devices could be described as bland. We hope that the next Pixel generation will offer something new in terms of design and also come with a more affordable price tag. There are also rumors claiming that it will be waterproof and feature a curved display. Google will presumably announce the Pixel 2 and Pixel XL 2 in October alongside the final version of the Android O release. Xiaomi Mi Note 3 This is another phablet we’re looking forward to seeing in 2017. Its predecessor, the Mi Note 2, was announced in October, which is why we expect to see the Mi Note 3 around the same time in 2017. In addition to a great price-performance ratio, the smartphone is expected to feature a large screen that is slightly curved on both sides like the Galaxy Note 7. It will be available in a few different variants, just like the Mi Note 2. Huawei Mate 10 The Mate 8 and Mate 9 were announced in November, so there’s no real reason to think that Huawei will change things up this year. We can, therefore, expect that the Chinese manufacturer will officially take the wraps off its phablet in November. As the Mate 9 came out not too long ago, it’s still a bit early to talk about specs and features. We do, however, expect to see a great-looking high-end device that will appeal to those who want a phone with a big screen. Previous quarters: January – March (Q1) Samsung Galaxy A3, A5, and A7 (2017) Samsung has already announced the 2017 Galaxy A lineup. At the beginning of the year, the company took the wraps off the Galaxy A3, Galaxy A5, and Galaxy A7, which are already available in quite a few markets around the world. They are all mid-range devices that sport a metal frame and a glass back, and are water resistant (IP68) for the first time. All three smartphones offer a design we’re used to seeing from Galaxy devices and they come in four colors: Black Sky, Gold Sand, Blue Mist, and Peach Cloud. You can check out our Galaxy A3 and A5 hands-on video below. HTC U Ultra HTC announced its phablet, the U Ultra, at the beginning of the year. The device comes with a 3D contoured glass back which looks similar to the one found on the Galaxy Note 7. HTC also decided to take a page from LG’s playbook, as the U Ultra features a 2-inch secondary display above the main one, which is great for notifications, contacts, and reminders, among other things. The device is already available in quite a few markets including the US, Canada, and some European countries. To learn more about it, check out the review of the HTC U Ultra below. BlackBerry KEYone The BlackBerry-branded smartphone called the KEYone — made by TCL Communication — was officially announced at MWC in Barcelona on February 25. Its biggest feature is definitely the iconic QWERTY keyboard that made BlackBerry phones popular among consumers. The mid-ranger, which was initially expected to go on sale in April, will be available in May. LG G6 LG took the wraps off the G6 at Mobile World Congress on February 26. The smartphone is very different from its predecessor.
It is waterproof and features a large 5.7-inch screen with an 18:9 aspect ratio and very thin bezels, which make it quite compact for its size. As expected, it doesn’t sport a modular design like last year’s G5, which wasn’t really that well received among consumers. The high-end device is already available in the US, Canada, Europe, and a few other markets. Be sure to check out our review of the LG G6 to learn more. Nokia 3, 5, 6, 3310 and 9 HMD Global, the company that bought the rights to sell Nokia-branded devices, announced a few new handsets at MWC. These included the mid-range Nokia 3 and 5 as well as a new version of the iconic 3310. Additionally, the company also announced that the Nokia 6, which was unveiled in China in January, would be available in other countries as well. All four devices are expected to go on sale in 120 markets at the same time in Q2 2017 (April-June). We’re also expecting to see a high-end Nokia device sometime this year. According to a report from Nokiapoweruser, it will be called the Nokia 9 and should see the light of day sometime in the third quarter of the year. Huawei P10 and P10 Plus At MWC in Barcelona on February 26, Huawei announced the P10 and P10 Plus. Both devices look quite similar to their predecessors with a few exceptions. The main one is that the flagships have the fingerprint scanner located on the front, below the screen, and not on the back as previous P series phones did. The two devices are already on sale in certain markets around the world. The P10 sports a 5.1-inch screen, while its bigger brother comes with a 5.5-inch display. There are a few other minor differences between the smartphones, which you can check out in the video below. Moto G5 and G5 Plus The Moto G5 and G5 Plus made their debut at MWC in Spain in February. The budget-friendly smartphones do bring some new things to the table when compared with their predecessors, including a metal body. The two devices look almost exactly the same but offer different screen sizes. The smaller one features a 5-inch screen, while the Moto G5 Plus has a 5.2-inch display. Lenovo has already started selling the two devices in a lot of markets across the world. However, it is worth mentioning that you can only get your hands on the bigger of the devices in the US, as the G5 isn’t (and won’t be) sold in the country. Sony Xperia XZ Premium and Xperia XZs At MWC 2017, Sony took the wraps off the high-end Xperia XZ Premium and Xperia XZs smartphones. The former offers the better specs of the two and is also bigger, as it has a 5.46-inch display compared to the 5.2-inch screen found on the XZs. The Xperia XZs is already available in a few markets including the US and India, while the Xperia XZ Premium is expected to be released in June. It is also worth mentioning that these aren’t the only two smartphones Sony announced in Barcelona. The company also unveiled the XA1 and XA1 Ultra, which offer a similar design with edge-to-edge displays and chunky top and bottom bezels. There you have it – our predictions on when to expect the biggest Android releases of 2017. Missed anything? Let us know in the comments!
Role of Tetra Amino Acid Motif Properties on the Function of Protease-Activatable Viral Vectors. Protease-activatable viruses (PAVs) based on adeno-associated virus have previously been generated for gene delivery to pathological sites characterized by elevated extracellular proteases. "Peptide locks", composed of a tetra-aspartic acid motif flanked by protease cleavage sequences, were inserted into the virus capsid to inhibit virus-host cell receptor binding and transduction. In the presence of proteases, the peptide locks are cleaved off the capsid, restoring the virus' ability to bind cells and deliver cargo. Although promising, questions remained regarding how the peptide locks prevent cell binding. In particular, it was unclear whether the tetra-amino acid (4AA) motif blocks receptor binding via electrostatic repulsion or steric obstruction. To explore this question, we generated a panel of PAVs with lock designs incorporating altered 4AA motifs, each with different chemical properties (negative, positive, uncharged polar, and hydrophobic), and characterized the resultant PAV candidates. Notably, all mutants display reduced receptor binding and decreased transduction efficiency in the absence of proteases, suggesting that simple electrostatics between heparin and the D4 motif do not play an exclusive role in obstructing virus-receptor binding. Even small hydrophobic (A4) and uncharged polar (SGGS) motifs confer a reduction in heparin binding compared to the wild type. Furthermore, both the uncharged polar N4 and Q4 mutants (comparable in size to the D4 and E4 motifs, respectively, but lacking the negative charge) demonstrate partial ablation of heparin binding. Collectively, these results support a possible dual mechanism of PAV lock operation, in which steric hindrance and electrostatics make nonredundant contributions to the disruption of virus-receptor interactions. Finally, because of their high virus titer production and superior capsid stability, only the negatively charged 4AA motifs remain viable design choices for PAV construction. Future studies probing the structure-function relationship of PAVs will further expand their promise as gene delivery vectors able to target diseased tissues exhibiting elevated extracellular proteases.
Microservices for Redevelopment of Enterprise Information Systems and Business Processes Optimization. Due to cost pressure and static technological development, the lifecycle of many large enterprise information systems in operation is coming to an end. At the same time, and as part of possible solutions, the demand for cloud systems in the enterprise context is continuously growing. Although microservices have become an established architectural pattern used by well-known companies, many corporations, especially smaller ones, shy away from using them. In this paper we present the positive and negative effects of converting legacy applications into cloud-based microservice architectures. In addition to technical aspects such as maintainability and scalability, organizational consequences are considered and analyzed. Furthermore, the positive effects on existing business processes, especially ITIL Service Management processes, are addressed, and it is demonstrated how ITIL metrics such as MTRS, MRTT or TRD can be optimized by using microservices. We show the advantages of a microservice architecture in the optimization of existing business fields and how new business areas can be opened up more easily than with conventional enterprise architectures. Even if microservices are not a silver bullet, they should be considered and evaluated as an opportunity for a new software lifecycle of a legacy enterprise application or as an architectural pattern for profound redevelopment.
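As a small illustration of the kind of ITIL metric named above, the sketch below computes MTRS (mean time to restore service) from a handful of incident records. The incident data are hypothetical and the abstract itself does not prescribe this calculation; the sketch only shows how such a metric is obtained so that before/after comparisons become meaningful.

```python
# Minimal sketch: MTRS (mean time to restore service) from incident records.
# The incident timestamps below are hypothetical, for illustration only.
from datetime import datetime

incidents = [
    # (service went down, service restored)
    (datetime(2023, 1, 3, 9, 0), datetime(2023, 1, 3, 9, 40)),
    (datetime(2023, 2, 11, 14, 5), datetime(2023, 2, 11, 15, 20)),
    (datetime(2023, 3, 7, 22, 30), datetime(2023, 3, 7, 22, 55)),
]

restore_hours = [(restored - down).total_seconds() / 3600 for down, restored in incidents]
mtrs = sum(restore_hours) / len(restore_hours)
print(f"MTRS over {len(incidents)} incidents: {mtrs:.2f} hours")
```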
The KeyKOS® Nanokernel Architecture. Alan C. Bomberger, A. Peri Frantz, William S. Frantz, Ann C. Hardy, Norman Hardy, Charles R. Landau, Jonathan S. Shapiro. Copyright © 1992, Jonathan S. Shapiro. All rights reserved. Permission to reproduce this document for non-commercial use is hereby granted, provided that this copyright notice is retained. This paper first appeared in Proceedings of the USENIX Workshop on Micro-Kernels and Other Kernel Architectures, USENIX Association, April 1992, pp. 95-112. ABSTRACT The KeyKOS nanokernel is a capability-based object-oriented operating system that has been in production use since 1983. Its original implementation was motivated by the need to provide security, reliability, and 24-hour availability for applications on the Tymnet® hosts. Requirements included the ability to run multiple instantiations of several operating systems on a single hardware system. KeyKOS was implemented on the System/370, and has since been ported to the 680x0 and 88x00 processor families. Implementations of EDX, RPS, VM, MVS, and UNIX® have been constructed. The nanokernel is approximately 20,000 lines of C code, including capability, checkpoint, and virtual memory support. The nanokernel itself can run in less than 100 Kilobytes of memory. KeyKOS is characterized by a small set of powerful and highly optimized primitives that allow it to achieve performance competitive with the macrokernel operating systems that it replaces. Objects are exclusively invoked through protected capabilities, supporting high levels of security and intervals between failures in excess of one year. Messages between agents may contain both capabilities and data. Checkpoints at tunable intervals provide system-wide backup, fail-over support, and system restart times typically less than 30 seconds. In addition, a journaling mechanism provides support for high-performance transaction processing. On restart, all processes are restored to their exact state at the time of checkpoint, including registers and virtual memory. This paper describes the KeyKOS architecture, and the binary-compatible UNIX implementation that it supports. Trademarks: Tymnet is a registered mark of British Telecom, Inc. UNIX is a registered mark of AT&T Bell Laboratories, Inc. This paper describes the KeyKOS nanokernel, a small capability-based system originally designed to provide security sufficient to support mutually antagonistic users. KeyKOS consists of the nanokernel, which can run in as little as 100 Kilobytes of memory and includes all of the system's privileged code, plus additional facilities necessary to support operating systems and applications. KeyKOS presents each application with its own abstract machine interface. KeyKOS applications can use this abstract machine layer to implement KeyKOS services directly or to implement other operating system interfaces. Implementations of EDX, RPS, VM/370, an MVS subset, and UNIX have been ported to the KeyKOS platform using this facility. Tymshare, Inc. developed the earliest versions of KeyKOS to solve the security, data sharing, pricing, reliability, and extensibility requirements of a commercial computer service in a network environment. Development on the KeyKOS system began in 1975, and was motivated by three key requirements: accounting accuracy that exceeded any then available; 24-hour uninterrupted service; and the ability to support simultaneous, mutually suspicious time-sharing customers with an unprecedented level of security.
Today, KeyKOS is the only commercially available operating system that meets these requirements. KeyKOS began supporting production applications on an IBM 4341 in January 1983. KeyKOS has run on Amdahl 470V/8, IBM 3090/200 (in uniprocessor System/370 mode), IBM 158, and NAS 8023. In 1985, Key Logic was formed to take over development of KeyKOS. In 1988, Key Logic began a rewrite of the nanokernel in C. After 10 staff months of effort a nanokernel ran on the ARIX Corporation 68020 system, and the project was set aside. The project resumed in July of 1990 on a different processor, and by October of 1990 a complete nanokernel was running on the Omron Luna/88K. The current nanokernel contains approximately 20,000 lines of C code and less than 2,000 lines of assembler code. This paper presents the architecture and design of the KeyKOS nanokernel, and the UNIX system that runs on top of it. In the interest of a clear presentation of the KeyKOS architecture, we have omitted a description of the underlying kernel implementation. KeyKOS is founded on three architectural concepts that are unfamiliar to most of the UNIX community: a stateless kernel, single-level store, and capabilities. Our experience indicates that understanding a single-level store model requires a fundamental shift in perspective for developers accustomed to less reliable architectures. It therefore seems appropriate to present these concepts first as a foundation on which to build the balance of the KeyKOS architectural description.

An early decision in the KeyKOS design was to hold no critical state in the kernel. All nanokernel state is derived from information that persists across system restarts and power failures. For reasons of efficiency, the nanokernel does reformat state information in private storage. All private storage is merely a cache of the persistent state, and can be recycled at any time. When the discarded information is needed again, it is reconstructed from the information in nodes and pages (which are described below). As a consequence, the nanokernel performs no dynamic allocation of kernel storage. This has several ramifications: The kernel is faster, since no complicated storage allocation code is ever run. The kernel never runs out of space. There is no nanokernel storage (such as message queues) that must be a part of the checkpoint. The absence of dynamic allocation means that there can be no interaction between dynamic allocation strategies, which is the predominant source of deadlock and consistency problems in most operating systems. The system outside the nanokernel is completely described by the contents of nodes and pages (see below), which are persistent. This state includes files, programs, program variables, instruction counters, I/O status, and any other information needed to restart the system. In addition, the ability to recover all run-time kernel data from checkpointed state means that an interruption of power does not disrupt running programs. Typically, the system loses only the last few seconds of keyboard input. At UNIFORUM '90, Key Logic pulled the plug on our UNIX system on demand. Within 30 seconds of power restoration, the system had resumed processing, complete with all windows and state that had previously been on the display. We are aware of no other UNIX implementation with this feature today.

KeyKOS presents a persistent single-level store model. To the KeyKOS application, all data lives in persistent virtual memory.
Only the nanokernel is aware of the distinction between main memory and disk pages. Periodic system-wide checkpoints guarantee the persistence of all system data. The paging system is tied to the checkpoint mechanism, and is discussed in the section on checkpointing, below. Persistence extends across system shutdown and power failure. Several IBM 4341 systems ran for more than three years across power failures without a logical interruption of service. Like memory pages, KeyKOS applications are persistent. An application continues to execute until it is explicitly demolished. To the application, the shutdown period is visible only as an unexplained jump in the value of the real time clock, if at all. As a result, the usual issues surrounding orderly startup and shutdown do not apply to KeyKOS applications. Most operating systems implement a transient model of programs; persistence is the exception rather than the rule. A client operating system emulator may provide transient applications by dismantling its processes when they terminate. The single-level store model allows far-reaching simplifications in the design of the KeyKOS system. Among the questions that the nanokernel does not have to answer are: How does the system proceed when it runs out of swap space? (It checkpoints.) How does the kernel handle the tear-down of a process? (It doesn't.) How is kernel state retained across restarts? (The kernel contains no state that requires checkpointing.) Each of these areas is a source of significant complexity in other systems, and a consequent source of reliability problems.

KeyKOS is a capability system. For brevity, KeyKOS refers to capabilities as keys. Every object in the system is exclusively referred to by one or more associated keys. Keys are analogous in some ways to Mach's ports. KeyKOS entities call upon the services of other entities by sending messages via a key. Message calls include a kernel-constructed resume key that may be used by the recipient to issue a reply. Messages are most commonly exchanged in an RPC-like fashion. What sets KeyKOS apart from other microkernels is the total reliance on capabilities without any other mechanisms. There are no other mechanisms that add complexity to the ideas or to the implementation. Holding a key implies the authority to send messages to the entity or to pass the key to a third party. If A does not have a key for B, then A cannot communicate with B. Applications may duplicate keys that they hold, but the creation of keys is a privileged operation. The actual bits that identify the object named by a key are accessible only to the nanokernel. Through its use of capabilities and message passing, KeyKOS programs achieve the same encapsulation advantages of object-oriented designs. Encapsulation is enforced by the operating system, and is available in any programming language. It is the complete security of this information hiding mechanism that makes it possible to support mutually suspicious users. A fundamental concept in KeyKOS is that a program should obey the "principle of least privilege". To that end, the design of KeyKOS gives objects no intrinsic authority, and relies totally upon their keys to convey what authority they have. Using these facilities, the system is conveniently divided into small modules, each structured so as to hold the minimal privilege sufficient for its operation. Entities may be referred to by multiple, distinct keys. This allows an entity that communicates with multiple clients to grant different access rights to the clients.
Every key has an associated 8-bit field that can be used by the recipient to distinguish between clients. When the entity hands out a key, it can set the field to a known value. Because all messages received by the entity include the 8-bit value held in the key, this mechanism can be used to partition clients into service classes or privilege levels by giving each class a different key. It is worthwhile to contrast this approach with the ring-structured security model pioneered in Multics and propagated in the modern Intel 80x86 family. The capability model is intrinsically more secure. A ring-structured security policy is not powerful enough to allow a subsystem to depend on the services of a subsystem with lesser access rights. Ring policies intrinsically violate the principle of least privilege. In addition, ring-based security mechanisms convey categorical authority: any code running in a given layer has access to all of the data in that layer. Capability systems allow authority to be minimized to just that required to do the job at hand. Using a capability model offers significant simplifications in the nanokernel. Among the questions that the nanokernel does not have to answer are: Does this user have the authority to perform this operation? (Yes - if you hold the key you can send the message.) How do I allocate enough kernel memory to perform name resolution on a variable length name? (The kernel never deals with names, only keys.) Where does this file name get inserted in this directory? (The nanokernel does not deal with file names or directories.) Because the nanokernel has no naming mechanism other than capabilities, entity naming is intrinsically decentralized. As a result, extending KeyKOS to multiprocessors is straightforward. KeyKOS applications cannot tell if they are running on a uniprocessor or a multiprocessor.

The nanokernel includes all of the supervisor-mode code of the system. The entire kernel is implemented in approximately 20,000 lines of reasonably portable C code, and 2,000 lines of 88x00 assembly code. Of the assembly lines, 1,000 lines are in the context switch implementation. This compiles to roughly 60 Kilobytes of executable code. While running, the nanokernel requires as little as 100 Kilobytes of main memory. The nanokernel is the only portion of the system that interprets keys. No other program has direct access to the bits contained in the keys, which prevents key forgery. In addition, the nanokernel includes code that defines the primitive system objects. These objects are sufficient to build the higher-level abstractions supported by more conventional operating systems. The nanokernel provides: multiprogramming support, primitive scheduling, and hooks for more sophisticated schedulers running as applications; a single-level store, as discussed above; separate virtual address space(s) for each KeyKOS process; redundant disk storage for system-critical information; a system-wide checkpoint-restart feature; journaling pages exempt from checkpoint for database and transaction processing support; keys by which messages are sent from one application to another; primitive and limited access to individual I/O devices; interpretation of keys that hides the location of the object on disk or in main memory. During normal operation, KeyKOS executes a system-wide checkpoint every few minutes to protect from power failures, most kernel bugs, and detected hardware errors. Both data and capabilities are checkpointed.
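The 8-bit field carried by every key, described just above, is easy to model outside the kernel. The C fragment below is only an illustrative sketch: real KeyKOS keys are opaque kernel objects, and the struct and function names here are invented for the example.

#include <stdio.h>
#include <stdint.h>

/* Toy model of a key: in real KeyKOS the bits are visible only to the nanokernel. */
typedef struct {
    uint32_t object_id;   /* which entity the key designates */
    uint8_t  databyte;    /* the 8-bit field set by the entity when it hands out the key */
} Key;

enum { CLASS_READ_ONLY = 1, CLASS_FULL_SERVICE = 2 };

static Key mint_key(uint32_t object, uint8_t databyte) {
    Key k = { object, databyte };
    return k;
}

/* Every message delivered to the entity carries the databyte of the key that
 * was invoked, so the entity can partition its clients without a password. */
static void serve(Key invoked, const char *request) {
    if (invoked.databyte == CLASS_FULL_SERVICE)
        printf("full-service client: performing \"%s\"\n", request);
    else
        printf("read-only client: refusing \"%s\"\n", request);
}

int main(void) {
    Key admin = mint_key(42, CLASS_FULL_SERVICE);
    Key guest = mint_key(42, CLASS_READ_ONLY);   /* same object, lesser class */
    serve(admin, "update record");
    serve(guest, "update record");
    return 0;
}

The point of the model is that the same object can hand out keys that differ only in this field and sort its callers into service classes without trusting anything the callers send.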
All run-time state in the nanokernel can be reconstructed from the checkpoint information. Except for the initial installation, the system restarts from the most recent checkpoint on power up. In addition to local checkpoint support, the nanokernel provides for checkpoints to magnetic tape or remote hot-standby systems. This allows a standby system to immediately pick up execution in the event of primary system failure. The KeyKOS kernel supports six types of fundamental objects: devices, pages, nodes, segments, meters, and domains.

The nanokernel implements low-level hardware drivers in privileged code. The supervisor-mode driver performs message encapsulation and hardware register manipulation. Except where performance compels otherwise, KeyKOS applications implement the actual device drivers.

The simplest KeyKOS object is the page. Page size is dependent on the underlying hardware and storage architectures, but in all current implementations is 4 Kilobytes. Every page has one or more persistent locations on some disk device, known as its home location. The KeyKOS system manages a fixed number of pages that are allocated when the system is first initialized. This number can be increased by attaching additional mass storage devices to the system. A page is designated by one or more page keys. Pages honor two basic message types: read, and write. When pages are mapped into a process address space, loads and stores to locations in a page are isomorphic to read and write messages on the page key. When a message is sent to a page that is not in memory, the page is transparently faulted in from backing store so that the operation can be performed. Applications that perform dynamic space allocation hold a key to a space bank. Space banks are used to manage disk resource allocation. The system has a master space bank that holds keys to all of the pages and nodes in the system.[1] One of the operations supported by space banks is creating subbanks; every bank is ultimately a subbank of the master space bank. If your department has bought the right to a megabyte of storage, it is given a key to a space bank that holds 256 page keys. Space banks are a type of domain.

A node is a collection of keys. All keys in the system reside in nodes. A node key conveys access rights to a node, and can be used to insert or remove keys from the node. Like pages, nodes can be obtained from space banks. In all current KeyKOS implementations, a node holds precisely 16 keys. Nodes are critical to the integrity of the system. The KeyKOS system vitally depends on the data integrity of node contents. As a result, all nodes are replicated in two (or more) locations on backing store. In keeping with the general policy of not performing dynamic allocation in the kernel, and because the integrity requirements for nodes are so critical, KeyKOS does not interconvert nodes and pages.

A segment is a collection of pages or other segments. Segments are used as address spaces, but also subsume the function of files in a conventional operating system. Segments can be combined to form larger segments. Segments may be sparse; they do not necessarily describe a contiguous range of addresses. Nodes are the glue that holds segments together. KeyKOS implements segments as a tree of nodes with pages as the leaves of the tree. This facilitates efficient construction of host architecture page tables. Because nodes and pages persist, so do segments. The system does not need to checkpoint page table data structures because they are built exclusively from the information contained in segments.
Meters control the allocation of CPU resources. A meter key provides the holder with the right to execute for the unit of time held by the meter. The KeyKOS kernel maintains a single top-level meter that represents the time interval from the present until the end of time. Like space banks, meters can be subdivided into submeters. Every running process holds a meter key that authorizes the process to execute for some amount of time. KeyKOS processes can be preempted. Holding a key to a meter that provides 3 seconds of CPU time does not guarantee that the process will run for 3 contiguous seconds. In the actual KeyKOS implementation, time slicing is enforced by allowing a process to run for the minimum of its entitled time or the time slice unit. Political scheduling policies may be implemented external to the kernel.

Domains perform program execution services. They are analogous to the virtual processors of the POSIX threads mechanism. It was a design goal not to restrict the architecture available to the user. A consequence is that KeyKOS supports virtual machines. Domains model all of the non-privileged state of the underlying architecture, including the general purpose register set, floating point register set, status registers, instruction set architecture, etc. A domain interprets a program according to the hardware user-mode architecture. Domains are machine-specific, though we have considered the implementation of domains that perform architecture emulation (e.g. for DOS emulation on a RISC machine). In addition to modeling the machine architecture, domains contain 16 general key slots and several special slots. The 16 general slots hold the keys associated with the running program. When a key occupies one of the slots of a domain, we say that the program executing in that domain holds the key. One of the special slots of the domain is the address slot. The address slot holds a segment key for the segment that is acting as the address space for the program. On architectures with separate instruction and data spaces, the domain will have an address slot for each space. Each domain also holds a meter key. The meter key allows the domain to execute for the amount of time specified by the meter. KeyKOS processes are created by building a segment that will become the program address space, obtaining a fresh domain, and inserting the segment key in the domain's address slot. The domain is created in the waiting state, which means that it is waiting for a message. A threads paradigm can be supported by having two or more domains share a common address space segment. Because domain initialization is such a common operation, KeyKOS provides a mechanism to generate "prepackaged" domains. A factory is an entity that constructs other domains. Every factory creates a particular type of domain. For example, the queue factory creates domains that provide queuing services. An important aspect of factories is the ability of the client to determine their trustworthiness. It is possible for a client to determine whether an object created by a factory is secure. Understanding factories is crucial to a real understanding of KeyKOS, but in the interest of brevity we have elected to treat factories as "black boxes" for the purposes of this paper. To understand the UNIX implementation it is sufficient to think of factories as a mechanism for cheaply creating domains of a given type. The most important operation supported by the nanokernel is message passing. Messages sent from one domain to another involve a context switch.
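Before turning to the message path itself, it may help to see the message shape and the three send operations (call, fork, and return, detailed in the next paragraph) written down as code. The layout below follows the description in the text: one parameter word, up to 4096 bytes of data, and four keys. The kc_* function names are invented for this sketch and are not the real KeyKOS interface.

#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

typedef uint32_t Key;                /* stand-in for an opaque capability */

typedef struct {
    uint32_t order;                  /* parameter word, commonly a method code */
    uint8_t  data[4096];             /* contiguous bytes copied from the sender's segment */
    size_t   data_len;               /* 0..4096 */
    Key      keys[4];                /* only keys the sender actually holds */
} Message;

/* call: mint a resume key, send, and block until the recipient returns. */
static int kc_call(Key target, const Message *msg, Message *reply) {
    (void)target; (void)msg;
    memset(reply, 0, sizeof *reply); /* stub; a real call would block here */
    return 0;
}
/* fork: send without waiting.  return: reply and stay available for new messages. */
static int kc_fork(Key target, const Message *msg)   { (void)target; (void)msg; return 0; }
static int kc_return(Key resume, const Message *msg) { (void)resume; (void)msg; return 0; }

int main(void) {
    Message request = { .order = 1 /* e.g. a "read" method code */ }, reply;
    request.data_len = 0;
    kc_call(/* some page or domain key */ 7, &request, &reply);  /* RPC-style use of a key */
    (void)kc_fork; (void)kc_return;
    printf("reply order = %u\n", reply.order);
    return 0;
}

Because sends have copy semantics and the kernel never buffers messages, a structure like this is consumed at the instant of delivery; any queuing has to be supplied by an ordinary domain in the middle.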
In order to encourage the separation of applications into components of minimal privilege, the nanokernel's message transfer path has been carefully optimized. The KeyKOS inter-domain message transfer path ranges from 90 instructions on the System/370 to 500 cycles on the MC88x00. Messages are composed of a parameter word (commonly interpreted as a method code), a string of up to 4096 bytes, and four keys. A domain constructs a message by specifying an integer, contiguous data from its address segment, and the keys to be sent. Only keys held by the sender can be incorporated into a message. Once constructed, the message is sent to the object named by a specified key. Sending a message is sometimes referred to as key invocation. KeyKOS supplies three mechanisms for sending messages. The call operation creates a resume key, sends the message to the recipient, and waits for the recipient to reply using the message's resume key. While waiting, the calling domain will not accept other messages. A variant is fork, which sends a message without waiting for a response. The resume key is most commonly invoked using a return operation, but creative use of call operations on a resume key can achieve synchronous coroutine behavior. The return operation sends a message and leaves the sending domain available to respond to new messages. All message sends have copy semantics. The nanokernel does not buffer messages; a message is both sent and consumed in the same instant. If necessary, invocation of a key is deferred until the recipient is ready to accept the message. Message buffering can be implemented transparently by an intervening domain if needed. The decision not to buffer messages within the nanokernel was prompted by the desire to avoid dynamic memory allocation, limit I/O overhead, keep the context switch path length short, and simplify the checkpoint operation. A message recipient has the option to selectively ignore parts of a message. It may choose to accept the parameter word and all or part of the byte string without accepting the keys, or accept the parameter word and the keys without the data. KeyKOS provides for regular system-wide checkpoints and individual page journaling. Checkpoints guarantee rapid system restart and fail-over support, while journaling provides for databases that must make commit guarantees.

The KeyKOS nanokernel takes system-wide checkpoints every few minutes. Checkpoint frequency can be adjusted by the administrator at any time without interruption of service. The KeyKOS system maintains two disk regions as checkpoint areas. When a checkpoint is taken, all processes are briefly suspended while a rapid sweep is done through system memory to locate modified pages. No disk I/O is done while processes are frozen. Once the sweep has been done, processes are resumed and all modified pages are written to the current checkpoint area. Once the checkpoint has completed, the system makes the other checkpoint area current, and begins migrating pages from the first checkpoint area back to their home locations. Checkpoint frequency is automatically tuned to guarantee that the page migration process will complete before a second checkpoint is taken. Because the migration process is incremental, a power failure during migration never leads to a corrupt system. An implementation consequence of this approach to checkpointing is unusually efficient disk bandwidth utilization.
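The alternating two-area checkpoint scheme just described can be summarized as control flow. This is a simulation only, with invented helper names; it ignores the real kernel's data structures and I/O scheduling.

#include <stdio.h>

/* The two on-disk checkpoint areas; "current" alternates between them. */
static int current_area = 0;

static void freeze_processes(void)             { printf("processes frozen\n"); }
static int  sweep_for_modified_pages(void)     { return 128; }   /* stub: count of dirty pages */
static void resume_processes(void)             { printf("processes resumed\n"); }
static void write_pages_to_area(int n, int a)  { printf("%d pages -> checkpoint area %d\n", n, a); }
static void migrate_area_to_home(int a)        { printf("migrating area %d to home locations\n", a); }

static void take_checkpoint(void) {
    freeze_processes();                        /* brief pause: no disk I/O happens here */
    int dirty = sweep_for_modified_pages();    /* rapid sweep of main memory */
    resume_processes();
    write_pages_to_area(dirty, current_area);  /* done while normal work continues */
    int finished = current_area;
    current_area = 1 - current_area;           /* the other area becomes current */
    migrate_area_to_home(finished);            /* incremental, restartable after power loss */
}

int main(void) {
    take_checkpoint();
    take_checkpoint();                         /* the next checkpoint uses the other area */
    return 0;
}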
Checkpoint, paging, and page migration I/O is optimized to take advantage of disk interleave and compensates for arm latencies to minimize seek delays. This accounts for all page writes. The aggregate result is that KeyKOS achieves much higher disk efficiency than most operating systems. If the system bus is fast enough, KeyKOS achieves disk bandwidth utilization in excess of 90% on all channels. It is worth emphasizing that the checkpoint is not simply of files, but consists of all processes as well. If an update of a file involves two different pages and only one of the pages has been modified at the time of the checkpoint, the file will not be damaged if the system is restarted. When the system is restarted the process that was performing the update is also restarted and the second page of the file is modified as if there had been no interruption. A power outage or hardware fault does not leave the system in some confused and damaged state. The state at the last checkpoint is completely consistent and the system may be restarted from that state without concern about damaged files. For most applications, it is acceptable for the system as a whole to lose the last few minutes work after a power outage. Transaction processing and database systems require the additional ability to commit individual pages to permanent backing store on demand. Using the journaling mechanism, a domain may request that changes to a particular page be synchronously committed to permanent storage. If a system failure occurs between the commit and the next completed checkpoint, the journaled page will remain committed after the system restarts. It is the responsibility of the requesting domain to see to the semantic consistency of such pages. The journaling mechanism commits pages by appending them to the most recent committed checkpoint. As a result, journaling does not lead to excessive disk arm motion. A curious consequence of this implementation is that transaction performance under KeyKOS improves under load.[2] This is due to locality at two levels. As load increases, it becomes common for multiple transactions to be committed by a single page write. In addition, performing these writes to the checkpoint area frequently allows the journaling facility to batch disk I/O, minimizing seek activity. The KeyKOS transaction system significantly exceeds the performance of competing transaction facilities running on the same hardware. CICS, for example, is unable to commit multiple transactions in a single write.

Process exceptions are encapsulated by the nanokernel and routed to a user-level handler known as a keeper. The keeper technology of KeyKOS brings all exception policy to application level programs outside of the nanokernel. A keeper is simply a domain that understands the exception messages delivered by the kernel; it is in all regards an ordinary domain. Since the UNIX implementation relies heavily on the Domain Keeper technology, the ideas and specifications concerning Keepers will be discussed before we delve into the UNIX specifics. Recall that a KeyKOS application has an address space, a domain, and a meter. Each of these objects holds a start key to an associated domain known as its keeper. When the process performs an illegal, unimplemented, or privileged instruction, the error is encapsulated in a message which is sent to the appropriate keeper, along with the keys necessary to transparently recover or abort the application.
The keeper may terminate the offending program, supply a correct answer and allow execution to continue, or restart the offending instruction. Each segment has an associated segment keeper. The segment keeper is a KeyKOS process that is invoked by the kernel when an invalid operation, such as an invalid reference or protection violation, is performed on a segment. Page faults are fielded exclusively by the nanokernel. By appropriate use of a meter keeper, more sophisticated scheduling policies can be implemented. The meter keeper is invoked whenever the meter associated with a domain times out. A thread supervisor might implement a priority scheduling policy by attaching the same meter keeper to all threads, and having the meter keeper parcel out time to the individual threads according to whatever policy seemed most sensible. The most interesting keeper for this paper is the domain keeper. The domain keeper is invoked when a trap or exception is taken. When a domain encounters an exception (system call, arithmetic fault, invalid operation, etc.) the domain stops executing and the domain keeper receives a message. The message contains the non-privileged state of the domain (its registers, instruction counter, etc.), a domain key to the domain, and a form of resume key that the keeper can use to restart the domain. When the faulting domain is restarted, it resumes at the instruction pointed to by the program counter. If necessary, the domain keeper can adjust the PC value of the faulting domain before resumption.

In July of 1990, Key Logic undertook to produce a binary-compatible prototype UNIX implementation for the Omron Luna/88K. The effort had two principal goals. The first was to rapidly construct a system that could run existing Omron application binaries. Based on Mach 2.5, the Omron implementation provides a reasonably complete version of the Berkeley UNIX system, including the X11r4 windowing system. KeyNIX was implemented by a single developer over a six month period, without reference to the UNIX source code. The implementation was partly based on an earlier Minix port that had been built for KeyKOS on the System/370. Our experience in implementing other systems was that breaking an application into separate function-oriented domains simplified the application enough to improve overall performance. A second goal of the KeyNIX implementation was to learn where such decomposition into separate domains would cause performance degradation. In several areas, multi-domain implementations were tried where the problem area was clearly a boundary case in order to explore the limitations of the domain paradigm. Broadly speaking, the UNIX system provides the following services: process management (fork, exec, exit, kill); file system and namespace services (open, link); I/O services (read, write, stat, ...); timing facilities (sleep, nap, ...); messaging (sockets, pipes); memory management (mmap, mprotect); signals; device support; and networking (TCP/IP, NFS). With the exception of networking, KeyNIX implements all of these services. Adding networking support would be straightforward, but was not part of the prototype effort.

Under KeyNIX, every UNIX process runs as a KeyKOS domain with a segment as its address space. A standard KeyKOS segment keeper is used to manage stack and heap growth within the address space segment. From the outside, the UNIX process model is essentially unchanged. No KeyNIX code is mapped in with the application, nor is special linking required.
The application address spaces are bit for bit identical. This severely penalizes all trivial system calls, and is a significant departure from the implementations used by other microkernels. The penalty could be eliminated in a dynamic-library based standard such as System V Release 4.

Figure 1: Structure of the UNIX Implementation

To support UNIX processes, we implemented a domain keeper, known as the UNIX Keeper. The UNIX keeper interprets the system call and either manages the call itself or directs the request to other domains for servicing. The implementation, which includes a number of cooperating domains, is shown in Figure 1. The gray box surrounds the domains and segments that are replicated for each UNIX process. Each of these domains in turn depends on other domains provided by the KeyKOS system. For example, a small integer allocator domain is used to allocate monotonically increasing inode numbers. To simplify the picture, domains that are not essential to understanding the structure of the UNIX implementation have been omitted. An unusual aspect of the KeyNIX design is that every UNIX process has a dedicated copy of the UNIX Keeper. When a process forks, the UNIX Keeper is replicated along with the process. By providing a separate UNIX keeper to each UNIX application, the scope of UNIX system failures is reduced to a single process. If a given UNIX process manages to crash its copy of the operating system, no other processes are impacted. An individual kernel is very hard to crash. To crash the entire UNIX system essentially requires physical abuse of the machine or its power supply. State that must be shared between multiple UNIX keepers, including the process table and open file table, is kept in a segment shared by all UNIX Keepers. Each process has a description block (a process table entry) that describes the process' address space, open files, and signal handling. Process table entries contain chains of child processes and pointers to the parent process table entry. Each open file has an entry in the Open File Table which keeps track of the number of processes that have the file open, the attributes of the file, and a pointer to the data structures that buffer the file data in memory. The UNIX keeper implements UNIX process and memory management services by calling directly on the underlying KeyKOS services. The nanokernel handles virtual memory mapping and coherency directly. When a program is loaded by exec(2), the UNIX keeper builds an address space segment and copies the executable file segment into it. Manipulating the KeyKOS segment structures is simpler than the equivalent structure manipulations in UNIX, and allows the UNIX keeper to be largely platform independent. The nanokernel is responsible for the construction of mapping tables for the particular hardware platform. The UNIX Keeper holds a key to the root inode of the KeyNIX file system. Each inode contains the usual UNIX inode information, and is implemented by a KeyKOS domain. If the inode denotes a file, the inode domain holds a key to a KeyKOS segment containing the file data. If the inode denotes a device, the device major and minor numbers are contained in the inode. By making each UNIX inode into a KeyKOS domain, the UNIX Keeper does not have to manage an inode cache or worry about doing I/O to read and write inodes. When the Keeper needs to read the status information from an inode it sends a message to the Inode object and waits for the reply. Similar arguments apply to other operations.
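A per-process UNIX Keeper is just a domain keeper, so its core is a dispatch on the trapped system call. The sketch below is a rough, self-contained model; the message layout, helper names, and system call numbers are all assumptions made for the example, not the actual KeyNIX interface.

#include <stdio.h>
#include <stdint.h>

typedef uint32_t Key;

/* A guess at the kind of state the kernel's trap message carries. */
typedef struct {
    uint32_t regs[8];      /* non-privileged registers of the stopped domain */
    uint32_t pc;           /* instruction counter at the trap */
    uint32_t trap_no;      /* which system call was attempted */
    Key      resume_key;   /* used to restart the domain */
} TrapMessage;

enum { SYS_getpid = 20, SYS_open = 5 };

typedef struct { int pid; /* open files, signal state, ... */ } ProcEntry;
static ProcEntry *my_entry(void) { static ProcEntry p = { 123 }; return &p; }

/* Stand-ins for messages sent to other domains. */
static long inode_domains_open(const char *path) { printf("resolving %s via inode domains\n", path); return 3; }
static void resume_domain(Key resume, uint32_t new_pc, long result) {
    printf("resume via key %u at pc=0x%x, result=%ld\n", resume, new_pc, result);
}

static void unix_keeper_handle(TrapMessage *t, const char *path_arg) {
    long result;
    switch (t->trap_no) {
    case SYS_getpid: result = my_entry()->pid;              break;  /* handled locally */
    case SYS_open:   result = inode_domains_open(path_arg); break;  /* forwarded to other domains */
    default:         result = -1;                           break;  /* unimplemented call */
    }
    resume_domain(t->resume_key, t->pc + 4, result);        /* skip the trap instruction */
}

int main(void) {
    TrapMessage t = { {0}, 0x1000, SYS_open, 9 };
    unix_keeper_handle(&t, "/etc/motd");
    t.trap_no = SYS_getpid;
    unix_keeper_handle(&t, NULL);
    return 0;
}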
The Keeper does not cache file or directory blocks, and does not maintain paging tables for support of virtual memory. All of these functions are handled by the nanokernel. In the original KeyNIX implementation, directory inodes contained a key to a B-tree domain that was an underlying KeyKOS tool. An analysis of typical directory sizes led to the conclusion that it would be more space efficient to implement small directories (less than five entries) in the inode itself. As a result, directory protocol requests are implemented directly by the inode domain. If the inode does not denote a directory it fails the directory messages appropriately. A curious artifact of this approach is that directory order is alphabetical order. This is occasionally visible to end users as a change of behavior in programs that search directories without sorting them. When opening a file, the UNIX Keeper issues a message to the file system root inode domain. This domain in turn calls on other domains, until ultimately the request is resolved to a segment key that holds the file content. Once the file has been located, the UNIX keeper maps the segment into the keeper address space and adds an entry to the open file table. The open file table is shared by all UNIX Keepers, and is used to hold dynamically changing information such as the file's current size and last modification date. When opening a device, the UNIX Keeper receives the major and minor device number from the appropriate inode domain. The major number is in turn handed to the device table domain, which returns a key to the domain that implements the driver. Drivers implemented in the prototype include character I/O, graphics console (supports the X Window System), the null device, sockets, kmem, and the mouse. Support for /dev/kmem is limited to forging those responses necessary to run the ps(1) command. In most cases, the device driver domain consists of the original UNIX device driver code linked with a support library that maps the UNIX driver-kernel interface onto KeyKOS key invocations.

The most difficult part of the KeyNIX implementation was support for the signal(2) mechanism. One of the deliberate design decisions of KeyKOS is that domains are single threaded. A domain is either waiting for a message, waiting for a reply to a message, or processing a message. There is no mechanism for stacking messages. This decision increases the reliability of the KeyKOS system, but occasionally requires that queuing domains be inserted into an otherwise straightforward remote procedure call. UNIX signals are asynchronous with respect to the receiving process. As a result, the implementation of the signal mechanism is one of the more complicated and pervasive (not to say perverse) aspects of the UNIX kernel.[3] To ensure that the UNIX Keeper is always able to receive signal notifications promptly, trivial queuing domains are required where an operation might block or complete slowly. The purpose of these domains is to queue messages to devices such as ttys and pipes that might otherwise delay the receipt of signals by the UNIX Keeper. The UNIX Keeper delivers these messages through the queue domain, and waits asynchronously for the queue domain to send a message indicating completion of the requested service. In effect, a series of fork messages are used to implement a non-blocking remote procedure call to the device domain in order to ensure that the UNIX kernel is always ready to receive another message.
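The queuing arrangement described above can be pictured as an event loop in the keeper: slow requests go out as fork (non-blocking) sends through a queue domain, and the keeper keeps receiving, so a signal can always arrive ahead of the completion. The code below is a single-process simulation with invented names, not the actual KeyNIX mechanism.

#include <stdio.h>

typedef enum { EV_TTY_COMPLETION, EV_SIGNAL } EventKind;
typedef struct { EventKind kind; int value; } Event;

/* Stand-in for a fork (non-blocking) send through the queue domain. */
static void queue_domain_fork_write(const char *data) {
    printf("queued tty write: \"%s\" (keeper does not wait)\n", data);
}

static void keeper_loop(const Event *events, int n) {
    for (int i = 0; i < n; i++) {
        switch (events[i].kind) {
        case EV_SIGNAL:
            /* Arrives while the tty write is still pending; the keeper can
             * deliver it immediately because it never blocked on the device. */
            printf("delivering signal %d to the UNIX process\n", events[i].value);
            break;
        case EV_TTY_COMPLETION:
            printf("tty write finished, %d bytes\n", events[i].value);
            break;
        }
    }
}

int main(void) {
    queue_domain_fork_write("hello, world");
    Event incoming[] = { { EV_SIGNAL, 2 /* e.g. an interrupt */ }, { EV_TTY_COMPLETION, 12 } };
    keeper_loop(incoming, 2);
    return 0;
}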
Figure 2: Domains in a Pipe

The queue insertion approach has unfortunate consequences for slow devices (with disk devices one can reasonably assume instant service and duck the issue), and severely impacted communication facilities such as pipes or sockets, as shown in Figure 2. These mechanisms are penalized by the requirement that both sides remain able to receive signals while proceeding with the I/O transfer. The impact is easily visible in the performance of KeyNIX pipes. A better alternative is discussed below. To the best of our knowledge, the KeyNIX system uses far more processes than any other microkernel-based UNIX implementation. Reactions to the KeyNIX design from UNIX developers range from shocked to appalled at the profligate use of processes. UNIX developers find it difficult to accept that the task switch cost can be lower than the data management code that it replaces. We find this ironic, as one of the major innovations of the UNIX system was the notion that processes were cheap. The object paradigm was at the heart of the design of the KeyKOS system and, as a result, the task switch costs are very much lower than in traditional systems and several times lower than in competing microkernels such as MACH and Chorus. On the Motorola 88x00 series, a typical message send takes less than 500 cycles.[4] The low cost of task switches makes it possible to obtain better performance with much simpler software by taking an object-oriented approach to the decomposition of the system. The UNIX implementation described here takes considerable advantage of KeyKOS building blocks. The complete UNIX kernel implementation is approximately 16,000 lines of C code. The KeyNIX implementation is 99% compatible with the Omron BSD 4.3 implementation. While KeyNIX could be equally compatible with MACH 2.5, the existing prototype is not. There are four significant incompatibilities in the prototype: The application prolog ("crt0") in MACH 2.5 initializes certain MACH ports. Because KeyNIX does not yet implement MACH ports, applications built with the MACH 2.5 crt0.o do not run under KeyNIX. MACH 2.5 port functions are accessed by a trap instruction in the same fashion as are UNIX system calls. KeyNIX does not implement these traps. In MACH 2.5, the fork(2) system call does the same port initialization for the new task that was done by "crt0" in the parent task. This change is not implemented in KeyNIX. MACH 2.5 does not implement the sbrk(2) system call. This call is handled by a library routine that uses the "VMALLOC" of MACH 2.5 to handle memory expansion and contraction. The KeyNIX text segment is writable, which can impact buggy programs. This is the result of a quick and dirty implementation, and could be easily fixed. Programs compiled on the Luna 88K under MACH 2.5 that are to be run in the KeyNIX system must be linked with a new prolog and new library stubs for fork(2) and sbrk(2). In cases where the ".o" files exist, there is no need to recompile the programs, but the programs must be relinked. The existing prototype does not support all BSD 4.3 system calls. The major criterion for choosing what to implement and what not to implement was the need to run X-Windows, csh(1), ls(1) and similar useful utilities. If the system call is not needed to run these applications then it is not implemented. There are a number of calls that are implemented in a limited fashion, again sufficiently to run the required applications.
As an example, csh(1) makes usage(2) calls but does not depend on the answers for correct behavior. Usage(2) always returns the same fixed values and is not useful as a measuring tool as a result. To get an intuitive sense of the compatibility achieved, it may suffice to say that all of the application binaries running on KeyKOS were obtained by copying the binary file from the existing BSD 4.3 system. The X Window System, compilers, shells, file system utilities, etc. all run without change under KeyNIX.

A limited performance comparison was made between the KeyNIX prototype and the Omron MACH 2.5 implementation. A more careful analysis would be required for any serious evaluation of the two systems for production use. KeyNIX got mixed results for common system call sequences:

Operation                  Iterations   KeyNIX       MACH 2.5     Ratio
getpid();                  10,000       12,000/sec   30,000/sec   0.4
open(); close();           1,000        714/sec      2,777/sec    0.26
fork(); exit();            100          64/sec       10/sec       6.4
exec();                    100          151/sec      12/sec       11.6
sbrk(4096); sbrk(-4096);   100          2,564/sec    181/sec      14

I/O performance was equally mixed:

Operation             KeyNIX           MACH 2.5         Ratio
Pipe (round trip)     .588 Mbyte/sec   1.05 Mbyte/sec   .56
Disk access program   4 seconds        26 seconds       6.5

As anticipated, the simplification achieved by adding domains doesn't always lead to better performance. The cases that the KeyNIX prototype handled poorly have straightforward corrections which are discussed below. Simple system calls, such as getpid(2), are essentially accessor functions. A trap is taken, but the system call itself performs little or no interesting activity within the kernel. The KeyNIX system is binary compatible with this approach. The MACH 2.5 implementation is able to execute these system calls 2.5 times as fast as the KeyNIX system because no context switch is involved. MACH 3 uses special system call libraries to implement some of these functions in the UNIX process address space. A similar approach would be possible in KeyNIX if the system calls were implemented in dynamic libraries, as in System V Release 4, or if binary compatibility could be sacrificed. We were surprised that KeyNIX did so well on this comparison. To explore the limits of domain performance, we elected to implement each inode as an individual domain. On the basis of our previous experiences, it seemed likely that the simplification achieved by this approach would overcome the overhead of multiple domains. With the benefit of hindsight, we were mistaken, and the performance of open(2) suffered excessively. The namei() routine within the UNIX kernel is heavily used, and the decision to use multiple domains in effect inserted four context switches into the inner loop (for two round-trip RPCs).[5] In a small program that simply opens and closes a single file 1,000 times, the MACH 2.5 system outperformed the KeyNIX system by nearly four to one (3.89). Alternative implementations are discussed below.

Because the UNIX programming model assumes that processes are cheap, the performance of fork(2) is critical to the overall performance of the system. In KeyKOS, the equivalent of fork(2) is even more critical, and is possibly the most carefully optimized path in the nanokernel. We therefore expected KeyNIX to do well on fork(2) calls. KeyNIX outperforms MACH 2.5 by a little more than six to one. The current KeyNIX implementation suffers from an extremely naive loader implementation in the UNIX keeper. When performing a fork(2), a complete copy of the process address space is made.
The implementation could be improved by sharing the read-only text pages rather than copying their content. In addition, it would not be difficult to implement UNIX copy-on-write semantics as part of the segment keeper that services faults on the UNIX address space. Neither of these optimizations was performed in the prototype due to time constraints, and we would expect each to result in substantial improvements. Given the naive loader implementation, we were pleasantly surprised to find that KeyNIX outperformed MACH 2.5 by better than eleven to one on exec(2) calls. The test program simply calls exec(2) one hundred times and exits. Implementing shared text would significantly improve the KeyNIX results.

In order to compare the performance of the sbrk(2) system call, a program was written to repeatedly grow and shrink the heap. 100 calls to sbrk(4096) and sbrk(-4096) were executed with a fetch of a byte from the newly allocated memory. The fetch of the byte forces the UNIX implementation to actually allocate the main store for the page, and consequently forces the page to be deallocated when the heap segment size is reduced. KeyNIX outperformed the MACH 2.5 implementation by fourteen to one, which was consistent with our expectation.

Pipe performance is one of the areas where we expected KeyNIX to suffer. In order to compare the pipe implementations, a megabyte of data was passed through a pipe to a child process and back in 1000 byte chunks. The MACH 2.5 implementation outperformed KeyNIX by nearly two to one. This result is principally due to the insertion of queue domains into both ends of the pipe, imposing considerable context switch overhead. In retrospect, we could have eliminated the queues and depended on the fact that asynchronous signal delivery timing is not guaranteed by the UNIX process model. In particular, correct UNIX programs cannot depend on the fact that interprocess signals will interrupt a system call in the receiving process. Taking advantage of this loophole would allow for a much simpler and faster implementation.

To measure disk performance, we built a program to create a large test file and read it repeatedly. The I/O models of KeyNIX and MACH 2.5 are so radically different that other comparisons are very difficult. Uncached writes, for example, are dominated by disk arm movement, so a comparison of such activity is unenlightening. The times reported are the elapsed time to write and then read a one megabyte file ten times. KeyNIX outperforms MACH 2.5 by better than six to one. KeyNIX I/O performance is a direct result of the underlying KeyKOS I/O design. KeyKOS never writes to disk as a direct result of writing to a file. All writes to the disk are part of the paging, checkpoint, and migration system. To determine the impact of the checkpoint process on the test, we arranged for KeyKOS to perform a checkpoint and migration in parallel. This process increases the KeyKOS time to 4.4 seconds, giving a performance ratio of 5.9 to one. To the best of our knowledge, the prototype KeyNIX system achieves the highest I/O bandwidth utilization of any UNIX system today.[6] KeyKOS's I/O performance makes the overall performance of many applications better under KeyNIX than under a more conventional system, and appears to more than balance the prototype's performance deficiencies. The overall performance of the KeyNIX system is quite comparable with MACH 2.5. Some operations are slower and some quite a bit faster.
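The copy-on-write improvement suggested in the fork(2) discussion above could live in the segment keeper's write-fault path, roughly as below. Every function here is a stand-in invented for the sketch; the paper does not give the keeper's actual interface, and a real design would also have to track sharing counts.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define PAGE_SIZE 4096u                     /* page size in all current implementations */

typedef uint32_t Key;

/* Stand-ins for keys and key invocations. */
static Key  space_bank_buy_page(Key bank)            { (void)bank; return 99; }
static void page_read(Key page, uint8_t *buf)        { (void)page; memset(buf, 0, PAGE_SIZE); }
static void page_write(Key page, const uint8_t *buf) { (void)page; (void)buf; }
static void segment_replace_page(Key seg, uint32_t offset, Key page, int writable) {
    printf("segment %u: offset 0x%x now maps page %u (%s)\n",
           seg, offset, page, writable ? "read-write" : "read-only");
}

/* Invoked when a domain write-faults on a read-only shared page. */
static void on_write_fault(Key bank, Key address_segment, uint32_t fault_offset, Key shared_page) {
    uint8_t buf[PAGE_SIZE];
    Key fresh = space_bank_buy_page(bank);           /* private copy for this process */
    page_read(shared_page, buf);
    page_write(fresh, buf);
    segment_replace_page(address_segment, fault_offset & ~(PAGE_SIZE - 1u), fresh, 1);
    /* finally the keeper would restart the faulting domain via its resume key */
}

int main(void) {
    on_write_fault(/*bank*/ 1, /*segment*/ 2, /*offset*/ 0x4a10, /*shared page*/ 7);
    return 0;
}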
A user running the X Window System, vi, and a variety of shell commands and scripts is unaware of any significant performance difference between MACH 2.5 and KeyNIX.

In the course of the prototype effort, we came up with several ways to simplify the UNIX keeper and to cut down on some of the overhead. Each of these ideas represents a compromise in the use of domains and multiple instantiation.

The current process table segment is an array of process table entries. The UNIX process id is used to index the table. Process numbers are reallocated quickly, which leads to certain problems in the human interface for system maintenance. Also there are circumstances when process table entries should be chained so that children can be located more quickly. This is best handled by introducing a domain for process table entry manipulation that allocates and chains process table entries. The UNIX keeper continues to reference its own process table entry directly, but accesses other process table entries (to obtain a signal key) using the process table management domain. Similarly, the open file table could be implemented by a domain. These modifications would both simplify the UNIX keeper and remove the primary impediment to distribution of the KeyNIX implementation on loosely coupled architectures.

The data for small files could be kept in nodes instead of segments. A small file might be a single-level tree of nodes with up to 16 leaf nodes each holding 176 bytes of data. When the 17th node is required the file is converted to a segment. The inode domain would convert the file to a segment when it is opened, and on the last close would convert it back into node form if it is small enough. This would allow KeyNIX to achieve more efficient storage of small files than current UNIX systems.

Opening files is a crucial operation in UNIX systems, and the domain-per-inode approach is not nearly fast enough. Two alternative implementations would have delivered competitive performance. The first approach is to build the entire directory and inode support structure for a file system into a single domain, while continuing to implement files as individual segments. This would eliminate almost all of the context switching performed in the file subsystem, and would probably outperform the MACH 2.5 implementation. The second alternative is to implement a compatibility library that would enable us to simply compile a vnodes-compatible file system into a domain. Using this approach, the entire file system would reside in a single KeyKOS segment, and bug-for-bug compatibility is achievable. This approach is something like the File Manager tasks of CHORUS and MACH 3. In practice, supporting vnodes file systems is probably a compatibility requirement for a commercial UNIX implementation, but system reliability suffers greatly from this requirement. Our current preference would be the first alternative, mainly to eliminate the bugs of the existing file system implementations. In addition, we feel that this approach significantly simplifies recovery in the event of a disk block failure, as it eliminates the need for a complicated file system consistency checker.

The KeyKOS nanokernel has been running in production environments for nine years. It is proven technology, and we feel that the architecture and implementation have much to offer to the computing community at large. A serious development project could far exceed the performance that we obtained from the six month UNIX prototype effort.
KeyKOS represents a paradigmatic shift in operating system technology. It is therefore difficult to make direct comparisons with other approaches. A pure capability architecture brings fundamentally greater discipline, control, and reliability to application construction. In the long term, we feel that this degree of reliability is necessary to realize the productivity promises of the information age.

For further information on KeyKOS: Norman Hardy, 143 Ramona Road 3754, Portola Valley, CA 94028, (415) 851-2582, norm@netcom.com; Jonathan S. Shapiro, 870 North 28th Street, Suite 101, Philadelphia, PA 19130, (215) 236-7583, shap@gradient.cis.upenn.edu.

[1] The system can support multiple master space banks. In a B3 implementation, system pages would be partitioned into multiple security classes, and there would be one master space bank for each class.
[2] Up to a point. There ain't no such thing as a free lunch.
[3] This is also a significant problem for debugging interfaces, such as /proc(4) and ptrace(2).
[4] This time includes the context switch and copying both data and keys. The Motorola implementation is the slowest implementation to date.
[5] One round trip to access the inode domain, the second to access the directory domain.
[6] We are well aware of the significance of the I/O subsystem design in this claim, and believe that the claim would hold up when examined with other I/O subsystems and bus architectures. On the System/370, KeyKOS achieves channel utilization of better than 95% on all channels. With current SCSI technology, KeyKOS's disk utilization is limited by the SCSI channel performance.
Towards a Better Understanding of the Needs of Children Currently Adopted from Care: An Analysis of Placements 2003–2005 John Randall summarises the findings of an in-house study undertaken by Families for Children, a voluntary adoption agency based in the southwest of England. It took a consecutive sample of 103 children placed from care for adoption between 2003 and 2005, using Matching Needs and Services, a method designed for analysing need in child care populations and developing services best suited to meeting them. The study identified nine need groups of varying degrees of complexity and looked at the service responses to those identified needs. The children placed came from 41 local authorities ranging from nearby local authorities to the wider southwest, London and the southeast, the Midlands and the north of England. The sample offers a snapshot of the contemporary challenges presented by children placed for adoption from care.
Hours before Christmas, let me show you another talented person who has a great passion for the Wolverine world of James Howlett: Eric, or shall we call him "The French Wolverine," from France. I recently found him on a Facebook page, and his works are exceptional. If you see his photos, you'll probably say he's the real Wolverine. His profile is impressive as well: he is a cosplayer, bodybuilder, fitness coach and airsoft player. I would like to thank Mr. Eric again for giving us the opportunity to share his story and his discovery of cosplay. I hope all of you have a happy and merry Christmas! Let's go and discover the Wolverine world of the French Wolverine!

The Wolverine World
Hi, I'm Eric. I discovered cosplay in 2005–2006 when I began playing airsoft and wanted to make the same costume as Snake from MGS3. I have always liked to play characters who are totally different from me, which is why I became an actor a few years ago. It's the same thing for cosplay, and this year I really got started with my cosplay of Wolverine.

First Convention
If I remember correctly, my first real cosplay was Rios from the video game Army of Two, in 2011 at Japanimes, a little event in the south of France. I chose the character of Rios because I had all the airsoft gear, gun and mask needed to make a good cosplay. I remember people were really impressed by my arms and my body proportions; at that time I already had these massive muscles, and my arms are one of my best body parts.

Costume/Gear/Props/Armor
The first costume I made 100% myself was an exoskeleton for a movie, inspired by the Matt Damon film Elysium. I loved building this exoskeleton (you can see pics on my Instagram). Before becoming a personal trainer I was a gunsmith, so building a fake futuristic armor, with fake systems for movement, fighting and the weapon, is very easy for me, but I'm not very good at sewing. The second exoskeleton I am building, which you will see in a few months, is a S.T.A.L.K.E.R. exoskeleton armor, because it's my favorite video game and I want to do a great photo shoot with this armor.

Unforgettable Experience
I have never had a very bad experience at a convention. I just remember that some people believed I was Batman… Batman without a cape, with metal claws and yellow costume parts.

Definition of Cosplayer
For me, cosplay is for everybody and everybody can do it. For sure, some people have the same face or body as a character; for example, a lot of my followers tell me that I look like Wolverine from the comics with my body, especially the version in the crossover with Witchblade. They also tell me that I would make an awesome Aquaman; that cosplay will come this summer. But if I want to do a Lara Croft or Catwoman cosplay, I can, and that could be really fun. For me there is no limit in cosplay: you can do anything you want.

Admiration
On Facebook and Instagram I follow Maul Cosplay; his work is very great and I really like his Last of Us cosplay. I also follow Monkeyofsteel, because like me he does a modern version of Wolverine.

Open Message
You guys can follow me on my accounts on FACEBOOK, INSTAGRAM, and YOUTUBE CHANNEL. Get to know Eric's French Wolverine World.
Columbin isolated from calumbae radix affects the sleeping time of anesthetized mice. In a screening series of bioactive components in edible and medicinal plants, we found that Calumbae Radix (Colombo root, 1% powdered feed, for 5 d) and its component, columbin (20-40 mg/kg/d, for 5 d, orally), shortened the sleeping time induced by a urethane and alpha-chloralose mixture and prolonged the sleeping time induced by hexobarbital in mice.
The Elderly as a Political Community: the Case of National Health Policy

Between 1950 and 1970, the over-65 population increased at more than twice the rate of the under-45. One out of every ten Americans is now 65 or older; at the present rate of increase, by the year 2000, roughly half of the population will be over 50 years of age. Aside from its dramatic increase in size, the over-65 population is undergoing important sociological changes. Because of Social Security and private retirement funds, the rural-to-urban trend and improved medical procedures, more and more older people are living apart from their families and the younger society. In this paper we shall explore the political consequences of the expanding numbers of elderly citizens. In particular we shall examine the argument that the rising ranks of the elderly presage significant political realignments which will result in bloc-like electoral behavior as millions of elderly begin to unite into a self-conscious political community.
The Pokémon Company is giving out a bunch of free Booster Packs for Pokémon Trading Card Game Online (TCGO). The problem is that you need to get a specific code and redeem it in your TCGO account before the deal expires tomorrow (October 10th). However, these codes are spread all over the place online. So, to help you out, we've hunted around and rounded up as many of them as we could for you. You can only enter each code once, and each will give you a random booster pack from recent expansions, including: Black & White – Legendary Treasures, XY, XY – Flashfire, and XY – Furious Fists. Of course, first you'll need to download the iOS game or the PC version, and make an account if you don't already have one. Other than that, the list of codes that you can redeem on your account is below. APPS-PYYT78-POKEMON-TCGO-APP 148APPS-PLAYS-POKEMON-TCGO GAMEINFORMER2014-POKEMON-TCGO-APP SHACKNEWS-POKEMON-TCG-ONLINE-NEWS DAD-POKEMON-TCGO-APP-FAN TINYCARTRIDGE-POKEMON-TCGO-CODE BULBAGARDEN-POKEMON-TCGO-APP-FUN TOPCUT-STRATEGY-POKEMON-TCGO-APP NUGGETBRIDGE-OVER-TO-POKEMON-TCGO-APP QF8W-3LY1 MARRILAND-PLAYS-POKEMON-TCGO-APP PURPLEPAWN-POKEMON-TCGO-APP POKEMON-TCGO-POKEMON-CROSSROADS TOONZONE-POKEMON-TCG-ONLINE-APP SLIDE-TO-PLAY-POKEMON-TCGO-TODAY REACTOR-LOVES-POKEMON-TCGO-APP JWITTZ-POKEMON-TCGO MACRUMORS-SEPT-POKEMON-TCGO-APP POKEMON-TCGO-WELCOMES-POKEJUNGLE NINTENDO-LIFE-LIVES-FOR-POKEMON-TCGO THE-AVERAGE-GAMER-PLAYS-POKEMON-TCGO-APP PIDGI-PLAYS-POKEMON-TCGO-2014-SEPT
Is noise exposure a risk factor for cardiovascular diseases? A literature review We are exposed to noise on a daily basis, and noise pollution is becoming increasingly intense, especially as more people live in urban areas. Cardiovascular diseases (CVDs) are the leading cause of mortality worldwide and of global public health concern. Preventing and treating CVDs requires a better understanding of the associated risk factors. There is emerging evidence that noise pollution, especially that related to the various forms of transport, is likely a contributor to the pathogenesis and aggravation of CVDs. We review key epidemiological data that address the link between excessive noise exposure and CVDs in humans and present proposed pathophysiological mechanisms underlying this association.
ROHTAK: As an Olympic wrestler Yogeshwar Dutt was a trendsetter, whether it was his signature moves or winning medals on the mat. He has now taken a muscular stand outside the sporting world as well, by accepting just a token Re 1 as dowry when he weds on January 16 in Delhi. The bride-to-be, Sheetal, is the daughter of Haryana Congressman Jaibhagwan Sharma. “I saw my family struggle to collect dowry for the girls of the family,” said the 34-year-old who got engaged at Murthal in Sonipat on Saturday. “As a result, I decided on two things while growing up: I will excel in wrestling and I will not accept dowry. My first dream has been realised and now it is time to keep my second promise,” Yogeshwar Dutt said. The Olympian said how he wished his father Rammehar Dutt and his first guru Master Satbir Singh had been alive to see him fulfil both his promises. Yogeshwar’s mother Sushila Devi said his marriage was a special occasion and they would accept Re 1 as a mark of good omen from the bride’s family, but nothing else.
Object Detection in Tensor Decomposition Based Multi Target Tracking Non-linear filtering arises in many sensor applications, such as robotics, military reconnaissance, advanced driver assistance systems, and other safety and security data processing algorithms. Since a closed form of the Bayesian estimation approach is intractable in general, approximate methods have to be applied. Kalman- or particle-based approaches suffer from either a Gaussian approximation or the curse of dimensionality, both of which degrade performance in challenging scenarios. An approach to overcome this situation is state estimation using decomposed tensors. In this paper the Sequential Likelihood Ratio Test (SLRT) for object detection in tensor decomposition based target tracking is presented. The scheme closely follows the well-known and often applied approach of the track-oriented MHT.
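To make the detection step concrete, here is a minimal sketch of a Wald-type sequential likelihood ratio test applied to track confirmation under Gaussian residuals; the noise levels, error rates, and thresholds are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Minimal sketch of a sequential likelihood ratio test (Wald's SPRT) for
# track confirmation, assuming Gaussian measurement residuals. The noise
# levels, error rates and thresholds below are illustrative placeholders.

def sprt_track_confirmation(residuals, sigma_target=1.0, sigma_clutter=4.0,
                            p_fa=0.01, p_md=0.05):
    """Accumulate log-likelihood ratios over residuals and decide
    'target', 'clutter', or 'continue' (collect more data)."""
    upper = np.log((1.0 - p_md) / p_fa)      # accept H1 (object present)
    lower = np.log(p_md / (1.0 - p_fa))      # accept H0 (clutter only)
    llr = 0.0
    for k, r in enumerate(residuals, start=1):
        # log N(r; 0, sigma_target^2) - log N(r; 0, sigma_clutter^2)
        llr += (np.log(sigma_clutter / sigma_target)
                - 0.5 * r**2 * (1.0 / sigma_target**2 - 1.0 / sigma_clutter**2))
        if llr >= upper:
            return "target", k
        if llr <= lower:
            return "clutter", k
    return "continue", len(residuals)

rng = np.random.default_rng(0)
print(sprt_track_confirmation(rng.normal(0.0, 1.0, size=50)))
```

The test accumulates evidence measurement by measurement and stops as soon as either threshold is crossed, which is the property a sequential detection scheme of this kind exploits.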
Frequency and degree of aneuploidy in benign and malignant thyroid neoplasms The frequency and degree of aneuploidy in 44 benign and 124 malignant thyroid neoplasms were analyzed by DNA flow cytometry. Single aneuploid cell populations were found in 72% of the undifferentiated carcinomas, 64% of the follicular carcinomas, 24% of the papillary carcinomas and in 24% of the follicular adenomas. Multiple aneuploid cell populations were detected in 4% of the papillary and in 36% of the follicular carcinomas but not in undifferentiated carcinomas. A low degree of aneuploidy was found in well differentiated papillary carcinomas (mean DNA index of aneuploid populations; DI =1.17; SD ± 0.09). Significantly higher values were found for aneuploid moderately differentiated papillary carcinomas (DI =1.46; SD ± 0.29), well and moderately differentiated follicular carcinomas (DI =1.61; SD ± 0.33 and DI =1.60; SD ± 0.30, respectively) and undifferentiated carcinomas (DI =1.72; SD ± 0.19). High DNA indices were also found in several follicular adenomas (DI =1.49; SD ± 0.22). Comparison of the 10‐year survival rates of patients with moderately versus well differentiated papillary carcinoma (79 vs. 98 months, respectively) indicates that loss of differentiation and progression of aneuploidy in this tumour type is associated with more aggressive clinical behaviour. Similarly, the high frequency and degree of aneuploidy in undifferentiated carcinomas is in agreement with the very poor survival rate (0% at 10 years) in this group of patients. However, the occurrence of highly aneuploid adenomas and (near)‐diploid undifferentiated carcinomas does not point to a direct causal relationship between DNA‐ploidy changes and clinical behaviour of these thyroid tumours.
Health Lifestyles and Their Influence on Chinese Oldest-old's Health Outcomes: Evidence From a Latent Class Analysis A strong association between individual health behaviors and health outcomes has been emphasized by previous analyses. However, how individual health behaviors can be classified into health lifestyles and the manner in which health lifestyles have impacted Chinese oldest-old's health status are largely unknown. Four distinct classes representing health lifestyles emerged. Health lifestyles were found to be strongly associated with Chinese oldest-old's health outcomes, which were measured by self-rated health, functional independence, cognitive function and chronic diseases, even after controlling for demographic features as well as individual and parental socioeconomic disadvantage. Findings also showed a convergence of health disparities caused by demographic and SES characteristics in very old age. Background In recent years, researchers have begun to use clustered health lifestyles to explain the health disparities among individuals. The benefit of this perspective is that it extends the scope of existing analyses from individual health behaviors to classified health lifestyles. Individual or single health behaviors commonly used in prior analyses include poor dietary habits, cigarette smoking, excessive alcohol consumption, and so on. Scholars who promoted the health lifestyle approach argued that health behaviors tend to cluster in ways that reflect the social and structural contexts of individuals, which in turn affects individual health status. This is because behaviors do not occur in isolation but co-occur with one another. Health lifestyle theories therefore contend that concentrating on single behaviors or small subsets of risky behaviors provides limited insight into health behavior patterns. Thus, considering multiple behaviors simultaneously is a more appropriate strategy that creates larger and more enduring behavior change to improve individual health. As for studies on the health status of Chinese elders, abundant analyses have documented a strong link between health behaviors and health outcomes among Chinese older adults. Nevertheless, like studies focusing on other social contexts, most research on the Chinese oldest-old has also focused on single health behaviors. Since the sub-population of the oldest-old is growing at extraordinary speed in China, it is important to explore potential factors that may improve the oldest-old's health status in order to alleviate the burden on society as well as on family caregivers. Against this backdrop, this research took the health lifestyle approach, i.e., a combination of multiple health-related behaviors, to attain a better understanding of health-related practices and their relationship with Chinese oldest-old's health outcomes. Relying on a latent class analysis strategy, the study used the 2014 wave of the Chinese Longitudinal Healthy Longevity Survey (CLHLS), a nationally representative dataset, to include health behaviors from multiple domains and present a relatively more comprehensive picture of health behaviors among the Chinese oldest-old. It also aimed to elucidate how health lifestyles have shaped Chinese oldest-old's health outcomes. Findings based on analyzing nationally representative data in China are valuable for addressing disease prevention and health promotion related issues among the oldest-old in other countries as well.
Exploring how clustered health behaviors influence the oldest-old's health outcomes can also expand theories explaining health disparities among elders in general. The health lifestyle approach and prior literature The health lifestyle approach can be considered a theoretical development in research on health disparities. The concept of health lifestyle was derived from Weber's idea of lifestyles as the interaction of life choices and life chances. Weber reasoned that lifestyles are not associated with individuals but with groups of people with similar social status and backgrounds. Such a definition has been further expanded to include factors such as understandings of what good health means, health norms, policy environments, and so on (Krueger et al., 2009). Bourdieu further treated health lifestyles as broad and potentially unobservable orientations that organize patterns of behaviors. Health lifestyle perspectives place more emphasis on patterns of behaviors than on single behaviors. These perspectives highlight social, cultural and economic forces acting on individual choices of health behaviors. Some pioneering studies using the health lifestyle approach have examined the general population. These studies can be classified into the following groups: first, linking personal characteristics, such as gender and age, to individual health lifestyles; second, demonstrating a strong positive association between SES and clustered health behaviors among adults in different social contexts; third, exploring determinants of health lifestyle behaviors in adolescence and revealing how early-age health lifestyle behaviors leave imprints on one's health behaviors in adulthood; fourth, documenting the significant influence of health lifestyles on individual health outcomes, including mental health, self-rated health (SRH) and the like, and underlining the positive effects of health lifestyle behaviors on disease prevention. The health lifestyle approach has also been found useful in epidemiological studies examining health and mortality among older adults in a variety of countries. By operationalizing healthy lifestyle behaviors as physical activities, consumption of fruits and vegetables, and smoking status, Martin-Maria and colleagues' study showed a significantly positive effect of healthy lifestyle behaviors on subjective well-being in a Spanish sample aged 65 and over. Through studying multiple lifestyle behaviors of older persons in Korea and Amsterdam, scholars highlighted that participation in healthy lifestyles contributed to the maintenance of functional independence (measured as ADL and IADL) and cognitive function in later life. A study of lifestyle behaviors, including non-smoking and physical activity, among elders in Sweden revealed that a low-risk health behavior profile could add five years to women's lives and six years to men's after age 75. The analyses reviewed above provided guidance for the current research investigating the link between health lifestyles and Chinese oldest-old's health outcomes. The selection of health lifestyle as well as health status measures was based on measures commonly used in previous studies. The analysis answered two main questions: First, what are the predominant health lifestyles of the Chinese oldest-old? Second, how have these main health lifestyles shaped Chinese oldest-old's health outcomes?
Findings of this study were expected to fill the voids of prior literature that investigated Chinese oldest-old's health disparities through single health behaviors. Results based on analyzing the China data were also expected to enrich health lifestyle theories as a whole. The paper then introduced the data, measures and methods used in the study. Data Data came from the 2014 Chinese Longitudinal Healthy Longevity Survey (CLHLS), which was conducted in a randomly selected half of the counties/cities in 22 provinces of China. To date, 7 waves (1998, 2000, 2002, 2005, 2008, 2011-12, and 2014) of survey data have been collected. The survey was initially launched to meet the needs of scientific research on the oldest-old. Thus, the dataset provides an excellent source for studying the oldest-old in China. Previous literature showed that persons who reported age 106 or higher were considered invalid cases. Therefore, persons aged 106 and higher were excluded from this study due to insufficient information to validate their reported extremely high age. The study eventually obtained 3,416 oldest-old aged 85 to 105, with 2,025 males and 1,391 females. Health lifestyle indicators Health lifestyle measures used in previous analyses can be classified into the following categories: (1) dietary patterns (including eating fruits, vegetables, breakfast, and so on), (2) smoking and alcohol consumption, (3) sleep, (4) obesity and physical activity, (5) seat belt wearing and media use, (6) body mass index (BMI), and (7) regular physical examination. The selection of health lifestyle indicators in this research was largely guided by prior studies, and four key domains were applied: dietary behaviors, smoking and alcohol use, sleep, and physical and leisure activities. The first domain was dietary behaviors. In the CLHLS survey, the respondent was asked the frequency of eating or drinking fresh fruit, fresh vegetables and tea. The study coded these three variables as dichotomous ones, labeling respondents answering "almost every day" as "1" and "0" otherwise. Tea consumption was considered because previous research pointed out that tea drinking is related to longevity and a reduced risk of mortality and death from cardiovascular diseases. Tea consumption was thus used as an important health lifestyle behavior in this study. The second domain related to smoking and alcohol use. Since the variables measuring the respondent's exact amount of cigarette or alcohol consumption had an extremely large amount of missing values, with response rates lower than 20.0% of the total sample, the research applied other measures. Those measures relied on CLHLS survey questions asking the respondent whether he or she smoked or drank alcohol "in the past" and "at present". A respondent who never smoked, in the past or at present, was coded as "0" and "1" otherwise. It was assumed that an individual who smoked in the past and was still smoking when the survey was conducted was a heavy smoker; the same rationale and coding strategy were applied to the alcohol consumption variable. Sleep was the third domain, represented by two indicators: sleep duration and sleep quality. The study dichotomized the sleep duration variable, with "1" indicating 8 hours or more of sleep each day and "0" indicating less than 8 hours of sleep.
The sleep quality variable was dichotomized, with those who reported their sleep quality as "good" and "very good" coded as "1" and poor sleep quality as "0" (including the categories originally coded in the survey as 'so so', 'bad' and 'very bad'). The fourth domain was physical and leisure activities. The research relied on two survey questions asking whether the respondent exercised regularly in the past and at present to determine if he or she was physically active. Those who exercised regularly both at present and in the past were coded as "1", and "0" otherwise. The research also classified leisure activities into sedentary activities and active activities. Sedentary activities included reading newspapers/books, playing cards and/or mah-jong, and watching TV and/or listening to the radio. Active activities included raising domestic animals, doing gardening work, and so on. Those who participated in leisure activities almost every day were coded as "1" and "0" otherwise. Health outcome measures The health outcome measures used in this research were consistent with measures used in previous research, including self-rated health (SRH), cognitive function, chronic diseases and activities of daily living (ADL). The respondent's SRH was coded as a continuous variable (1=very bad, 5=very good). The chronic disease variable was measured by whether the respondent reported any chronic diseases (1=yes, 0=no). The CLHLS survey asked the respondent whether he or she was suffering from 24 types of chronic diseases, including: hypertension, diabetes, heart disease, stroke/cerebrovascular disease, bronchitis/emphysema/asthma/pneumonia, pulmonary tuberculosis, cataracts, glaucoma, cancer, prostate tumor, gastric or duodenal ulcer, Parkinson's disease, bedsore, arthritis, dementia, epilepsy, cholecystitis/cholelith disease, blood disease, rheumatism or rheumatoid disease, chronic nephritis, galactophore disease, uterine tumor, hyperplasia of prostate, and hepatitis. Since the missing values for prostate tumor, chronic nephritis, galactophore disease and hyperplasia of prostate exceeded half of the respondents, these 4 types of chronic diseases were dropped from the analysis. As a result, the study included the remaining 20 types of chronic diseases. If the respondent answered that he or she was suffering from at least one of the 20 types of chronic diseases, the respondent was coded as "1" for the chronic disease variable, and "0" otherwise. Cognitive function of the respondent was measured using the Chinese version of the Mini-Mental State Examination (MMSE). The MMSE was adapted from Folstein, Folstein, and McHugh and tested four aspects of cognitive functioning: orientation, calculation, recall, and language. The total possible score on the MMSE is 30, with lower scores indicating poorer cognitive ability. Based on recommendations in the literature, responses of ''unable to answer'' were coded as incorrect answers. Activity of daily living (ADL) disability was defined as self-reported difficulty with any of the following ADL items: (a) bathing, (b) dressing, (c) eating, (d) indoor transferring, (e) toileting, and (f) continence. To avoid problems of complications and small sub-sample sizes in model estimation, ADL functional capacity was dichotomized into "0" (no ADL limitation) and "1" (at least one ADL limitation). Control variables The analysis also controlled for the respondent's demographic characteristics such as age, gender, and rural or urban residence.
Respondents who lived in cities and towns were classified as urban residents. The respondent's socioeconomic status was also controlled for, including years of schooling, per capita household income, and occupation before age 60. The occupation variable was coded as "1" if the respondent held a professional or administrative occupation and "0" otherwise. Since socioeconomic conditions in early childhood have been documented to have a cumulative effect on one's later-life health status and mortality, early childhood (or parental) SES was controlled for as well. These measures included whether the respondent frequently went to bed hungry as a child, the education of the respondent's father, and the respondent's father's occupation before age 60 (1=professional or administrative job, 0=otherwise). Although the percentages of respondents and respondents' fathers who held professional or administrative jobs were low, the occupation measure has been repeatedly used as an indicator of one's SES. Thus, the validity of the occupation measure representing SES has been supported by previous analyses. Table 1 showed descriptive statistics for all variables. Latent class analysis Latent class analysis (LCA) was used to predict membership in latent or unobserved groups. LCA is a statistical method that finds subtypes of related cases (latent classes) from multivariate categorical data. It can be used to find distinct categories based on the presence/absence of several indicators. The rationale of LCA is that, given a sample of cases (subjects, objects, respondents, patients, etc.) measured on several variables, one wishes to know whether there is a small number of basic groups into which the cases fall. The results of LCA can be used to classify cases to their most likely latent class. Within each latent class, each variable is statistically independent of every other variable; if one removes the effect of latent class membership on the data, all that remains is randomness. In this study, respondents in a given latent class shared similar health lifestyle patterns. Each case was assigned a probability of membership in each class. LCA uses observed data to estimate parameter values for the model. The model parameters are the prevalence of each of C case subpopulations, or latent classes, and the conditional response probabilities with which a randomly selected member of a class will make a given response to a given item/variable. Parameters are estimated by the maximum likelihood (ML) criterion; the ML estimates are those most likely to account for the observed results. Estimation is usually done with simple iterative numerical methods. Traditional forms of LCA used complicated estimation methods based on matrix manipulation and simultaneous linear equations; later on, simple iterative proportional fitting was used to find ML parameter values. In this analysis, since the exact number of health behavior typologies was unknown, an exploratory approach was used. It started with the most parsimonious 1-class model and fitted successive models with increasing numbers of classes. Each latent class solution was replicated 20 times beginning at random starting values. This method included a close examination of item loadings and model fit indices for estimating latent classes. The final number of classes was determined by conceptual meaning and commonly used fit measures, such as the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the value of entropy.
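As a rough illustration of this model-selection step, the sketch below computes AIC and BIC for a handful of candidate class solutions; the log-likelihoods, parameter counts and sample size are hypothetical placeholders rather than the values reported in Table 2.

```python
import numpy as np

# Minimal sketch of comparing candidate latent class solutions by AIC and BIC.
# The log-likelihoods and parameter counts below are hypothetical placeholders.
n_obs = 3416
candidates = {
    # classes: (log-likelihood, number of free parameters)
    1: (-21500.0, 9),
    2: (-20900.0, 19),
    3: (-20650.0, 29),
    4: (-20500.0, 39),
}

for k, (loglik, n_par) in candidates.items():
    aic = -2.0 * loglik + 2.0 * n_par
    bic = -2.0 * loglik + n_par * np.log(n_obs)
    print(f"{k}-class model: AIC = {aic:.1f}, BIC = {bic:.1f}")
# The solution with the smallest AIC/BIC, adequate entropy and a coherent
# substantive interpretation would then be retained.
```

This is the logic applied in the study: statistical fit indices narrow the field, and the conceptual coherence of the classes settles the final choice.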
The values of these indices for the different class solutions were shown in Table 2. When running the LCA, the Stata software showed that convergence was not achieved when constructing 5 classes. Considering that the four-class solution provides the most conceptually coherent description of health lifestyles, it was chosen as the most appropriate solution to represent health lifestyles among the Chinese oldest-old. Since smaller values of AIC and BIC are better and the four-class model had both the smallest AIC and the smallest BIC (see Table 2), the statistical evidence also favored the four-class solution. As Table 2 presented, the entropy for the four-class model (0.732) was also beyond the commonly used cutoff of 0.60 for good class separation. Thus, the four-class solution was determined to be the best classification to represent health lifestyles among the Chinese oldest-old. Based on the results of the LCA, the paper presented item response probabilities and sample shares for each class in Table 3. This information clearly showed the four predominant health lifestyles among the Chinese oldest-old and the percentage distribution of the sample among the four classes. Meanwhile, the table also showed the percentage distribution of the respondents for each health behavior, which helped describe the patterns of the four classes that represented Chinese oldest-old's health lifestyles. Descriptive and regression analyses The analytical part started with descriptive analysis to report means and percentage distributions of all variables (see Table 1). Multiple regression models were then constructed to predict Chinese oldest-old's health status on the basis of their health lifestyles, controlling for the respondent's demographic and socioeconomic characteristics. Since the health outcome measures of ADL disability and chronic diseases were coded as dichotomous variables, logistic regressions were used to perform these analyses. The other two measures of health status, namely SRH and cognitive function scores, were continuous variables, so ordinary least squares (OLS) regressions were applied to show how health lifestyles predict the health status of the oldest-old. Descriptive statistics Descriptive statistical results for all variables were presented in Table 1. Among the 3,416 respondents aged 85 to 105, there was a higher percentage of rural than urban respondents in the sample (57.3% and 42.7%, respectively) and females outnumbered males (59.3% vs. 40.7%). The mean age of the sample was 93.1 with a standard deviation of 5.7. As to the SES of the respondents, the average reported years of schooling among the oldest-old was 1.5 with a standard deviation of 2.8, and the average schooling of the respondents' fathers was 0.5 years with a standard deviation of 1.8. The mean household per capita income for the year before the survey was 15,832.8 RMB (equivalent to 2,261.8 USD at 1 USD = 7 RMB), with a standard deviation of 17,233.8. The percentages of respondents and their fathers who had held professional or administrative jobs before retirement were 5.7% and 2.6%, respectively. About 77.4% of the studied sample reported going to bed hungry in childhood. The health outcome profile among the Chinese oldest-old was fairly good. To illustrate, the average SRH score was 3.3 (between fair and good). About 57.1% and 34.8% of the respondents reported having at least one type of chronic disease and at least one ADL disability, respectively.
The mean cognitive function score was 22.5, suggesting a relatively good cognitive function status among the Chinese oldest-old. The health lifestyle patterns can be described as fairly healthy. The studied oldest-old appeared to be frequent fruit and vegetable eaters, with 13.5% and 49.1% of them eating fresh fruit and vegetables almost every day, respectively. Tea was also preferred by some of the oldest-old; nearly one-fifth of them had the habit of drinking tea almost every day. About 24.9% and 21.3% of the studied sample reported themselves as smokers and drinkers, respectively. For three-fifths of the sample, sleeping was not an issue, since they reported good sleep quality, and slightly over 55.0% of the sample had 8 or more hours of sleep each day. Considering the very old age of the studied sample, the lifestyle of the respondents tended to be more sedentary than active. About 26.8% of them reported doing physical exercise before age 60 and were still exercising when surveyed. And 48.4% and 41.7% of the oldest-old reported participating almost every day in at least one active and one sedentary type of leisure activity, respectively. Health lifestyles among Chinese oldest-old After choosing the 4-class model as the best-fitting latent class model, the study estimated item probabilities for the four identified latent classes. The four predominant health lifestyles (latent classes) of the Chinese oldest-old and their shares of the sample were presented in Table 3. Class 1 can be described as a less healthy diet, not smoking, not drinking, poor sleep, and low engagement in physical exercise and leisure activities, and contained 32.4% of the total sample. The influence of health lifestyles on Chinese oldest-old's health OLS regression models were constructed to predict the respondent's health status measured by continuous variables (i.e., cognitive function and self-rated health), and logistic regression analyses were conducted to estimate how health lifestyles predicted the oldest-old's reported ADL disabilities and chronic diseases. Tables 4 and 5 presented the OLS and logistic regression results, respectively, controlling for the respondent's demographic and socioeconomic factors. In both Tables 4 and 5, two models were constructed. The first model included the health lifestyle classes as well as the respondent's demographic and SES characteristics; the second model further added the respondent's parental SES variables, since prior research showed that parental SES had a significant influence on one's later-life health status. Table 4 showed that including the parental SES variables did not significantly change the statistical results. As compared to class 3 (consistent engagement in healthy behaviors), SRH scores for individuals in class 1 (less healthy diet, not smoking, not drinking, poor sleep, low physical exercise and leisure activities) were 0.56 lower. SRH scores for older adults in class 2 (less healthy diet, not smoking, not drinking, good sleep, lowest physical exercise and leisure activities) and class 4 (moderate diet, smoking and drinking, moderate sleep, moderate exercise and leisure activity engagement) were 0.21 and 0.19 lower, respectively, than the SRH scores reported by members of class 3. These results suggested that less healthy lifestyles led to worse self-rated health. Regarding the control variables in Models 1 and 2, males and individuals with higher income tended to report better SRH.
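For readers who want to see the shape of these models, the sketch below fits the two model types (OLS for the continuous SRH outcome, logistic regression for the binary ADL outcome) on synthetic data with class 3 as the reference category; the variable names, coding and generated data are illustrative assumptions, not the CLHLS variables or the estimates reported in Tables 4 and 5.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; none of these values come from the CLHLS records.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "lifestyle_class": rng.integers(1, 5, size=n),   # latent classes 1-4
    "age": rng.integers(85, 106, size=n),
    "male": rng.integers(0, 2, size=n),
    "income": rng.lognormal(9.0, 1.0, size=n),
})
df["srh"] = (3.0 + 0.3 * (df["lifestyle_class"] == 3)
             - 0.01 * (df["age"] - 93) + rng.normal(0, 1, size=n))
df["adl"] = rng.binomial(1, 0.3 + 0.1 * (df["lifestyle_class"] != 3), size=n)

# OLS for the continuous outcome (self-rated health), class 3 as reference.
ols_fit = smf.ols(
    "srh ~ C(lifestyle_class, Treatment(reference=3)) + age + male + np.log(income)",
    data=df).fit()
# Logistic regression for the binary outcome (ADL disability); exponentiated
# coefficients correspond to the odds ratios reported in such tables.
logit_fit = smf.logit(
    "adl ~ C(lifestyle_class, Treatment(reference=3)) + age + male + np.log(income)",
    data=df).fit(disp=0)

print(ols_fit.params.round(3))
print(np.exp(logit_fit.params).round(2))   # odds ratios
```

The treatment coding mirrors the paper's setup, where the consistently healthy class serves as the baseline against which the other lifestyle classes are compared.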
Regression results using the health lifestyle measures as well as the control variables to predict the cognitive function status of the Chinese oldest-old were shown in Models 3 and 4 in Table 4. Similarly, adding the parental SES covariates did not significantly change the statistical results, except that the effect of one's education on the cognitive function score became nonsignificant. Model 4 showed that, compared to the reference group, class 3, cognitive function scores for the oldest-old in classes 1, 2, and 4 decreased by 2.98, 3.45 and 1.37, respectively. The findings again highlighted that less healthy lifestyles were linked to a worse health outcome as measured by cognitive function among the oldest-old. As to the covariates, results showed that increasing age was related to lower cognitive function scores, whereas being male, having higher education and having higher income showed significantly positive effects on the cognitive function scores of seniors. Going to bed hungry in childhood had a significantly negative effect on the oldest-old's cognitive function scores, supporting cumulative disadvantage theories in that childhood disadvantage was still able to explain part of the health disparities at older ages. When predicting ADL disabilities and chronic disease status, two models were constructed, with Models 2 and 4 adding the parental SES controls. Similar to the results shown in Table 4, the results in Table 5 showed that adding the parental SES covariates did not significantly change the regression results presented in Models 1 and 3. The odds of having ADL disabilities among the oldest-old in classes 1, 2 and 4 were all about 2.8 times the odds for members of class 3 (see Model 2). The findings indicated that the three other lifestyle classes all had higher risks of reporting ADL disability as compared to the consistently positive class (class 3). Except for class 2 (the sedentary group), classes 1 and 4 also showed significantly higher odds of having chronic disease(s) than class 3. As compared to the respondents in class 3, those in class 1 and class 4 were 1.3 and 1.5 times more likely to have chronic disease(s), respectively, when controlling for the covariates. These findings implied that health lifestyles can explain the health disparities among the Chinese oldest-old. The health differentials among the Chinese oldest-old can also be explained by the respondent's demographic and socioeconomic characteristics. Increasing age and having held professional or administrative jobs before age 60 increased the risk of elders experiencing ADL disabilities, whereas being urban decreased the odds of the oldest-old having ADL disabilities. Increasing age lowered the odds of reporting chronic diseases, while higher family income and holding professional or administrative jobs before age 60 increased the likelihood of the oldest-old reporting chronic disease(s). These results seemed to contradict findings of prior research based on Western societies that higher SES leads to better health conditions. The paper offers possible explanations for this contradiction in the discussion section. In sum, the findings of this research demonstrated a significant influence of health lifestyles on Chinese oldest-old's health status, after controlling for the covariates. Discussion With the trend of population aging, the oldest-old have become a fast-growing group in China.
Among the oldest-old, some are able to live longer and healthier lives, whereas others suffer long-term disabilities and chronic diseases, which brings a heavy burden to society as well as to their family members. Therefore, a striking array of studies has been conducted discussing the influential factors that cause health disparities among the oldest-old. Nevertheless, most of the existing studies examined elderly health from the perspective of single behaviors. This research showed that healthier lifestyles resulted in better health outcomes. The findings highlighted that consistent engagement in healthy behaviors was linked to better SRH, higher cognitive function scores and a lower likelihood of being functionally dependent and suffering chronic conditions. These results suggested that practicing healthier lifestyles can be an effective way to improve Chinese oldest-old's health status and postpone long-term disabilities. In this sense, the research results echoed researchers' arguments that multiple health behavior change interventions outperform single-behavior interventions in health promotion. Findings of this study provided strong evidence that applying an integrative approach, rather than an individual health behavior perspective, can be a better way to achieve more effective health promotion. Healthy lifestyles were shown to be an important tool to prevent chronic diseases and long-term disabilities among the oldest-old. Findings based on analyzing the China data also provide valuable implications for addressing disease prevention and health promotion related issues among older adults in other countries. Caregivers, clinicians and professionals can educate the elderly and their families to form healthier lifestyles in order to improve the oldest-old's health status and longevity. The significant impacts of the covariates on Chinese oldest-old's health outcomes also have important implications. Gender only showed significant effects on cognitive function scores and ADL disabilities, with males doing better on these two dimensions than females. Age generally showed significantly negative effects on the health of the oldest-old, except for chronic conditions. The exception may be explained by survival selection: the oldest-old with severe chronic illnesses had already died or been censored, so older age showed a negative effect on chronic conditions among the surviving individuals. Higher education and income were associated with better cognitive function. Higher income and holding professional jobs were also linked to greater risks of reporting chronic disease(s). This finding seemed incongruent with results documented in prior literature that higher SES leads to better health outcomes. The inconsistency may be explained by underreporting of chronic illnesses among disadvantaged groups in China due to limited access to medical services and diagnoses. It can also be caused by the fact that people with higher SES in China have a more sedentary lifestyle and consume more high-fat and energy-dense food, which results in a higher prevalence of chronic conditions. Another issue worth mentioning is the urban-rural divide that has been documented repeatedly in prior literature. Nevertheless, findings of this research did not show significant health differentials between the oldest-old in these two spheres. Health disparities caused by residence showed only on the measure of ADL disabilities.
That is, as compared to rural residents, urban oldest-old had a significantly higher likelihood of having ADL disabilities. The significant health differential shown in ADL disabilities may be explained Declarations Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Availability of data and materials This article is based on a publicly available dataset derived from the Chinese Longitudinal Healthy Longevity Survey (CLHLS). The dataset can be obtained after sending a data user agreement to the data team. Competing interests The authors declare that they have no conflicts of interest. Authors' contributions LZ was a major contributor in designing the research, writing the manuscript, conducting the literature review and analyzing the data. BXY and ZHD analyzed and interpreted the data; they also revised an earlier version of the paper. All authors read and approved the final manuscript. Note: *p<.05, **p<.01, ***p<.001. R represents the respondent. OR: odds ratio; CI: confidence interval.
I AM rubbing my eyes in disbelief and wonder. It can't be true that Barack Obama, the son of a Kenyan, is the next president of the United States. But it is true, exhilaratingly true. An unbelievable turnaround. I want to jump and dance and shout, as I did after voting for the first time in my native South Africa on April 27, 1994. We owe our glorious victory over the awfulness of apartheid in South Africa in large part to the support we received from the international community, including the US, and we will always be deeply grateful. But for those of us who have looked to America for inspiration as we struggled for democracy and human rights, these past seven years have been lean ones. A few days after the September 11, 2001, terrorist attacks, we had our first shock, hearing the President respond not with the statesmanlike demeanour we had come to expect from a US head of state but like a Western gunslinger. Later, it seemed that much of American society was following his lead. When war began, first in Afghanistan and not long after in Iraq, we read allegations of prisoner abuse at Bagram air base in Afghanistan and of rendition to countries notorious for practising torture. We saw the horrific images from Abu Ghraib and learned of gruesome acts performed in the name of gathering information.
Now that the former telecom minister A Raja is out on bail in the 2G case, the most eagerly awaited question in Tamil Nadu is about his political future and his access to the Karunanidhi household. Will he get a grand reception from the DMK cadres similar to what Kanimozhi got on her release? Will he immediately get to meet Karunanidhi? How will the new de-facto command-and-control under Stalin handle him? Will Raja have any party role at all? And if one goes by Subramanian Swamy's warning, will he be safe outside jail? There are no strong indications from the DMK, but sources close to the party say Raja is not Kanimozhi. Kanimozhi was family and is very dear to the DMK patriarch Karunanidhi, and hence her return was a different story. Karunanidhi also believed that she had no role in the 2G scam and was a victim of circumstances. But Raja is an outsider, who made it big through shrewd use of his community base, political opportunities, and acquired proximity to Karunanidhi through the late Murasoli Maran and Rajathi Ammal, Kanimozhi's mother. And the 2G taint that he brought to the party was much bigger than anybody anticipated. It also cost them a critical election. While she was in jail, Kanimozhi had daily visitors from the party and its first family. When she landed in Chennai on her release from Tihar, she walked into a deafening celebration that made a victor and martyr out of her. It was Karunanidhi's wish. But in Raja's case, he may have to engineer a reception using his political resources from his hometown of Perambalur in southern Tamil Nadu. He has no roots in Chennai. Karunanidhi's response to the news that Raja had moved the court for bail was completely devoid of any emotion or anticipation. "If Ravanan can get bail, why not Raja?" was his cryptic answer. Ravanan, a close relative of Sasikala, was arrested by the state police on a series of criminal charges and later let out on bail. To another question, on whether Raja would be given the same kind of reception that Kanimozhi received, Karunanidhi didn't say much. Veteran DMK observers said the patriarch appeared cold to the idea. They recalled Karunanidhi's statement at the beginning of the 2G scam that "bad friends will get you into trouble." They are still not completely sure if by 'bad friends' he meant Raja or the Congress. Such ambiguous one-liners are vintage Karunanidhi, and one often has to read meanings into them. If the Kanimozhi household and Karunanidhi do not appear overtly warm towards the jail-returned Raja, he will have no role of consequence in the DMK. Now that Stalin has ascended to a decisive role, Raja will need his patronage as well. Even during his peak in the DMK, Stalin's interest in Raja was, at best, plain indifference. Other than from Karunanidhi, Rajathi Ammal and the ring of leaders close to her, Raja didn't have much patronage from the rest of the party leadership. That the 2G taint was a key reason for the DMK's rout in the recent assembly elections is an allegation Raja will have to fight if he is to persist with his political ambitions. He may still have support in his constituency and his hometown, but to survive in the DMK or move forward an inch, he will need the patronage of both Karunanidhi and Stalin. During his heyday, Raja used to be a regular fixture at almost all of Karunanidhi's public appearances. He also made a virtue of that proximity and would place himself visibly close to the leader.
However, when the 2G scam broke out, he was slowly eased out. Now the bigger question: will he be safe outside jail? When Raja refused to seek bail while all the other accused in the case left the jail one by one, the biggest speculation in Tamil Nadu was that he preferred to stay back in Tihar because it was safer. The speculation stemmed from the sudden and mysterious suicide of his close accomplice Sadiq Batcha in March 2011, before he was to be questioned by the CBI. Reportedly, he was the crucial link that could have helped the CBI establish the all-important trail of the 2G cash. Subramanian Swamy said on 11 May that Raja's life was under threat outside the jail and hence he should be provided security in case he got bail. Swamy said Raja "knew too much," a term that many used to describe Batcha too. Raja has been smart and shrewd, both as a politician and as an undertrial in the biggest scam in independent India. He maintained a disciplined life in jail, always appeared defiant and in control in his court appearances, and also argued his case vigorously from time to time. It will be him, and only him, who can decipher what lies in store for him. That includes his safety as well.
Corporate Social Responsibility and Firms' Ability to Collude We examine a duopoly with polluting production where firms adopt a form of corporate social responsibility (CSR) to define their objective functions. Our analysis focuses on the bearing of CSR on collusion over an infinite horizon, sustained by either grim trigger strategies or optimal punishments. Our results suggest that assigning a weight to consumer surplus has a pro-competitive effect under both full and partial collusion. Conversely, a higher impact of productivity on pollution has an anti-competitive effect under partial collusion, while exerting no effect under full collusion. Under partial collusion, the analysis of the isoquant map of the cartel reveals that complementarity arises between the two weights.
Bound states in the phase diagram of the extended Hubbard model The paper shows how the known, exact results for two-electron bound states can modify the ground-state phase diagram of the extended Hubbard model (EHM) for on-site attraction, intersite repulsion and arbitrary electron density. The main result is suppression of the superconducting state in favor of the normal phase for small charge densities. II. HAMILTONIAN AND THE SUPERCONDUCTING STATE We begin with the extended Hubbard Hamiltonian in standard notation, where we sum over nearest-neighbor (nn) sites only. $U$ and $W$ are treated as effective parameters. We use a broken-symmetry Hartree-Fock approach (for details see Ref.). As we are interested in the properties of the superconducting state, we introduce averages of the operators $c_{-k\downarrow} c_{k\uparrow}$ in a Wick's-type decoupling of the four-operator terms in the Hamiltonian. A non-zero average of such pair-creating operators means phase coherence among pairs, i.e. the presence of a superconducting state, and serves as an order parameter (see Eq. (3)), with $\bar{\mu} = \mu - (U/2 + zW)n$, where $z$ is the coordination number of the hypercubic lattice, $W_k = W\gamma_k$, $\varepsilon_k = -t\gamma_k$, and $\gamma_k = 2\sum_\alpha \cos k_\alpha$, $\alpha \in \{x, y, z\}$. After diagonalization of the Hamiltonian Eq. (2) we obtain the quasiparticle energy and a self-consistent equation for the gap, where $\beta = 1/k_B T$, $T$ is the temperature and $k_B$ the Boltzmann constant. The constant $C$ in the Hamiltonian Eq. (2) can then be expressed in closed form. The pairing potential in the singlet channel (see Eq. (3)) takes a separable form for the square lattice and nn interaction, $U + W_{k_1-k_2} = U + W\gamma_{k_1}\gamma_{k_2}/z$ (retaining only the terms of s-wave symmetry), which makes it possible to solve Eq. (3) by an ansatz, leading to the set of self-consistent equations (8)-(10), where $F_q = \tanh(\beta E_q/2)/(2E_q)$. In the case of a rectangular density of states (DOS) and pure on-site pairing we can obtain analytical solutions; in the case of extended s-wave superconductivity (Eqs (8)-(10)) analytical solutions exist in the limit of low electron density. Introducing a new parameter, $\Delta_\gamma/\Delta_0$, we can expand Eqs (8)-(10), treating $\Delta_0$ as a small parameter. As a result we obtain a formula, Eq. (11), for the critical value for the appearance of superconductivity for given $U$ and $n$ in the ground state, where $D = zt$ is the half-bandwidth unit. In the case of a rectangular DOS this formula reduces to Eq. (12). We can go a step further in our mean-field analysis and include the Fock term $p = \frac{1}{N}\sum_{k\sigma} \gamma_k \langle c^{\dagger}_{k\sigma} c_{k\sigma} \rangle$ in the calculations. In Eqs (2), (4), (8)-(10), $\varepsilon_k$ must then be changed to $\bar{\varepsilon}_k = \varepsilon_k(1 + pW/zt)$, and we have to solve Eqs (8)-(10) self-consistently with the equation for the Fock term, $p = -\frac{1}{N}\sum_k \bar{\varepsilon}_k \gamma_k F_k$. Equations (11)-(12) remain valid, with the change $X \to X/(1 + pW/zt)$, where $X = \mu, W_{cr}$ and $U$. III. LOW DENSITY LIMIT Going back to the Hamiltonian Eq. (1) we can obtain exact results in the low-density limit. In the center-of-mass coordinate system we can expand the wave function of the two-electron bound pair $\psi$ in the basis of plane waves (i.e., eigenstates of the hopping part of the Hamiltonian Eq. (1)). We can easily find the equations for the coefficients of the expansion, which finally yields a set of self-consistent equations for the wave function in position space, in terms of lattice Green functions, where $G$ is the lattice Green function and $g(r)$ is the diagonal interaction matrix, consisting of the elements $U$ and $W$. The eigenenergy equation then involves the matrix $\mathbf{G}$ with elements $G_{ij} = G(E, P, r_i, r_j)$.
This is an analogue of Eqs (8)-(9). Note that in the case of two-electron bound pairs $\Delta = 0$ and the role of the binding energy is played by $\mu/2$. For the hypercubic lattices these equations have been solved, and it has been found that in one and two dimensions pairs bind for any negative $U$ when $W = 0$, while in three dimensions there is a critical value of $W$. The formula for $W_{cr}$ in the case of the two-electron bound state is given by Eq. (16), where $C = \frac{1}{N}\sum_k (1 - \gamma_k/z)^{-1}$ is the Watson integral. This is an exact result. Remembering that $C$ is divergent for lattices of dimensions $d = 1$ and $d = 2$, we can see that for $n \to 0$, $\mu/D \to -1$ and $I \to 0$ (Eq. (11)), and Eq. (16) is the limiting value of the formulas Eqs (11) and (12) for $d = 1$ and $d = 2$, as it should be. Not for $d = 3$ though; this case will be discussed later on. Note that Eqs (11), (12) and (16) are valid for any combination of signs of $U$ and $W$, and for large enough $U < 0$ and $W < 0$ there is a second branch of solutions. The two branches are the two solutions realized in the two opposite limits $U = +\infty$ and $U = -\infty$ (or $W = \pm\infty$). The formal equations and their solutions in both these limits are the same, despite completely different physical situations. Nevertheless these are specific cases of two distinct solutions. IV. RESULTS AND DISCUSSION In view of Randeria's observation, described in the Introduction, in the case of s-wave symmetry in two dimensions we can use the condition for the existence of bound states as a condition for the existence of superconductivity. In Fig. 1 the boundaries expressed by Eq. (11) for different lattice dimensionalities and electron densities are shown. For parameters $U$, $W$ belonging to the area above the plotted lines (mostly in the 1st quadrant of the coordinate system), two-electron bound states, and consequently s-wave superconductivity in two dimensions, cannot exist. The curves for $n = 0$ in all dimensions are exact results. As $n$ gets larger, the area of existence of bound pairs increases for $W > 0$ and $U < 0$ and decreases in the part of the diagram with $W < 0$ and $U > 0$. All curves except the one for $n = 0$ in $d = 3$ go through the point with coordinates $(0, 0)$. This illustrates the fact that for $W = 0$ an infinitesimally small $U$ creates a bound state in $d = 2$, while a threshold exists in $d = 3$. Nevertheless we do not have a threshold in $d = 3$ for $n = 0$, in agreement with Randeria's point that bound states are necessary for superconductivity only in $d = 2$. Note that for large $U$ and $W$ the curves approach asymptotes; for the curves crossing through the origin the asymptotes are given by the formulas $U_{as}/t = -16(n-1)^2/(1+(n-1)^2)$ and $W_{as}/t = -4/(1+(n-1)^2)$. This is connected with the fact that in the 3rd quadrant of the coordinate system, for $U < 0$ and $W < 0$, there exist second branches of the solutions. As they have higher energy than the solutions described in Fig. 1, they do not modify the ground-state phase diagram and are not shown here. The dotted line in Fig. 1 (and in Fig. 2) describes the results of calculations with inclusion of the Fock term, using Eq. (12) modified by the $(1 + pW/zt)$ term, as described at the end of Section 2. To simplify the calculations the Fock term from the normal state was used, $p = n(2-n)$. For $W > 0$ this term broadens the band, moving the system more into the weak-coupling limit and enlarging the normal-state area (with the opposite behavior for $W < 0$). The same effect can be seen in Fig. 2. In Fig. 2 the boundaries of existence of bound states, Eq. (12) (black symbols), are plotted on the ground-state phase diagram of the extended Hubbard model for $U < 0$ and $W > 0$, for arbitrary $n$ and a rectangular DOS, together with the phase boundary PS/SS taken from Ref. (white symbols). Above the lines with black symbols, s-wave superconductivity cannot exist in $d = 2$. In this way the superconducting state is suppressed and a normal-state (NO) area is introduced into the phase diagram. Note also that the phase-separated state PS is "reduced" to the NO phase and not to the CDW phase. This is due to the fact that the CDW in the PS state is the CDW with $n = 1$, and the CDW with $n = 1$ is unstable, as it has negative compressibility. Another thing to note is the threshold for the appearance of bound states for $n = 0$, which increases with increasing $|U|$. The phase diagram is modified only for intermediate values of $|U|$ and $|W|$, smaller than their asymptotic values $|U_{as}|$ and $|W_{as}|$. For $|U|$ or $|W|$ larger than these values, bound states exist for an arbitrary value of the other parameter, in agreement with Fig. 1. The calculations in Ref. consider only pure, on-site s-wave pairing. Including $\Delta_\gamma$ (Eq. (9)) in the calculations does not change the described PS/SS boundary much; $\Delta_\gamma$ is two orders of magnitude smaller than $\Delta_0$ on this boundary. Inclusion of the Fock term in the calculations of the bound states results in extending the area of the normal phase, as was mentioned before. This effect increases with increasing $|n|$ and $W$. In conclusion, it was shown how the analytical (and, for $n = 0$, exact) formulas for bound two-electron states can be used to modify the $U < 0$, $W > 0$ part of the phase diagram of the extended Hubbard model. The main result of this approach is the suppression of the superconducting and phase-separated areas in favor of the normal phase around the half-filled band, for intermediate values of $|U|$ and $|W|$ that are larger than the threshold values and smaller than $|U_{as}|$ and $|W_{as}|$.
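To make the mean-field side of this construction tangible, here is a minimal numerical sketch that iterates the separable s-wave gap equations quoted above ($\Delta_k = \Delta_0 + \Delta_\gamma \gamma_k$) on a 2D square lattice; the parameter values, the fixed chemical potential, the grid size and the sign convention (attractive on-site coupling as $U < 0$) are illustrative assumptions, not the paper's actual calculation.

```python
import numpy as np

# Minimal sketch: fixed-point iteration of the separable s-wave gap equations
#   Delta_0 = -U <Delta_k F_k>,   Delta_gamma = -(W/z) <gamma_k Delta_k F_k>,
# with F_k = tanh(beta*E_k/2)/(2*E_k) and E_k = sqrt((eps_k - mu)^2 + Delta_k^2),
# on a 2D square lattice. All parameter values below are illustrative.
t, U, W = 1.0, -3.0, 0.5            # hopping, on-site (attractive) and intersite couplings
z, beta, mu_bar = 4, 200.0, -1.1    # coordination number, inverse temperature, shifted mu
L = 128                             # k-points per direction

k = 2.0 * np.pi * np.arange(L) / L
kx, ky = np.meshgrid(k, k, indexing="ij")
gamma = 2.0 * (np.cos(kx) + np.cos(ky))
eps = -t * gamma

d0, dg = 0.1, 0.0                   # initial guesses for Delta_0, Delta_gamma
for _ in range(2000):
    delta_k = d0 + dg * gamma
    E = np.maximum(np.sqrt((eps - mu_bar) ** 2 + delta_k ** 2), 1e-12)
    F = np.tanh(0.5 * beta * E) / (2.0 * E)
    d0_new = -U * np.mean(delta_k * F)
    dg_new = -(W / z) * np.mean(gamma * delta_k * F)
    if abs(d0_new - d0) + abs(dg_new - dg) < 1e-10:
        d0, dg = d0_new, dg_new
        break
    d0, dg = 0.5 * (d0 + d0_new), 0.5 * (dg + dg_new)   # damped update

print(f"Delta_0 = {d0:.4f} t, Delta_gamma = {dg:.4f} t")
```

With a repulsive intersite term ($W > 0$) the extended component stays small relative to the on-site one, which is consistent with the remark above that $\Delta_\gamma$ is orders of magnitude smaller than $\Delta_0$ on the relevant boundary.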
CLEVELAND, Ohio -- A 34-year-old man was shot early Friday while pumping gas, police said. The shooting happened about 2 a.m. Friday at the Marathon gas station on East 55th Street and Superior Avenue. The man told police he was with another man when gunfire erupted, police reports say. The man dove behind his car after hearing the gunshots, according to police. He noticed he was shot in the wrist and ran into the gas station to call 911, according to police. The man was taken to University Hospitals for treatment. No arrests have been made in the case.
Estimating Forearm and Neck Muscle Load Using Surface EMG Amplitude: Methodologic Issues David M. Rempel, Bernard Martin, Carolyn A. Sommerich, Edward A. Clancy, Richard Wells, Roland Kadefors. University of California, Ergonomics Laboratory, Berkeley, CA, USA; University of Michigan, Center for Ergonomics, Ann Arbor, MI, USA; North Carolina State University, Department of Industrial Engineering, Raleigh, NC, USA; Laval University, Department of Mechanical Engineering, Quebec, Canada; University of Waterloo, Department of Kinesiology, Ontario, Canada; National Institute for Working Life/West, Goteborg, Sweden
Increasing the efficiency of the herd reproduction system by introducing innovative technologies into dairy farming in Northern Kazakhstan Abstract Background and Aim: In recent years, Kazakhstan has increasingly imported breeding cows for dairy and beef production. To maintain and improve their breeding qualities and reproductive function, it is necessary to constantly monitor the herd reproduction system. The aim of this study was to increase the level of herd reproduction by introducing innovative technologies into dairy farms in Northern Kazakhstan. To achieve this goal, the AlphaVision visual insemination system (IMV Technologies, France) was used, which helped improve the artificial insemination method on farms in Northern Kazakhstan and increased the breeding rate through the use of sexed semen to inseminate cows. In addition, the AlphaVision device was used in the differential diagnosis of certain diseases of the reproductive organs of cows. Materials and Methods: The objects of the study were 200 cows (3-5 years old) and 100 heifers (16-18 months old) of the Holstein breed. The authors carried out a comparative analysis of biotechnological methods of reproduction – the cervical insemination method with rectal fixation of the cervix (the traditional method of insemination) and the AlphaVision visual insemination system – and studied the effectiveness of AlphaVision for diagnosing some reproductive tract abnormalities in cows. In the experiment on artificial insemination with AlphaVision, both conventional (non-sexed) and sexed semen were used. Results: When using the AlphaVision visual insemination system, a higher percentage of fruitful inseminations was noted (by 20.7%) than when using the traditional method. The images obtained with AlphaVision made it possible to identify cows with abnormal sexual cycles and signs of vaginitis, endometritis and cervicitis, and to differentiate them by the nature of the exudate. In many cases, visual examinations of the vagina and cervix are not carried out before the traditional method of artificial insemination. For this reason, some vaginal and cervical abnormalities are not diagnosed, resulting in reduced fertility in cows. We found that the number of genital abnormalities increased by 30% with the increasing age of cows. Obstetric and gynecologic pathologies in high-yielding cows are noted in more than 50% of the herd. A comparative assessment of the clinical manifestations of cervicitis and other pathologies of the reproductive organs, using the AlphaVision visual insemination system, was carried out for the identified diseases. With the traditional method of insemination, the calf yield per 100 cows for the period 2016-2019 was 65-80% with conventional semen and 30-50% with sexed semen. With AlphaVision in 2020, the insemination rate was 85% with conventional and 60% with sexed semen, respectively, which was 5% and 10% higher than with the traditional insemination method. This was due to the improved diagnosis of some reproductive diseases in cows. Conclusion: The introduction of an innovative technology, namely the AlphaVision visual insemination system, into the practice of dairy farms in Northern Kazakhstan increased the level of the herd reproduction system. Introduction To reach the genetically determined level of milk productivity of cattle, it is necessary to maintain a high level of herd reproduction, ensure timely fruitful insemination of cows and heifers, and annually obtain viable offspring from them.
A huge role here is played by biotechnological methods of reproduction, both in increasing the efficiency of breeding work and in increasing the reproduction of the herd. To solve this problem, it is necessary to intensify the reproduction of animals. The world's population is growing every year; to meet the increased demand for food protein by 2050, a two-fold improvement in animal genetics will be required. An important condition for solving this task is to increase the reproductive function of animals and poultry. The vast majority of livestock and poultry are currently produced by artificial insemination, including 70% of cattle, 90% of pigs, and 100% of turkeys, except for some traditional breeds. Embryo transplantation is expedient when the goal is accelerated qualitative renewal of the herd with a radical increase in productivity over the next 2-3 years. It allows the genetic potential of the breeding nucleus in dairy and beef cattle breeding to be increased five times faster than with artificial insemination. Besides, one donor with high-value genetic potential can give the farm ten or more calves annually. At the moment, in the Kostanay region, the questions of embryo transplantation in cattle remain open. All of the above largely depends on the reproductive health of the broodstock of dairy cattle. In modern dairy farming, an urgent problem is the pathological condition of the reproductive organs of cows after calving, as well as its differential diagnosis and treatment. For example, when at least one obstetric pathology is diagnosed in the postpartum period, cows experience a decrease in fertility at first insemination. At the same time, there are no specific diagnostic criteria to differentiate functional disorders from inflammatory diseases of the uterus. Therefore, one of the important conditions for the development of cattle breeding is the improvement of existing methods, as well as the search for new methods, for diagnosing diseases of the reproductive organs in cows. In dairy farming, one of the most time-consuming processes is the reproduction of cattle. The milk productivity of cows, the efficiency of selection and breeding work, the duration and intensity of use of genetically valuable, highly productive animals, and the economy and profitability of production all depend on the level of reproduction of the herd. The relatively short period of intensive productive use of dairy cows requires the annual introduction of 25-30% or more highly productive first-calving cows into the main herd. This becomes impossible when the level of reproduction and the calf crop decline significantly and calf survival is poor. In solving this problem, the method of replenishing the herd using sexed semen stands out. In this way, it is possible to increase the number of animals faster than with traditional methods and to replace culled cows in a dairy herd more efficiently. The use of sexed semen in animal husbandry makes it possible to obtain over 80% heifers among all calves born. This, in turn, allows the dairy herd to be renewed with first-calf heifers in a shorter time. However, researchers and practitioners have no consensus on the use of sexed semen. Some published data show that, despite its obvious advantages, this method reduces the pregnancy rate of cows. Therefore, questions concerning the effect of sexed semen on the reproductive qualities of animals remain relevant as well.
Improving biotechnological methods of reproduction – replenishing the herd with young, high-yielding animals obtained by insemination with sexed semen and diagnosing genital pathologies in cows with the use of new technologies and innovative solutions – increases the efficiency of the herd reproduction system. Thus, the issues considered here are extremely relevant for herd reproduction in Northern Kazakhstan. The study aimed to review innovative technologies in the reproduction of dairy cattle in Northern Kazakhstan. The objectives of the study included a comparative analysis of biotechnological methods of reproduction, a study of the use of AlphaVision in the diagnosis of some diseases of the reproductive organs of cows, and the replenishment of the herd with young cows obtained by insemination with sexed semen.

Ethical approval
The research protocol was discussed and approved at a meeting of the local ethical committee of the A. Baitursynov KRU non-profit joint-stock company belonging to the Ministry of Education and Science of the Republic of Kazakhstan, dated February 3, 2019.

Study period and location
The research on the effectiveness of the herd reproduction system after the introduction of innovative technologies in dairy cattle breeding in Northern Kazakhstan was based at the V. Dvurechensky Agricultural Institute, A. Baitursynov Kostanay Regional University (KRU); the experimental part was carried out at Olzha Ak-Kuduk LLP in the Kostanay district and Turar LLP in the Fedorovsky district, Kazakhstan. Collection and analysis of farm documentation and statistical data were carried out from March 2016 to May 2020. The experimental part of the study was carried out in May 2019 and March-May 2020.

Studies 1 and 2
A comparative analysis of biotechnological methods of reproduction to determine the most rational and effective method of inseminating cows, as well as an evaluation of the effectiveness of the AlphaVision device for diagnosing some reproductive organ abnormalities in cows, was carried out at Olzha Ak-Kuduk LLP, Kazakhstan. The objects of research were 200 Holstein cows, 3-5 years of age, of black-and-white color. The live weight of the cows was 650-750 kg. The cows were kept in loose housing and fed a common mixed feed balanced according to their physiological needs. The cows were milked twice a day, in the morning and in the evening, and milk yield was recorded at each milking. Milk was analyzed for fat, protein, and somatic cells in a total milk sample from the experimental and control cows. The experimental and control cows had no signs of ketosis, acidosis, lameness, or displaced abomasum. Before insemination, all cows were checked for estral mucus. In May 2019, at Olzha Ak-Kuduk LLP, Kazakhstan, we conducted a study to determine the effectiveness of different insemination methods. The cows were divided into a control group and an experimental group, with 50 cows in each. The cows were of the same age and weight and had no history of insemination problems. In the control group, insemination was performed with the traditional method, and in the experimental group, it was performed with the AlphaVision visual insemination system (IMV Technologies) (Figure-1). Results were assessed by the onset of pregnancy in the inseminated cows. In March-April 2020, the effectiveness of the AlphaVision device was studied for the diagnosis of some pathologies of the reproductive organs of cows.
For this experiment, the cows were divided into two groups: 50 cows in the control group (3-year-old cows that had problems with insemination) and 50 cows in the experimental group (5-year-old cows that had also not been successfully inseminated). Work with AlphaVision was carried out according to the instructions, as follows. We started by putting on the neck extension for the video terminal and then inserted the video terminal into the case through the opening provided for this. The video terminal was then connected to the Micro-USB connector. Next, the clip of the neck extension was opened to insert the video terminal, after which the other end of the cable was connected to the AlphaVision device so that the two red dots faced each other. After that, the terminal was turned on; the battery charge needed to be at least 50%. We then launched the application, moved the cursor to OK, and clicked on it. It was also necessary to make sure that the image was of good quality. Next, we installed the speculum on the AlphaVision gun and put the cable around the neck. After that, a special sanitary sheath was put on the equipment. Before the examination, the condition of the cow's vagina and cervix was checked, and the external genitalia of the cow were cleaned with a warm 0.02% solution of furacilin. B-LUBE gel was applied to the sanitary sheath and around the speculum. The vaginal slit was opened, and the camera was inserted into the vaginal vestibule at an angle of 45 degrees; then, after being aligned parallel to the rectum, it was advanced further into the vagina. Next, we advanced the AlphaVision gun to the cervix, guided by the image transmitted to the terminal. In this way, the condition of the cow's vagina and cervix was checked using the image on the screen.

Assembly of the Kombicolor insemination gun (IMV Technologies)
We performed the assembly as directed. Holding the Kombicolor syringe in one hand, we pulled back the plunger of the Kombicolor syringe and pushed the syringe into the graduated extension. After that, the Kombicolor colored ring was fixed to the end of the graduated extension, and the steel ring of the Kombicolor syringe plunger was moved into the groove provided on the pusher so that it snapped into place. The pusher was then inserted into the graduated extension up to the first mark. Preparation of the semen straw began with thawing and drying. After that, we cut off the sealed end of the straw with the cutter supplied. The straw was inserted into the chamber of the Kombicolor syringe, and the Alpha tube was then inserted into the Kombicolor syringe until it locked completely. Next, the gun was inserted into the AlphaVision device up to the first mark. After that, the Kombicolor syringe was primed.

Insemination
Before artificial insemination, the external genitalia of the cows were cleaned, following the same steps as in conventional insemination. First, B-LUBE gel (IMV Technologies) was applied around the speculum up to a distance of 10 cm from its edge, and the edge of the speculum was brought up to the vulva. The vulva was then parted to open the vaginal slit, and the speculum was inserted into the vagina from below at an angle of 45 degrees. After passing the vaginal vestibule, the speculum was aligned with the direction of the vagina. The AlphaVision apparatus was then advanced to the cervix, guided by the image transmitted to the terminal.
After the Kombicolor insemination syringe was inserted into the AlphaVision, the graduated extension was pushed in until the insemination tube appeared in the image at the entrance to the cervix. One mark on the handle corresponds to 1 cm. The Alpha insemination tube was then positioned opposite the entrance of the cervix and advanced into the uterus while the graduated extension continued to be pushed in (rectal palpation was used to make sure that the syringe was positioned correctly). After that, the dose of semen was deposited by gently pressing the pusher until it stopped, taking care not to move the graduated extension. We checked the cervix in the image to make sure there was no backflow of semen. After the straw was emptied, the AlphaVision device was withdrawn. In the control group, the methods of rectal and vaginal examination were used to diagnose diseases of the reproductive organs. In the experiment with the cervical insemination method with rectal fixation of the cervix, the following instruments were used: an ampoule, a plastic catheter 35-40 cm long, and an obstetric polythene glove. During the procedure, the air was removed from the ampoule; the ampoule was filled with the prepared semen and connected to the catheter. The external genitalia of the heifers were cleaned; a glove was put on the hand and lubricated with gel. The gloved hand was inserted into the rectum; through the rectal wall, the cervix was located, clasped, and fixed with two fingers (the second and third), while the thumb located the cervical opening, into which the catheter was to be inserted. With the other hand, the animal's genital slit was opened, the catheter was inserted into the vagina and then into the cervical canal, and the semen was deposited. The hand was then withdrawn, followed by the catheter, and the clitoris was massaged through the vulva for 2-3 min. We used sexed semen, purchased from Taurus LLP, Almaty, Kazakhstan, to produce single-sex calves. The semen had been produced in the USA in 2015 from three Holstein stud bulls: MARVEL №551HO03444, CORSAIR №151HO03128, and REDROK №551HO03501. The planned milk productivity of the broodstock on which the experimental work was carried out is 8000 kg of milk per year, with a milk fat content of 3.75%. All experimental animals were kept under the same feeding and housing conditions.

Study 3
Scientific and production research on the effectiveness of using AlphaVision for insemination with sexed semen was carried out at Turar LLP, Fedorovsky district, Kostanay region, Kazakhstan, in March-May 2020. On this farm, artificial insemination was performed by the cervical method with rectal fixation of the cervix. The farms where the studies were conducted were free from infectious and parasitic diseases. Collection and study of documentation (veterinary and zootechnical registers) were carried out from 2016 to 2020. The objects of the research were Holstein heifers of mating age, 16-18 months old, weighing 390-410 kg, of black-and-white color. The heifers were kept loose and fed a common mixed diet twice a day, balanced according to their physiological needs. They were housed separately in stalls equipped with tunnel ventilation and sand bedding. During the experiment in March-May 2020, the animals were divided into two groups, a control group and an experimental group, with 50 head in each. The heifers of mating age included in the control and experimental groups were clinically healthy.
In the control group, insemination was carried out with conventional (non-sexed) semen, and in the experimental group with sexed semen. The animals were inseminated during heat, which was detected with a Draminski electronic detector (Draminski, Poland), an estrometer, and recording of the animals' movement activity. If heat was detected in the morning, insemination was carried out in the evening at 17:00-19:00; if heat was observed in the evening, insemination was carried out the next morning at 5:00-6:00. Artificial insemination was performed once, using AlphaVision. Hormonal preparations for synchronizing estrus were not used when sexed semen was applied. During the experiment, all animals were kept under the same feeding and housing conditions.

Statistical analysis
The research results were processed by the methods of variation statistics using Microsoft Excel (Microsoft Office, USA).

Experiment 1
In the comparative analysis of the biotechnological methods of reproduction (the traditional method and the AlphaVision visual insemination system), it was found that of the 50 animals assigned to the traditional method, 48 head were inseminated; two animals were unsuitable for insemination due to abnormalities in the genitalia. In this group, 1 month after insemination, 28 animals (58.3%) were found to be pregnant. In the experimental group of 50 animals, 43 head were inseminated with the AlphaVision visual insemination system; 1 month after insemination, when checked for pregnancy by ultrasound, 34 cows (79.0%) were pregnant. The remaining seven head in the experimental group and two in the control group were sent for treatment. A detailed data set of the study results is given in Table-1. In determining the most effective method of insemination, the time required for insemination, the cost of semen, and the fertilization rate were considered. Two cows in the control group showed purulent-catarrhal endometritis on further examination of the estral mucus. The visual insemination system revealed genital abnormalities with signs of endometritis and cervicitis in seven cows, which is 10 percentage points more than in the control group. The table shows that the use of the AlphaVision visual insemination system at Olzha Ak-Kuduk LLP, Kazakhstan, made it possible to identify more cows in estrus that had endometrial abnormalities: 4% in the control group versus 14% in the experimental group. The percentage of fruitful insemination in the experimental group was higher by about 21 percentage points (Figures-2 and 3). It should be noted that the insemination assistance system, equipped with a sealed chamber, significantly increased the convenience of the inseminator: in particular, it made it possible to check the involution of the cow's cervix, showed whether there was any pathology, and simplified determining the cervix's location for less painful insemination.

Experiment 2
One of the causes of infertility in cows is genital abnormalities, which are widespread in large dairy complexes. For example, we found that under the conditions of Olzha Ak-Kuduk LLP, Kazakhstan, obstetric and gynecological pathologies in highly productive cows were reported in more than 50% of the livestock.
Meanwhile, subinvolution of the uterus was observed in 16% of infertile animals, cervicitis was diagnosed in 11.3%, and vaginitis in 8%. The prevalence of diseases of the reproductive organs of cows by age is shown in Table-2. Analyzing the data in Table-2, we can note that the incidence of postpartum pathologies increased with the age of the cows. For example, at Olzha Ak-Kuduk LLP, Kazakhstan, the number of genital abnormalities increased by 30% with the increasing age of cows. For the identified diseases, a comparative assessment of the clinical manifestations of cervicitis and other pathologies of the reproductive organs (Table-3) was carried out using the AlphaVision visual insemination system and rectal palpation. The clinical picture of obstetric and gynecological abnormalities in cows was also studied (Table-4). Tables-3 and 4 show that, in the postpartum period, body temperature in cows at risk of developing obstetric and gynecological diseases can vary depending on the severity of the ongoing changes. When body temperature increased, the pattern of the vaginal wall observed with AlphaVision (Figure-4) became more pronounced (hemorrhagic). Monitoring the body temperature of animals over the following 24 h is therefore an early indicator of developing pathologies of the reproductive organs. With an increase in body temperature, other physiological indicators, such as respiratory and pulse rates, also increased. The animals refused feed and water, lay down more, and mucopurulent or brownish exudate was discharged from the birth canal. Rectal palpation of the uterus revealed softness, doughiness, and soreness of the uterus; the ovaries (in particular, the right one) were slightly enlarged. By various criteria, the clinical picture of cervicitis is similar to the symptoms of other diseases of the genital organs, which significantly hampers the differentiation of cervicitis as a separate disease; this made the use of the AlphaVision visual insemination system all the more relevant, as it provided a more accurate diagnosis of diseases of the reproductive organs.

Experiment 3
At Turar LLP, Kazakhstan, the calf crop per 100 cows was 70-85% during 2016-2020, with an insemination index of 2.95-2.76 and a service period of 176-154 days, respectively. The study of the influence of sexed semen on the fertility of heifers over the past 5 years (Figure-5) led to the conclusion that its use reduced the fertility of the animals by an average of 25-30%. More complete data on the use of sexed semen were provided by counts of the viable calves obtained, especially heifers (Figures-6 and 7). The calf crop per 100 cows when using conventional semen was 70-85% throughout the experiment; the same indicator when using sexed semen was 35-50%, a difference of 30-40 percentage points. Over the entire study period, 253 live calves were obtained from heifers inseminated with sexed semen. The proportion of heifers among all newborn calves averaged 85-92% over the years, which is about 40 percentage points higher than with conventional semen. The percentage of stillborn calves and abortions averaged 5.6%, which does not exceed the figures for the herd as a whole. The insemination index was 2.3 doses. Studying the data on the effect of sexed semen on the insemination of heifers (Figure-5), we found a decrease of 30% on average.
In the course of the experiment, we studied the effect of the AlphaVision device on artificial insemination with sexed semen (Table-5). Analyzing the data of our research, we note that the percentage of successful insemination of heifers with conventional (non-sexed) semen fluctuated between 65% and 80% during 2016-2019, whereas when AlphaVision was used in 2020 it reached 85% (Table-5), an increase of 5 percentage points. The percentage of fruitful insemination with sexed semen using cervical insemination with rectal fixation of the cervix fluctuated from 35% to 50% during 2016-2019; when AlphaVision was used in 2020, the figure was 60%, that is, 10 percentage points more than with the traditional insemination method.

Discussion
To increase the level of herd reproduction using an innovative insemination device (AlphaVision), we conducted a comparative analysis of biotechnological methods of reproduction and determined the most rational and efficient method of reproduction under the conditions of Northern Kazakhstan. We established the percentage of first fruitful insemination as 58.3% with cervical insemination with rectal fixation of the cervix and 79.0% with visual insemination using the AlphaVision device. Krymowski showed that only 40% of semen doses were placed either in the body of the uterus or evenly distributed between the left and right horns. That study found significant differences between professional technicians and owner-technicians; placement depended on the skill and ability of the individual to determine the position of the rod tip in the reproductive tract. The benefits of using AlphaVision in cattle include the ability to visualize any abnormalities in the vagina or cervix and to confirm fever by observing the cervical mucus in the captured images. The speculum also makes it easier to pass the gun through the cervix, as the view of the cervical opening prevents the tip from entering the fornix. Physical handling of the cervix is also minimized, which can prevent potential damage to the lining of the rectum or uterus from excessively harsh handling. These findings are also supported by our study. One disadvantage of using AlphaVision in cold weather (outdoors) is that cold air enters the vagina through the side slots of the device and can cause vaginal spasm. Of the total number of cows in the herd, 50% were diagnosed with at least one clinical disease during the 305 days of lactation. Carvalho et al. indicated that these proportions were similar to those previously reported by researchers in the same study area, which once again emphasized the relevance of postpartum problems in dairy cows. Moreover, recent studies evaluating cattle data on diseases of the reproductive organs consistently report long-term effects of uterine infection, mastitis, and metabolic disturbances and their influence on the milk productivity of dairy cows, so our research focused on diagnosing these diseases. The results of the study on the use of AlphaVision for growing the herd with sexed semen show that we reached our goals and obtained results of scientific and practical significance. It is well known that high milk production reduces the reproductive qualities of cows. The introduction of our findings into the practical activities of dairy farms in Northern Kazakhstan will undoubtedly increase the level of the herd reproduction system.
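As an illustrative aside (the paper reports descriptive statistics processed in Excel and does not state a significance test), the conception-rate comparison just discussed can be checked with a standard two-proportion z-test using the counts reported in Experiment 1 — 28 of 48 pregnancies with the traditional method versus 34 of 43 with AlphaVision. The Python sketch below is not the authors' procedure; the choice of test and the helper function are assumptions made for illustration only.

```python
from math import sqrt, erfc

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference between two independent proportions
    (pooled standard error under the null hypothesis of equal rates)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal tail
    return p_a, p_b, z, p_value

# Counts from Experiment 1: traditional cervical method vs. AlphaVision
p_trad, p_alpha, z, p = two_proportion_ztest(28, 48, 34, 43)
print(f"traditional {p_trad:.1%}, AlphaVision {p_alpha:.1%}, z = {z:.2f}, p = {p:.3f}")
```

With these counts, the roughly 21-percentage-point difference corresponds to z ≈ 2.1 and p ≈ 0.03, nominally significant at the 5% level; this is offered only as a sanity check on the reported figures, not as a reanalysis of the study.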
In studying the use of sexed semen on the broodstock at Turar LLP, Kazakhstan, we took into account that this technology has been used very little in Kazakhstan; there is still no experience in using semen from breeding bulls sorted by sex. The technology is very complex to implement: the laboratory equipment, in addition to its high cost, requires long-term training of the personnel who will operate it. There are no domestic laboratories in the Commonwealth of Independent States that could sort semen from breeding bulls of local breeds; as a result, sexed semen remains expensive. Like everything new, this technology has not yet taken root very well. Abroad, however, it is now very popular on family-owned farms; in Britain, for example, where a fairly large number of farms keep small herds (150-300 head), the ratio of sales of sexed to conventional semen from stud bulls is 1:3. In the United States, where there are many large farms, this ratio is 1:10. It was therefore of interest to study the use of sexed semen in the northern region of Kazakhstan. We also explored some innovative solutions to reduce the farmer's costs for purchasing expensive semen. In our case, the study was carried out on a dairy farm with more than 1200 lactating cows, which could afford to purchase sexed semen. During 2016-2019, the calf yield was 65-80% with conventional (non-sexed) semen, compared with 35-50% with sexed semen. In the 2020 experiment, we obtained 85% with conventional semen and 60% with sexed semen. The proportion of heifers obtained after using sorted semen is consistent with the literature: Tubman et al. and Healy et al. found rates of 87.8% and 86.0%, respectively, and de Jarnette et al. and Norman et al. reported rates close to the planned 90.0%. Deviations can occur due to differences in sorting accuracy or due to incomplete or erroneous data recording. The sex ratio for conventional semen differed somewhat from our expectations: most studies obtained between 50% and 52% male newborns, whereas in our study 49-51% of the calves born after using conventional semen were bulls, and the proportion of heifers born from sexed semen was 85-92%. Norman et al. likewise reported 51.5% male calves. Concerning the productivity of the offspring, there are both positive and negative reports on health consequences for calves obtained from sexed semen. Our study for 2016-2020 showed that the percentage of stillborn calves and abortions averaged 5.6%, which did not exceed the corresponding indicators for the herd of this farm as a whole.

Conclusion
Based on the study carried out, the following conclusions can be drawn. 1. The use of the AlphaVision visual insemination system, as one of the innovative methods of reproduction, made it possible to increase the percentage of fruitful insemination by 20.7 percentage points, from 58.3% to 79.0%. The insemination assistance system, equipped with a sealed chamber, significantly improved convenience for the inseminator technician; in particular, it allowed the involution of the cow's cervix to be checked, showed pathology, and made the cervix easier to locate for less painful insemination. 2. As a result of our research, we found that the incidence of postpartum diseases increased by 30% with the age of the cows and that postpartum inflammatory diseases were registered in more than 50% of the surveyed animals.
According to various criteria, the clinical picture of cervicitis is similar to the symptoms of other diseases of the genital organs, which significantly complicates the differentiation of cervicitis as a separate disease; this makes the use of the AlphaVision visual insemination system all the more relevant, as it provides a more accurate diagnosis of diseases of the reproductive organs. 3. The use of sexed semen reduces the fertilizing capacity of animals by an average of 25-30%. The output of calves per 100 cows when using conventional semen for the period 2016-2019 was 65-80%, and during the experiment with AlphaVision in 2020 it reached 85%, an increase of 5 percentage points. The same indicator when using sexed semen was 30-50% in 2016-2019, and with AlphaVision in 2020 it reached 60% (a 10 percentage point increase). The proportion of heifers among all newborn calves when using sexed semen averages 85-92%, which is about 40 percentage points higher than with conventional semen. The use of sexed semen makes it possible to obtain more heifers from all calves born, which, in turn, allows the dairy herd to be renewed with first-calf heifers in a shorter time and, if necessary, the number of livestock to be increased annually. 4. Thus, the introduction of the innovative technology described in this work into the practice of farms increases the level of the herd reproduction system.

Authors' Contributions
VAR: Conception and design, acquisition and analysis of data, and drafting of the manuscript. AMN: Conception and design and critical revision of the manuscript. VAS: Critical revision of the manuscript and final approval. AAB: Conception and design of the manuscript. All authors read and approved the final manuscript.
Rick Westhead TSN Senior Correspondent Follow|Archive Canada's largest sports company wants the Maple Leafs to become China's favourite hockey team, in much the same way Manchester United is the most popular soccer team in the world's most populous country. At the same time as Chinese government officials have asked the NHL to stage a regular-season game in China as early as next season, TSN has learned four executives with Maple Leaf Sports & Entertainment are scheduled to leave Friday for Beijing for meetings to help grow hockey in China, an untapped frontier for many sports leagues around the world. MLSE chief commercial officer Dave Hopkinson, Bo Hu, MLSE's executive in charge of Chinese business, and two colleagues will attend meetings in Beijing with Prime Minister Stephen Harper, his staff, and Chinese government officials. Maple Leafs President Brendan Shanahan was scheduled to participate but canceled at the last minute, a team spokesperson said. (The NHL and NHLPA announced forward Carter Ashton was suspended for 20 games on Thursday after testing positive for a performance-enhancing drug. Ashton said he borrowed another athlete's asthma inhaler.) MLSE executives, the 27-year-old Hu in particular, have quietly laid the groundwork for building the company's profile in China over the past three years. Because basketball is so popular in China, MLSE staff first established relationships with companies interested in sponsorship agreements with the Raptors. The basketball team now has sponsorships with Chinese auto tire maker Aeolus Tires and Asus, a laptop maker. MLSE has also partnered with I.T. company Huawei on music-industry projects. It's believed that MLSE garners more than $300,000 from each of those sponsorships. Company officials declined to say how much sponsorship revenue they make from China-based companies. As MLSE's relationships in China matured, the company landed meetings with the Chinese government - meetings where MLSE received an attractive offer. China is competing with Kazakhstan to host the 2022 Winter Olympics. China wants to have hockey teams that are competitive. But with a men's national team that's ranked No. 38 — behind New Zealand, Belgium and Iceland — China needs help getting to that point. "China wasn't a sports nation before the Beijing Olympics in 2008, but it is now," Hopkinson said in an interview. "The president of the country attended the Team Canada men's hockey games in Sochi and said he fell in love with hockey. What's important to the president of China is important to China." Chinese officials said in exchange for helping to develop grass-roots hockey programs in China, the government would help MLSE promote the Maple Leafs' brand there. "This is the most storied and established brand in the game," Hopkinson said. "We were told that if they were going to partner in the NFL, it would be with the Dallas Cowboys. If they were doing a baseball deal, it would be with the New York Yankees. And if there's a partnership in the NHL, it's going to be with the Maple Leafs." It doesn't hurt that the Leafs are based in Canada. Tensions might appear if a U.S.-based team tried to do what the Leafs hope to do. "The Chinese have told us they really appreciate what we are doing, boots on the ground, helping them to develop a hockey competency," said Hopkinson, who has made four previous trips to China. In March, the Leafs converted all of their rink boards to Chinese to "welcome Chinese viewers." 
In August, the team sent staff to Beijing and Shanghai to stage hockey camps for young amateur players. In coming weeks, the Leafs will aid in the production of a 10-part "Hockey 101" series to be included in NHL broadcasts on CCTV in China to educate viewers. "We aren't the only team in China," Hu said. "The Canucks and Islanders are there, too, but we really have a plan to become the biggest hockey brand there. We want to keep holding camps for kids, building grass roots interest, and eventually take our Maple Leafs players there. This is the Manchester United model." The world's most famous soccer club says it has 108 million fans in China, where it has played almost a dozen times since 1975. Hu said because the NHL prevents teams from promoting themselves outside their local territories, MLSE has been targeting Chinese companies such as Air China and the Bank of China that want to expand their business in Canada. "There are a lot of Chinese mining companies that have investments in Canada, although we just signed an agreement with a Colombian mining company," Hu said. Hu confirmed Chinese officials have told the NHL they are interested in hosting an NHL game as soon as next season. As any seasoned marketer knows, the payoff for a team to break through in China is huge. In its recent filing for an initial public offering, Alibaba reported just 45.8 per cent of China's population uses the Internet, far lower than Canada or the U.S. In 2000, four per cent of urban households in China were considered middle class; by 2012, more than two-thirds of those homes had gained that status, earning between $9,000 and $16,000. In 2022, China's middle class may eclipse 630 million consumers. There already are signs that China, a non-traditional hockey market to be sure, is open to a new sport. Last year, the state broadcaster CCTV began showing four live NHL games every week. This is CCTV's second year of a three-year deal with the NHL. NHL games broadcast on the weekend in China have attracted an average 800,000 households - airing at 7 a.m. local time. By comparison, televised Leafs games in Canada with national distribution this season have attracted about 1.5 million viewers. This year, Beijing's 150 registered teams have 2,300 players between the ages of six and 15, MLSE says. Still, there are challenges to growing hockey in China. For starters, it's an expensive sport, Hu said, thanks to 30 per cent import duties for hockey equipment shipped from the west and expensive ice time. There are politics to consider. In 2005, the newspaper Beijing Today accused western soccer teams of "gold-digging," using their tours of China solely to build their brand and commercial presence. Some efforts simply don't translate. Manchester United closed a pair of theme restaurants after fans didn't see a connection between soccer fandom and eating out, The New York Times reported in 2007. Also, it's unclear whether the NHL sees a big upside in China. The league does not have a senior vice president in change of expanding its international business, a position common in other pro sports leagues. Moreover, the NBA has a long head start in China, where it began to lay a toehold during the 1990s. Now, the NBA attracts some five million viewers a game in China. In the U.S., NBA broadcasts on cable TV get about two million viewers per game. It's doubtful that the NHL would ever grow business through the sale of team jerseys. 
While authentic jerseys now sell for $330 in Canada, counterfeiting has flourished in China, bringing down the price of knock-off jerseys to $5 or $10 in some cases.
Impact of Heterogeneity on Opinion Dynamics: Heterogeneous Interaction Model
Considering the impact that physical distance and other properties have on the change of opinions, this paper introduces an extension of the Hegselmann-Krause (KH) model: the heterogeneous interaction (HI) model. Building on the classical KH model, the HI model defines new interaction rules and an interactive radius that account for heterogeneous attributes such as physical distance, individual conformity, and authority. The experimental results show that opinion evolution in the HI model is similar to that in the classic KH model when the interactive radius is above a particular threshold value. Unlike the KH model, which leads to polarization, most agents in the HI model reach a consensus when the confidence radius equals 0.2 and the interactive radius remains within a certain range. The conclusions show that the interactive radius affects the evolution of public opinion. The HI model can explain more of the opinion evolution observed in real life and has practical significance for effectively guiding public opinion.
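As an illustration only — the abstract summarizes the HI model's interaction rules without giving them in full — the sketch below implements a minimal bounded-confidence update of the Hegselmann-Krause type in Python, with the extra assumption that agents interact only when they are also within a physical interaction radius r; the conformity and authority weights mentioned in the abstract are omitted.

```python
import random

def hi_step(opinions, positions, eps=0.2, r=0.5):
    """One synchronous bounded-confidence update: agent i averages the opinions
    of all agents j whose opinion lies within eps of its own AND whose position
    lies within the interaction radius r. Illustrative sketch only, not the
    paper's exact HI rule (conformity/authority weighting is not modeled)."""
    new = []
    for x_i, p_i in zip(opinions, positions):
        neigh = [x_j for x_j, p_j in zip(opinions, positions)
                 if abs(x_j - x_i) <= eps and abs(p_j - p_i) <= r]
        new.append(sum(neigh) / len(neigh))  # neigh always includes agent i itself
    return new

random.seed(0)
opinions = [random.random() for _ in range(100)]   # initial opinions in [0, 1]
positions = [random.random() for _ in range(100)]  # physical positions in [0, 1]
for _ in range(30):
    opinions = hi_step(opinions, positions)
print(sorted(set(round(o, 2) for o in opinions)))  # surviving opinion clusters
```

With a confidence radius of 0.2 and a sufficiently large interaction radius, dynamics of this kind typically collapse into one or a few opinion clusters, which is the consensus-versus-polarization behaviour the abstract contrasts between the HI and KH models.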
Otoendoscope combined with ablation electrodes for treatment of benign tracheal stenosis caused by granulation tissue hyperplasia after tracheotomy
Benign tracheal stenosis mainly arises from tracheotomy, tuberculosis, trauma, benign tumor, or mechanical ventilation. With the increase in the number of tracheotomies and the prolongation of patients' life spans after incision, long-term complications after tracheotomy gradually increase, among which intratracheal granulation hyperplasia is one of the more serious. The present case describes a 59-year-old male with granulation tissue hyperplasia induced by tracheotomy. He underwent tracheal resection to remove the granulation tissue and remained well at follow-up. Even though endoscopic intervention and tracheal resection are readily accessible, they are usually quite challenging. Here we summarize the details of this condition.

Introduction
Tracheal stenosis is considered difficult to treat because of the complexity of visualizing the surgical field and the narrowness of the working space for instruments. Coblation is preferred for its fast and accurate ablation and limited thermal damage; yet more information is still required to demonstrate the ability of coblation to treat airway stenosis. In our case, we report a unique technique using tracheotomy-coblation for managing tracheal stenosis caused by tracheotomy.

Patient and observation
A 59-year-old male patient who had suffered a cerebral hemorrhage nine years earlier underwent hematoma removal, decompressive craniectomy, and tracheostomy. He had a 30-year history of hypertension, regularly taking amlodipine to control blood pressure, and a 7- to 8-year history of diabetes, treated with acarbose. After the operation, he was given symptomatic and supportive treatment, such as dehydration therapy to lower intracranial pressure and anti-infective therapy. He had left hemiplegia, speech difficulty, and poor listening comprehension, was largely bedridden, and took soft food orally. Two months before presentation, the family members had difficulty suctioning sputum, and the patient's trachea felt blocked. He was generally in good condition, with no fever, no dysphagia, no nausea or vomiting, normal urine and bowel movements, and no significant recent weight changes. On the visit to our hospital, laryngoscopic examination revealed a tracheal mass, and surgical treatment was recommended. After the preoperative examination was completed, the patient was placed in the supine position. Once general anesthesia took effect, the head was tilted back, the area was routinely disinfected, and a sterile drape was laid. Blood oxygen saturation was maintained between 95% and 100%. The tracheal tube was removed together with scar tissue at the tracheotomy opening. The tracheotomy opening was expanded with curved forceps, and the otoendoscope was inserted. During the tracheal observation, a raised, spherical granulation tissue with a cobblestone-like appearance was seen on the anterior wall of the trachea (Figure 1). Curettes were used to scrape off the tissue, which had a tough texture and bled little. Electrocoagulation was then performed with alternating unipolar and bipolar electrodes (Figure 2). The granulation was ablated until the lumen was no longer obstructed and no active bleeding was observed under endoscopy. The tracheal resection was smooth, without collateral injury, and the operation was completed. The granulation tissue that was removed was sent for pathology.
The postoperative pathology report revealed fibrofatty tissue covered by squamous epithelium, fibrous tissue proliferation, acute and chronic inflammatory cell infiltration, and small blood vessel proliferation. At follow-up, the patient's sputum could be suctioned smoothly through the tracheal tube, and there was no sense of obstruction.

Discussion
Benign tracheal stenosis caused by tracheotomy is a very challenging condition. Usual treatments include endoscopic intervention and segmental tracheal resection with anastomosis; traditional surgical resection has some drawbacks, for example, the extent of the resection. Apart from avoiding anastomotic stricture, resection of the trachea must also include the abnormal stoma and the stenotic segment. Tracheal stenosis can also be managed through tracheal intervention, such as coblation, tracheal stenting, and laser therapy. Much of the literature recommends tracheal resection as the preferred modality for the management of benign post-tracheotomy tracheal stenosis with respect to long-term results. We believe that the contraindications for tracheal resection are minimal. In previous reports, no patients were refused surgery for systemic reasons; among the patients who were refused surgery, the refusal was chiefly because of glottic stenosis, where it was believed that the outcome would not be beneficial. One of the complications of tracheotomy is granulation tissue formation. Granulation tissue may block the trachea at the level of the stoma and cause difficulty in suctioning sputum. As the granulation tissue matures, it develops into fibrous tissue covered by a layer of squamous epithelium. With the maturation of fibrosis, stenosis appears along the lateral and anterior tracheal wall, narrowing the stoma zone. Tracheal stenosis may also develop at the site of the tracheal-tube cuff, where vascular injury to the submucosa of the trachea can arise when cuff pressure surpasses the perfusion pressure of the capillaries of the tracheal wall. With continued ischemia, epithelial ulceration, inflammation, and necrosis of cartilage may appear, resulting in the formation of granulation tissue. Shearing forces from the tube or the cuff may also bruise the airway. The granulation tissue in our patient may have appeared as a result of prolonged mucosal irritation and injury from repeated monthly suction catheter use. To avoid the formation of granulation tissue, methods have focused on preventing excess mechanical irritation. Many therapeutic methods have been described for patients with granulation tissue.

Conclusion
Regardless of whether the primary disease is benign or neoplastic, tracheal resection has been confirmed to be a safe procedure with a high rate of good outcomes; even when complications develop, the morbidity rate is 45%. Tracheotomy-coblation appeared favorable, with limited operative injury compared with traditional surgery, and seemed safer and more appropriate under direct visualization. It requires only one operation, which could spare the patient the exhausting process of repeated ablation and lessen the patient's concern.

Figure 1: Otoendoscopy showing a raised spherical granulation tissue on the anterior wall of the trachea with a cobblestone-like appearance. Figure 2: During the operation, the coblator, with unipolar and bipolar electrodes used alternately, removes the tracheal granulation tissue.
A photo showing the car parked across two bays. Credit:Facebook "The ACROD sticker was clear as day in the windscreen, I can't believe someone has done this to this car. "Some people are just so ignorant to what an ACROD sticker is and why people need to have that room to open their doors fully [to allow wheelchair access in and out of the car]." The Hyundai had been dented and damaged by the trolleys, which Alex said had been stacked on top of one another and rammed against the car. "They'd obviously taken a lot of time to do that for something so ridiculous," she said. A fellow shopper took this photo and alerted centre security to the incident. "I really hope the person who did it gets caught because they've damaged a person's car, and even if the person did park like that and they didn't have an ACROD sticker, you don't vandalise someone's property." A spokeswoman for Garden City shopping centre confirmed on Thursday that security were aware of the incident and had noted the car's number plate, but had not been contacted by the vehicle's owner. "Unfortunately it did occur in our car park," she said. "All the ACROD bays in the centre were all taken so that customer unfortunately had to seek an alternative. "It's sad that it was taken that far... [the vehicle] had a wheelchair facility inside the car and had the ACROD sticker." Centre management did not consider fining the disabled driver for the double-park, with the spokeswoman saying security were understanding of ACROD sticker holders who needed extra space to park if there were no ACROD bays available. The Director General of the Disability Services Commission, Dr Ron Chalmers, condemned the actions of the vandal however, said an ACROD parking permit did not allow the holder to park their vehicle across multiple standard parking bays regardless of whether Blue ACROD bays were empty or not. "It is disappointing that a member of the public has sought to inconvenience a person with a mobility restriction as a protest against their decision to park inappropriately in a shopping centre," he said. "While an ACROD parking permit does not allow the holder to park their vehicle across multiple standard parking bays, regardless of whether Blue ACROD bays are empty or not, people with mobility restrictions can become frustrated when designated blue bays are occupied by people who do not have authority to do so. " "Unfortunately the inappropriate use of ACROD parking bays significantly increases during the Christmas holiday period. People without a valid ACROD parking permit should refrain from using bays designated for people who have legitimate mobility restrictions." Dr Chalmers said in a commercial shopping precinct, the response to an inappropriately parked vehicle with an ACROD permit would depend upon the guidelines set by the centre manager and the attitude of the centre management staff. "In street parking, where local government authorities have authority for parking, the response will be dependent upon the local government parking by-laws," he said.
(Angry Patriot) – A reputable Miami prosecutor, Beranton J. Whisenant Jr., likely never thought his life was in danger. Yet the man's body just washed up on a Hollywood, Florida beach. Whisenant had been investigating DNC voter fraud. (via Dennis Michael Lynch). This husband and father of three was only 37. He leaves behind a devastated family and community. He wasn't just investigating DNC voter fraud, either. He was also in charge of a few cases of other crimes, those which the liberals love to pretend don't exist: passport and visa fraud. This man was no friend to those in the deep state, who would rather their crimes go overlooked. Many Democrats, who rely on illegal voters and even voter fraud, would have ample reason to want Beranton Whisenant out of the way. The mainstream media is already doing its part to cover up any potential wrongdoing in this case. The Miami Herald has suggested that if the death was a crime or "retaliatory," then authorities would be more aggressive. For their part, police have simply stated that Whisenant seems to have suffered some kind of head trauma, possibly a gunshot wound. They are still looking into the matter. There is no reason to think authorities are not being aggressive in their pursuit of this case. Why can't the media just let the police do their job? Why can't the media just stay out of the way in general and report the FACTS like they're supposed to? Any American death deserves a full investigation from authorities. There is no reason to dismiss the possibility that Whisenant was murdered. If he was murdered, the media is just enabling a serious criminal. We patriots prefer to support authorities in their investigation. No one can deny that the body of a prosecutor washing up on a beach, with head trauma, is a serious matter that deserves our full attention. Whisenant matters to us, and he matters to his community. Benjamin Greenberg, Acting US Attorney, said that Whisenant "was a great lawyer and wonderful colleague, and we will miss him deeply. Our thoughts are with Beranton's family and friends." Learning the circumstances of his death should be a top priority, not swept under the rug. angrypatriotmovement.com/federal-prosecutor-found-dead/
Party Size and Portfolio Payoffs: The Proportional Allocation of Ministerial Posts in Coalition Governments Over 30 years ago, Eric Browne and Mark Franklin demonstrated that parties in a coalition tend to receive portfolio payoffs in almost perfect proportionality to their seat share. Even though this result has been confirmed in several studies, few researchers have asked what the underlying mechanism is that explains why parties receive a proportional payoff. The aim of this paper is to investigate the causal mechanism linking party size and portfolio payoffs. To fulfil this aim, a small-n analysis is performed. By analysing the predictions from a statistical analysis of all post-war coalition governments in 14 Western European countries, two predicted cases are selected, the coalitions that formed after the 1976 Swedish election and the 1994 German election. In these case studies two hypotheses are evaluated: that the proportional distribution of ministerial posts is the result of a social norm, and that parties obtain payoffs according to their bargaining strength. The results give no support to the social norm hypothesis. Instead, it is suggested that proportionality serves as a bargaining convention for the actors involved, thus rendering proportional payoffs more likely.
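As a purely illustrative aside (not part of the paper's own analysis), the proportionality finding — often labelled Gamson's Law in the coalition literature — amounts to saying that a party holding a share s of the coalition's seats should receive roughly a share s of the ministerial posts. The Python sketch below shows this arithmetic for a hypothetical three-party coalition using largest-remainder rounding; the party names, seat counts, and rounding rule are invented for the example.

```python
def proportional_portfolios(seats, n_posts):
    """Allocate n_posts ministerial posts in proportion to coalition seat shares,
    using largest-remainder rounding. All inputs here are hypothetical."""
    total = sum(seats.values())
    quotas = {p: n_posts * s / total for p, s in seats.items()}
    alloc = {p: int(q) for p, q in quotas.items()}        # whole posts first
    leftover = n_posts - sum(alloc.values())
    # remaining posts go to the parties with the largest fractional remainders
    for p in sorted(quotas, key=lambda q: quotas[q] - alloc[q], reverse=True)[:leftover]:
        alloc[p] += 1
    return alloc

# Hypothetical coalition controlling 200 seats and sharing 18 cabinet posts
print(proportional_portfolios({"Party A": 120, "Party B": 60, "Party C": 20}, 18))
# -> {'Party A': 11, 'Party B': 5, 'Party C': 2}, mirroring the 60/30/10 seat split
```

In Browne and Franklin's data the fit of this simple rule is close to one-to-one, which is precisely the regularity whose underlying mechanism the paper sets out to investigate.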
CLOSE For years the only way to get marijuana was to grow it at home illegally or buy it on the black market. But today 205 million Americans live in a state where marijuana is legal for either recreational or medical use. Kristen Hwang/The Desert Sun A marijuana plant being grown for medicinal purposes. (Photo: Associated Press file photo) Story Highlights New Jersey Approves Its sixth - and final -- facility for growing medical marijuana New Jersey's Medical Marijuana Program has 13,200 registered users The Meadowlands — already set to be the home of the state's largest shopping and entertainment complex — will also be the site of the state's largest dispensary of medical marijuana. Once it opens for business, the dispensary plans to serve up to 4,000 patients a month with a variety of strains of cannabis. The Christie administration this week issued a permit to grow medical marijuana to Harmony Foundation and will consider issuing a permit to dispense marijuana after the crop is tested later this year. LEGALIZATION: What will happen if marijuana is legalized in NJ? NEW JERSEY: Legislators begin marijuana legalization effort as they look past Christie WAYNE: Schools adopt medical marijuana policy The nonprofit foundation will operate the 10,000-square-foot facility on Meadowlands Parkway in Secaucus. "After two years of designing and constructing this state-of-the-art facility, we are excited to finally put it into action," said Shaya Brodchandel, Harmony's president and CEO. The strains selected "are well suited for New Jersey medical patients' conditions and to our unique growing system," he said. The medical marijuana growing facility for Harmony Foundation, on Meadowlands Parkway in Secaucus. (Photo: courtesy of Harmony Foundation) New Jersey currently has 13,200 patients registered to purchase medical marijuana, which can prescribed for certain medical conditions only by physicians who have registered with the program. Medical marijuana in New Jersey is the most expensive in the country, according to Ken Wolski, the head of the Coalition for Medical Marijuana-New Jersey. It sells for about $500 an ounce, he said. The state Legislature has begun considering a measure to legalize recreational marijuana, which is projected to generate as much as $300 million in tax revenue. Phil Murphy, the Democratic candidate for governor, has said he favors legalization. That would make it easier to purchase marijuana and would change the environment in which dispensers of medical marijuana operate. Once the Secaucus center opens, New Jersey will have six marijuana dispensaries, which state officials call alternative treatment centers. The others are in Montclair, Egg Harbor, Woodbridge, Cranbury and Bellmawr in Camden County. Lights and other equipment at the Harmony Foundation's marijuana growing facility in Secaucus are automated. (Photo: courtesy of Harmony Foundation) Former Gov. Jon Corzine signed New Jersey's law allowing compassionate use of marijuana to treat certain medical conditions in 2010, leaving it for Gov. Chris Christie to implement. Christie, who vehemently opposes legalization of recreational marijuana, enacted some of the strictest regulations in the nation for medicinal marijuana. Wolski said he welcomed the new dispensary, but added: "We're very disappointed with the pace of the process." Approval of the sixth center, he said, "is long overdue." The law had anticipated that additional centers would be approved by the state after the first six. 
MORE: Marijuana's legalization fuels black market in other states MORE: What's the big deal with legal pot? No one knows yet MORE: California city to use pot shops to fight racial inequities MORE: States forge path through uncharted territory to legal pot The Health Department says its permitting process for new growers is modeled after the background checks for casino operators. The examination of Harmony Foundation's executives and funding sources began in December 2014. The leaders and financing have changed since then, said Donna Leusner, a spokeswoman for the Health Department, prolonging the vetting process. "The permit was issued after a comprehensive review, including several site inspections, background checks of its corporate officers and a review of its security operations and cultivation facility," she said. The medical marijuana growing facility for Harmony Foundation in Secaucus is largely automated. (Photo: courtesy of Harmony Foundation) Brodchandel, who is 30, has no previous experience in the marijuana industry, but led a company that produced products used in nuclear medicine, a highly regulated industry that prepared him for this role, said Leslie Hoffmann, a spokeswoman for Harmony. He joined the foundation in 2015. The company's automated, robotic growing system is designed to produce a consistent, high-quality product in an environment where light, temperature, humidity, water, nutrients and carbon dioxide are strictly controlled and tracked, Hoffmann said. It will produce an "extremely consistent, pure product," she said. Read or Share this story: https://njersy.co/2wdfhSA
Marie D. E. sent in this video, titled “Karen 26,” in which a woman claims to be looking for the father of the child she conceived after a one-night stand with a tourist (found at Adland): The video, it turns out, was actually produced as part of a campaign by Visit Denmark, a Danish tourism agency. The idea is, apparently, to market Denmark to male tourists with the implication that it’s easy to have anonymous, unprotected sex with attractive local women who just want to introduce you to Danish customs. I don’t know that the possibility of unplanned pregnancy would be the best tourism draw, but she does assure us that she’s not a slut and she’s not wanting anything from the father, so perhaps that will reassure potential tourists that not only can they have unprotected sex with local women, there are no real consequences to doing so. So the perception in many parts of the world of Scandinavian women as sexually liberated and promiscuous is used by a state-funded agency to promote tourism by turning female sexuality into another local attraction…with the added benefit of being free, unlike in nations known for sex tourism. Also see our posts on promoting European tourism with infidelity, sex tourism in Thailand, and female sex tourists in the Caribbean.
A report of 2 cases of green pigmentation in the primary dentition associated with cholestasis caused by sepsis. Green pigmentation of teeth is uncommon, but when it occurs it is a cause of anxiety to the child and family. It also causes the child to lose self-esteem and creates social difficulties for the family. The purpose of this paper was to present the management of 2 unrelated patients who presented with green primary teeth following sepsis-induced liver dysfunction and hyperbilirubinemia in infancy.
Impact of Combined Heat and Drought Stress on the Potential Growth Responses of the Desert Grass Artemisia sieberi alba: Relation to Biochemical and Molecular Adaptation Artemisia sieberi alba is an important plant frequently exposed to the combined effects of drought and heat stress. In the present study, we investigated the individual and combined effects of drought and heat stress on growth, photosynthesis, oxidative damage, and gene expression in A. sieberi alba. Drought and heat stress triggered oxidative damage by increasing the accumulation of hydrogen peroxide and, therefore, electrolyte leakage. The accumulation of secondary metabolites, such as phenols and flavonoids, and of proline, mannitol, inositol, and sorbitol, increased due to drought and heat stress exposure. Photosynthetic attributes, including chlorophyll synthesis, stomatal conductance, transpiration rate, photosynthetic efficiency, and chlorophyll fluorescence parameters, were drastically reduced by drought and heat stress. Relative water content declined significantly in stressed plants, as was evident from the reduced leaf water potential and water use efficiency, thereby affecting overall growth performance. Relative expression of the aquaporin (AQP), dehydrin (DHN1), late embryogenesis abundant (LEA), osmotin (OSM-34), and heat shock protein (HSP70) genes was significantly higher in stressed plants. Drought triggered the expression of AQP, DHN1, LEA, and OSM-34 more than heat did, whereas heat raised HSP70 transcript levels more. A. sieberi alba responded to drought and heat stress by initiating key physio-biochemical and molecular responses, which were distinct in plants exposed to a combination of drought and heat stress. Introduction Several reports have stated that many areas of the world will suffer from drought in the coming decades due to climatic changes. On the one hand, there is an increasing demand for water; on the other hand, seasonal climatic fluctuations, an apparent decline in available natural water, and global increases in CO2 are all occurring. These factors significantly threaten floristic diversity coverage in arid ecosystems, and their effects become more drastic due to the coexistence of heat and drought. The combined effect of these stresses alters different agronomic characteristics by influencing biochemical and physiological functions, thereby influencing plant growth, development, and yield. Drought and heat stress affect the phenological traits that have a pivotal role in the adaptation of plants to counteract adverse environmental factors. To prevent stress-induced oxidative damage, plants improve their antioxidant defense system to scavenge reactive oxygen species (ROS). The photosynthetic machinery, quantum yield efficiency (Fv/Fm), and activity of PSII decrease significantly after either heat or drought stress exposure, or both combined. High temperature and drought inhibit seed filling and seed production and change gene expression in floral organs. Combined exposure to temperature and drought stress limits growth. Results The shoot fresh weight of A. sieberi alba plants grown under normal environmental conditions (17-22 °C) significantly increased from 92.3 g plant−1 to 95.9 g plant−1 after ten days.
However, exposure of A. sieberi alba plants to abiotic stress (drought and heat) decreased the shoot fresh weight (SFW) to 79.4, 87.5, and 76.2 g plant−1 in plants stressed with drought, heat, and combined drought and heat, respectively, for five days, and the same trend was observed after ten days (Figure 2A). Ten days after stress, the SFW of A. sieberi alba plants decreased significantly to 80.3, 88.2, and 77.3 g plant−1 for drought, heat, and combined drought and heat stress, respectively. Accordingly, the shoot dry weight after ten days of stress exposure declined from 11.6 g plant−1 (control) to 9.07, 10.2, and 8.6 g plant−1 in plants stressed with drought, heat, and combined drought and heat, respectively (Figure 2B). Figure 2. Effect of drought, heat, and their interaction on (A) shoot fresh weight, (B) shoot dry weight, (C) root fresh weight, and (D) root dry weight (g plant−1) of A. sieberi alba; data are means of triplicates, error bars represent the standard error, and means marked with different letters differ significantly according to ANOVA and DMRT at p < 0.05. Plant growth under both abiotic stresses recorded the lowest values of shoot and root biomass, consequently affecting whole-plant biomass (Figure 3A,B). Plants generally allocate biomass according to their needs; however, under stress, A. sieberi alba plants showed a significant modification in their biomass allocation in terms of shoot and root ratios. Generally, biomass allocation increased with the abiotic stresses, especially heat, drought, and combined heat and drought. A marked decrease was observed in root biomass relative to the decrease in shoot biomass, reflected in increased shoot-root (S:R) ratios (Figure 3C). The highest recorded biomass allocation was 5.9 (S:R ratio, g g−1 FW). Differences between treatments in biomass allocation were assessed by two-way analysis of variance followed by Duncan's multiple range comparisons (DMRT). Figure 3. Effect of drought, heat, and their interaction on (A) plant biomass (g-FW plant−1), (B) plant dry weight (g-DW plant−1), and (C) biomass allocation (shoot-root ratio, g g−1 FW) of A. sieberi alba; data are means of triplicates, error bars represent the standard error, and means marked with different letters differ significantly according to ANOVA and DMRT at p < 0.05. A. sieberi alba plants exposed to drought and temperature stress, individually as well as combined, showed a significant decline in chlorophyll content. The reduction became more obvious as the period of stress increased from five to ten days (Figure 4). Individually, heat stress resulted in a more significant (p ≤ 0.05) decline in total chlorophyll content than drought stress, and the decline reached a maximum in plants exposed to combined drought and heat stress (Figure 4). Stomatal conductance (mmol H2O m−2 s−1) was significantly decreased by drought and temperature stress at both five and ten days after treatment (Figure 5A). Stomatal conductance in the leaves of untreated control A. sieberi alba plants was 0.13 mmol H2O m−2 s−1, and it decreased significantly to 0.08 and 0.06 in plants stressed with drought and with combined heat and drought, respectively. Drought stress proved more damaging in reducing stomatal conductance, whereas heat stress caused a nonsignificant change in stomatal conductance, which was 0.12 mmol H2O m−2 s−1 (Figure 5A). Membrane stability index (MSI) is an important measure of plant membrane behavior under heat and drought stress, and the MSI results are shown in Figure 5B. The MSI of A. sieberi alba plants decreased significantly (p < 0.05) under drought and combined drought and heat stress, but was not altered significantly (p > 0.05) under heat stress alone (Figure 5B).
The results depicting the effect of drought and temperature stress on plant water status, evaluated in terms of Ψpd, RWC, and WUE, are shown in Figure 6A-D. Stress exposure induced a significant decline in relative water content (RWC), leaf water potential (MPa), water use efficiency (WUE, kg m−3), and transpiration rate (mmol H2O cm−2 s−1), with a more apparent decline after ten days of stress. Individually, the decline was greater in drought-stressed plants than in temperature-stressed plants. The decline in RWC, WUE, and transpiration rate was significantly less after exposure to both (drought and heat) abiotic stresses (Figure 6A,C,D). Consequently, the leaf water potential decreased markedly (p < 0.05) in plants exposed to drought and to combined stress (D + H). The lowest quantum yields were recorded in plants under the combined effect of H + D for five and ten days, respectively (Figure 7A). Moreover, untreated (unstressed) plants maintained high leaf photochemical efficiency (Fv/Fm), chlorophyll content, photosynthetic rate, and PSII quantum yield. Drought, heat, and combined drought and heat significantly (p ≤ 0.05) reduced the photosynthetic rate (Pn) and the actual quantum yield of PSII (Figure 7B,C). Individually, drought stress decreased the photosynthetic rate and actual quantum yield of PSII more than heat stress. The maximal decline in photosynthetic rate and actual quantum yield of PSII (ΦPSII) was observed in H + D stressed plants (Figure 8B,C). Hydrogen peroxide accumulation was significantly induced by drought and heat stress, resulting in higher H2O2 accumulation in plants exposed to combined stress (Figure 8A). The drought- and heat-stress-induced enhancement in H2O2 resulted in a significant increase in leaf electrolyte leakage (EL) and lipid peroxidation (MDA accumulation), and also in a reduced membrane stability index (MSI, described above). MDA decreased significantly (p ≤ 0.05) in plants exposed to heat, drought, and combined stress (Figure 8B). Combined stress induced a substantial decrease in MDA accumulation; electrolyte leakage increased significantly (p ≤ 0.05) during drought, whereas EL decreased significantly (p ≤ 0.05) in A. sieberi alba plants under heat and combined H + D stress (Figure 8C). A. sieberi alba plants exposed to drought and heat stress accumulated significantly more compatible osmolytes than control plants (Figures 9A and 10). All osmotic solutes, including proline, mannitol, inositol, and sorbitol, were higher in drought-stressed plants, followed by their heat-stressed counterparts. In response to drought, heat, and combined (H + D) stress, the leaf proline content was significantly (p ≤ 0.05) increased at either five or ten days (Figure 9A). The highest increase in leaf proline was observed in drought-stressed and combined drought and heat-stressed plants. Leaf polyols (mannitol, inositol, and sorbitol) in A. sieberi alba plants significantly (p ≤ 0.05) increased under all stress conditions (H, D, and H + D) (Figure 10A-C). The highest increase in mannitol content was observed in the combined stress group, which was significantly (p ≤ 0.05) increased by seven-fold over the untreated control (Figure 10A). Drought stress induced the highest increase (p ≤ 0.05) in leaf inositol and sorbitol content, which continued from five to ten days of stress (Figure 9B,C).
The DPPH radical scavenging activity in A. sieberi alba leaves decreased under the different treatments (Figure 10B). The combined effect of heat and drought induced a decline of 40% in DPPH radical scavenging activity, with a more pronounced effect after ten days of stress treatment (Figure 9B). A. sieberi alba is an important wild medicinal plant with various valuable phytochemical constituents; therefore, an evaluation of the phytochemical composition of its leaves was carried out to assess the effect of abiotic stress. The phytochemical constituents screened included flavonoids, tannins, phenols, saponins, glycosides, alkaloids, steroids, terpenoids, soluble sugars, and sterols. In response to environmental stresses, including drought, heat, and combined drought and heat, the phytochemical composition of A. sieberi alba plants changed considerably and significantly (Table 1). Combined stress (H + D) increased various phytochemical compounds, including flavonoids, tannins, phenols, saponins, glycosides, alkaloids, steroids, and terpenoids; however, there was no change from the untreated control in the content of soluble sugars and sterols. Heat stress significantly increased the content of tannins, alkaloids, terpenoids, and steroids, whereas drought stress induced higher levels of saponins and steroids (Table 1). The relative expression of the aquaporin, DHN1, HSP70, LEA1, and OSM-34 genes was analyzed in A. sieberi alba under drought and heat stress conditions by real-time quantitative PCR (RT-qPCR) (Figure 11A-E). Expression of the aquaporin gene (AQP) was assessed by relative quantification of SsPIP1 aquaporin-1 (unit SD264077, Table 1). Drought stress and combined stress significantly induced AQP relative gene expression, as revealed by the analysis of variance (Figure 11A). Generally, the relative expression of AQP, DHN1, HSP70, LEA1, and OSM-34 was significantly increased under both stress conditions, attaining maximal values in plants exposed to combined drought and heat stress (Figure 11). Individually, expression of AQP, DHN1, LEA1, and OSM-34 was higher in drought-stressed plants, whereas HSP70 showed higher expression levels in heat-stressed plants. The dehydrin (DHN1) and LEA1 genes exhibited a significant increase under drought and combined drought and heat stress conditions, consistent with the cellular protective role of DHNs and LEA proteins during stress (Figure 11B,D). With an increase in the stress period from five to ten days, the relative expression of all genes showed a gradual increase (Figure 11).
Discussion Combined drought and heat stress was considerably more damaging to plants than either stress alone, indicating that heat waves, which are generally associated with arid periods in summer and spring, might be deleterious to A. sieberi alba. To neutralize these negative impacts, plants initiated several key mechanisms, with common reactions to individual or combined stresses. In this study we investigated responses at the physiological, biochemical, and molecular levels to drought and temperature stress, focusing on their combined effect. High temperature accelerates the depletion of soil water, probably through increased evaporation and transpiration, which was evident in this study. Furthermore, both stresses imparted serious phenotypical modifications. Heat treatment exerted a more significant impact than drought on physiological parameters such as height, chlorophyll content, and photosynthesis, reflected in the differences in responses and stress pathways. It has been reported that growth and functioning of shoot and root are reduced, resulting in considerable changes in the distribution of essential components from root to shoot. It has been shown that several plant hormones and nutrient availability regulate key physiological pathways under stress. The phenology relationship and water use are main indicators of drought stress. We found that A. sieberi alba exhibited a significant decline in morphological parameters and water use efficiency, with the effect being much more apparent in plants exposed to combined drought and heat stress. In this study, dry matter declined due to temperature and drought stress. Yield and biomass accumulation in plants depend on the number of plants, the production of dry matter, and the number and size of seeds. It has been reported earlier that high temperatures and water stress reduce yield by affecting growth through reduced light interception over a shortened life cycle. Dreesen and co-workers have also confirmed that the combined effect of high temperature and water stress on crop growth and yield is more damaging than either individual stress. Similar to our results, drought and temperature effects have been reported to include reduced plant biomass accumulation, shorter internodes, early senescence and death, and fruit discoloration.
Growth is severely affected due to alterations in physiological and metabolic pathways, for instance photosynthesis and related attributes including chlorophyll production and fluorescence, stomatal behavior, and sugar synthesis and metabolism, in addition to water relations restricting the allocation of sucrose to developing seeds, thereby affecting their size and number. In addition, understanding the influence of drought and high temperature on the functioning of related enzymes and hormones may help unravel the exact mechanisms involved. The combined effect of drought and heat stress on A. sieberi alba significantly affected RWC and leaf water potential. Reduced RWC affects cellular functioning adversely, and our results corroborate earlier findings for barley and chickpea. Extensive damage to membranes, in terms of electrolyte leakage, reduced chlorophyll content, and photosynthetic performance after exposure to combined stresses, is attributed to the substantial reduction of leaf RWC and stomatal conductivity. Drought and temperature stress increased the electrolyte leakage level, indicating membrane instability, which could be due to alterations in the lipid-protein configuration and loss of cellular functioning. In accordance with our results, earlier studies have discussed the deleterious impact of combined stress on RWC and membrane leakage in chickpea and Poa pratensis L. The damaging impacts of drought and high temperature are obvious on the functioning of photosystem II (Fv/Fm) and the maximum quantum yield. Stress-induced reduction in the electrolyte leakage indicates degradation of the D1 protein configuration and loss of cellular functioning. Negative effects of drought and heat stress may have contributed considerably to reduced photosynthetic functioning, reflected in altered physiological processes and plant metabolism. Further, drought and heat stress decreased chlorophyll concentration, probably due to disturbances in chloroplast structural integrity, uptake of magnesium, and increased chlorophyll denaturation. Increased stomatal conductivity during heat stress is considered an adaptive mechanism to improve transpiration and allow cooling, as has been reported in wheat. In this study, stomatal conductance in heat-stressed plants was comparable with the controls, reflecting that the reduction in stomatal conductance was driven mainly by the drought-stress-mediated decline in leaf water potential, and the decline intensified as the stress period was prolonged from five to ten days. Improved accumulation of secondary metabolites may have further strengthened the antioxidant potential, leading to protection of the structural and functional integrity of thylakoid membranes and chlorophyll stabilization. In this study, drought induced the accumulation of phenols more conspicuously than heat stress. Improved antioxidant potential reduces oxidative damage to membranes and proteins, hence protecting organelle functioning and whole-plant performance. Moreover, the interactive effects of heat and drought can be attributed to the improved scavenging capacity of antioxidants triggered by both stress factors. It has been reported that stress-induced oxidative injury results from the disequilibrium between generation and elimination of free radicals in the photosynthetic and respiratory pathways. It is believed that increased synthesis of metabolites (phenolics, flavonoids, anthocyanins, lignin, etc.) strengthens the non-enzymatic antioxidant system by altering peroxidation kinetics and maintaining membrane fluidity. In addition to the enzymatic components, non-enzymatic components including phenols, flavanols, ascorbic acid, glutathione, and tocopherols also contribute to the prevention of oxidative stress effects through improved ROS scavenging. Sayed et al. found that flavonoids impart photoprotection in plants against high temperatures and drought stress. Reports discussing the combined effect of drought and heat stress on the accumulation of phenols and antioxidant potential are rare. Secondary metabolites significantly affect plant interactions with biotic and abiotic components, in addition to their key roles in medical, nutritional, and cosmetic applications. To ease stress-mediated deleterious effects, plants increase the synthesis of phenolics, flavonoids, alkaloids, terpenoids, steroids, tannins, saponins, glycosides, and xanthoprotein. Additionally, phenols, xanthoprotein, and flavonoids impart photoprotection in plants against damaging growth factors like radiation. The accumulation of secondary metabolites may have reduced oxidative damage by limiting the accumulation of ROS under both individual and combined stresses. Under water- and UV-stressed conditions, flavonoid synthesis and ROS scavenging have been reported to increase. Similar to our observations, higher flavonoid, saponin, and tannin accumulation has also been reported in wheat exposed to drought and high-temperature stress. Furthermore, the combined effect of drought and high temperature triggered the plants to accumulate significant quantities of low-molecular-weight compounds, such as proline, glycine betaine, and sugar alcohols, to buffer the cellular redox potential and better withstand the stress factors through maintenance of tissue water content. Accumulated sugars in stressed plants can serve cellular functions such as providing an energy source for stress recovery, signal transduction, and osmoprotection. Heat- or water-stressed plants accumulate soluble sugars to an appreciably high level in order to generate significant osmotic potential. Proline acts as a metabolic signal that helps control mitochondrial and photosynthetic functions by maintaining the redox balance, hence imparting stress tolerance and supporting plant development. Future studies to unravel the exact regulation at the genetic and molecular levels are required. A plant's response to a combination of stress factors like drought and heat involves suppression of key processes like photosynthesis with a concomitant enhancement in the expression of defense-protein-coding genes. Among the genes examined, expression of aquaporin (AQP), DHN1, LEA, and OSM-34 was much more apparent under drought stress conditions, whereas HSP70 expression was much more apparent in heat-stressed plants. Multiple isoforms of AQPs exist in the plasmalemma and tonoplast membranes, maintaining the flow of water in and out of cells and ultimately influencing water transfer via leaves and roots.
In addition, AQPs have been identified to play key roles in the regulation of root and leaf hydraulic conductance, thereby influencing processes including phloem loading, xylem water exit, stomatal movement, and gas exchange. Recently, Wang et al. demonstrated increased drought stress tolerance in potato over-expressing the plasma membrane AQP gene StPIP1. Late embryogenesis abundant (LEA) proteins are important hydrophilic proteins with a major role in abiotic stress tolerance in plants, especially under drought. LEA proteins protect plants by acting as antioxidants, buffering hydration, stabilizing membranes and proteins, binding metal ions, and interacting with DNA and RNA. Dehydrin (DHN) proteins fundamentally control growth under abiotic stresses, and it has been reported that plants exhibiting higher expression of DHN show improved tolerance to drought. In A. sieberi alba, the drought-responsive genes including AQP, DHN, LEA, and OSM showed apparent enhancement of their expression under drought conditions compared with heat-stressed and control plants; however, HSP70 transcript levels were higher in heat-stressed plants than in drought-stressed and control plants. Plants exposed to heat stress exhibit protein dysfunction through improper folding of amino acid chains into non-native proteins, leading to unfavorable interactions and protein aggregation. In the present study, higher transcript levels of HSP70 reflect the role of this molecular chaperone in maintaining high-quality proteins in the cell and assisting in cellular signaling. Therefore, upregulation of stress-specific genes assisted A. sieberi alba in withstanding drought and heat stress, and further studies are required to unravel their exact involvement in improving tolerance to combined drought and heat stress. Materials and Methods Pot Experiments and Stress Treatments Achenes of A. sieberi alba were manually detached, and good seeds were identified under a compound microscope. After germination, seedlings were grown for three months. The pots were then divided into four groups: (a) control (normal irrigation at 17-22 °C), (b) drought stressed, (c) heat (high temperature) stressed, and (d) drought and heat stressed, and plants were analyzed at five and ten days after stress treatment. For the drought group, water application was reduced by 50%. For the high-temperature stress group, 37 °C was found to be a suitable set point based on preliminary experiments. The pots were arranged in a completely randomized block design with five replicates in a greenhouse maintained at 65% humidity and a 12/12 h light/dark regime. Growth Measurements Morphological traits of treated and untreated A. sieberi alba plants were measured. Plant height was measured with a manual scale. Three plants, including the root, were harvested and taken to the laboratory to measure plant height, leaf area, and shoot and root fresh and dry weights. Dry weight was measured after drying in an oven at 60 °C for 72 h. Relative Water Content and Leaf Water Potential (LWP) Leaf relative water content (RWC) was determined following the previously described method and calculated as RWC (%) = [(FW − DW)/(TW − DW)] × 100, where FW = fresh weight, DW = dry weight, and TW = turgid weight. Leaf water potential was measured in the last fully expanded leaf of control and stressed plants using a WP4C water potential system (Germany).
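As a minimal illustration of the RWC formula above (a sketch of ours, not part of the original protocol; the leaf weights below are invented):

```python
def relative_water_content(fw: float, dw: float, tw: float) -> float:
    """Relative water content (%) from fresh (FW), dry (DW) and turgid (TW) leaf weights,
    following RWC = (FW - DW) / (TW - DW) * 100."""
    if tw <= dw:
        raise ValueError("Turgid weight must exceed dry weight")
    return (fw - dw) / (tw - dw) * 100.0

# Example with made-up weights (grams): a well-watered leaf vs. a drought-stressed leaf.
print(round(relative_water_content(fw=0.52, dw=0.11, tw=0.58), 1))  # ~87.2 %
print(round(relative_water_content(fw=0.40, dw=0.11, tw=0.58), 1))  # ~61.7 %
```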
Gas Exchange, Chlorophyll Fluorescence Parameters, and Water Use Efficiency All photosynthetic measurements were performed on intact leaves on clear sunny days. Net photosynthetic rate (Pn), stomatal conductance (Gs), transpiration rate (Tr), and intercellular CO2 concentration (Ci) of fully expanded leaves were measured between 09:00 and 11:00 a.m. using an infrared gas analyzer system (TPS-2, USA). The CO2 concentration in the chamber was 380 ± 10 µmol mol−1, and a photosynthetic photon flux density of 800 µmol m−2 s−1 at the leaf surface was provided by a red-blue LED light source (LI-COR 6400-02). The water use efficiency (WUE) was calculated as Pn/Tr. Chlorophyll fluorescence measurements were made on fully expanded leaves with a fluorometer after dark adaptation for 30 min. Determination of Fv/Fm was based on a previously described method. Steady-state fluorescence (Fs) and maximum fluorescence (Fm) of light-adapted leaves were measured when fluorescence reached a steady-state level. The maximum quantum efficiency of PSII photochemistry in the dark (Fv/Fm) and the actual photosynthetic efficiency (ΦPSII) in light were determined. Measurement of Chlorophyll For estimation of chlorophyll pigments, a fresh leaf sample (100 mg) was extracted in acetone, and the absorbance of the supernatant was recorded at 622, 664, and 440 nm using a spectrophotometer. Membrane Stability Index (MSI) The membrane stability index was determined following a previously described method. Leaf samples (100 mg) were cut into discs and kept in test tubes containing 10 mL of double-distilled water in two sets. One set was kept at 40 °C for 30 min and the other at 100 °C in a boiling water bath for 15 min, and their respective electrical conductivities C1 and C2 were recorded. The index was calculated as MSI = [1 − (C1/C2)] × 100. Lipid Peroxidation Lipid peroxidation was measured as malondialdehyde (MDA) content using the thiobarbituric acid method. An extinction coefficient of 155 mM−1 cm−1 was used for the calculation, and MDA content was expressed as nmol g−1 FW. Electrolyte Leakage Electrolyte leakage was estimated by immersing leaf discs in deionized water in a test tube, and the initial electrical conductivity (ECa) was measured. The tubes containing the tissue were heated at 50 °C for 25 min and at 100 °C for 10 min in a water bath to measure the respective electrical conductivities (ECb) and (ECc). Electrolyte leakage was calculated as EL (%) = [(ECb − ECa)/(ECc − ECa)] × 100 (a short numerical sketch of these two indices is given below). Determination of Proline Content Proline was extracted by homogenizing one-gram leaf samples in 3% sulphosalicylic acid. Then, 2.0 mL of supernatant was mixed with 2 mL of acid ninhydrin and glacial acetic acid, and the mixtures were incubated in a water bath at 100 °C for one hour. After cooling, proline was separated using toluene, and the absorbance was measured at 520 nm. Total Antioxidant Activity Total antioxidant activity was estimated by measuring DPPH free radical scavenging activity. Plant tissue was extracted in ethanol (Merck, Darmstadt, Germany) and centrifuged at 10,000× g for 10 min. To 0.5 mL of supernatant, 3 mL of ethanol and 300 µL of 0.5 mM DPPH radical solution (Cayman Chemical Company, Michigan, USA) were added, and the absorbance was recorded at 517 nm. Ethanol plus sample served as the blank, and ethanol plus DPPH served as the control.
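A small numerical sketch of the membrane stability index and electrolyte leakage calculations described above (ours, assuming the formulas as given; the conductivity readings are invented):

```python
def membrane_stability_index(c1: float, c2: float) -> float:
    """MSI (%) from conductivities after 40 C (C1) and 100 C (C2) incubation: (1 - C1/C2) * 100."""
    return (1.0 - c1 / c2) * 100.0

def electrolyte_leakage(ec_a: float, ec_b: float, ec_c: float) -> float:
    """EL (%) from initial (ECa), 50 C (ECb) and 100 C (ECc) conductivities."""
    return (ec_b - ec_a) / (ec_c - ec_a) * 100.0

# Hypothetical conductivity readings (microsiemens per cm) for illustration only.
print(round(membrane_stability_index(c1=120.0, c2=480.0), 1))               # 75.0 %
print(round(electrolyte_leakage(ec_a=40.0, ec_b=220.0, ec_c=500.0), 1))     # ~39.1 %
```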
Determination of Mannitol, Sorbitol, and Inositol Content The sorbitol content in treated and untreated A. sieberi alba leaves was measured using a Sorbitol Colorimetric Assay Kit (BioVision, Inc., San Francisco, CA, USA); sorbitol was oxidized to fructose with the proportional development of an intense color with an absorbance maximum at 560 nm. The mannitol content was measured using a Mannitol Colorimetric Assay Kit (Sigma-Aldrich, Darmstadt, Germany), with absorbance measured at 450 nm. The inositol content in control and treated leaves was measured using a myo-Inositol Assay Kit (BioVision, Inc., San Francisco, CA, USA). Phytochemical Screening Alkaloids were detected by Mayer's test: 2 mL of Mayer's reagent was added to the extract, and the formation of a dull white precipitate revealed the presence of alkaloids. For terpenoids, the Hirshhorn reaction was followed, with the appearance of a red to purple color upon heating with trichloroacetic acid indicating the presence of terpenoids. For steroids, the Liebermann-Burchard test was followed: 1 mL of extract was mixed with 1 mL of glacial acetic acid and 1 mL of acetic anhydride, followed by the addition of two drops of concentrated sulphuric acid; the solution turning bluish green indicated the presence of steroids. For tannins, ferric chloride was added to the extract, and the formation of a dark blue or greenish black color showed the presence of tannins. Saponins were detected by the formation of copious lather after thoroughly shaking 1 mL of the extract with 5 mL of distilled water. Flavonoids were detected by the Shinoda test, with the formation of a red color after the addition of magnesium and a few drops of concentrated hydrochloric acid. For detection of phenols, a ferric chloride test was carried out: the extract was mixed with a few drops of aqueous ferric chloride (10%), and the appearance of a blue or green color indicated the presence of phenols. For detection of glycosides, the substance was mixed with a small amount of anthrone followed by the addition of one drop of concentrated sulphuric acid; after gentle warming over a water bath, the formation of a dark green color indicated the presence of glycosides. A xanthoprotein test was used for the detection of aromatic amino acids: the extract was mixed with 1 mL of concentrated nitric acid, and a white precipitate formed; the mixture was boiled, cooled, and treated with 20% sodium hydroxide, and the appearance of an orange color indicated the presence of aromatic amino acids. Gene Expression Levels The primer sequences used in qRT-PCR for the five target genes (HSP70, aquaporin, osmotin-34, LEA-1, and DHN1) are listed in Table 2, and β-actin was used as the reference gene. A total reaction volume of 20 µL was used, including 2 µL of template, 10 µL of SYBR Green Master Mix (Fermentas, Burlington, ON, Canada), 2 µL of reverse primer, 2 µL of forward primer, and sterile distilled water. PCR assays were performed using the following conditions: 95 °C for 15 min, followed by 40 cycles of 95 °C for 30 s and 60 °C for 30 s. The Ct of each sample was used to calculate ΔCt values (the β-actin Ct subtracted from the target gene Ct). Relative gene expression was determined using the 2^−ΔΔCt method.
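The relative-expression calculation can be illustrated with a short sketch (ours, not code from the study; all Ct values below are invented for illustration):

```python
def relative_expression(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative gene expression by the 2^-ddCt method.
    dCt = Ct(target) - Ct(reference); ddCt = dCt(treated) - dCt(control)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values for a stress-responsive gene (e.g., HSP70) against the beta-actin reference:
fold_change = relative_expression(ct_target_treated=24.1, ct_ref_treated=18.0,
                                  ct_target_control=26.9, ct_ref_control=18.2)
print(round(fold_change, 2))  # ~6.06-fold up-regulation under stress relative to control
```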
Statistical Analysis Data are presented as mean ± SEM (standard error of the mean) of three independent biological replicates. Statistical procedures were performed using IBM SPSS version 23.0 for Mac OS, and figures were compiled with Microsoft Excel 2016. Data were checked for outliers using SPSS and for normality using the Shapiro-Wilk test at p < 0.05 to assess whether they were parametric or nonparametric. One-way and two-way analysis of variance (ANOVA), followed by Duncan's multiple range test (DMRT) post hoc, were applied to estimate the significance of differences among treatment groups. Means followed by different letters indicate significant differences at p < 0.05. Differences in the nonparametric phytochemical screening data were assessed using the Kruskal-Wallis test followed by pairwise post hoc comparisons in SPSS. To integrate the results, a complete data set comprising all growth, physiological, biochemical, phytochemical, and gene expression parameters was subjected to multivariate analysis in SPSS. Conclusions It can be inferred from the present study that drought and heat stress drastically influenced the growth and metabolism of A. sieberi alba by reducing water uptake and use efficiency. Drought and heat stress inhibited photosynthesis and related attributes. Accumulation of osmolytes increased in stressed plants, helping to mitigate the oxidative effects of drought and heat stress on membrane structure and functioning. Differential expression of key drought- and heat-stress-responsive genes was evident, reflecting a degree of dual functioning under the combined effect of drought and heat stress; however, the interactive role of tolerance mechanisms at the biochemical and molecular levels in response to drought and heat stress is not known. Therefore, further studies are
2. Folsom and the Human Antiquity Controversy in America Folsom played a pivotal role in the development of American archaeology. Most everyone knows this. What may be less well known is why this particular site, alone among dozens of localities championed since the mid-nineteenth century, including several bison kills, finally established that humans were in the Americas by late Pleistocene times (see Meltzer 1991b). What may not be known at all is why, in the decade after the breakthrough at Folsom, the site’s investigators—Jesse Figgins and Harold Cook—were completely excluded from professional discussions of the site and North American Paleoindians. As it happens, those issues are linked in ways that reveal much about the history and context of research into human antiquity in America, and about the nature of scientific controversy and its resolution. This chapter explores those issues, but two brief comments on what this chapter is not: (1) it is not intended to be a strict narrative of the history of fieldwork at Folsom (the necessary parts of that are given in chapter 4) but, rather, aims more broadly at this and other archaeological and paleontological localities being investigated in the 1920s, to show how events and actions elsewhere set the stage and influenced the work—and the perceptions of the work—here at Folsom; (2) this chapter is also not intended to be an overview of the human antiquity controversy, although it necessarily requires a brief summary of that long and bitter dispute in order to establish the intellectual backdrop against which the research at Folsom was inevitably set and the gauge with which the evidence from this site would be measured (see also Meltzer 1983, 1991b, 1994). This chapter explores just what made the Folsom site so important, why it mattered, and what it meant for the discipline and those involved, by seeking to answer—invoking the spirit of Groucho Marx—a deceptively simple question. Who’s Buried in Grant’s Tomb?
BREAKING: Hacker Kim Dotcom Has Evidence #SethRich Was WikiLeaks Source Kim Dotcom, a famous internet entrepreneur, recently tweeted out further information on Seth Rich, the DNC staffer who was murdered in July of 2016 in Washington, D.C. Kim Dotcom now says he has evidence Seth Rich, the murdered DNC operative, is the WikiLeaks source. He’s ready to release the evidence! The tweet, posted May 19, 2017, reads: “If Congress includes #SethRich case into their Russia probe I’ll give written testimony with evidence that Seth Rich was @Wikileaks source.” The Gateway Pundit has reported numerous times on the Seth Rich story, which you can read here; we have also reported on Kim Dotcom’s affiliation with the case.
Ethical care during COVID-19 for care home residents with dementia The COVID-19 pandemic has had a devastating impact on care homes in the United Kingdom, particularly for those residents living with dementia. The impetus for this article comes from a recent review conducted by the authors. That review, a qualitative media analysis of news and academic articles published during the first few months of the outbreak, identified ethical care as a key theme warranting further investigation within the context of the crisis. To explore ethical care further, a set of salient ethical values for delivering care to care home residents living with dementia during the pandemic was derived from a synthesis of relevant ethical standards, codes and philosophical approaches. The ethical values identified were caring, non-maleficence, beneficence, procedural justice, dignity in death and dying, well-being, safety, and personhood. Using these ethical values as a framework, alongside examples from contemporaneous media and academic sources, this article discusses the delivery of ethical care to care home residents with dementia within the context of COVID-19. The analysis identifies positive examples of ethical values displayed by care home staff, care sector organisations, healthcare professionals and third sector advocacy organisations. However, concerns relating to the death rates, dignity, safety, well-being and personhood – of residents and staff – are also evident. These shortcomings are attributable to negligent government strategy, which resulted in delayed guidance, lack of resources and Personal Protective Equipment, unclear data, and inconsistent testing. Consequently, this review demonstrates the ways in which care homes are underfunded, under-resourced and undervalued.
Fusion of multimodal temporal clinical data for the retrieval of similar patient cases The temporal evolution of the patient is a key factor in providing effective healthcare. Moreover, comparing a patient case with other cases having a similar progression in time can prove valuable for medical decision making. However, the large amounts of complex temporal clinical data are not always considered by clinicians in the clinical information process. In this work, a computational framework is proposed for the comparison of multimodal temporal clinical data obtained from different patients. The purpose of the proposed framework is to use a patient's temporal evolution to retrieve the most similar profiles from large repositories, which can then be used to compare treatments, diagnoses, test results and other information. This is achieved by retrieving similar temporal patterns from the data based on a sequence similarity scheme. The similarity between patient cases is assessed by a novel fusion scheme that involves the estimation of multiple dynamic time warping distances between the temporal clinical sequences. The results obtained from its application on a reference dataset of hepatic infections demonstrated high precision even for low recall rates.
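To make the fusion idea concrete, here is a minimal sketch (our illustration, not the authors' implementation; the modality names, sequences, and weights are invented): a classic DTW distance is computed per clinical modality, and the per-modality distances are combined as a weighted sum.

```python
import math

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D numeric sequences."""
    n, m = len(a), len(b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

def fused_patient_distance(case_a, case_b, weights):
    """Fuse per-modality DTW distances into one dissimilarity score.
    case_a/case_b map modality name -> temporal sequence; weights map modality name -> weight."""
    return sum(weights[mod] * dtw_distance(case_a[mod], case_b[mod]) for mod in weights)

# Hypothetical example: two patients, each with two temporal modalities.
patient_1 = {"bilirubin": [1.0, 1.4, 2.1, 1.8], "temperature": [37.0, 38.2, 38.9, 37.5]}
patient_2 = {"bilirubin": [0.9, 1.5, 2.0], "temperature": [36.8, 38.0, 37.2]}
print(fused_patient_distance(patient_1, patient_2, {"bilirubin": 0.5, "temperature": 0.5}))
```

In a real retrieval system the raw sequences would typically be normalized per modality before comparison, and the weights could be tuned against retrieval precision rather than fixed by hand.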
We don’t live in a post-truth era, despite claims to the contrary. Facts exist as stubbornly as they did before. But we do live in a time when highly-placed people in government and media are actively undermining the idea that there is such a thing as verifiable truth by propagating untruths at dizzying speeds. You can call these falsehoods “weaponized lies,” as the psychologist Daniel J. Levitin does. Or you can call them what the late astronomer Carl Sagan might have: baloney. Written the year before he died, Sagan’s 1995 book The Demon-Haunted World: Science As a Candle in the Dark (the full text is available on the Internet Archive) contains a chapter delightfully titled “The Fine Art of Baloney Detection.” After running through a list of claims from pseudoscientists, mediums, and homeopaths, Sagan makes his diagnosis: These are all cases of proved or presumptive baloney. A deception arises, sometimes innocently but collaboratively, sometimes with cynical premeditation. Usually the victim is caught up in a powerful emotion—wonder, fear, greed, grief. Credulous acceptance of baloney can cost you money; that’s what PT Barnum meant when he said, “There’s a sucker born every minute.” But it can be much more dangerous than that, and when governments and societies lose the capacity for critical thinking, the results can be catastrophic, however sympathetic we may be to those who have bought the baloney. Sagan saw scientists as the natural first line of defense against an onslaught of nonsense. In the course of their training, scientists are equipped with a baloney-detection kit. The kit is brought out as a matter of course whenever new ideas are offered for consideration. If the new idea survives examination by the tools in our kit, we grant it warm, although tentative, acceptance. The kit, Sagan explained, consists of the fundamental principles of scientific skepticism. A dispassionate review of the evidence behind a claim, with an eye for fallacious or fraudulent arguments, is the best way to vanquish baloney, in its benign and insidious forms. Think of the kit as a checklist of challenges for yourself when evaluating new or suspect information. As with all exercises, repetition will make you stronger and better. Sagan lays out the steps: