Washington Governor Jay Inslee on Monday filed a formal request for a federal disaster declaration. If it is granted, the survivors of the massive landslide near Oso, Washington, would be eligible for federal assistance. Many of them will be counting on that since they don’t have landslide insurance.
Inslee said the search teams are continuing their efforts. But at the same time, the state is looking to the future. "We are now in the beginning phases of what you might think of as our mid-term or long-term planning process, particularly for housing for these families," he said.
Displaced families may get help in the form of short-term housing vouchers, but many in the landslide zone won't see insurance payouts for their lost property.
A Seattle-based industry group estimates that fewer than one percent of all home and business owners in Washington have landslide coverage. It’s expensive and can cost an additional $1,000 a year or more for a single-family home.
It’s suspected that most homeowners in the path of the slide did not have this type of coverage. A spokeswoman for the Washington State Insurance Commissioner said this means few people, if any, will be getting payouts.
Friday, August 24, 2012
John Lovett (Loyola New Orleans) has posted Love, Loyalty and the Louisiana Civil Code: Rules, Standards and Hybrid Discretion in a Mixed Jurisdiction (Louisiana Law Review) on SSRN. Here's the abstract:
This article examines the design of legal directives found in and surrounding the Louisiana Civil Code through the prism of the classic rules versus standards debate. The Preliminary Title portion of the article introduces the vocabulary, descriptions and justifications typically displayed in jurisprudential debates over the propriety of rules and standards. Books One, Two and Three of the article analyze the extent to which several significant legal regimes in the Louisiana Civil Code — regimes that are likely to affect individuals in moments of personal crisis, when they enter into and exit from intimate personal relationships and when their love and loyalty to one another and to other intimate associates is most severely tested — have incorporated open textured standards as a primary form of rule design, have resisted discretionary remedialism by remaining tethered to relatively crystalline rules or have produced models of hybrid discretion.
Although the author originally expected to discover that Louisiana private law had largely embraced discretionary decision making within the realm of the Civil Code, punctuated with occasional moments of discretion skepticism, just as Niall Whitty has observed occurring in Scotland, the article’s analysis reveals that Louisiana has not evolved so decisively in the direction of standard based decision making models. Indeed, in the particular areas of private law examined (family law, co-ownership, and the inter-relationship between forced heirship and undue influence claims challenging wills), the author finds that Louisiana’s private legal order has only been partially transformed by the general trend toward discretionary remedialism that scholars like Whitty have observed occurring in other legal regimes. The article concludes by pointing to a number of additional concerns that should inform further scholarship examining whether Louisiana has assembled the proper mix of rules and standards.
Steve Clowney
http://lawprofessors.typepad.com/property/2012/08/lovett-on-rules-v-standards-debate-in-louisiana-law.html
Innocence & Justice
Course Description
Students in the Innocence and Justice seminar will study the systemic causes of wrongful convictions in the context of real-life actual innocence case studies from around the country. The curriculum is culled from many different sources, including newspaper articles, documentary films, actual police reports and tapes of interrogations, case law and law reviews. Students will also have the opportunity to work to provide post-conviction relief for inmates who have been wrongly convicted and who have a credible claim of factual innocence of the charged offense(s). During the first several weekly sessions, students will be provided with an overview of criminal procedure, trial practice, and habeas corpus law. The next several sessions will be devoted to the causes of wrongful convictions, including mistaken eyewitness identifications, bogus forensic science, prosecutorial/police misconduct, and ineffective assistance of counsel. Students will also be doing a review and brainstorming of inmates' files; each student will be expected to present a short written case-brief of a number of inmates' files and a short oral presentation of pertinent information about the inmates' cases to the seminar participants, who will provide input on the strengths and weaknesses of the cases. Sessions on investigative techniques will be included throughout the semester.
Course work used for grading includes active participation in seminar, written case-briefs and an oral presentation to the Innocence and Justice Project Board of Directors at the end of the semester.
In this unique seminar, students will apply newly-gained knowledge about the Great Writ to real cases involving inmates in New Mexico prisons who have submitted information about their cases to the New Mexico Innocence and Justice Project and who have a colorable claim of factual innocence. Seminar participants will learn invaluable investigative techniques in the process of uncovering the facts that eluded the trial court that convicted the inmates. Students may also have the opportunity to work with practicing attorneys by drafting motions and pleadings.
Enrollment is limited to twelve students in their second or third year.
Last night, Washington lawyers gathered to celebrate one of the finer aspects of the legal profession--the spirit of public service.
At the annual Equal Justice Works awards dinner, held in the atrium of the Ronald Reagan Building, the organization honored three lawyers who have dedicated themselves to putting their legal skills to work for some of society's least well off.
Equal Justice Works provides two-year fellowships to young lawyers who implement public works projects in under-served communities. At last night’s dinner, David Stern, who serves as chief executive officer of the organization, announced that this year's awards dinner raised over $1.43 million.
Scott Burrill, a University of Iowa College of Law 3L, received the Exemplary Public Service Award for a Law Student for his work representing indigent clients as part of his internship at the Alaska Public Defender Agency. Burrill, who was introduced by Judge Ann Claire Williams of the U.S. Court of Appeals for the 7th Circuit, said his work for the Public Defender Agency reminded him of “why I went to law school in the first place.”
Equal Justice Works also honored Stanford Law School Dean Larry Kramer with the John R. Kramer Outstanding Law School Dean award for his leadership of Stanford’s effort to make the law school more public service oriented. Under Larry Kramer’s watch, Stanford has expanded its clinical education program, pushed law students to pursue public service after they graduate, and built the international law program to support the increasing globalization of law practice.
Judge David Tatel of the U.S. Court of Appeals for the D.C. Circuit presented Larry Kramer with the award, calling him an “outstanding” law school dean who has overseen “a dramatic explosion in public interest programs.”
The final award of the evening, and Equal Justice Works’ highest honor, was presented to D. Bruce Sewell, who received the Scales of Justice Award for his work in founding Intel Corp.’s pro bono legal program. That program has helped thousands of Intel employees put their skills to use as volunteers helping improve their communities. In September, Sewell stepped down as senior vice president and general counsel of Intel to join Apple Inc. as general counsel and senior vice president of legal and government affairs.
The Scales of Justice Award was presented by Sen. Kirsten Gillibrand of New York, herself a lawyer who left her lucrative partnership at Boies, Schiller & Flexner to go into government service.
In his remarks, Sewell took time to acknowledge the fact that despite the increasing commitment to pro bono work at both law firms and in corporate legal departments “the legal profession is one of the least diverse portions of society in the world, particularly at the senior partner and general counsel level.” According to this year’s National Law Journal diversity special section, the number of minority lawyers at some of the country’s largest law firms has remained relatively flat.
Sewell closed by challenging the lawyers in attendance to “look at ourselves and ask whether we are creating opportunities for young, diverse lawyers.”
“We have the opportunity to share the potential and passion of diverse lawyers, so that one day we may transcend the diversity of race, gender, and religion,” he said.
Issue Brief
The F-35 Joint Strike Fighter is being developed to replace most of the Cold War tactical aircraft operated by three U.S. military services and nine allies. The success of the program depends on holding down costs. However, House backers of an unneeded “alternate engine” for the single-engine F-35 are threatening to withhold money for the fighter unless their pet project is funded — a move that potentially drives up the cost of each plane in the program. In effect, the legislators are trying to hold hostage the modernization of military air fleets to assure their home states get jobs at the expense of taxpayers and our warfighters.
Supporters of the alternate engine say they want the military to buy two engines for the F-35 so there can be competitions to discipline price and performance. However, military and civilian officials in the Pentagon have been telling Congress since 2007 that the plan is likely to backfire, because it would force the government to pay for two production lines, two supply networks, and two workforces while reducing the volume of work given to either team. Most of the outside analysts who have looked at the alternate engine agree, finding that billions of dollars in up-front costs might never be recovered, and that fielding two different engines for the same plane would complicate wartime logistics.
Defense Secretary Robert Gates correctly dismisses the alternate engine as a waste of money, pointing out that the engine selected years ago to power the F-35 is performing well in tests, while its rival is not performing well at all and may not be able to meet military needs. Because the alternate engine will not reach the field until long after the engine currently being used in all F-35s, some of the potential benefits from competition have already been lost. Beyond that, the Pentagon’s deputy comptroller for programs and budgets told Congress on May 19 that many users around the world would resist buying competing engines due to the cost and complexity involved in supporting two different systems.
The latter argument seems to be borne out by experience. No new military aircraft program in recent times has entered service using competing engines, and no other subsystem on the F-35 — such as the radar or the landing gear — is being competed across the lifetime of the program, because it’s just too expensive to do things that way. The normal approach is to hold a series of competitions at the beginning of the program, pick the best candidate for each subsystem, and then use the winning items in the production plane. The primary engine for the F-35 was chosen that way, but General Electric and its allies in Congress didn’t like the outcome so they have been agitating ever since for the subsidies they need to stay engaged.
For Secretary Gates, the alternate engine controversy has become a test of whether the Pentagon can stop wasting tax dollars. He has recommended to President Obama that the entire defense authorization bill should be vetoed if it contains money for the alternate engine, because the military needs to spend the billions of dollars involved on more urgent items. Some will argue the budget deficit is so huge that one more superfluous program won’t make a big difference, but that kind of reasoning partially explains why we have a big deficit in the first place. If Congress can’t bring itself to cancel military programs even when the Pentagon says they are unneeded, how will we ever get out of the fiscal mess we are in?
Last week I had the opportunity to give some constructive feedback to a vendor. I met with my sales rep, as well as two designers/developers of the database interface, via telephone and Adobe Connect. Right from the start they made me a presenter and I was able to walk them through my thoughts and give suggestions for improvements. At the end of the one-hour conference call, I felt like I had given them good information, and they told me they appreciated the feedback. They even asked if they could contact me later in the summer to send prototypes. The opportunity to provide feedback to a vendor (and have them listen!) does not come by very often, so I did not want to waste my time or theirs. If you ever have the chance to provide feedback of any sort, here are my five suggestions to make the process valuable for all involved.
1. Come prepared
Even though I am quite comfortable with this particular database, I spent an hour the morning before our call to go through the database and make clear notes about what I wanted to show them. I used Evernote to outline my thoughts, just as if I was going to give a presentation to a class. To be honest though, I was actually more prepared for the meeting than I am with most classes I teach. The vendor reps had multiple questions for me as I was taking them through my demonstration (which I appreciated!) so it was useful to have the outline to get back on track after answering them.
2. Set the stage
When providing feedback, make sure you set the stage to the reps about who your users are. This particular database vendor has both academic and corporate clients, so it was important for me to tell them that my users are predominantly undergraduates, 18-22, who only want to use Google, and require an answer in 2 minutes or less. I had to let them know that while I was providing the feedback, I was doing so on behalf of my users. I know how valuable the information in this particular database is, but my students have a hard time getting to it, and that likely shows in the lower-than-they-should-be usage statistics. I also hinted at similar products, which my students find easier to use, that the vendor should check out for a comparison with their own product.
3. Don’t gripe
This is a big one. Don’t whine and gripe, period. Doing so will likely result in you losing credibility and the vendors no longer listening. If you have a gripe, think it over and rephrase it as a reasonable suggestion based on your experiences working with your users. Again, make suggestions on behalf of the users, not because you think the database interface was designed by a flock of turkeys.
4. Put yourself in their shoes
Understand that database design — both the back-end indexing and the front-end interface — can be extremely complicated. Even though you think that the team of turkeys who designed the database did so overnight, in actuality considerable thought likely went into making it work. Sometimes, in an effort to appease everyone (i.e., paying customers), vendors throw every single limiter and feature possible at the users, only to muddle the interface and make the resource more difficult to use. In my conversation with this vendor last week, I did my best to let them know that I understood that they had an enormous amount of information to present to diverse user groups. I also did not pretend to know what was technically possible with altering the database interface, nor did I assume that all of my suggestions would be appreciated by all of their customers. While I can be an expert in understanding how my community uses a particular resource, I can’t claim to be an all-knowing expert on how everyone should use a database, or on how a database should be designed to meet every user’s needs. Vendors who care about these issues, such as the one I talked to, have an enormous job, and I’m not sure I’d really want to be in their shoes.
5. Follow up with additional information
Shortly after our meeting, I emailed my Evernote outline and notes to my vendor rep, as well as links to some videos I had made on using the database. The notes show my thought process as I demonstrated how I use the database, while the videos show how I teach my community to use the database. Both can be used, along with their own notes (and potentially the Adobe Connect recording, if they recorded) for them to follow up with questions. The vendor also said they may be in touch this summer with additional questions and perhaps some prototypes, so it appears that the opportunity for feedback will continue.
By little, I mean narrowly focused, not unimportant!
I've been frustrated lately by reading assignments that are way too difficult for students. I bet they have been even more frustrated than I have! This is particularly difficult when content area teachers are doing their best to bring in authentic reading material such as current news in the field. The problem is that such material is written at a level that precludes independent reading. Instead, students end up needing a great deal of support. When teachers assign reading material, sometimes it is difficult to know until after students have done the reading whether the material is too easy or too advanced. Here are two tools that can make evaluating the reading level of text a little easier.

For a rough estimate, Microsoft Word can report readability statistics as part of the grammar and spelling check. You turn this feature on by clicking on the MS Office button in the upper left-hand corner, clicking on Word Options, choosing "Proofing", and checking the box that says "show readability statistics." Copy and paste or type about 100 words of the text into MS Word and save. Then, when you are in the document, go to the reviewing toolbar and run the spelling and grammar check. At the very end of the check, you get a window with a Flesch-Kincaid reading level. It's not perfect, and I personally think it skews a little low, particularly if the text has a lot of dialogue. However, it is a great quick and easy check that a reading selection isn't way off base.

The second tool is a free program you can download called Reading Rater. It's nice to sometimes do a cross check between the two, but I have found them to be consistent. Hope these are useful for you!
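If you'd rather compute the same rough estimate yourself, the Flesch-Kincaid grade-level formula is public. Here is a minimal Python sketch; the vowel-group syllable counter is a crude heuristic of my own, not the one Word uses, so expect the numbers to differ a little:

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    # Rough syllable estimate: runs of consecutive vowels in each word.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = "When teachers assign reading material, it can be hard to judge its level."
print(round(flesch_kincaid_grade(sample), 1))  # prints the estimated grade level
```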
Our Passage to Freedom Program is just one of the wonderful programs that allows Little Shelter to make a difference, not only in our community but across America.
Throughout the United States, animal overpopulation continues to be a problem. Puppy mills continue to breed animals for profit. In addition, not all pet owners spay or neuter their pets, which often results in unwanted litters and homeless animals.
We believe that every animal deserves the opportunity to live a long and happy life. Little Shelter assists other shelters by taking in some of their animals, thereby allowing them to rescue more homeless and unwanted animals.
Little Shelter makes sure that the animals’ basic needs are met. Each animal receives medical and behavioral evaluations, a warm, safe place to stay, and plenty of nutritious food and water. But we go beyond the basics. The animals receive personalized attention. They are walked frequently throughout the day and have time to play with our staff, volunteers and other animals.
Little Shelter is proud to be able to offer what we believe is an invaluable service, and we are confident that our efforts make a tangible difference in the lives of the animals we serve.
Depending on your genetic make-up, your body may store fat in your tummy, buns or thighs. For most people, all three areas can benefit from a little shaping up, as most clothes tend to highlight at least one, if not all three, features. To tone your abs, glutes and legs, aim for a mix of cardio to burn calories with strength-training moves to target specific muscle groups. Working on your core strength will not only help shape up your abs, but will also condition your body to gain more endurance and resilience during your workouts.
Burn Body Fat with Cardio
To melt away extra inches, 60 minutes of daily aerobic exercise will get your heart rate up and kickstart your body into working off your fat reserves. Certain cardio moves will target your three problem areas while also burning calories. Running will help target your legs and glutes, while running on stairs will tone your rear end quickly. Indoor or outdoor cycling will also tone legs and butt as you pedal your way to a leaner physique. Swimming is an effective total-body workout that will engage your core as you also kick your legs. Unfortunately, most cardio doesn't specifically target the abs, but you can keep your abs engaged as you exercise to make sure these muscles receive part of the benefit.
Crunch Your Way to Tight Abs
Once your cardio routine is done, start targeting specific areas. Abs benefit from repetitive moves, though traditional crunches can be boring and, if done incorrectly, tough on your back. Try crunches on a foam wedge or stability ball to keep the lower back from overextending. Reverse crunches ask you to raise your legs instead of your shoulders, helping to keep the back stable. Try a modified crunch where you rest on your sit bones, with legs and arms raised. Slowly lower legs and arms without touching the floor, then slowly raise them again. The common Pilates move, the Plank, will tone your abs without moving a muscle. Start by lying flat on a mat. Slowly raise yourself with your arms, as if beginning a pushup. Without moving, engage your core to keep your body straight, still and suspended as you hover on outstretched palms and toes.
Work Your Butt Off
Just 20 minutes can be enough time to squeeze in a few moves to tone your glutes. Any form of lunges will target your rear end. Do the moves while holding small dumbbells to add to the intensity and the results. For basic lunges, start with your feet together. Step your left leg forward so your feet are 2 to 3 feet apart. Keeping your back upright, lower until your knees are bent at 90-degree angles. Use your right leg to stand back up while you lift your left leg in front of you. Balance for a breath, then lower your leg and repeat.
Tone Your Legs in No Time
Moves to target your legs will focus on engaging those muscles to help with balance and stability. Many yoga moves require strong legs to maintain poses, and will be effective for toning calves and thighs. In addition to your cardio, try squats or lunges to exercise legs and glutes. At the gym, spend part of your routine on a stair climber, and make sure the resistance gets your calves burning.
Research & Markets added the Primary Research Group’s Survey of Academic Libraries, 2012-13 Edition to its offerings as of March 29. The report contains data from 110 American academic libraries.
When it comes to spending, libraries feel they’re not falling behind the rest of their schools: almost 69 percent expect their resource allocation to keep pace with other departments. However, that isn’t necessarily enough to go around: some 60 percent of colleges in the sample with more than 10,000 students say their capital budget has declined over the past two years, and more than 63 percent of libraries in the sample say that salaries and benefits for their librarians have declined in real terms over the past year.
(The Association of Research Libraries Salary Survey, on the other hand, found that academic librarians’ salaries in 2010-2011 increased 1.5 percent in the United States and two percent in Canada. As LJ reported, while that’s the smallest percentage increase since 2005, it is still greater than consumer price index increases during the same time period.)
Materials spending has grown slightly, by less than 2 percent, the Primary Research Group survey found, but unsurprisingly, the real growth category is technology: a quarter of libraries in the sample have bought ereaders, iPads, or other devices on which patrons can read ebooks, and community colleges have spent a mean of almost $45,000 on new computers or workstations for library instructional technology centers.
Other topics addressed in the complete report include digitization of special collections, conference attendance and library staff training, views of open access, use of cloud computing and inventory tracking technologies and more.
The Transparency Bill – on which I seem to have spent most of my waking hours, excluding the few days with our grandchildren over Christmas – has had some very positive results for the House of Lords.
Setting aside the particular areas of agreement and disagreement, the reputation of the House has been improved in four ways.
First, a large number of charities, campaigning and pressure groups have become much more aware of the significance of Parliament, and of the Lords itself, in our role as scrutineers of Government proposals. That must be good.
Secondly, they have worked together to a much greater extent than previously in ensuring we were well briefed. The independent Commission led by Lord Harries of Pentregarth was especially effective in this respect, but so too were the Charities Aid Fund, Bond, AVECO, and individual organisations like the Royal British Legion, OXFAM and Friends of the Earth. That too was very helpful.
Thirdly, the result was a whole package of sensible amendments on which we worked together, with support from various parts of the House. A number of these were first tabled by me at the Committee stage, but this week we shared responsibility for leading on them: Lord Harries made the first case and others of us followed on, while on some I took that role. We therefore had signatories from the Conservative and Liberal Democrat benches as well as from Crossbenchers. We also had Labour supporters when we voted, and carried the day as a result.
Fourthly, the combined effect of the above was that we secured some really useful improvements and clarification from Ministers. In particular, my colleagues Lord Wallace of Tankerness and Lord Wallace of Saltaire (no relation!) are to be commended not only for listening so carefully, but for seeking to meet the many concerns expressed by so many organisations – and expressed by us on their behalf during the Committee stage – before the Christmas Recess.
My only disappointment is that a cross-party attempt to exempt charities from the provisions of this Bill altogether was stillborn when the Opposition indicated it would back the Government in voting to keep charities inside the regulations. After all the complaints about the inadvertent effect the Bill could have on charities, I was perplexed by their decision.
We still have Third Reading on Tuesday, and I expect further improvements to be made at that point in relation to the operation of constituency spending limits. It is only to be hoped that the Prime Minister does not then wish to instruct colleagues in the Commons to vote against any of the amendments sent to them by the Lords. All are carefully considered and should be accepted. All in all, the end result will be a MUCH better Bill than the one which came to us from the other end of the building… and a very healthy reminder of the value of the House when it truly represents opinion outside.
Many labs are measuring degranulation of T cells by cell-surface expression of CD107 (see Betts et al., J Immunol Methods 281: 65 (2003), Link to PubMed abstract), and it's often desirable to simultaneously detect intracellular cytokines, such as IFNg. Unfortunately, optimization of both of these readouts in the same stimulation is not trivial. I suggest the following tips:
1. Stimulate cells in the presence of 5 ug/mL EACH of brefeldin A and monensin. Monensin helps maximize the CD107 readout, whereas brefeldin A helps maximize the IFNg readout. To achieve these final concentrations without adding an excessive amount of solvent, dissolve monensin (Sigma #5273) at 5 mg/mL in methanol, then dilute 1:10 in PBS on the day of use, followed by a 1:100 dilution into the culture medium. Ditto with brefeldin A, using DMSO as the initial solvent (or use the FastImmune brefeldin A from BD [Link to product description], which comes as a 5 mg/mL stock in DMSO). (A quick check of this dilution arithmetic appears after these steps.)
3. Add the antibody, at 10 uL per 200 uL of cell stimulation culture, at the beginning of the activation period, and activate for 5-6 hours. Continue with processing for surface and intracellular staining as usual.
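Since getting the final concentrations right matters here, this is the quick arithmetic check of the two-step dilution from step 1 mentioned above (illustrative only; the numbers come straight from the text):

```python
# Monensin: 5 mg/mL stock in methanol, diluted 1:10 in PBS,
# then 1:100 into the culture medium.
stock_ug_per_ml = 5000.0                     # 5 mg/mL expressed in ug/mL
working_ug_per_ml = stock_ug_per_ml / 10     # 1:10 in PBS -> 500 ug/mL
final_ug_per_ml = working_ug_per_ml / 100    # 1:100 into culture -> 5 ug/mL
print(final_ug_per_ml)  # 5.0, matching the 5 ug/mL target (same math for brefeldin A)
```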
Like any parent, I am always concerned that I am doing all I can to keep my kids healthy. I try to hit all of the food groups when I cook and make sure they brush after. I kiss boo boos and take them to the doctor when I am told and make sure they get enough sleep at night. We have been blessed with pretty healthy kids and have not had any major illnesses with either of them. The one thing I do have a problem with is getting them to take their vitamins. Caitlin, who is almost 15, thinks she is “too old” to take vitamins, and Henry cannot swallow pills. With Henry’s autism, we have always had an issue with swallowing. At the age of almost 12, we still find ourselves reminding him to take small bites and to cut his grapes before he eats them. As far as taking medicine, vitamins or anything similar, we struggle big time. When I heard about alternaVites, a multivitamin in powdered form, I could not wait to try them.
How To Keep Your Kids Healthy
Since we cannot get all the vitamins and minerals we need from our food, we need to take supplements. We thought we could get Henry to take gummy vitamins, but they tasted awful and he spit them out. I tried one and I do not blame him. It was gross! It is hard to know how to keep your kids healthy when most products do not work for you. With alternaVites, you have one small packet that has a flavored powder that is similar to a Pixie Stix. They come in Strawberry Bubblegum and Raspberry Cotton Candy (Henry’s favorite). What I love about this product is that if you cannot coax your child to take the alternaVites as is, you can mix it in yogurt, applesauce or pudding.
I am not surprised that alternaVites was awarded Best Product of 2012 by Vitamin Retailer. This product is endorsed by doctors, pediatricians and dieticians nationwide and even comes in an adult form! Each packet is free of additives, coloring and flavoring and is certified Kosher. When compared to traditional vitamin pills or tablets, alternaVites provide the same amount or more of the recommended daily allowance of minerals, vitamins and nutrients. If you have a child, or an adult in your home, who has difficulty swallowing, these are perfect! What a relief to have finally found a vitamin that Henry can take. Now I can rest easy knowing he is getting the vitamins and minerals he needs each day. You can purchase alternaVites from their website for $15.95 per box of 30 packets, as well as on Amazon.
What drives women mad? What makes men burn with confidence? What leaves both satisfied in bed? Views differ, but the most popular reason cited is the size and power of the man's erection. The health sector has produced a remedy it claims will satisfy every couple, since a number of men appear so bothered by their average-sized penises that it overtly affects their sex lives and self-esteem.
Since its start in 2000, VigRX has developed into the more effective VigRX Plus, which many men use to satisfy their egos and sexual urges. With these powerful penis-enhancement tablets, dysfunctional erection, premature ejaculation, and psychological baggage are no longer a difficulty. Though a whole lot of penis enlargement products have been introduced to the market, the list is dominated by VigRX Plus. On its website, vigrxplus.com, testimonials from its users attest to the effectiveness of VigRX Plus. They are happy with the timely climax, intense orgasms, more substantial erections, and improved sexual stamina that VigRX Plus brings them.

Albion Medical, the manufacturer of VigRX Plus, attributes its success to an improved formulation. Every VigRX Plus capsule is a blend of aphrodisiacs and ancient herbs from South America, China, and Europe, scientifically studied and designed to produce ideal results and satisfaction.
One component of VigRX Plus is Epimedium (locally called Horny Goat Weed), which originally came from China. This leaf extract promises to boost libido because its prime ingredient, icariin, enhances erection. Epimedium also increases blood flow through the penis, heightening sexual sensation. According to vigrx.com, Epimedium works by directly affecting testosterone, which enhances sexual desire and energy.

Also from China, Cuscuta seed extract reduces the death of sperm and helps treat premature ejaculation. This seed extract has fertility functions that work in both male and female bodies.

Like Cuscuta, Ginkgo Biloba improves blood flow, thus enhancing erections. Because its components make way for improved circulation and oxygenation, Ginkgo Biloba also treats other potential problems in the body, including impotence among men.

Oriental Red Ginseng has been used as an aphrodisiac, supplement, and Chinese medicine since ancient times. Ginseng is thought to replenish weak bodies and increase energy. While energizing the body, Ginseng's potent ingredient ginsenoside modifies the flow of blood to the penis and the brain, thus preventing premature ejaculation and possible impotence.

Like the rest of the ingredients, Saw Palmetto acts as a powerful aphrodisiac that produces multiple benefits for a man's sexual performance and health. Saw Palmetto treats urinary infections and enlarged glands while improving blood circulation and hormonal balance.
In line with VigRX Plus' aim to remedy erectile dysfunction and insufficient sexual desire, Muira Pauma bark extract is added to the pill. According to one study, 60% of men enhanced their strength after using this bark extract. Future studies are expected to reveal other benefits that Muira Pauma may deliver, though its full potential has not yet been discovered.

What other aphrodisiac is capable of creating sensual desires that later turn into increased libido? Catuaba bark extract is the most renowned aphrodisiac in Brazil; it addresses impotence and enhances libido. Beyond its sexual capabilities, Catuaba also enlivens tired bodies and stimulates a man's nervous system.

Rather than just stimulating every man's libido, VigRX Plus features a unique component that directly cares for the heart. Hawthorn berry helps boost the flow of oxygen and blood to a man's heart and brain, thereby decreasing the chances of high blood pressure and heart irregularities.

The mixture of the aforementioned ingredients certainly makes VigRX Plus a remarkable product. What makes VigRX Plus more exceptional is its added formula of three other effective ingredients that set it apart from all the other penis enlarger supplements. Bioperine, Damiana, and Tribulus make VigRX Plus that much better.

Tribulus Terrestris strengthens and stimulates erections and boosts libido, while enhancing physical energy. Once an ancient European sexual medicine, Tribulus is now used in modern Europe and North America to treat sexual dysfunction and other general body weaknesses.

Damiana, a trusted and famous herbal aphrodisiac since the time of the Mayans, has been added to further improve VigRX Plus. Men will notice improved sexual stamina, longer erections, and more sensual orgasms. With all these results, a VigRX Plus user will truly be pleased with his lovemaking experience.

To hasten the body's absorption of the rest of the components, VigRX Plus is packed with the winning ingredient Bioperine. Research in the United States shows that combining Bioperine with other nutrients produces a 30% increase in absorption speed. This means consumers may experience the effectiveness of VigRX Plus more rapidly. As the only product of its kind with Bioperine, nothing can match the effectiveness of VigRX Plus.

VigRX Plus is a holistic enhancement product that attends to the physical, emotional, and sexual needs of men. The people behind VigRX Plus work hard to please their clients. Additionally, since all the components are natural herbs, there is not a single side effect to upset any user.
CAT Scan: Computed Tomography (CT)

A CT scan is a special type of x-ray, carried out by a radiographer, that takes pictures of cross sections or slices of organs and structures in the body. The scans or slices, when put together, form a 3-D picture of the body. A CT scan offers different views of different tissue types, including liver, pancreas, bones, soft tissue and blood vessels. CT scans are commonly performed on the head, chest and abdomen and involve exposure to radiation in the form of x-rays; however, this is kept to a minimum. The patient lies on a couch which slides through a narrow doughnut-shaped scanner as the images are obtained.
Benefits
A CT scan offers a more detailed image than that of a plain x-ray and allows quick, accurate diagnosis of a number of medical and surgical conditions.
The Procedure
Patients are encouraged not to eat anything for up to four hours prior to the scan; however, they can drink water. Strenuous exercise and caffeine should be avoided on the day of the scan, and it is advisable to arrive in plenty of time in order to allow the heart to rest.
Most scans take approximately 30 minutes but occasionally a scan of the abdomen can take up to one hour. The procedure is painless.
During the scan patients are asked to lie very still on the CT table and are sometimes attached to an ECG monitor that shows the heart rate. For certain scans patients will be given an injection or drink of contrast agent (a radio-opaque dye) which allows the radiologist to see parts of the body more clearly. Scans of the chest, abdomen or pelvis requiring the injection of contrast will take longer than non-contrast scans.
If a patient is pregnant or there is a possibility of pregnancy, the radiographer should be informed prior to the scan. If another test with no ionising radiation can be performed, then this would be preferable.

Patients should continue to take their current medication unless they are advised otherwise, and diabetics should inform the Radiology Department, prior to a scan, which medication they are taking.
It is helpful if patients bring any previous x-rays with them.
Possible Side Effects
The injection can make some patients feel hot and give them a strange taste in the back of the throat. This is quite normal and quickly passes. Occasionally other side effects such as feeling sick, skin reactions and, very rarely, anaphylaxis can occur. For this reason patients are asked to stay in the department for 15 minutes after their scan for observation. If delayed reactions develop after leaving the hospital, patients should seek medical attention.
Int. J. Environ. Res. Public Health 2013, 10(8), 3363–3383; doi:10.3390/ijerph10083363

Abstract: The microbiological quality of water from a wastewater treatment plant that uses sodium hypochlorite as a disinfectant was assessed. Mesophilic aerobic bacteria were not removed efficiently. This fact allowed for the isolation of several bacterial strains from the effluents. Molecular identification indicated that the strains were related to Aeromonas hydrophila, Escherichia coli (three strains), Enterobacter cloacae, Kluyvera cryocrescens (three strains), Kluyvera intermedia, Citrobacter freundii (two strains), Bacillus sp. and Enterobacter sp. The first five strains, which were isolated from the non-chlorinated effluent, were used to test resistance to chlorine disinfection using three sets of variables: disinfectant concentration (8, 20 and 30 mg·L⁻¹), contact time (0, 15 and 30 min) and water temperature (20, 25 and 30 °C). The results demonstrated that the strains have independent responses to experimental conditions and that the most efficient treatment was an 8 mg·L⁻¹ dose of disinfectant at a temperature of 20 °C for 30 min. The other eight strains, which were isolated from the chlorinated effluent, were used to analyze inactivation kinetics using the disinfectant at a dose of 15 mg·L⁻¹ with various retention times (0, 10, 20, 30, 60 and 90 min). The results indicated that during the inactivation process, there was no relationship between removal percentage and retention time and that the strains have no common response to the treatments.

1. Introduction
Reclaimed water is primarily used for agriculture and recreational activities in developing countries that have limited water supplies [1,2]. Wastewater is usually treated in activated sludge systems, which allow for the removal of high organic loads but results in the ineffective elimination of pathogens [3]. For this reason, reclaimed water may transmit human diseases and poses an environmental risk [2,4]. In wastewater treatment plants (WWTPs), secondary effluents are commonly disinfected using chemical agents, such as chlorine and its derivatives, because of their biocidal effect [5].
Sodium hypochlorite (NaClO) is a widely used disinfectant due to its strong oxidizing capacity. When it comes into contact with water, this molecule produces both HClO (hypochlorous acid, the more active fraction of chlorine) and ClO⁻ (hypochlorite ion). These fractions constitute the free available chlorine [6]. NaClO affects the plasmatic membranes of bacterial cells and disables enzymatic active sites. NaClO also diminishes the biological functions of proteins, and it produces deleterious effects on DNA. Because HClO predominates, these effects are potentiated at low pH values. This is attributed to a higher penetration of the disinfectant through the bacterial cell envelope [7,8].
This type of chemical disinfection is not always effective against pathogenic bacteria because the concentration of residual chlorine needed to inactivate each type of microbe is specific [1]. Environmental and physicochemical factors must also be considered during inactivation because they affect the efficacy of the disinfectant. Thus, it seems difficult to establish common conditions that will satisfactorily inactivate all species of microorganisms, especially for pathogens that have developed resistance to disinfectants [9,10].
The mechanism by which bacteria acquire resistance to chlorine and its derivatives is not well understood. It is known that environmental conditions (e.g., temperature) can diminish resistance to stress factors such as chlorine [11]. The term stressome describes the phenomenon of indirect resistance that occurs when additive environmental and stress factors cause the expression of genes that increase bacterial resistance [12]. Additionally, suspended solid particles and organic matter can provide protection to microorganisms by generating a demand for residual chlorine, which decreases the availability of chlorine and weakens the disinfection process. Microbial aggregation is another factor that confers resistance to chlorine disinfection [13,14].
Several waterborne diseases are caused by opportunistic and pathogenic bacteria that are found at lower levels than the traditional indicators of water quality [15,16]. The methods for the detection of these microorganisms are complex, and some species show greater resistance to high doses of disinfectants [17]. Consequently, the simple presence/absence method that is traditionally used to indicate treated wastewater quality does not guarantee the presence or absence of opportunistic and pathogenic bacteria [3]. New strategies are currently being developed to decrease the presence of pathogenic microorganisms in secondary effluents. To this end, it is important to assess the effects that different doses of disinfectants have on microbes, the retention times the microbes are exposed to, and the temperature of the milieu [4,10,18,19,20].
The objectives of this research were: (1) to assess the microbiological quality of water by counting mesophilic aerobic bacteria at different points of the WWTP; (2) to isolate and identify bacterial strains from the non-chlorinated and chlorinated effluents; (3) to evaluate the resistance of bacteria, isolated from the non-chlorinated effluent, to NaClO disinfection. The experimental conditions included: exposure to different doses of NaClO, to different contact times with the disinfectant and at different temperatures; and (4) to assess the response of the bacterial strains, isolated from the chlorinated effluent, before NaClO treatment by investigating the kinetics of inactivation at a single common dose using various contact times.
2. Materials and Methods

2.1. Microbiological Quality

2.1.1. Sampling
Water samples were obtained from the WWTP at the Instituto Tecnológico de Estudios Superiores de Monterrey in Hidalgo, Mexico, in October 2009 and March 2010. This plant treats municipal wastewater using a conventional activated sludge process with extended aeration, and the tertiary treatment is chemical disinfection using NaClO (11%) by a dripping process. The disinfectant dose used in the WWTP is approximately 15 mg·L⁻¹; this dose guarantees a residual chlorine concentration of 0.5 mg·L⁻¹, as recommended by international water treatment regulations [21]. The samples were collected as follows: (1) at the influent; (2) at the discharge point of the secondary, non-chlorinated, effluent; and (3) at the discharge point of the chlorinated effluent. Temperature and pH parameters were measured in situ using a multiparameter water quality meter (HI 8014, Hanna Instruments, Padova, Italy). All procedures were performed according to the protocols described in the Standard Methods for the Examination of Water and Wastewater [22].

2.1.2. Mesophilic Aerobic Bacterial Counts
The microbiological quality of water from the WWTP was determined by calculating the mesophilic aerobic bacteria removal percentage at the three points of sampling. The quantity of colony-forming units (CFUs) was assessed using the 10-fold serial dilution method. Each dilution was plated in duplicate on inverted standard count agar (Bioxon, Queretaro, Mexico) and incubated at 37 °C for 24 h. The results of the quantifications are reported as the log₁₀(CFU·100 mL⁻¹) and as percentages.
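As a reading aid (not part of the original methods), the removal percentages reported later in Section 3.1.2 follow directly from counts expressed on this log scale; a minimal Python sketch:

```python
def removal_percentage(log_influent: float, log_effluent: float) -> float:
    """Percent of bacteria removed, with both counts given as
    log10(CFU per 100 mL)."""
    surviving_fraction = 10 ** (log_effluent - log_influent)
    return (1.0 - surviving_fraction) * 100.0

# First sampling, values quoted in Section 3.1.2:
print(round(removal_percentage(8.9, 6.4), 2))  # chlorinated effluent: 99.68
print(round(removal_percentage(8.9, 5.8), 2))  # non-chlorinated: 99.92 (paper: 99.91, log rounding)
```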
2.2. Isolation and Identification of Bacterial Strains

2.2.1. Isolation of Bacterial Strains

The plates of mesophilic aerobic bacteria were used to isolate bacterial strains randomly. Only colonies approximately 1 mm in diameter that were completely separated from each other were collected and cultured again. The resulting strains included five strains from the non-chlorinated effluent and eight strains from the chlorinated effluent.
2.2.2. Preparation of Bacterial Suspensions
A suspension was prepared for each of the thirteen strains. These cultures were inoculated in triplicate into flasks containing 200 mL of nutritional culture medium (Bioxon). The cultures were then incubated at 37 °C until the suspension was standardized to 0.5 McFarland (1.5 × 10⁸ CFU·mL⁻¹), as reported by Cavalieri [23]. This reading was the initial cell density for each assay. Cell density values were obtained at 460 nm using a Genesys 10 UV-visible spectrophotometer (Thermo Scientific, West Palm Beach, FL, USA).

2.2.3. Molecular Identification of Bacterial Strains
Gene amplification, sequencing and molecular identification of the thirteen bacterial strains were performed at the Universidad de Santiago de Compostela in Spain. Frozen strains were transported in media containing glycerol (20%). In the laboratory, the strains were reactivated in brain-heart infusion broth (Difco, Franklin Lakes, NJ, USA) at room temperature for 24 h. The strains were then purified and cultured on plate count agar (Liofilchem, Via Scozia, Italy) to evaluate the growth of viable bacteria. To ensure proper DNA extraction, tubes with enriched cultures were grown in duplicate in brain-heart infusion agar at 30 °C for 48 h. The extraction and purification of DNA was performed using a Qiagen extraction kit (Hilden, Germany). DNA was quantified using a Qubit fluorometer (Invitrogen, Carlsbad, CA, USA). The amplification of DNA fragments was performed using a MyCycler Thermocycler (BioRad, Hercules, CA, USA) and the universal primer pair for the 16S rRNA gene: p8FPL/p806R [24]. DNA amplicons were tested by gel electrophoresis using the SYBR safe marker (BioRad). The sequences were obtained using an automatic sequencing system (ABY 3730XL DNA Analyzer, Applied Biosystems, Foster City, CA, USA). The sequence alignments were performed using ClustalX2 6.0-2010 and Chromas Lite 2.01-2005 software. Finally, the sequences were compared to the GenBank database to assign the closest formal taxon to each sequence.
2.3. NaClO Resistance Tests
To assess bacterial resistance to chlorine, three treatments were tested on the five bacterial strains isolated from the non-chlorinated effluent. In treatment I, NaClO (11%) was added to generate concentrations of 8, 20 and 30 mg·L⁻¹ in dilution bottles that contained 90 mL of sterilized saline solution and 10 mL of each bacterial suspension. The doses tested are similar to and higher than those recommended for municipal wastewater disinfection processes [25] carried out in a WWTP. The strains were exposed to contact times (T) of 0, 15 and 30 min at 20 °C. After these contact times, 100 µL of the solution was spread-plated in duplicate on trypticase soy agar (Dibico, Mexico City, Mexico) at 37 °C for 24 h. In treatment II, the same experimental procedure was performed on each strain, but the temperature was raised to 25 °C. Treatment III was performed under the same conditions but at a temperature of 30 °C. Finally, the CFUs were quantified using a Quebec type colony counter (Sol-Bat, Puebla, Mexico), and the results were reported as the log₁₀(CFU·100 mL⁻¹). The reduction of the bacterial content for each experiment was depicted graphically by the relation log₁₀(N/N₀) vs. log₁₀(C₀T).
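For illustration only (the values below are hypothetical, not the paper's data), one point of the log₁₀(N/N₀) vs. log₁₀(C₀T) plot would be assembled like this; note that the T = 0 samples have no finite log₁₀(C₀T):

```python
import math

def ct_plot_point(c0_mg_per_l: float, t_min: float,
                  n_cfu: float, n0_cfu: float) -> tuple:
    """One (x, y) point for the log10(N/N0) vs. log10(C0*T) plot.
    Requires t_min > 0."""
    return (math.log10(c0_mg_per_l * t_min), math.log10(n_cfu / n0_cfu))

# Hypothetical example: 30 mg/L for 15 min, counts falling from 10^11.2 to 10^10.4
x, y = ct_plot_point(30, 15, 10**10.4, 10**11.2)
print(round(x, 2), round(y, 2))  # 2.65 -0.8 (a 0.8-log reduction)
```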
2.4. Inactivation Kinetics of the Bacterial Strains

The kinetics of inactivation were analyzed for the eight strains isolated from the chlorinated effluent and identified by molecular techniques. In dilution bottles, 90 mL of sterilized saline solution, 15 mg·L⁻¹ of NaClO (11%), and 10 mL of each bacterial suspension were mixed. The strains were exposed to this single dose of disinfectant for contact times of 0, 10, 20, 30, 60 and 90 min at room temperature.
After these contact times with the disinfectant, 100 µL of each solution was spread-plated in duplicate on Mueller Hinton agar (Bioxon) at 37 °C for 24 h. Subsequently, CFUs were quantified and reported as log₁₀(CFU·100 mL⁻¹). The inactivation of the eight bacterial strains was verified by calculating two removal percentages: the removal percentage at 90 min and the maximum removal percentage reached (at any retention time). Inactivation was also expressed as the log₁₀(CFU·100 mL⁻¹).

2.5. Statistical Analysis
The results of each NaClO resistance test (Section 2.3), including the mean and standard deviation, were calculated for the treatment response of the five bacterial strains, isolated from the secondary effluent, using Sigmaplot version 10 software (2006).
A four-way ANOVA test was conducted to analyze the resistance of the five strains to treatments I, II and III (Section 2.3). To establish optimal values for each parameter (i.e., contact time, dose of disinfectant and temperature) for removing the five strains with the highest efficiency, an analysis of variance was conducted using Advanced Systems and Designs software (version 2.5, American Supplier Institute, Santa Clara, CA, USA) to perform the Taguchi method using an orthogonal array. In all tests, data were transformed into natural logarithms.
For the kinetics of inactivation (Section 2.4), the means and standard deviations measured for each isolated strain were compared. To determine the differences between isolated bacterial strains, a two-factor analysis of variance with one average per group was performed using Student’s t-test.
3. Results and Discussion

3.1. Microbiological Quality

3.1.1. Sampling
The mean temperature values at the three points of the WWTP were: 20.39 °C at the influent, 19.9 °C at the non-chlorinated effluent and 20.35 °C at the chlorinated effluent. The average pH values were 8.66, 8.35 and 8.43 at the same points, respectively. These values are similar to those previously reported for the same WWTP [21]. These environmental conditions are suitable for the potential growth of microorganisms.
3.1.2. Mesophilic Aerobic Bacterial Count
Mesophilic aerobic bacteria were quantified in the samples from the three points at the WWTP, as described in Section 2.1.2. In the first sampling, 8.9 × log₁₀(CFU·100 mL⁻¹) were measured in the influent, while 5.8 and 6.4 × log₁₀(CFU·100 mL⁻¹) were measured in the non-chlorinated and the chlorinated effluents, respectively. These results correspond to 99.91% and 99.68% microbial removal, respectively.
For the second sampling, the results were 9.2 × log₁₀(CFU·100 mL⁻¹) in the influent, 6.9 × log₁₀(CFU·100 mL⁻¹) in the non-chlorinated effluent (99.54% removal) and 6.3 × log₁₀(CFU·100 mL⁻¹) in the chlorinated effluent (99.88% removal). In this case, the removal of bacterial cells in both effluents was more than two log units better than the influent value. However, in both samplings, the bacterial counts were above the range suggested by Salgot et al. [26] for direct reuse of a treated effluent; the suggested range is 1,000–10,000 CFU·mL⁻¹, which corresponds to 5–6 × log₁₀(CFU·100 mL⁻¹).
The chlorine treatment in the WWTP is thus ineffective. However, higher concentrations must be avoided to prevent the formation of organochlorinated compounds [6]. Although the international standard was met, mesophilic aerobic bacteria were not efficiently removed from the chlorinated effluent of the WWTP. This inefficient removal allowed for the isolation of several strains from both the non-chlorinated and chlorinated effluents.
3.2. Isolation and Identification of Bacterial Strains

3.2.1. Isolation of Bacterial Strains
A total of five bacterial strains were isolated from the non-chlorinated effluent and used for the resistance test. A total of eight strains were isolated from the chlorinated effluent and tested to analyze the kinetics of inactivation.
3.2.2. Molecular Identification of Bacterial Strains
The genetic sequences of the bacterial strains isolated from both the non-chlorinated and chlorinated effluents of the WWTP were compared to known sequences in the GenBank database. The closest taxa are shown in Table 1.
The low similarity (<97%) of the 16S rRNA sequences of most of the isolated strains to bacterial taxa in the GenBank database does not support the unequivocal assignment of each strain to a formal species taxon. However, it has been argued that relatively high similarity percentages are useful for establishing relationships at least at the genus level [27], so the comparisons that follow remain valid. Five bacterial strains were identified as very close (98–100% similarity) to an equal number of taxa (Bacillus sp. FRC_Y9-2, Citrobacter freundii b, Escherichia coli, Kluyvera cryocrescens a and Kluyvera intermedia; superscripts indicate that more than one isolated strain is related to the same bacterial taxon). The other eight bacterial strains were less similar to their closest taxa, as shown in Table 1.
Table 1. Similarity of the isolated strains to the closest taxa identified in the GenBank database.
WWTP Effluent | Test | Closest taxon | Accession number | % Similarity
Non-chlorinated | RT | Aeromonas hydrophila AN-2 | AY987736.1 | 95
Non-chlorinated | RT | Enterobacter cloacae A5-B25 | AF406657.1 | 92
Non-chlorinated | RT | Escherichia coli | CP002516.1 | 98
Non-chlorinated | RT | Escherichia coli BL21 | AM946981.2 | 89
Non-chlorinated | RT | Escherichia coli PD3 | FR715025.1 | 95
Chlorinated | IK | Bacillus sp. FRC_Y9-2 | EF158823.1 | 100
Chlorinated | IK | Citrobacter freundii a | NR_028894.1 | 96
Chlorinated | IK | Citrobacter freundii b | FN997639.1 | 99
Chlorinated | IK | Enterobacter sp. MS5 | FN997607.1 | 88
Chlorinated | IK | Kluyvera cryocrescens a | AM933754.1 | 98
Chlorinated | IK | Kluyvera cryocrescens b | AM933754.1 | 94
Chlorinated | IK | Kluyvera cryocrescens c | AM933754.1 | 95
Chlorinated | IK | Kluyvera intermedia | NR_028802.1 | 99
Type of experiment performed: RT: Bacterial resistance test; IK: Bacterial inactivation kinetics. Superscripts indicate that more than one isolated strain is related to the same bacterial taxon.
The strains isolated from the non-chlorinated effluent were expected because their closely related taxa (i.e., Aeromonas hydrophila, E. coli and Enterobacter cloacae) represent bacteria that are commonly isolated from aquatic environments [28] and domestic wastewater [29]. Most of the strains isolated from the chlorinated effluent are related to common waterborne pathogens [28]. Meanwhile, strains of Kluyvera are often found in hospital sewage samples [30]. Shi et al. [31] reported the presence of Citrobacter sp. in chlorine-disinfected water and pipeline transportation systems. The presence of microorganisms from the genus Bacillus in a chlorinated effluent is likely a result of the high resistance of its endospores, which has been widely reported in the literature [25]. In fact, it has been observed that the endospores produced by Bacillus subtilis exhibit a level of resistance similar to that of cyst- and oocyst-forming protists such as Giardia [32] and Cryptosporidium [33]. This feature allows for the use of this bacterial species as a surrogate in chlorine inactivation assays. The taxonomic differences that were observed between the bacterial strains identified in the chlorinated and non-chlorinated effluents are not attributable to any selective efficiency of the WWTP because the isolation of strains for further culture was random. However, the close relationship of the strains isolated from the chlorinated effluent with pathogenic taxa poses a potential sanitary risk, as reclaimed water from this plant is used to irrigate gardens and soccer fields. The fact that microorganisms other than coliforms were identified highlights the necessity for new indicators to improve the quality of reclaimed water [21,34].
3.3. NaClO Resistance Tests
The degree of resistance of each of the bacterial strains from the non-chlorinated effluent to the disinfection treatments was determined by quantifying the number of CFUs (log10(CFU·100 mL−1)). The highest and the lowest resistance values measured under different experimental conditions for each strain are shown in Table 2.
Table 2. Resistance of the bacterial strains to disinfectant treatment. Counts are log10(CFU·100 mL−1) under the conditions giving maximum and minimum inactivation.
Closest taxon | Log inactivation | Count under maximum inactivation | Treatment | Count under minimum inactivation | Treatment
A. hydrophila | 0.84 | 10.43 | TI, 30 mg·L−1, 15 min | 11.27 | TII, 20 mg·L−1, 0 min
E. coli | 1.31 | 10.15 | TI, 8 mg·L−1, 30 min | 11.45 | TIII, 8 mg·L−1, 30 min
E. coli PD3 | 1.86 | 10.62 | TIII, 30 mg·L−1, 0 min | 12.48 | TI, 30 mg·L−1, 0 min
E. coli BL21 | 0.80 | 10.37 | TI, 8 mg·L−1, 0 min | 11.18 | TII, 20 mg·L−1, 0 min
E. cloacae | 0.81 | 10.38 | TIII, 30 mg·L−1, 30 min | 11.19 | TIII, 8 mg·L−1, 0 min
TI = 20 °C. TII = 25 °C. TIII = 30 °C.
The strains most affected by the chlorination process were those related to E. coli and E. coli PD3, as they had the highest reduction of CFUs measured in logarithmic units, similar to results previously reported by Koivunen et al. [35] using a dose of 18 mg·L−1. Tree et al. [36] suggested that E. coli strains are more sensitive to free or combined chlorine than other water microorganisms, especially at temperatures near 15 °C.
After the treatments, the remaining quantity of CFUs of each strain was correlated to the product of the initial concentration of NaClO (C0) and the retention time (T). Figure 1 shows the ratio of survival for each bacterial strain between the beginning (N0) and the end (N) of each treatment. The results are reported as log values.
Figure 1. Reduction of CFUs at three temperatures (20, 25 and 30 °C) as a function of the product of the initial disinfectant concentration (mg·L−1) and the contact time (min). (a) A. hydrophila; (b) E. coli; (c) E. coli PD3; (d) E. coli BL21; (e) E. cloacae.
These results indicate that resistance to disinfection varies from strain to strain, even at high disinfectant doses.
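A minimal sketch of the two quantities plotted in Figure 1: the C0·T exposure product and the log10 survival ratio. The example counts are the Treatment I, 8 mg·L−1 group means from Table 4 (0 and 30 min), so the result is illustrative rather than a per-strain value.

```python
import math

def ct_product(c0_mg_per_l, t_min):
    """Disinfection exposure C0 x T (mg·min/L), the x-axis of Figure 1."""
    return c0_mg_per_l * t_min

def log_survival(n_final, n_initial):
    """log10(N/N0); negative values indicate net inactivation."""
    return math.log10(n_final / n_initial)

n0, n = 1.03e11, 8.244e10   # group mean CFU/100 mL at 0 and 30 min (Table 4)
print(ct_product(8, 30), round(log_survival(n, n0), 2))   # 240, approx -0.10
```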
3.4. Inactivation Kinetics of the Bacterial Strains
The removal percentages of the bacterial strains isolated from the chlorinated effluent are shown in Table 3. For most strains, the maximum removal value was the value observed after the maximum contact time (90 min). This result suggests that beyond a certain lethal contact time with the disinfectant, further exposure to achieve a higher inactivation response is unnecessary. The notable exceptions were the strains related to K. cryocrescens c and K. intermedia, which showed their highest percentage of removal after a retention time of 30 min, while at 90 min the removal percentage diminished. The strain related to Enterobacter sp. MS5 showed high resistance to the disinfection treatment, as a complete recovery of the initial CFU number was observed at 90 min. This unexpected result contrasts with the findings of King et al. [37], who achieved a 99% inactivation rate for isolated Enterobacter agglomerans and E. cloacae strains after approximately 1 min of exposure to 1 mg·L−1 of free residual chlorine.
Table 3. Bacterial inactivation values and removal percentages obtained during analysis of the kinetics of inactivation.
Closest taxon | Removal % at T = 90 min | Maximum removal % | Inactivation (log10(CFU·100 mL−1))
Bacillus sp. FRC_Y9-2 | 98.87 | 98.87 | 1.96
Citrobacter freundii a | 92.52 | 92.52 | 1.86
C. freundii b | 99.71 | 99.71 | 2.25
Enterobacter sp. MS5 | 0 | 98.91 ** | 2.09
Kluyvera cryocrescens a | 14.89 | 14.89 | 1.61
K. cryocrescens b | 86.54 | 86.54 | 0.87
K. cryocrescens c | 98.17 | 98.30 * | 1.77
K. intermedia | 65.37 | 87.45 * | 0.9
* T = 30 min. ** T = 20 min. Superscripts indicate that more than one isolated strain is related to the same bacterial taxon.
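Removal percentage and log10 inactivation in Table 3 are two views of the same quantity. A short sketch of the conversion, spot-checked against two table rows (small residual differences reflect rounding in the tabulated values):

```python
import math

def logs_to_percent(log_reduction):
    """Percent removal implied by a log10 reduction."""
    return (1 - 10 ** (-log_reduction)) * 100

def percent_to_logs(percent):
    """Log10 reduction implied by a percent removal."""
    return -math.log10(1 - percent / 100)

print(round(logs_to_percent(0.87), 2))   # K. cryocrescens b: ~86.51 vs 86.54
print(round(percent_to_logs(87.45), 2))  # K. intermedia maximum: ~0.90 logs
```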
A high inactivation response of most strains occurred with a disinfectant dose of 15 mg·L−1 and a variable contact time. The reduction in CFUs ranged from 0.87 to 2.25 log units. These values were similar to those reported by Macauley et al. [38], who observed reductions in the number of swine lagoon bacteria ranging from 2.2 to 3.4 log units with a 30 mg·L−1 dose.
Among the tested strains, only the strain related to Bacillus sp. (phylum Firmicutes) is a Gram-positive bacterium. This strain is also the only endospore-forming organism identified in this study. The particular composition of the Gram-positive cell wall, the lack of an outer membrane and a distinct set of genes, but more likely the fact that the organisms had left the protection of the endospore, resulted in greater removal percentages than were observed for most of the other strains, except those related to C. freundii b and K. cryocrescens c.
Our results indicate that specific conditions are needed to eliminate each of the different bacterial species identified. These findings suggest that it is impossible to establish a single dose and a single contact time to inactivate all of the bacteria present in treated water. This finding is in agreement with Dow et al. [33], who attributed different inactivation responses to changes in the physical conditions of the water, such as temperature, when testing monochloramine or ozone on a single bacterial species (i.e., Bacillus subtilis).
3.5. Statistical Analysis
3.5.1. NaClO Resistance Tests
A group analysis of the inactivation dynamics of the five bacterial strains isolated from the non-chlorinated effluent indicated that the degree of resistance to the disinfection process varied. This fact can be observed in the calculations presented in Table 4.
Table 4. Means and standard deviations (CFU·100 mL−1) obtained for the bacterial strains during the disinfection process. Values are mean (SD) at each contact time.
Treatment I (20 °C):
8 mg·L−1 | 0 min: 1.03E+11 (6.99E+10) | 15 min: 1.0357E+11 (8.244E+10) | 30 min: 8.244E+10 (6.7504E+10)
20 mg·L−1 | 0 min: 1.363E+11 (7.47E+10) | 15 min: 6.6674E+10 (3.877E+10) | 30 min: 9.814E+10 (6.196E+10)
30 mg·L−1 | 0 min: 6.421E+11 (1.32E+12) | 15 min: 7.3978E+10 (5.553E+10) | 30 min: 7.444E+10 (7.5511E+10)
Treatment II (25 °C):
8 mg·L−1 | 0 min: 1.01E+11 (3.69E+10) | 15 min: 9.4558E+10 (6.407E+10) | 30 min: 9.507E+10 (6.2074E+10)
20 mg·L−1 | 0 min: 1.205E+11 (6.16E+10) | 15 min: 8.4292E+10 (4.971E+10) | 30 min: 4.568E+10 (1.6236E+10)
30 mg·L−1 | 0 min: 8.99E+10 (5.93E+10) | 15 min: 8.8934E+10 (5.069E+10) | 30 min: 4.349E+10 (8.326E+10)
Treatment III (30 °C):
8 mg·L−1 | 0 min: 1.45E+11 (6.11E+10) | 15 min: 1.2841E+11 (7.769E+10) | 30 min: 1.078E+11 (1.0039E+11)
20 mg·L−1 | 0 min: 9.814E+10 (4.39E+10) | 15 min: 9.246E+10 (7.504E+10) | 30 min: 8.758E+10 (6.5128E+10)
30 mg·L−1 | 0 min: 9.838E+10 (3.76E+10) | 15 min: 7.4647E+10 (2.888E+10) | 30 min: 4.559E+10 (1.8856E+10)
Figure 2. Standard deviations observed in resistance tests that used temperature and disinfectant dose as variables. C = concentration (mg·L−1). (a) T = 0 min; (b) T = 15 min; (c) T = 30 min.
The standard deviations demonstrated that the contact time with the disinfectant did not affect group resistance (Figure 2). The high standard deviations indicate the independent resistance of each bacterial strain to the treatments. This finding supports the statement at the end of Section 3.3 that resistance to chlorine disinfection varies from strain to strain. This variation is likely due to a particular response to the disinfectant's mechanism of action rather than to the disinfectant dose or to the temperature, as suggested by Cho et al. [19]. These results are also consistent with those reported by Berry et al. [12], who suggest that molecular mechanisms confer chlorine resistance to bacteria. As mentioned above, this resistance is most likely due to the expression of certain genes in response to stress factors, such as oxidizing agents, variations in temperature, osmotic shock or small amounts of organic matter present in the culture medium. Such growth conditions could presumably alter the bacterial inactivation process by reducing bacterial metabolism or by changing the permeability of the cell membrane.
The ANOVA test showed significant differences for each set of experimental assays (Table 5). The most efficient conditions for decreasing bacterial resistance were a low temperature (20 °C), a long contact time (30 min) and a low dose of disinfectant (8 mg·L−1). The E. coli strains showed the least resistance to the treatments tested (Figure 3). These results make it implausible that significant amounts of organic matter were present in the bacterial suspensions; we can therefore dismiss the possibility that interaction between organic matter and chlorine prevented its biocidal effect.
Table 5. Values from the ANOVA test.
Source | d.f. | S | V | F | ρ
A | 2 | 0.02 | 0.01 | |
B | 2 | 0.08 | 0.04 | 3.67 | 4.02
C | 2 | 0.19 | 0.10 | 9.24 | 12.42
D | 2 | 0.10 | 0.05 | 4.88 | 5.84
R | 2 | 0.13 | 0.07 | 6.23 | 7.89
e1 | 16 | 0.86 | 0.05 | 5.16 | 50.23
<e> | 2 | 0.02 | 0.01 | | 19.60
TOTAL | 26 | 1.39 | 0.05 | | 100
A: temperature; B: contact time; C: concentration; D: bacterial groups; R: s/n ratio; d.f.: degrees of freedom; S: sum of squares; V: variance; F: variance ratio; ρ: percent contribution of source; e1: pooled; <e>: pooled estimate of experimental error.
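The percent-contribution column (ρ) in Table 5 can be approximately reproduced from the tabulated sums of squares, assuming the usual Taguchi correction in which each source's sum of squares is reduced by its degrees of freedom times the pooled error variance before dividing by the total; the small gaps versus the table come from rounding of the tabulated values.

```python
def percent_contribution(s_source, dof, v_error, s_total):
    """Taguchi percent contribution: rho = (S - dof * Ve) / S_total * 100."""
    return (s_source - dof * v_error) / s_total * 100

v_error, s_total = 0.01, 1.39   # pooled error variance <e> and total S (Table 5)
for source, s, dof in [("B (contact time)", 0.08, 2),
                       ("C (concentration)", 0.19, 2),
                       ("D (bacterial groups)", 0.10, 2)]:
    print(source, round(percent_contribution(s, dof, v_error, s_total), 2))
# C (concentration) gives ~12.23, close to the 12.42 reported in Table 5
```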
Figure 3. Values from the ANOVA test and the average midline of all values.
3.5.2. Inactivation Kinetics of the Bacterial Strains
The relationship between bacterial inactivation and contact time is shown in Figure 4. The standard deviation plot for each bacterial strain demonstrated that most of the strains reacted in a different way to the disinfection process, most likely because the disinfectant had not lost its biocidal capability, although a few active chlorine fractions could have formed [7]. Of particular interest were the strain related to C. freundii b, which exhibited an accelerated inactivation response, and the strain related to Enterobacter sp. MS5, which had the highest CFU recovery at the longest contact time.
Figure 4. Response of the bacterial strains to the treatment, as represented by the trend line and slope value. K. cryocrescens a (y = −0.012x + 10.14); K. cryocrescens b (y = −0.005x + 12.92); K. cryocrescens c (y = −0.018x + 11.64); K. intermedia (y = −0.002x + 12.86); C. freundii a (y = −0.013x + 12.68); C. freundii b (y = −0.0412x + 13.524); Bacillus sp. FRC_Y9-2 (y = −0.022x + 12.33); Enterobacter sp. MS5 (y = 0.0057x + 11.92).
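The Figure 4 trend lines are ordinary least-squares fits of log10 counts against contact time. A minimal sketch with hypothetical counts (the sampling times here are assumed, not taken from the paper); a slope near −0.02 would match the Bacillus sp. FRC_Y9-2 line (y = −0.022x + 12.33).

```python
import numpy as np

t = np.array([0, 20, 30, 60, 90])                    # min, assumed sampling times
log_cfu = np.array([12.3, 11.9, 11.8, 11.1, 10.4])   # hypothetical log10 counts

slope, intercept = np.polyfit(t, log_cfu, 1)         # first-order (linear) fit
print(round(slope, 4), round(intercept, 2))          # ~-0.0212, ~12.35
```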
An analysis of the means of all the strains identified similar inactivation responses within three groups of bacteria (A, B and C), while in two other groups (D and E) the responses were independent (Table 6). The inactivation results of the last two groups differed significantly from those of the former groups.
Two-factor variance analysis found no significant relationship between bacterial inactivation and contact time at the group level, as seen by comparing the tabulated Fisher value (Ft = 2.69) with the calculated value (Fc = 2.53). The individual inactivation trends of each strain behaved differently: here significant differences were found (Fc = 2.21 > Ft = 2.17). This finding was also supported by an individual analysis of each strain using Student's t-test.
Table 6. Comparison of means using Student's t-test.
Group | Closest taxon | Mean | SD | T cal (T tab = 2.17)
A | K. cryocrescens b | 12.64 | 0.34 | 0.68
A | K. intermedia | 12.76 | 0.28 |
B | Bacillus sp. FRC_Y9-2 | 11.56 | 0.87 | 0.15
B | K. cryocrescens c | 11 | 0.79 |
C | K. cryocrescens a | 9.7 | 0.76 | 0.43
C | C. freundii a | 12.23 | 0.58 |
D, E | C. freundii b | 12.53 | 0.89 | 3.34
D, E | Enterobacter sp. MS5 | 12.11 | 0.75 |
Among groups:
A vs. B | A: 12.7 (0.31) | B: 11.27 (0.83) | T cal = 3.9
C vs. B | C: 10.97 (0.67) | B: 11.27 (0.83) | T cal = 2.82
T cal = calculated Student’s t value; T tab = tabulated Student’s t value.
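The pairwise comparisons in Table 6 can be reproduced with a two-sample Student's t-test. The sketch below uses synthetic stand-in samples consistent with the Table 6 means and standard deviations for the two group A strains; the sample size of six values per strain is an assumption, not stated in the paper.

```python
from scipy import stats

# Synthetic stand-ins with means/SDs close to Table 6 (n = 6 assumed)
a = [12.3, 12.5, 12.6, 12.7, 12.9, 13.0]    # K. cryocrescens b (mean ~12.67)
b = [12.4, 12.6, 12.75, 12.85, 13.0, 13.1]  # K. intermedia (mean ~12.78)

t_cal, p_value = stats.ttest_ind(a, b)
print(round(t_cal, 2), round(p_value, 3))
# |t_cal| ~0.78, below the tabulated 2.17 -> no significant difference,
# consistent with placing these two strains in the same group (A)
```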
However, the possible protective effect of organic matter upon the bacterial cells during chlorine disinfection must be considered. A higher chlorine demand caused by organic compounds present in the culture medium causes a rapid decline in the availability of free chlorine. In experiments performed by Virto et al. [14] with a calculated organic load of 1,120 ppm, the concentration of NaClO (10%) had to be raised several times to achieve bacterial inactivation. The disinfectant dose had a clear but differential effect on the bacterial strains only above 15–35 mg·L−1, in contrast to the low chlorine concentration (approximately 1 mg·L−1) needed to completely inactivate the same microbial populations in a distilled water milieu. In our experiments, the dose of 15 mg·L−1, although relatively high, efficiently achieved a significant inactivation response in each bacterial strain within the time intervals considered. Thus, it is unlikely that organic matter prevented the free chlorine from interacting with the bacterial cells in the experiments conducted in this study, although this possibility cannot be completely dismissed. It is also possible that morphological or physiological features of each bacterial strain contribute to chlorine resistance.
4. Conclusions
Our analysis demonstrated that the secondary treatment of activated sludge does not efficiently remove the mesophilic aerobic bacteria from the wastewater influent of the WWTP under study. Bacterial removal did not improve even after a chlorination treatment that met the international standard. The disinfection treatment using only NaClO at this WWTP is therefore ineffective; another treatment could be used in combination with chlorine to increase the removal efficiency of bacteria.
Several bacterial strains were isolated from the non-chlorinated and chlorinated effluents of the WWTP. A comparison of the gene sequences of the 16S rRNA of these strains with known taxa demonstrated that a diversity of bacteria is present in municipal wastewater, most of which are different from the traditional coliform indicators.
In tests of resistance to NaClO, the standard deviations indicated that each bacterial strain responded independently as experimental conditions varied. The ANOVA test demonstrated that the most efficient conditions for decreasing the bacterial resistance of all strains were a low temperature (20 °C), an increased contact time (30 min) and a low dose of disinfectant (8 mg·L−1). The strains related to E. coli taxa showed the least resistance to the experimental treatments.
In the bacterial inactivation experiments, a modest reduction in log units was achieved, although there was no clear relationship between removal percentages and specific retention times. Statistical analyses indicated that each strain has a particular inactivation response. It would be useful to test different inactivation conditions on distinct groups of opportunistic and pathogenic bacterial species that are phylogenetically related to each other and to address the impact of organic matter content on the efficiency of chlorine disinfection for these groups of species.
It must be stressed that bacterial cells that remained viable after both the disinfection tests and the analysis of inactivation kinetics using NaClO (11%) are resistant. In particular, the bacterial strains isolated from the chlorinated effluent represent a serious sanitary risk because most of the strains are phylogenetically related to species and genera that include opportunistic and pathogenic microorganisms. These strains are non-fecal in origin and are distinct from coliforms. Thus, there is an urgent need to improve reclaimed water regulations to include species other than the traditional indicators of water quality.
Acknowledgements
The work of SM-H was supported by a graduate scholarship (number 217745) that was kindly provided by CONACyT, Mexico. Some chemical reagents were generously provided by the Administration of the B.A. in Biology at UAEH, Mexico. We thank the Instituto Tecnológico de Estudios Superiores de Monterrey, Hidalgo campus, for allowing us to sample from its WWTP. The authors recognize Jose A. Rodriguez-Ávila for his comments on the procedure for analyzing inactivation kinetics.
Conflict of Interest
The authors declare no conflict of interest.
References
1. Helbling, E.D.; VanBriesen, M.J. Continuous monitoring of residual chlorine concentrations in response to controlled microbial intrusions in a laboratory-scale distribution system. Water Res. 2008, 42, 3162–3172.
2. Hassen, A.; Mehrouk, M.; Ouzari, H.; Cherif, M.; Boudabous, A.; Damelincourt, J.J. UV disinfection of treated wastewater in a large-scale pilot plant and inactivation of selected bacteria in a laboratory UV device. Bioresour. Technol. 2000, 74, 141–150.
3. Jeffrey, P.; Seaton, R.A.F.; Stephenson, T.; Parsons, S. Infrastructure configurations for wastewater treatment and reuse: A simulation based study of membrane bioreactors. Water Sci. Technol. 1998, 38, 105–111.
4. Veschetti, E.; Cutilli, D.; Bonadonna, L.; Briancesco, R.; Martini, C.; Cecchini, G.; Anastasi, P.; Ottaviani, M. Pilot-plant comparative study of peracetic acid and sodium hypochlorite wastewater disinfection. Water Res. 2003, 37, 78–94.
5. Katz, A.; Narkis, N.; Orshansky, F.; Friedland, E.; Kott, Y. Disinfection of effluent by combinations of equal doses of chlorine dioxide and chlorine added simultaneously over varying contact times. Water Res. 1994, 28, 2133–2138.
6. Tchobanoglous, G.; Burton, F.L.; Stensel, H.D. Wastewater Engineering Treatment and Reuse; Metcalf and Eddy, McGraw-Hill: New York, NY, USA, 2003; pp. 1217–1330.
7. Estrela, C.; Estrela, C.R.; Barbin, E.L.; Spanó, J.C.E.; Marchesan, M.A.; Pécora, J.D. Mechanism of action of sodium hypochlorite. Braz. Dent. J. 2002, 13, 113–117.
8. McDonnell, G.; Russell, A.D. Antiseptics and disinfectants: Activity, action, and resistance. Clin. Microbiol. Rev. 1999, 12, 147–179.
9. Apella, C.M.; Araujo, Z.P. Microbiología del agua. Conceptos básicos. In Tecnologías Solares para la Desinfección y Descontaminación del Agua; Blesa, M.A., Blanco, G.J., Eds.; Solarsafewater: Buenos Aires, Argentina, 2005; pp. 33–50.
10. Goel, S.; Bouwer, E.J. Factors influencing inactivation of Klebsiella pneumoniae by chlorine and chloramine. Water Res. 2004, 38, 301–308.
11. Wu, C.W.; Schmoller, S.K.; Shin, S.J.; Talaat, A.M. Defining the stressome of Mycobacterium avium subsp. paratuberculosis in vitro and in naturally infected cows. J. Bacteriol. 2007, 189, 7877–7886.
12. Berry, D.; Holder, D.; Xi, C.; Raskin, L. Comparative transcriptomics of the response of Escherichia coli to the disinfectant monochloramine and to growth conditions inducing monochloramine resistance. Water Res. 2010, 44, 4924–4931.
13. Winward, G.P.; Avery, L.M.; Stephenson, T.; Jefferson, B. Chlorine disinfection of grey water for reuse: Effect of organics and particles. Water Res. 2008, 42, 483–491.
14. Virto, R.; Mañas, P.; Álvarez, I.; Condon, S.; Raso, J. Membrane damage and microbial inactivation by chlorine in the absence and presence of a chlorine-demanding substrate. Appl. Environ. Microbiol. 2005, 71, 5022–5028.
15. Djuikom, E.; Njiné, T.; Nola, M.; Kemka, N.; Zébazé Touget, S.H.; Jugnia, L.B. Significance and suitability of Aeromonas hydrophila vs. fecal coliforms in assessing microbiological water quality. World J. Microbiol. Biotechnol. 2008, 24, 2665–2670.
16. Salem, I.B.; Ouardani, I.; Hassine, M.; Aouni, M. Bacteriological and physico-chemical assessment of wastewater in different regions of Tunisia: Impact on human health. BMC Res. Notes 2011, 4.
17. Orta Ledesma, M.T.; Díaz Pérez, V.; Aparicio, G. Desinfección de Agua Potable Contaminada con Vibrio Cholerae Adaptada al Cloro. Consolidación para el Desarrollo; CEPIS: Mexico City, Mexico, 1996. Available online: http://www.bvsde.paho.org/bvsaidis/caliagua/mexico/02392e14.pdf (accessed on 10 June 2012).
18. Germer, J.; Boh, M.Y.; Schoeffler, M.; Amoah, P. Temperature and deactivation of microbial faecal indicators during small scale co-composting of faecal matter. Waste Manag. 2010, 30, 185–191.
19. Cho, M.; Kim, J.; Kim, J.Y.; Yeon, J.; Yoon, J.; Kim, J.H. Mechanisms of Escherichia coli inactivation by several disinfectants. Water Res. 2010, 44, 3410–3418.
20. Luczkiewicz, A.; Jankowska, K.; Fudala, K.S.; Olanczuc, N.K. Antimicrobial resistance of fecal indicators in municipal wastewater treatment plant. Water Res. 2010, 44, 5089–5097.
21. Coronel-Olivares, C.; Reyes-Gómez, L.M.; Hernández-Muñoz, A.; Martínez-Falcón, A.P.; Vázquez-Rodríguez, G.A.; Iturbe, U. Chlorine disinfection of Pseudomonas aeruginosa, total coliforms, Escherichia coli and Enterococcus faecalis: Revisiting reclaimed water regulations. Water Sci. Technol. 2011, 64, 2151–2157.
22. Standard Methods for the Examination of Water and Wastewater, 20th ed.; American Public Health Association: Washington, DC, USA, 2008.
23. Cavalieri, J.S. Manual de Pruebas de Susceptibilidad Antimicrobiana; American Society for Microbiology: Washington, DC, USA, 2005; pp. 39–53, 225–231.
24. Fernández-No, I.C.; Böhme, K.; Gallardo, J.M.; Barros-Velázquez, J.; Cañas, B.; Calo-Mata, P. Differential characterization of biogenic amine-producing bacteria involved in food poisoning using MALDI-TOF mass fingerprinting. Electrophoresis 2010, 31, 1116–1127.
25. Lazarova, V.; Savoye, P.; Janex, M.L.; Blatchley, E.R.; Pommepuy, M. Advanced wastewater disinfection technologies: State of the art and perspectives. Water Sci. Technol. 1999, 40, 203–213.
26. Salgot, M.; Huertas, E.; Weber, S.; Dott, W.; Hollender, J. Wastewater reuse and risk: Definition of key objectives. Desalination 2006, 187, 29–40.
27. Fox, G.E.; Wisotzkey, J.D.; Jurtshuk, P. How close is close: 16S rRNA sequence identity may not be sufficient to guarantee species identity. Int. J. Syst. Bacteriol. 1992, 42, 166–170.
28. Cabral, J.P.S. Water microbiology. Bacterial pathogens and water. Int. J. Environ. Res. Public Health 2010, 7, 3657–3703.
29. Picão, R.C.; Cardoso, J.P.; Campana, E.H.; Nicoletti, A.G.; Petrolini, F.V.; Assis, D.M.; Juliano, L.; Gales, A.C. The route of antimicrobial resistance from the hospital effluent to the environment: Focus on the occurrence of KPC-producing Aeromonas spp. and Enterobacteriaceae in sewage. Diagn. Microbiol. Infect. Dis. 2013, 76, 80–85.
30. Sarria, J.C.; Vidal, A.M.; Kimbrough, R.C., III. Infections caused by Kluyvera species in humans. Clin. Infect. Dis. 2001, 33, 69–74.
31. Shi, P.; Jia, S.; Zhang, X.X.; Zhang, T.; Cheng, S.; Li, A. Metagenomic insights into chlorination effects on microbial antibiotic resistance in drinking water. Water Res. 2013, 47, 111–120.
32. Barbeau, B.; Boulos, L.; Desjardins, R.; Coallier, J.; Prévost, M. Examining the use of aerobic spore-forming bacteria to assess the efficiency of chlorination. Water Res. 1999, 33, 2941–2948.
33. Dow, S.M.; Barbeau, B.; von Gunten, U.; Chandrakanth, M.; Amy, G.; Hernandez, M. The impact of selected water quality parameters on the inactivation of Bacillus subtilis spores by monochloramine and ozone. Water Res. 2006, 40, 373–382.
34. Figueras, M.J.; Borrego, J.J. New perspectives in monitoring drinking water microbial quality. Int. J. Environ. Res. Public Health 2010, 7, 4179–4202.
35. Koivunen, J.; Heinonen-Tanski, H. Inactivation of enteric microorganisms with chemical disinfectants, UV irradiation and combined chemical/UV treatments. Water Res. 2005, 39, 1519–1526.
36. Tree, J.A.; Adams, M.R.; Lees, D.N. Chlorination of indicator bacteria and viruses in primary sewage effluent. Appl. Environ. Microbiol. 2003, 69, 2038–2043.
37. King, C.H.; Shotts, E.B.; Wooley, R.E.; Porter, K.G. Survival of coliforms and bacterial pathogens within protozoa during chlorination. Appl. Environ. Microbiol. 1988, 54, 3023–3033.
38. Macauley, J.J.; Qiang, Z.; Adams, D.C.; Surampalli, R.; Mormile, M.R. Disinfection of swine wastewater using chlorine, ultraviolet light and ozone. Water Res. 2006, 40, 2017–2026.
© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
| 44,878
| 17,085
| 2.626749
|
warc
|
201704
|
Treatment procedures of the urinary system
Section 1
- diuretics: medications administered to increase urine secretion, primarily to rid the body of excess water and salt
- dialysis: a procedure to remove waste products as well as excess water from the blood of a patient whose kidneys no longer function
- hemodialysis: process by which waste products are filtered directly from the patient's blood; performed on an external hemodialysis unit; most common type
- hemodialysis unit: aka artificial kidney
- shunt: artificial passage that allows the blood to flow between the body and the hemodialysis unit
- dialysate: a sterilized solution made up of water and electrolytes; cleanses the blood by removing waste products and excess fluids
- electrolytes: salts that conduct electricity and are found in the body fluid, tissue, and blood
- peritoneal dialysis: the lining of the peritoneal cavity acts as the filter to remove waste from the blood; the dialysate flows into the peritoneal cavity around the intestine through a catheter implanted in the abdominal wall
- continuous ambulatory peritoneal dialysis: a dialysate solution is instilled from a plastic container worn under clothing; every 4 hours the used solution is drained back into this bag and the bag discarded; a new bag is attached and the process repeated
- continuous cycling peritoneal dialysis: uses a machine to cycle the dialysate fluid during the night while the patient sleeps
Section 2
- nephrolysis: the surgical freeing of a kidney from adhesions
- nephropexy: aka nephrorrhaphy; the surgical fixation of a floating kidney (nephroptosis)
- nephrostomy: the placement of a catheter to maintain an opening from the pelvis of one or both kidneys to the exterior of the body
- pyeloplasty: the surgical repair of the ureter and renal pelvis
- pyelotomy: a surgical incision into the renal pelvis
- renal transplantation: aka kidney transplant; the grafting of a donor kidney into the body to replace the recipient's failed kidneys
- extracorporeal shockwave lithotripsy: high-energy ultrasonic waves traveling through water or gel are used to break up the stone into fragments
- extracorporeal: situated or occurring outside the body
- percutaneous nephrolithotomy: the surgical removal of a nephrolith through a small incision in the back
Section 3
- ureterectomy: surgical removal of a ureter
- ureteroplasty: surgical repair of a ureter
- ureterorrhaphy: surgical suturing of a ureter
- ureteroscopy: a treatment for a nephrolith lodged in the ureter
- cystectomy: the surgical removal of all or part of the urinary bladder
- neobladder: a replacement for the missing bladder created by using about 20" of the small intestine
- ileal conduit: aka ileostomy; the use of a small piece of intestine to convey urine from the ureters to a stoma in the abdomen
- cystopexy: the surgical fixation of the bladder to the abdominal wall
- cystorrhaphy: the surgical suturing of a wound or defect in the bladder
- lithotomy: surgical incision for the removal of a nephrolith from the bladder
- urinary catheterization: aka cathing; performed to withdraw urine for diagnostic purposes, to allow urine to drain freely, or to place fluid such as a chemotherapy solution into the bladder
- indwelling catheter: remains inside the body for a prolonged time based on need
- urethral catheterization: performed by inserting a plastic tube called a catheter through the urethra and into the bladder
- suprapubic catheterization: the placement of a catheter into the bladder through a small incision made through the abdominal wall just above the pubic bone
- Foley catheter: made of a flexible tube with a balloon filled with sterile water at the end to hold it in place in the bladder
- intermittent catheter: aka short-term catheter; inserted as needed several times a day to drain urine from the bladder
Section 4
- meatotomy: a surgical incision made in the urethral meatus to enlarge the opening
- urethropexy: surgical fixation of the urethra to nearby tissue; performed to correct urinary stress incontinence
- urethrotomy: a surgical incision into the urethra for relief of a stricture
- stricture: an abnormal narrowing of a bodily passage
- ablation: cancer treatment that involves the removal of a body part or the destruction of its function through the use of surgery, hormones, drugs, heat, chemicals, electrocautery or other methods
- electrocautery: the use of high-frequency electrical current to destroy tissue
- prostatectomy: the surgical removal of all or a part of the prostate gland; performed to treat prostate cancer or to reduce an enlarged prostate gland
- radical prostatectomy: the surgical removal of the entire gland when it is extremely enlarged or when cancer is suspected
- transurethral prostatectomy: aka TURP; the removal of excess tissue from an enlarged prostate gland with the use of a resectoscope
- resectoscope: a specialized endoscopic instrument that resembles a cystoscope
- retrograde ejaculation: when an orgasm results in semen flowing backward into the bladder instead of out through the penis
- Kegel exercises: a series of pelvic muscle exercises used to strengthen the muscles of the pelvic floor
- bladder retraining: behavioral therapy in which the patient learns to urinate on a schedule, with increasingly longer time intervals as the bladder increases its capacity
Section 5
- ARF: acute renal failure
- BPH: benign prostatic hyperplasia
- cath: catheterization
- CKD: chronic kidney disease
- cysto: cystoscopy
- DRE: digital rectal examination
- ESRD: end-stage renal disease
- IVP: intravenous pyelogram
- PKD: polycystic kidney disease
- TURP: transurethral resection of the prostate
- UTI: urinary tract infection
| 5,728
| 2,321
| 2.467902
|
warc
|
201704
|
Overview
The term Hammer Toe is commonly used as a general classification for any condition where the toe muscle weakens, causing digital contracture and resulting in deformity. A digital contracture like this can actually be a hammertoe, claw toe or mallet toe, depending on which joints in the toe are contracted. Claw toes are bent at the middle and end joints, while hammertoes are bent at the middle joint only. In mallet toe, the joint at the end of the toe buckles, and the skin near the toenail tip develops a painful corn that can eventually result in an ulcer. Doctors further categorize all forms of hammertoe based on whether the affected toe is flexible, semi-rigid or rigid. The more rigid the toe, the more pain it will cause.
Causes
Hammer toe results from shoes that don't fit properly or a muscle imbalance, usually in combination with one or more other factors. Muscles work in pairs to straighten and bend the toes. If the toe is bent and held in one position long enough, the muscles tighten and cannot stretch out. Other causes include diabetes, arthritis, neuromuscular disease, polio and trauma.
Symptoms
A toe (usually the second digit, next to the big toe) bent at the middle joint and clenched into a painful, clawlike position. As the toe points downward, the middle joint may protrude upward. A toe with an end joint that curls under itself. Painful calluses or corns. Redness or a painful corn on top of the bent joint or at the tip of the affected toe, because of persistent rubbing against shoes. Pain in the toes that interferes with walking, jogging, dancing and other normal activities, possibly leading to gait changes.
Diagnosis
Hammer toes may be easily detected through observation. The malformation of the toes begins as a mild distortion, yet may worsen over time, especially if the factors causing the hammer toes are not eased or removed. If the condition is attended to early enough, the toes may not be permanently damaged and may be treated without surgical intervention. If the toes remain untreated for too long, however, the muscles within them might stiffen even more and will require invasive procedures to correct the deformity.
Non-Surgical Treatment
You can usually use over-the-counter cushions, pads, or medications to treat bunions and corns. However, if they are painful or if they have caused your toes to become deformed, your doctor may opt to surgically remove them. If you have blisters on your toes, do not pop them. Popping blisters can cause pain and infection. Use over-the-counter creams and cushions to relieve pain and keep blisters from rubbing against the inside of your shoes. Gently stretching your toes can also help relieve pain and reposition the affected toe.
Surgical Treatment
Sometimes surgery cannot be avoided. If needed, the choice of surgery depends on whether we are dealing with a flexible or rigid hammer toe. Surgery on a flexible hammer toe is performed on soft tissue structures such as the tendon and/or capsule of the flexor. Rigid hammer toes require bone surgery into the joint of the toe to repair it. This bone surgery is called an arthroplasty.
| 3,265
| 1,516
| 2.153694
|
warc
|
201704
|
Adelaide needs a north-south traffic corridor
Opinion Piece
JBO-001/2014
12 May 2014
Adelaide Advertiser
A fully upgraded north-south road corridor has been somewhat of a mirage for South Australians.
We have talked about this project since it was first recommended in 1968. Yet today, some 45 years later, much of South Rd more closely resembles a parking lot than a free-flowing north-south road corridor. This lack of progress has left Adelaide with a rising congestion problem, putting a lid on our productive capacity.
This congestion gridlock was highlighted in a RAA Travel Time report, released in July last year, which found that travel from O’Halloran Hill to Anzac Highway and on to West Terrace takes almost 9 minutes longer than in 2012. Travel speeds along this stretch are now at 21km/h in the morning and 27km/h in the afternoon, less than half the legal speed limit.
For too long South Australia has lacked the leadership to deliver this major infrastructure upgrade. Adelaide commuters are sick and tired of governments flip-flopping on whether it’s the Darlington Interchange or Torrens-to-Torrens project that’s more important, which usually depends on the political breeze of the moment. The reality is congestion is choking all of South Road.
Our election commitment recognises the importance of the Darlington project, but we also know that the Torrens-to-Torrens upgrade is a significant part of this vital road corridor.
That is why, not only are we delivering on our commitment to get the Darlington project moving again, we are also determined to see a fully upgraded north-south road corridor built within a decade.
This is the first time a government has set a timeline to complete this vital road project. State Liberal Leader, Steven Marshall was instrumental in securing this commitment—first raising the idea with the Prime Minister in October last year.
This is an ambitious commitment that will require a strong focus from both the federal and state government. But it is a goal we must meet if we are to improve our economic performance and ensure future generations enjoy the living standards and prosperity that we have come to expect.
We are serious about building the north-south road corridor as quickly as possible, which is why in this week’s federal budget we will deliver $944 million towards this vital project—an additional $450 million on top of our original election commitment.
This is the single largest infrastructure investment in South Australia by any Australian Government.
This funding will allow for the completion of the two highest priority sections on the Corridor—the South Road upgrade at Darlington and the Torrens Road-to-River Torrens section.
The Australian Government is providing $496 million—80 per cent of the total cost —for the Darlington project and $448 million, 50 per cent of the cost, for the Torrens-to-Torrens section. This funding is part of our broader $2 billion infrastructure investment across South Australia to be outlined in the federal budget.
Once the development phase is complete, early works on both the Darlington and Torrens to Torrens projects can commence later this year, with expected completion in 2018.
The Federal Government is committing more money to upgrade South Rd.
This is a win for all South Australians.
The long-awaited upgrades will improve access for commuters and heavy vehicles to the rapidly expanding industrial and residential growth areas in the north and the south. It will improve access to the Port, the airport and freight terminals, including the Islington intermodal, and it will also improve the efficiency of the public transport system, all of which accelerates new opportunities for economic development, encourages job creation and slashes travel times for commuters.
Since coming to government just seven short months ago, the Australian Government has instigated the biggest road and rail construction program in our nation’s history. Before the decade is out, we will have invested over $45 billion in major infrastructure projects across the country as part of our broader Economic Action Strategy.
These investments, along with other key reforms in the federal budget to get our economy moving again, will ensure South Australia fulfils its economic potential and becomes one of the premier states to live and do business.
| 4,431
| 2,036
| 2.176326
|
warc
|
201704
|
Nick Sherry Assistant Treasurer 9 June 2009 - 14 September 2010 NO.076 Legislation Introduced to Reform the Taxation of Employee Share Schemes
The Rudd Government has today introduced into Parliament the final form of the legislation to reform the taxation of employee share schemes, an important integrity measure contained in the 2009-10 Budget which will deliver a $135 million boost to the Budget bottom line.
The Assistant Treasurer, Senator Nick Sherry, highlighted the Government's strong support for employee share schemes.
"The Rudd Government believes employee share schemes align the interests of employees and employers, boost productivity and encourage good corporate governance," the Assistant Treasurer said.
"These reforms will better target the employee share scheme tax concessions and improve corporate governance outcomes by encouraging schemes to offer genuine loyalty or performance conditions to gain access to the deferred tax concession."
"The Government has consulted widely with industry experts, including with the Board of Taxation, and with the Australian community to develop the most effective and workable reforms possible."
"I thank the Board of Taxation and the range of stakeholders with whom I have met and received advice during the several stages of this consultation."
As a result of consultation undertaken by both the Government and the Board of Taxation, the legislation and explanatory materials introduced today:
- widen the exposure draft refund provisions to ensure that a refund will not be denied when employee share scheme benefits are forfeited as a result of leaving employment;
- include significant additional guidance and examples of the real risk of forfeiture test, including when forfeiture conditions relating to retirement would constitute a real risk;
- provide clear transitional arrangements for shares and rights acquired before 1 July 2009;
- adjust the exposure draft provisions related to salary sacrifice arrangements to make it administratively easier to offer complex schemes involving both shares or rights with a real risk of forfeiture, and salary sacrifice arrangements;
- exempt employee share trusts from capital gains tax over shares acquired to satisfy the exercise of rights provided under an employee share scheme; and
- amend certain tests in the exposure draft package, such as the tests requiring schemes to be offered to a broad cross-section of employees, to make the rules easier to comply with.
The Assistant Treasurer has previously asked the Board of Taxation to consider two further issues raised in consultation:
- how to best determine the market value of employee share scheme benefits; and
- whether shares and rights under an employee share scheme at a start-up, R&D or speculative focused company should have separate tax deferral arrangements, despite not being subject to a real risk of forfeiture.
The Board of Taxation will report their findings in relation to these issues to the Government by February next year.
Consistent with the current law, tax on employee share scheme benefits cannot be deferred beyond the time when an employee ceases employment with their employer. This has been a feature of the law since 1995.
"I have considered stakeholder requests for the removal of the cessation of employment as a taxing point, but to do this would raise significant tax integrity issues, and punch a major hole in the revenue base, and that is untenable at this time," said the Assistant Treasurer.
As previously announced, the changes to the taxation of employee share schemes will apply from 1 July 2009. The legislation and explanatory materials are available at www.aph.gov.au.
CANBERRA
21 October 2009
| 3,721
| 1,626
| 2.288438
|
warc
|
201704
|
Please join Mommybites Boston for a teleclass focused on healthy eating and healthy habits for your family. As moms and families, we're constantly on the go. This includes eating on the go. But are those packaged snacks your children rely on providing them with the nutrients they need to develop mentally and physically? In this session, we'll discuss:
- Key nutrients for healthy behavior and development in children
- Links between nutrition and common health challenges, from asthma and eczema to hyperactivity and behavioral issues
- Challenges with packaged foods and what to watch out for
- Top nutrient-dense snacks for your children's development
- Strategies for incorporating new, nutrient-dense foods into your child's diet
*Dial-in information sent upon completing your registration. Not sure if you can make the teleclass? Don't worry! Everyone who registers will receive a link to the taped call within 3-5 business days, so be sure to sign up.
Hosted By: Danielle Shea Tan is the Founder of Healthy Mamas for Happy Families. Danielle is a busy mama: nutrition and wellness coach, wife, friend, daughter, yogi, foodie, bleeding heart, world traveler…yes, busy! She is also a Certified Health Coach from the Institute for Integrative Nutrition (2012). She started Healthy Mamas for Happy Families with the goal of helping mamas become healthy role models for their families.
| 1,400
| 762
| 1.83727
|
warc
|
201704
|
Zopa Review – Borrowing with Zopa
When it comes to borrowing money most of us just head straight to our local bank to ask for a loan. After all, this is what we’ve always done and what the generations before us did as well.
This is a fairly sensible approach and until recently it was probably the only way of borrowing money that I would have considered. However, the internet has provided us with a financial revolution in so many ways and now peer to peer lending is an option to be carefully considered.
Zopa is the biggest peer to peer lending site in the UK and it was also the first to get going anywhere in the world. So how does borrowing with Zopa work and what are the benefits? This is what we are discussing in this Zopa review.
The Way it Works
The first thing we need to understand is that the money you borrow through Zopa comes from a number of different lenders. People can sign up to either borrow money or to lend it. To keep things as watertight as possible the money a person lends is then split up into small chunks and loaned out to a number of borrowers. This means that you won’t be borrowing directly from one person but rather receiving the funds from a number of sources. Everything is arranged online and it is easy to track the progress of your account over time. To date over 80,000 people have borrowed in this way and more than 57,000 lenders have chipped in with money to fund those loans.
The Criteria
We are used to banks checking our financial details and running credit checks on us to make sure that we can be trusted to pay back a loan, so how does Zopa work? Well, the idea is very similar. To start the ball rolling you need to meet their eligibility criteria. This means being over 20 years of age and having lived in the UK for a minimum of the last 3 years. You also need to have a minimum salary of £12,000 and a good credit history. The criteria is very clear, so it should be easy to work out quickly whether or not you will be able to get the money you need in this way.
What You Can Borrow
Of course, we all have different borrowing needs. You might need quite a large loan to buy a new car, consolidate existing debts or grow a business (sole traders only with Zopa; if you are a company looking to borrow, try LendingCrowd). Or you might just need a smaller amount for some basic home improvements. The amount you can borrow from Zopa starts at £1,000 and goes up to a maximum of £25,000, so it should cover the majority of possibilities. The repayment terms offer a high degree of flexibility, ranging from 2 to 5 years. If you are planning on taking out a loan, it is important to work out exactly how much you need to borrow and how long you want to take to pay it back. Borrow too little and you risk having to go back and ask for more money; borrow too much and you'll be paying interest on money you didn't need. It's important to know this before entering any loan contract.
Equally, it is also vital to choose the most appropriate loan term. If you try and pay it off more quickly than you can afford to then you could run into problems by over stretching your finances. The opposite situation is one where you take longer than you really needed to pay off a loan. In this case, you will pay back more interest than you really need to, as well as have the debt outstanding for longer than you would like.
While it is important to take into consideration the term of the loan, one positive aspect of borrowing money with Zopa is that there are no early repayment fees, giving you the flexibility to repay the loan early or make extra payments to pay down the balance of your loan without penalty.
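As a rough illustration of how the loan term changes the total interest paid, here is a short calculation using the standard amortized-loan formula. The 7.5% APR and £10,000 amount are purely illustrative, not Zopa's rates.

```python
def monthly_payment(principal, annual_rate, years):
    """Standard amortized-loan repayment formula."""
    r = annual_rate / 12                  # monthly interest rate
    n = years * 12                        # number of monthly repayments
    return principal * r / (1 - (1 + r) ** -n)

for years in (2, 3, 4, 5):
    pay = monthly_payment(10_000, 0.075, years)
    total_interest = pay * years * 12 - 10_000
    print(f"{years} yrs: {pay:,.2f}/month, {total_interest:,.2f} total interest")
```

The same amount borrowed over 5 years instead of 2 costs well over twice as much in total interest, which is exactly why matching the term to what you can comfortably afford matters.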
Why Do It?
If you have always used your bank to borrow money then what would be likely to make you change over to an internet site offering peer to peer lending? Well, the first point to bear in mind is that cutting the traditional bank out of the process can produce more competitive rates of interest. If we can cut even a little bit off our interest rates when we have to borrow money then that has got to be a good thing.
After all, over a term of several years even a little bit of extra interest will add up. Since we don’t want to owe any more money than we absolutely have to, keeping the interest rate low makes sense. Other people might be upset or angry at their bank for problems such as what they see as being excessive charges. Again, with Zopa they do not charge you for making additional repayments or deciding to close the loan early. Whatever your reason is for looking at a new way of borrowing money like peer to peer lending, the modern approach adopted by Zopa looks as though it is definitely worth considering.
Summary
Peer to peer lending is a clever way of borrowing money which will hopefully lead to lower rates of interest. The process seems slick and trustworthy enough to make giving it a try well worth considering. Zopa is well trusted by its customers, as illustrated by being voted most trusted loan provider in the Moneywise Customer Service Awards from 2010 to 2016.
| 5,282
| 2,400
| 2.200833
|
warc
|
201704
|
Wait, what? It's Chomsky's fault that there's linguistic bias? Here's the start of the article:
"It's time to challenge the notion that there is only one way to speak English. Why do we persist in thinking that standard English is right, when it is spoken by only 15% of the British population? Linguistics-loving Harry Ritchie blames Noam Chomsky. Did you see that great documentary on linguistics the other night? What about that terrific series on Radio 4 about the Indo-European language family tree? Or that news report on language extinction? It is strange that none of those programmes happened, or has ever happened: it's not as if language is an arcane subject. Just as puzzling is the conspicuous lack of a properly informed book about language – either our own or language in general."
Oh. So, who gets the blame? After some meandering commentary on Pinker's Language Instinct, among other things, we eventually learn:
"I put it down to the strange way that the discipline developed under the aegis of the man who has dominated and defined it since the late 50s, the father of modern linguistics, Chomsky."
And this is no vague blame, just about the popular impact or perception of linguistics:
the wholesale acceptance of Chomsky's rationalist assumptions has meant that the discipline has been hunting for unicorns while neglecting many key areas of language. There is still little research being carried out on, for example, environmental influences on children's language acquisition. Most pressingly of all, too little work is being done to record the languages currently facing extinction. By one estimate, 95% of the 7,000 languages now spoken in the world are in danger of dying out. Recording these should have been a priority.
I just googled 'endangered languages' and got over 2,000,000 hits. Google 'generative grammar' and you get over 500,000. That probably roughly reflects the current levels of activity on those two fronts. I would add something on 'environmental influences on children's language acquisition', but I'm not entirely sure what it means.
Ultimately, Chomsky "turned grammar into a technical subject full of jargon and algebra studied on whiteboards by men with beards". Yeah, that's certainly killed physics and cognitive science and whatnot.
I eventually realized I was reading science fiction.
| 2,364
| 1,243
| 1.90185
|
warc
|
201704
|
Adorno's Aesthetic Theory Revisited
A discussion of Theodor Adorno's Aesthetic Theory is bound to look significantly different today than it would have looked when the book was first published in 1970, or when it first appeared in English translation in the 1980s. In The Fleeting Promise of Art, Peter Uwe Hohendahl reexamines Aesthetic Theory along with Adorno's other writings on aesthetics in light of the unexpected return of the aesthetic to today's cultural debates.
Is Adorno’s aesthetic theory still relevant today? Hohendahl answers this question with an emphatic yes. As he shows, a careful reading of the work exposes different questions and arguments today than it did in the past. Over the years Adorno’s concern over the fate of art in a late capitalist society has met with everything from suspicion to indifference. In part this could be explained by relative unfamiliarity with the German dialectical tradition in North America. Today’s debate is better informed, more multifaceted, and further removed from the immediate aftermath of the Cold War and of the shadow of postmodernism.
Adorno’s insistence on the radical autonomy of the artwork has much to offer contemporary discussions of art and the aesthetic in search of new responses to the pervasive effects of a neoliberal art market and culture industry. Focusing specifically on Adorno’s engagement with literary works, Hohendahl shows how radically transformative Adorno’s ideas have been and how thoroughly they have shaped current discussions in aesthetics. Among the topics he considers are the role of art in modernism and postmodernism, the truth claims of artworks, the function of the ugly in modern artworks, the precarious value of the literary tradition, and the surprising significance of realism for Adorno.
Translator-Authors in the Age of Goethe
The turn of the nineteenth century was a particularly fertile period in the history of translation theory and practice. With an unprecedented number of works being carefully translated and scrutinized, this era saw a definite shift in the dominant mode of translation. Many translators began attempting, for the first time, to communicate the formal characteristics, linguistic features, and cultural contexts of the original text while minimizing the paraphrasing that distorted most eighteenth-century translations. As soon as these new rules became the norm, authorial translators—defined not by virtue of being authors in their own right but by the liberties they took in their translations—emerged to challenge them, altering translated texts in such a way as to bring them into line with the artistic and thematic concerns displayed in the translators’ own “original” work. In the process, authorial translators implicitly declared translation an art form and explicitly incorporated it into their theoretical programs for the poetic arts. Foreign Words provides a detailed account of translation practice and theory throughout the eighteenth and early nineteenth centuries, linking the work of actual translators to the theories of translation articulated by Goethe, Wilhelm von Humboldt, and, above all, Friedrich Schleiermacher. Employing a variety of critical approaches, author Susan Bernofsky discusses in depth the work of Kleist, Hölderlin, and Goethe, whose virtuoso translations raise issues that serve to delineate a theory of translation that has relevance at the turn of the twenty-first century as well. Combining a broad historical approach with individual readings of the work of several different translators, Foreign Words paints a full picture of translation during the Age of Goethe and provides all scholars of translation theory with an important new perspective.
Nationalism, Cosmopolitanism, and the "Bildungsroman"
The Bildungsroman, or "novel of formation," has long led a paradoxical life within literary studies, having been construed both as a peculiarly German genre, a marker of that country's cultural difference from Western Europe, and as a universal expression of modernity. In Formative Fictions, Tobias Boes argues that the dual status of the Bildungsroman renders this novelistic form an elegant way to negotiate the diverging critical discourses surrounding national and world literature.
Since the late eighteenth century, authors have employed the story of a protagonist's journey into maturity as a powerful tool with which to facilitate the creation of national communities among their readers. Such attempts always stumble over what Boes calls "cosmopolitan remainders," identity claims that resist nationalism's aim for closure in the normative regime of the nation-state. These cosmopolitan remainders are responsible for the curiously hesitant endings of so many novels of formation.
In Formative Fictions, Boes presents readings of a number of novels that have always been felt to be particularly "German" (among them Goethe's Wilhelm Meister's Apprenticeship, Karl Leberecht Immermann's The Epigones, Gustav Freytag's Debit and Credit, Alfred Döblin's Berlin Alexanderplatz, and Thomas Mann's Doctor Faustus) and compares them with novels by such authors as George Eliot and James Joyce to show that what seem to be markers of national particularity can productively be read as topics of world literature.
The Matter of Obscenity in Nineteenth-Century Germany
Fragile Minds and Vulnerable Souls investigates the creation of "obscene writings and images" as a category of print in nineteenth-century Germany. Sarah L. Leonard charts the process through which texts of many kinds—from popular medical works to stereoscope cards—were deemed dangerous to the intellectual and emotional lives of vulnerable consumers. She shows that these definitions often hinged as much on the content of texts as on their perceived capacity to distort the intellect and inflame the imagination.
Leonard tracks the legal and mercantile channels through which sexually explicit material traveled as Prussian expansion opened new routes for the movement of culture and ideas. Official conceptions of obscenity were forged through a heterogeneous body of laws, police ordinances, and expert commentary. Many texts acquired the stigma of immorality because they served nonelite readers and passed through suspect spaces; books and pamphlets sold by peddlers or borrowed from fly-by-night lending libraries were deemed particularly dangerous. Early on, teachers and theologians warned against the effects of these materials on the mind and soul; in the latter half of the century, as the study of inner life was increasingly medicalized, physicians became the leading experts on the detrimental side effects of the obscene. In Fragile Minds and Vulnerable Souls, Leonard shows how distinctly German legal and medical traditions of theorizing obscenity gave rise to a new understanding about the mind and soul that endured into the next century.
Narration, Rhetoric, and Reading
Franz Kafka: Narration, Rhetoric, and Reading presents essays by noted Kafka critics and by leading narratologists who explore Kafka’s original and innovative uses of narrative throughout his career. Collectively, these essays by Stanley Corngold, Anniken Greve, Gerhard Kurz, Jakob Lothe, J. Hillis Miller, Gerhard Neumann, James Phelan, Beatrice Sandberg, Ronald Speirs, and Benno Wagner examine a number of provocative questions that arise in narration and narratives in Kafka’s fiction. The arguments of the essays relate both to the peculiarities of Kafka’s story-telling and to general issues in narrative theory. They reflect, for example, the complexity of the issues surrounding the “somebody” doing the telling, the attitude of the narrator to what is told, the perceived purpose(s) of the telling, the implied or actual reader, the progression of events, and the progression of the telling. As the essays also demonstrate, Kafka’s narratives still present a considerable challenge to, as well as a great resource for, narrative theory and analysis.
A Tenuous Legacy
In German-Jewish Thought and Its Afterlife, Vivian Liska innovatively focuses on the changing form, fate, and function of messianism, law, exile, election, remembrance, and the transmission of tradition itself in three different temporal and intellectual frameworks: German-Jewish modernism, postmodernism, and the current period. Highlighting these elements of the Jewish tradition in the works of Franz Kafka, Walter Benjamin, Gershom Scholem, Hannah Arendt, and Paul Celan, Liska reflects on dialogues and conversations between them and on the reception of their work. She shows how this Jewish dimension of their writings is transformed, but remains significant, in the theories of Maurice Blanchot and Jacques Derrida, and how it is appropriated, dismissed, or denied by some of the most acclaimed thinkers at the turn of the twenty-first century, such as Giorgio Agamben, Slavoj Žižek, and Alain Badiou.
Women and the Import of Fiction, 1866-1917
In postbellum America, publishers vigorously reprinted books that were foreign in origin, and Americans thus read internationally even at a moment of national consolidation. A subset of Americans’ international reading—nearly 100 original texts, approximately 180 American translations, more than 1,000 editions and reprint editions, and hundreds of thousands of books strong—comprised popular fiction written by German women and translated by American women. German Writing, American Reading: Women and the Import of Fiction, 1866–1917 by Lynne Tatlock examines the genesis and circulation in America of this hybrid product over four decades and beyond. These entertaining novels came to the consumer altered by processes of creative adaptation and acculturation that occurred in the United States as a result of translation, marketing, publication, and widespread reading over forty years. These processes in turn de-centered and disrupted the national while still transferring certain elements of German national culture. Most of all, this mass translation of German fiction by American women trafficked in happy endings that promised American readers that their fondest wishes for adventure, drama, and bliss within domesticity and their hope for the real power of love, virtue, and sentiment could be pleasurably realized in an imagined and quaintly old-fashioned Germany—even if only in the time it took to read a novel.
The Troubled Inheritance of Modern Literature
| 10,631 | 4,783 | 2.222664 | warc | 201704 |
Smile. Your ignition interlock device will now take your picture as you blow.
The Washington State Patrol believes too many drivers are using passengers, including children, to blow into the devices to get the cars started. So starting Jan. 1, all new devices will come with digital cameras. They will snap pictures of who’s blowing into the device so the state patrol can tell for sure who’s using them.
“We see it on a regular basis,” State Patrol Sergeant Ken Denton, who oversees the state’s interlock program, said. “How often? I can’t really put a number on that, but it is happening.”
Interlocks are required on the vehicles of those who’ve been accused or convicted of impaired driving. The machine requires a breath sample under the legal alcohol limit from the driver before allowing the car to start.
“We’ve even heard stories of people trying to use portable air compressors to take the test,” said Lt. Rob Sharpe, commander of the Washington State Patrol’s Impaired Driving Section.
Washington’s law allows those whose drivers’ licenses would normally be suspended to drive legally with an interlock. It was an acknowledgment that those accused or convicted of impaired driving have jobs and family obligations that require a car.
“History taught us that these people were going to drive anyway,” said Captain Rob Huss, commander of WSP’s Office of Government and Media Relations. “The Ignition Interlock License gives them a way to drive legally, but gives the rest of us some assurance that they’re sober and safe.”
While the camera won’t bust anyone right away, the machine’s software records failures or attempts to tamper with the device. The company that leases the interlocks downloads the information and, in turn, contacts the State Patrol.
“We do make personal visits to drivers if we have evidence they have tried to fool the machine,” Sharpe said. “Having a picture will be the best possible evidence that someone was trying to cheat.”
In addition to those newly convicted of DUI, drivers who have long-term interlock requirements will have to add cameras to their systems.
| 2,217 | 1,067 | 2.077788 | warc | 201704 |
America Recycles Day – What Are You Recycling? By Diane MacEachern, NABBW’s Going Green Expert
Today is America Recycles Day. Recycling is important because it saves energy, reduces trash, and helps stop climate change. Here's what I recycle, and how I've changed what I buy so I can buy less in the first place, reuse more, and throw away less. And keep reading for information on how you can recycle and reduce the number of catalogs you receive.
Food and soda cans – I recycle glass, metal and plastic containers in my community's curbside recycling program. But I also use a Soda Stream water spritzer so I almost never buy bottled drinks anymore. I spritz water myself, then add various flavorings and sweeteners depending on what I want to drink. I'm saving a lot of money doing this, too.
Beer and wine bottles – I generally buy glass rather than cans or plastic bottles. If I'm having a party, I buy larger bottles of wine, which use less material per serving than regular-sized bottles.
Plastic milk jugs – I can buy milk in glass bottles at my local food coop (though they cost about $2 a gallon more than milk in plastic jugs).
Plastic laundry jugs (when I use liquid detergent) – I generally prefer to use powdered detergent in cardboard boxes, which are better to recycle than plastic jugs. I also use concentrated detergent, so I use less per load of laundry, and extend the life of the package.
Clothes – I recycle old socks and t-shirts into cleaning rags. I donate most of my used clothes to the local thrift shop or the neighborhood church.
Electronics – I recycle old monitors, computers, fax machines, chargers, phones, and pretty much anything else with a cord on it, taking most of it to Best Buy or Staples, which accept almost any reasonably-sized electronics at no charge. I even recycled my TV!
Lightbulbs – I can now recycle my light bulbs at my city's community waste facility. Some stores, like Ikea and Home Depot, also accept them.
Paper (newspapers, junk mail, magazines) – All of my paper goods can be recycled curbside, but the trick is to reduce the amount of paper coming into my house in the first place. I read most newspapers and magazines online, and have used Catalog Choice to reduce the number of unwanted magazines and catalogs I receive.
Plastic bags – I use reusable cloth bags instead of plastic bags, but if I have excess bags, I recycle them at my grocery store.
Toys – I have given my kids' used toys to neighborhood kids or donated them to the local thrift store.
Furniture – I have sold unwanted furniture through my neighborhood list-serv, or simply given it away to others who can use it. EBay.com, CraigsList.com and FreeCycle.org are also great ways to unload sofas, chairs, lamps, and dining sets you no longer want or need.
Appliances – The easiest appliance to recycle in my neighborhood is actually my refrigerator. Here's how I not only recycled my old refrigerator, but received $200 when I did it.
Food – The ultimate way to recycle food is to compost it. I have a barrel composter in my backyard that helps me turn fruit and vegetable scraps, egg shells, and other non-meat or dairy waste into a rich fertilizer I can put on my garden.
REDUCE UNWANTED CATALOGS IN THE FIRST PLACE
This year, America Recycles Day has teamed up with Catalog Choice to help consumers reduce the number of unwanted catalogs they receive in the mail. It\’s free and quick to sign up, and much easier than calling individual companies to try to get your name off their list.
What do you recycle? Please let us know!
| 3,749 | 1,790 | 2.094413 | warc | 201704 |
LAS VEGAS – Today’s world is a hacker’s paradise.
As nearly all facets of life become more dependent on digital conveniences, the opportunities for gifted tech manipulators have become virtually endless. Cars, homes, safes, phones and guns have become routine fodder for hackers with intentions both noble and nefarious.
Thousands of people gathered in Las Vegas over the weekend for DefCon 23, one of the world’s most popular hacker conferences. Several demonstrations at the event revealed frightening vulnerabilities in the security of the so-called smart devices on which the world is reliant.
Below are six of the scariest hacks revealed at 2015’s DefCon:
A $32 device that unlocks cars, opens garage doors
Hackers have been taking advantage of remote locking systems since they became commonplace, but a handheld device unveiled at DefCon has made the practice easier than ever. The RollJam — designed by security researcher Samy Kamkar — intercepts radio waves containing the codes that are sent from a key fob to the vehicle it controls, according to Wired.
The device can also reportedly be used to manipulate remote garage door openers, giving tech-savvy thieves a route into homes. While the RollJam sounds potentially complex, it’s valued at about $32 per unit, according to CNBC.
Hackers can manipulate death records fairly easily
One of the most illuminating presentations from DefCon 23 came from a computer security expert who warned about the ease of falsely declaring someone officially dead. The demonstration showed how a hacker could pose as a doctor or funeral home director for the purposes of forging death certificates. According to Australia’s ABC News, a similar process could be used to digitally “birth” nonexistent babies.
Hacker uses digital tech to destroy chemical barrel
The hacking demonstrations at DefCon weren’t all limited to digital space. One involved using a remotely manipulated device to implode an enclosed metal barrel — creating a frightening what-if scenario.
Hacker Jason Larsen used code to crush the barrel in front of an audience, sending a shockwave through the room. According to Wired, the hack worked by simultaneously vacuum-packing the drum and raising its temperature. The demonstration showed what could happen if a volatile chemical plant were attacked by hackers.
Breaking into a Brink’s safe takes 60 seconds
The name Brink’s is perhaps synonymous with money security, but a team of hackers showed how quickly one of the company’s digital safes can be opened without using dynamite or drills. A small USB stick can be inserted into a port on the Brink’s CompuSafe Galileo, which will manipulate the safe’s locks and open it in about a minute, according to eWeek, which talked to the hackers behind the device.
Hackers can break out of house arrest
Note to legal professionals: When sentencing criminal hackers, it’s probably best to avoid house arrest as an option. At a DefCon demonstration on Friday, a hacker revealed he’d discovered a way to fool the GPS of location-tracking devices often worn on the ankle of a person under house arrest.
The demonstration worked on the single model tested, but the hacker told Vice he was confident other models have similar weaknesses.
GPS system hacking could send self-driving cars ‘over a cliff’
Manipulating GPS navigation systems is nothing new in the hacking world, but according to Forbes, a team of Chinese researchers proved it’s easier than ever at DefCon. At its most innocent, hacking into a car’s GPS could be used to give a driver the wrong directions. But one expert told Forbes it could also be used to send a self-driving car into a deadly crash.
Source: http://www.abcactionnews.com/news/national/6-scary-revelations-from-2015-defcon-hacker-conference
| 4,125 | 2,007 | 2.055306 | warc | 201704 |
The holidays are a joyous time for many – but for others, including many who suffer from chronic illness, it can be a difficult time. What the head of the Southern Pain Society calls the “Holiday Blues” or the “Charlie Brown Christmas” may occur at any holiday or vacation time, but most commonly happens during the December holidays.
We asked one of our contributors, Dr. Geralyn Datz, about the difficulty that some people have in the holiday season. She says the sadness and even depression can come on for a variety of reasons, such as high physical stress, as well as psychological, financial, and family tension. Dr. Datz is a licensed clinical psychologist who specializes in behavioral medicine.
For some pain patients, it can be triggered by memories of what life was like when they were pain free, or simply because the pain just hurts.
What helps manage it?
The answers are not surprising, but for pain patients they often just aren’t easy to do:
Rest and get enough sleep
Regular exercise
Eat a balanced, healthy diet
Dr. Datz talks about coping…and has some tips on what to do.
Surround yourself with supportive people and reconnect with old friends.
Talk with family about the limitations your pain imposes. “Be honest with yourself and with your family about what you can and cannot do,” she said.
If you are religious, “focusing on the spiritual significance of the holidays can also help.”
Dr. Datz leads the Southern Pain Society which was incorporated in 1989 and is a region of the American Pain Society covering the 18 southern states and Puerto Rico.
“Our mission is to serve people with pain by advancing research and treatment and to increase the knowledge and skill of the regional professional community,” she said.
The Christmas holidays aren’t easy for the chronic pain patient.
In the comment section of this article, tell us how you are doing during the holidays and what you do to cope. We’ll take some of the comments and share them in a story on Christmas Day.
Your friends at the National Pain Report wish you a Merry Christmas and happy holiday season, and know that we are thinking about you.
| 2,223 | 1,140 | 1.95 | warc | 201704 |
About PMD

Clinical features of PMD

Pelizaeus-Merzbacher disease (PMD), named after two German physicians who first described its most important clinical features, is a rare condition caused by mutations affecting the gene for proteolipid protein 1 (PLP1, formerly called PLP). The PLP1 gene lies on the X chromosome, so most affected individuals are males who inherit the mutant or abnormal gene from their mothers. Rarely, females can have symptoms.

Clinically, Pelizaeus-Merzbacher disease usually begins during infancy, and signs of the disease may be present at birth or in the first few weeks of life. The first recognizable sign is a form of involuntary movement of the eyes called nystagmus. The eye movements can be circular, as if the child is looking around the edge of a large circle, or horizontal to-and-fro movements. The nystagmus tends to improve with age. Some infants have stridor (labored and noisy breathing). Infants may show hypotonia (lack of muscle tone; floppiness) at first, but most eventually, over several years, develop spasticity (a type of increased muscle tone or stiffness of the muscles and joints). Motor and intellectual milestones are delayed; however, the intellectual delay is often more apparent than real if care and time are taken to evaluate the children. Most PMD individuals learn to understand speech, but verbal output can vary from normal speech to almost complete mutism. Head and trunk control may be a problem, and wavering or tremor of the upper body (titubation) when sitting is common. Trouble with coordination (ataxia) is also common, and dexterity of the arms and fingers is usually reduced. Vision is usually reduced to some degree, probably from the effects of the myelin abnormality, but also from the nystagmus.

Although the following terms are somewhat artificial, they are used in many textbooks and medical reports. Connatal PMD refers to the most severe form of the disease, with neurological signs, such as nystagmus, stridor and hypotonia, being noticeable from birth to within the first few weeks of life. Seizures may occur only in these children. These children usually are unable to talk or walk, although they may comprehend quite well. The Classical PMD syndrome is the most commonly seen form of the disease. Nystagmus usually begins in the first 2 to 6 months. Later, delays in the usual developmental milestones, such as rolling over, sitting up, standing, walking and speech, are seen. Muscle tone may be hypotonic, although this is not as noticeable as in the connatal child. Most of these children do learn to talk, although they may have slurred speech (dysarthria). Some of these children learn to walk with assistance, such as walkers, but most are not able to. Virtually all PMD patients have ataxia.

We now know that some mutations of the PLP1 gene may result in a less severe syndrome, called spastic paraparesis (weakness and stiffness of the legs) or SPG2, where the major sign is gait difficulty due to weakness and spasticity of the legs. One family has been reported with a mutation that causes tremor and/or attention deficit disorder as the major abnormalities. Peripheral nerve myelin is usually not affected; however, we have discovered that the rare families whose mutations prevent the synthesis of any PLP1 (the PLP1 null syndrome) have a mild peripheral myelin disorder but less severe overall neurologic difficulties.
The clinical diagnosis generally includes the clinical findings listed above along with a family history consistent with X chromosome transmission (that is, being passed down by mothers, and never being passed from an affected father to his son). The most useful screening test after the neurologic examination and family history is a brain magnetic resonance imaging (MRI) scan, which is a very sensitive test for leukodystrophies (diseases of the white matter); it is most reliable if done after one or two years of age (the time when the major white matter pathways in the brain are developing). Other tests should also be done to exclude other leukodystrophies, such as the lysosomal storage diseases (for example, metachromatic leukodystrophy, Salla disease and Krabbe disease) and adrenoleukodystrophy. Evoked potentials testing is also helpful and should show abnormal central conduction but normal or near normal peripheral conduction. The definitive test is demonstration of a pathologic mutation of the PLP1 gene.

PMD genetics

There are two major aspects of the disease that are important to really understand it. The first is the genetics of PMD and the second relates to the effect of PLP1 mutations on the nervous system. First I'll describe the genetics.

PMD occurs when there is a change (or mutation) in the body's "blueprint" material. These blueprint materials, called genes, control the way a body is made, what it looks like, and how it works. Most genes come in pairs. One gene of each pair comes from the mother's egg and the other from the father's sperm. In the tens of thousands of gene pairs, sometimes one will be changed. The mutation may be inherited or may happen by itself. Sometimes a mutated gene will not cause problems. Other times a gene with a mutation will cause the body not to work correctly, so that a person will have a genetic condition such as PMD.

Genes are carried on chromosomes. Most individuals have 46 chromosomes in each cell in their body. The chromosomes come in 23 pairs, with the first 22 pairs being identical in males and in females. The last pair is the sex chromosomes; females have two X chromosomes, while males have one X and one Y chromosome. The chromosome can be thought of like a bookcase and the gene as a book located on the bookcase. DNA (deoxyribonucleic acid), which is the basic component of the gene, is like the letters in the book. Genetic information is stored, and passed down from generation to generation, in the form of the precise sequence of DNA letters or bases.

Since the gene for PMD is located on the X chromosome, the disease typically affects only boys or men in a family. Technically, this is called X-linked inheritance. Remember that females have two X chromosomes while males have one X and one Y chromosome. If there is a gene on the X chromosome which is not working properly, males will be affected more often than females, since females likely have a gene on the other X chromosome which does work properly, and this usually compensates for the defective X chromosome. Females who carry the gene for PMD therefore typically are not affected, since the PLP1 gene on the other X chromosome is normal. Males with PMD are usually not able to have children, so when the disease occurs in several generations it is passed on by women who are carriers for the PMD mutation. Women who carry the PMD gene have a 50% or 1 in 2 chance of passing it on to their sons and their daughters. These odds are the same for every pregnancy.
What happened in one pregnancy does not in any way influence the odds for the next pregnancy. Sons who inherit the gene would be affected, whereas daughters would be carriers. If a daughter did not inherit the PMD gene, then she would not pass PMD on to her children.

Basic molecular biology

Deoxyribonucleic acid (DNA), which carries the instructions that tell cells how to make proteins, is made up of four chemical bases or letters, abbreviated C, T, G, and A (for cytosine, thymidine, guanine and adenine). A DNA molecule is simply a long chain of these bases strung together. The information is the sequence of bases. This is like all the information stored in a book in the order of specific letters of the alphabet, or the information on a computer disk represented by a long string of zeroes and ones. In fact, each chromosome is basically a single molecule of DNA. The largest human chromosome (the first) has about 120,000,000 bases. A mutation (any alteration of the DNA) that affects only a single base (one letter) is called a point mutation. Other types of mutations can occur as well, including insertions (additions of DNA into a gene), deletions (removal of part of a gene), and duplications, where entire genes are present in one or more additional copies. The gene responsible for PMD is the proteolipid protein 1 gene (PLP1), and it is located on the X chromosome.

PLP1 duplication

The types of mutations that are known to cause PMD fall into two general categories: point mutations and duplications. In just the past few years it has been discovered that most PMD is caused by duplications (or rarely triplication or even quintuplication) of the entire PLP1 gene. This seems to be the case for PMD families around the world, and we still do not understand why it occurs. The duplications appear to account for about 50 to as much as 75% of those families with PMD. We currently believe that the duplication results in too much otherwise normal proteolipid protein being made. Furthermore, this excessive PLP1 is toxic to the cells (called oligodendrocytes) trying to make myelin. There can be quite a lot of difference in the neurologic difficulties between families with duplications. One reason for this may be differences in the size of the duplication in different families. While we believe that members of the same family will have the same size duplication, there is known to be a big difference in duplication size between different families. The smallest duplications known are around 100,000 DNA bases in length, but the biggest ones found so far are around 5 million bases. The PLP1 gene itself is about 30,000 bases long. Other factors that may explain the differences between families are which genes other than PLP1 are also duplicated, and whether some of the genes that come before or after PLP1 on the X chromosome are mutated by the duplication. Further research will be needed to understand the variability between families (and even within families) affected by PMD.

PLP1 point mutations

Point mutations are usually mistakes in the gene where one of the bases or 'letters' is replaced by the wrong one (technically called a base substitution). Depending upon where the letter is and what it is replaced by, the mutation could result in:

No effect
One amino acid in the protein encoded by the gene is replaced by the wrong amino acid (amino acid substitution).
Depending on the place and nature of the amino acid substitution, these mutations can have mild or severe effects. PLP1 with just one wrong amino acid at a critical location is toxic to myelin-forming cells, just as an overabundance of normal PLP1 is (and may even be more toxic than an overabundance).
The protein is prematurely terminated (ends at the wrong place)
Disturbance in the regulation of the gene
Disturbance in splicing of the gene

Mutations can also result in the gain or loss of one or more bases. If this occurs in the region of the gene that codes for protein, then this might not only result in the gain or loss of one or more amino acids in the protein, but also might cause the protein to be completely disturbed after the place the mutation occurs, because the machinery that decodes the genetic information into protein (called ribosomes) gets out of register with the proper code and just makes scrambled protein after the mutation site. Since there are only 4 letters in the genetic alphabet and they are read in words 3 letters long, there are 64 possible genetic words or codons. Of these 64 codon possibilities, 61 of them code for one of 20 possible amino acids. The remaining 3 codons are called termination codons and tell the protein synthesis machinery to stop making protein. Notice that there are more codon possibilities than there are amino acids. Some amino acids have more than one codon that can encode them, whereas others have only one or two codon possibilities. Proteins are simply chains of amino acids hooked together like beads on a chain.

To make a simple analogy, take the following simple sentence: The red fox ran far and sat. Now if one of the letters is mistyped, as happens with a base substitution mutation, the meaning of the sentence changes: The red sox ran far and sat. These missense mutations may sometimes not be harmful or may cause mild disease, but if they occur at an important location in the protein they can be quite harmful. If, as in the case of a base deletion, all the words get jumbled up after the mutation, because the protein synthesis machinery has to read the code three letters at a time, these are called frame shift mutations: The red oxr anf ara nds at. The severity of this type of mutation depends mostly upon where the mutation is located. If the frame shift occurs at the end of the gene, it may not cause severe problems, whereas a mutation near the beginning of the gene will typically have severe consequences (see the short sketch at the end of this section). Although not strictly point mutations, the effects of mutations that delete or insert a small number of bases (for example, two to a couple of dozen) are similar to what happens with single base mutations.

Many PLP1 mutations have been identified. Most of these point mutations are unique to a specific family. Since these are unique mutations, it is not easy to predict for a PMD patient with one of these mutations what will happen over the course of his life, especially if there is no prior history of the disease in the family. A major goal of genetic research on PMD focuses on the clinical signs caused by specific mutations in PLP1. This is called genotype-phenotype correlation.

To make matters even more complicated, we now know that most genetic information coding for proteins is broken up into chunks that are separated, sometimes by very large distances, from each other. These chunks are called exons, and the DNA segments that separate the exons are called introns. The genetic information in the nucleus of a cell is first transcribed to molecules of ribonucleic acid (RNA); then the introns are removed from the RNA to generate the messenger RNA (mRNA) molecules that have all the protein coding information nicely spliced together.
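To make the reading-frame analogy above concrete, here is a minimal Python sketch. It is an illustration added here, not part of the original article: the sentence strings stand in for DNA, and read_codons is a hypothetical helper that mimics a ribosome reading three letters at a time.

```python
def read_codons(seq):
    """Read a sequence three letters at a time, like a ribosome reading codons."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

normal = "THEREDFOXRANFARANDSAT"
print(read_codons(normal))      # ['THE', 'RED', 'FOX', 'RAN', 'FAR', 'AND', 'SAT']

# Base substitution (missense): one letter changes, so only one "word" is altered
missense = "THEREDSOXRANFARANDSAT"
print(read_codons(missense))    # ['THE', 'RED', 'SOX', 'RAN', 'FAR', 'AND', 'SAT']

# Single-base deletion (frame shift): every word after the mutation is scrambled
frameshift = "THEREDOXRANFARANDSAT"
print(read_codons(frameshift))  # ['THE', 'RED', 'OXR', 'ANF', 'ARA', 'NDS']
```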
The mRNA then leaves the nucleus to serve as the blueprint for the protein synthesis machinery in the cytoplasm (the rest of the cell that surrounds the nucleus). We know that the PLP1 gene is broken up into 7 exons, and it turns out that one of the exons (the third one) sometimes is partially spliced out, resulting in a protein that looks like PLP1 but is missing 35 amino acids in the middle of the protein. The smaller protein is called DM20. There are some PMD-causing mutations that affect how the PLP1 mRNA is spliced together.

We also know that in addition to the regions that code for protein, there are regions of genes that regulate their expression. In order for the right proteins to be made in the right organs and in the right amounts, there are many processes that have to be regulated very precisely. One important type of regulation occurs in the nucleus, which has to decide which genes to turn on and which to turn off, and by how much. Some DNA sequences that lie near, but usually outside of, the protein coding regions function to regulate gene expression or transcription into RNA. Mutations that change these regulatory sequences can have drastic effects on the gene, and might result in the protein being made in too high or too low an amount, or being made in the wrong organ or at the wrong time of life.

PLP1 and myelin

PMD is one of the leukodystrophies, disorders that affect the formation of the myelin sheath, the fat and protein covering (which acts as an insulator) on neural fibers (axons) in the central nervous system, or CNS, which is the brain and spinal cord. About 75% of myelin is made up of fats and cholesterol, and the remaining 25% is protein. PLP1 constitutes about half of the protein of myelin and is its most abundant constituent other than the fatty lipids. New experiments indicate that about half or more of affected individuals have a duplication of an otherwise normal PLP1 gene. Thus, it appears that the presence of too much PLP1 in oligodendrocytes, the cells that make myelin in the central nervous system, is harmful. The point and other small mutations usually cause the substitution of one amino acid for another somewhere in the protein or prevent PLP1 from reaching its full length. This probably results in the protein being unable to fold into the correct shape or to interact with other myelin constituents. These mutant proteins are toxic to oligodendrocytes and prevent them from making normal myelin.

Treatment

Unfortunately, there is currently no cure for Pelizaeus-Merzbacher disease, nor is there a standard course of treatment. Gene therapy and cell transplantation are being explored as possible therapies. For now, however, treatment is symptomatic and supportive, and may include medication for seizures and for the stiffness or spasticity that most PMD patients have. Physical therapy can be helpful in maintaining strength and joint flexibility, and occupational therapy is helpful in maximizing the abilities of a PMD patient. Braces or walkers may enable a child to walk. If speech or swallowing is impaired, a speech/swallowing therapist should be able to provide important guidelines to make speech more understandable and to prevent choking. Orthopedic surgery may help reduce contractures, or locked joints, that can result from spasticity. A physical medicine specialist (also known as a physiatrist or rehab doctor) may be the most effective physician in evaluating a child's needs and coordinating all the different therapists.
A developmental pediatrician should also evaluate each child to assess his abilities and help to design an educational curriculum to maximize his learning and potential. It is important in these developmental assessments to factor in the longer time it takes a PMD child to process information, and also to factor in the motor limitations most kids with PMD have. Periodic developmental assessments should be done to monitor each child's progress.

Genetic counseling

Once a PLP1 gene mutation is identified in a family, it is possible to test family members for the mutation and to provide prenatal diagnosis for parents who have a risk of transmitting this disorder. Such testing, especially for a couple planning a family, or for a woman who wants to know whether she is a carrier, should be done under the guidance of a medical geneticist and/or genetic counselor. Carrier testing is usually deferred until the female is 18 years of age. It is now possible to do preimplantation genetic diagnosis (PGD) for PMD, but this is often not covered by health insurance.

Prognosis

The prognosis for those with Pelizaeus-Merzbacher disease varies. Some mutations are more severe than others and may result in death during childhood, but most patients live into adulthood. Survival into the sixties has been seen. The course of the disorder is usually very slow, with some individuals reaching a plateau and remaining stable for years. However, some do worsen over time, for reasons that we do not understand and that will need further research.

Research

An international group of clinicians and researchers working on Pelizaeus-Merzbacher disease and proteolipid protein has been organized to promote research, to facilitate understanding of disease pathogenesis and the development of specific treatments and, we hope, a cure. In North America, please contact James Garbern for more information.

These articles, available from a medical library, are sources of in-depth information on Pelizaeus-Merzbacher disease:

Boulloche, J. and Aicardi, J. Pelizaeus-Merzbacher disease: clinical and nosological study. Journal of Child Neurology 1:233-9 (1986) [Abstract].
Cailloux, F. et al. Genotype phenotype correlation in inherited brain myelination defects due to proteolipid protein gene mutations. European Journal of Human Genetics 8:837-845 (2000) [Abstract].
Cambi, F. et al. Refined genetic mapping and proteolipid protein mutation analysis in X-linked pure hereditary spastic paraplegia. Neurology 46:1112-7 (1996) [Abstract].
van der Knaap, M. and Falk, J. The reflection of histology in MR imaging of Pelizaeus-Merzbacher disease. AJNR Am J Neuroradiol 10(1):99-103 (1989) [Abstract].
Garbern, J. PLP1-related disorders. GeneReviews (2004).
Garbern, J. Pelizaeus-Merzbacher disease. eMedicine (2005).
Garbern, J., Cambi, F., Shy, M. and Kamholz, J. The molecular pathogenesis of Pelizaeus-Merzbacher disease. Archives of Neurology 56:1210-1214 (1999) [Abstract].
Garbern, J., Cambi, F. et al. Proteolipid protein is necessary in peripheral as well as central myelin. Neuron 19:205-218 (1997) [Abstract].
Gencic, S., Abuelo, D., Ambler, M. and Hudson, L.D. Pelizaeus-Merzbacher disease: an X-linked neurologic disorder of myelin metabolism with a novel mutation in the gene encoding proteolipid protein. Am J Hum Genet 45(3):435-42 (1989) [Abstract].
Gow, A. and Lazzarini, R. A cellular mechanism governing the severity of Pelizaeus-Merzbacher disease. Nature Genetics 13:422-428 (1996) [Abstract].
Hudson, L.D., Puckett, C., Berndt, J., Chan, J. and Gencic, S. Mutation of the proteolipid protein gene PLP in a human X chromosome-linked myelin disorder. Proc Natl Acad Sci U S A 86:8128-31 (1989) [Abstract].
Inoue, K. et al. A duplicated PLP gene causing Pelizaeus-Merzbacher disease detected by comparative multiplex PCR. Am J Hum Genet 59:32-9 (1996) [Abstract].
Mimault, C. et al. Proteolipoprotein gene analysis in 82 patients with sporadic Pelizaeus-Merzbacher disease: duplications, the major cause of the disease, originate more frequently in male germ cells, but point mutations do not. American Journal of Human Genetics 65:360-369 (1999) [Abstract].
Seitelberger, F., Urbanits, S. and Nave, K.-A. Pelizaeus-Merzbacher disease. Handbook of Clinical Neurology, vol. 22 (66) new series, H. Moser, ed. Elsevier Science, Amsterdam (1996).
Trofatter, J.A., Dlouhy, S.R., DeMyer, W., Conneally, P.M. and Hodes, M.E. Pelizaeus-Merzbacher disease: tight linkage to proteolipid protein gene exon variant. Proc Natl Acad Sci U S A 86:9427-30 (1989) [Abstract].
Wolf, N.I., Sistermans, E.A., Cundall, M., Hobson, G.M., Davis-Williams, A.P., Palmer, R., Stubbs, P., Davies, S., Endziniene, M., Wu, Y., Chong, W.K., Malcolm, S., Surtees, R., Garbern, J.Y. and Woodward, K.J. Three or more copies of the proteolipid protein gene PLP1 cause severe Pelizaeus-Merzbacher disease. Brain 128:743-51 (2005) [Abstract].
Woodward, K. and Malcolm, S. Proteolipid protein gene: Pelizaeus-Merzbacher disease in humans and neurodegeneration in mice. Trends in Genetics 15(4):125-128 (1999) [Abstract].
Yool, D.A., Edgar, J.M., Montague, P. and Malcolm, S. The proteolipid protein gene and myelin disorders in man and animal models. Human Molecular Genetics 9:987-992 (2000) [Abstract].

Additional information is available from the following organizations and individuals:

Ms. Patti Daviau, 525 S. Harris, Indianapolis, IN 46222, (317) 635-7359, PDaviau@clarian.org
Ms. Laura Spear, 2 John James Audobon, Marlton, NJ 08053
The PMD Foundation, Inc., Marlton, NJ, dhobson@pmdfoundation.org
The Myelin Project
European Leukodystrophy Association (ELA)
National Organization for Rare Disorders (NORD), P.O. Box 8923, New Fairfield, CT 06812-1783, (203) 746-6518, (800) 999-6673
Hunter's Hope Foundation, PO Box 643, Orchard Park, NY 14127, Toll Free: 1-877-984-HOPE, (716) 667-1212, hunters@huntershope.org
Association for Neuro-Metabolic Disorders, c/o 5223 Brookfield Lane, Sylvania, OH 43560, (419) 885-1497
United Leukodystrophy Foundation, 2304 Highland Drive, Sycamore, IL 60178, (815) 895-3211, (800) 728-5483
National Tay-Sachs & Allied Diseases Association, 2001 Beacon St., Ste. 204, Brookline, MA 02146, (617) 277-4463, (800) 906-8723

The National Human Genome Research Institute has a great deal of information on a wide variety of genetics topics that you might find useful. The Public Broadcasting System has a nice site, The Human Genome, that helps explain genetics. The National Center for Biotechnology Information has excellent online textbooks.
| 24,048 | 10,000 | 2.4048 | warc | 201704 |
March 4, 2008
A University of Iowa researcher has received a $255,762 National Science Foundation (NSF) grant for undergraduate nanoscience and nanotechnology research.
Sarah Larsen, associate professor of chemistry in the College of Liberal Arts and Sciences and grant project director, said that the NSF funding, formally known as a Research Experience for Undergraduates (REU) grant, will provide between eight and 10 summer research positions for undergraduate students interested in studying nanoscience and nanotechnology.
"Through this REU program, which is co-sponsored by the Division of Chemistry and the Engineering Education Centers at NSF, undergraduate students will have a unique opportunity to participate in exciting, interdisciplinary research being conducted by faculty affiliated with the Nanoscience and Nanotechnology Institute at the UI. Applications are currently being accepted and we hope to recruit a diverse group of undergraduate students for the program," she said.
Larsen, who also serves as associate director of the UI Nanoscience and Nanotechnology Institute, noted that the grant was received through the UI institute, a fact that will further promote institute research goals in health and the environment.
Vicki Grassian, institute director, professor of chemistry, and professor of chemical and biochemical engineering in the UI College of Engineering, underscored the benefit to undergraduate students.
"With the NSF award, the Nanoscience and Nanotechnology Institute at the UI will support undergraduate students in research on environmental and health aspects of nanoscience and nanotechnology. This is a wonderful way to engage undergraduate students in cutting-edge, interdisciplinary research," said Grassian.
The program itself, which will run from May 27 through Aug. 1, is designed to provide junior or senior undergraduate students a measure of research experience in cutting-edge topics related to environmental and health aspects of nanoscience and nanotechnology. REU participants will have the opportunity to work with faculty mentors from the departments of chemical and biochemical engineering, civil and environmental engineering, chemistry, pharmacy, and occupational and environmental health.
Further information about the program, whose application deadline is March 15, can be found at http://research.uiowa.edu/nniui/reu_2008/index.html.
Additional information about the institute can be found at http://research.uiowa.edu/nniui/.
STORY SOURCE: University of Iowa News Services, 300 Plaza Centre One, Suite 371, Iowa City, Iowa 52242-2500.
MEDIA CONTACT: Gary Galluzzo, 319-384-0009, gary-galluzzo@uiowa.edu
| 2,685 | 1,208 | 2.222682 | warc | 201704 |
The somatosensory nervous system (providing touch and position sense) deteriorates with age, and is associated with the impairment of physical balance. James J Collins from Boston University, USA, and colleagues investigated whether stimulating the sensory system with vibrating insoles could improve postural control.
Fifteen young people (average age 23 years) and 12 older people (average age 73 years) took part in the study. They received low-frequency subsensory (undetectable) mechanical stimulation via insoles, or no stimulation (the control measure), during a series of 30-second trials in which participants had to stand quietly with their eyes closed.
The investigators measured each participant's degree of sway during the trials. Use of the vibrating insoles substantially improved balance (measured as a reduction in sway) among the elderly participants; younger participants also swayed less. With stimulation from the vibrating insoles, elderly participants matched the balance that younger people achieved without stimulation.
James J Collins comments: "Elderly people gain more in motor control performance than do young people with the application of noise to the feet. Noise-based devices, such as randomly vibrating shoe insoles, might be effective in enhancing the performance of dynamic balance activities (eg, walking), and could enable older adults to overcome postural instability caused by age-related sensory loss."
| 1,460 | 744 | 1.962366 | warc | 201704 |
The opinion of the court was delivered by: Simandle, District Judge
This action had its genesis in the tragic death of Pennsylvania resident Nathan E. Kase, who became intoxicated while at the Marriott Seaview Resort & Spa in Galloway, New Jersey, fell down a flight of stairs at the hotel, and died several weeks later. He left behind two very young children and his wife, Heather Kase, the plaintiff in this matter both individually and as administrator of the Estate of Nathan Kase and administrator ad prosequendum of that Estate ("Plaintiff").
Plaintiff asserts wrongful death and survivorship claims on behalf of her late husband. At issue before the Court, however, are not the circumstances of Mr. Kase's death, but the law to be applied to Plaintiff's pursuit of a remedy for that death.
Defendants Marriott Hotel Services, Inc. and LaSalle Hotel Operating Partnership, L.P. ("Defendants") have brought this motion for partial summary judgment on the choice of law to be applied to Plaintiff's request for damages [Docket Item 32].*fn1
Plaintiff's complaint seeks the application of the Pennsylvania Wrongful Death Act and Survival Act, while Defendants urge the Court to apply the New Jersey Wrongful Death Act and Survival Act. The Court, having considered the matter and for the reasons set out below, finds that New Jersey law should be applied to the calculation of damages in this litigation.
Plaintiff's complaint chronicles the events that allegedly led to Nathan Kase's death.*fn2 On April 15, 2005, Mr. Kase arrived at the Seaview Resort & Spa in Galloway, New Jersey ("Seaview Resort" or "hotel"), for a weekend business function organized by his employer, the law firm of Wolf, Block, Schorr and Solis-Cohen, LLP ("Wolf Block"). (Compl. ¶ 31.) This trip came after lengthy correspondence between Seaview Resort, which is owned and operated by Defendants, and Wolf Block's Philadelphia office. (Pl. Opp'n at 2-3.) The Complaint alleges that on the evening of Mr. Kase's arrival, he attended an event where hotel employees served him multiple alcoholic beverages, after which he went to the hotel lounge, where hotel employees served him more alcoholic drinks even though he was visibly intoxicated. (Compl. ¶¶ 32-34.) Finally, Mr. Kase left the lounge to head downstairs to the hotel's game room, which required him to navigate what Plaintiff describes as a stairway rendered hazardous by poor design and maintenance. (Id. ¶¶ 35-36.) Mr. Kase proved unable to climb down the stairs, and instead fell down the flight of stairs and landed at the bottom. (Id. ¶ 39.) As a result of this fall, he suffered severe injuries and had to be transported to Atlantic City Hospital, where testing showed that he had a blood alcohol content of 248 milligrams per liter. (Id. ¶¶ 38-39.) On May 10, 2005, Mr. Kase died, allegedly as a result of his fall at Seaview Resort. (Id. ¶ 38.)
At the time of his death, Mr. Kase and his wife and children were residents of Philadelphia, Pennsylvania. (Id. ¶ 5.) Plaintiff now resides in Atlanta, Georgia. (Id. ¶ 1.) Defendant Marriott Hotel Services, Inc. is incorporated in Delaware and has its principal place of business in Maryland. (Id. ¶ 9; Answer ¶ 9.) Defendant LaSalle Hotel Operating Partnership is a Delaware limited partnership, also with its principal place of business in Maryland.*fn3 (Compl. ¶ 14; Answer ¶ 14.) Plaintiff alleges Defendants engage in extensive advertising for the Seaview Resort in and around Philadelphia. (Pl. Opp'n at 2-3.)
Plaintiff filed her complaint in this matter on April 12, 2007, asserting diversity jurisdiction pursuant to 28 U.S.C. § 1332. In her complaint, she sets forth a wrongful death claim (Count I), a survival action (Count II), and a claim for loss of consortium (Count III). She further asks that the Court apply Pennsylvania law to her wrongful death and survival actions. Defendants responded, as explained above, with the instant motion for partial summary judgment on choice of law.
| 4,088 | 1,854 | 2.204962 | warc | 201704 |
Argentine President Cristina Kirchner is on a desperate campaign to cover up the gravity of her failures. It is part of how Argentine money has become all but worthless.
Increasingly delusional and sad, she continues to use the Falkland Islands in an attempt to gin up nationalism and take the heat off herself among an increasingly destitute population irked over their economy’s heading into the abyss.
Argentines are mad – huge protests there show it – about everything from rising crime to sky-high inflation. Corruption scandals dog the now wildly unpopular Kirchner, who is taking a page from every power-hungry, nanny-state dictator in attempting to rewrite the constitution to allow herself a third term.
Like every corrupt government on a spending spree, Argentina has been accused of distorting the numbers and punishing those who speak out about it.
Of course, don’t just blame Kirchner for that; the US did it earlier this year when they punished a ratings agency that downgraded their debt.
All the while, Argentina’s middle class is being pummeled by being forced to hold onto rapidly devaluing pesos and other currency controls, all in the name of soaring public spending on welfare programs that lock up the votes of the poorest citizens.
Sound familiar?
As to the Falklands, there are similarities with my writing last week about the British takeover of Turks and Caicos, a self-governing overseas territory. With the Falklands, the shoe is on the other foot; the islands’ couple thousand residents are expected to affirm their loyalty to British Overseas Territory status this year.
Just like with the Turks and Caicos, Argentina doesn’t like it. A Kirchner ally said Falkland residents “are implanted settlers who do not have the right to define the territory’s status”.
It just goes to show: Big Government doesn’t care what you think. They just want power – and your money.
Argentina shows a powerful example of how narcissistic politicians will throw you – and your livelihood – under the bus to retain their personal image cult.
Don’t like 24% inflation? Tough.
Want to spend your own money outside the country, even if it’s worth only pennies? How dare you.
The damage done in Argentina in just the last few years is staggering, but the parallels to the American economy – and the similar image cult among US politicians – is hard to miss.
Politicians will find an excuse to distract you while they take away your economic sovereignty.
Unfortunately, most of your fellow citizens will believe whatever they’re told. Planting flags to diversify your money and never believing it won’t happen “here” are your best defenses.
The Land of the Free, for instance, has many of the tenets of malaise of Argentina, and if things were to worsen, I have no doubt the American government could cause a distraction by saying it wants greater control over Guam or some other insert-name-of-petty-issue-here.
| 3,829 | 1,932 | 1.981884 | warc | 201704 |
If there is one thing one learns from compressive sensing, it is that your assumptions about the unknown are central to how your signal will be reconstructed. For 200 years, we minimized energy; since 2004, we have been looking for the sparsest signal, and we may eventually be able to look for additional structure [3, 4, 5]. Here are two examples I gleaned over a few weeks where the current assumptions are probably not the good ones. Can compressive sensing help?
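To make that contrast concrete, here is a small Python sketch (a toy of my own with arbitrary sizes, not taken from any of the papers mentioned below): the same underdetermined measurements y = Ax admit a minimum-energy answer and a sparsest-signal answer, obtained by recasting basis pursuit as a linear program, and only the latter recovers a sparse unknown.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 20, 50, 3                       # measurements, ambient dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

# Assumption 1: minimum energy, i.e. the least L2-norm solution of Ax = y
x_l2 = np.linalg.pinv(A) @ y              # energy spreads over all 50 coordinates

# Assumption 2: sparsest signal via basis pursuit, min ||x||_1 s.t. Ax = y,
# recast as a linear program over x = u - v with u, v >= 0
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=[(0, None)] * (2 * n))
x_l1 = res.x[:n] - res.x[n:]

print("L2 reconstruction error:", np.linalg.norm(x_l2 - x_true))  # large
print("L1 reconstruction error:", np.linalg.norm(x_l1 - x_true))  # near zero
```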
In the genetics of Parkinson's disease, I asked why GWAS studies on Parkinson's did not pick up on the GBA gene. A review paper that aimed to answer specifically that question was sent to me by one of its authors, and here is what they said back in 2008:
....The identification and recognition of this strong association between glucocerebrosidase and PD raises many questions. Why did epidemiologic studies of PD miss this association? Why did genetic linkage studies not pick out the glucocerebrosidase locus on chromosome 1q21 as a Parkinson susceptibility region? Why did the whole genome association studies not identify the locus? Epidemiologists probably missed the association because Gaucher disease is much rarer than PD and the clinical phenotype is usually so different from parkinsonism that it was never considered. Genetic linkage studies would have struggled to identify the locus because of the rare nature of glucocerebrosidase mutations in most datasets with the exception of the Ashkenazi population. Furthermore, none of the mutations seem to be fully penetrant and so they will show only weak evidence for segregation in families with PD. Finally, whole genome association studies apply an overly strict correction for multiple testing and rely on the assumption, incorrect in the case of glucocerebrosidase, that there is a single disease-associated allele at each locus. The existence of the multiple disease-associated allelic variants in the gene candidate could be a general phenomenon for other neurodegenerative disorders. Such heterogeneity has important implications for replication studies that would need to assess a battery of variations in the gene of interest using datasets with homogeneous genetic background. Hence, the glucocerebrosidase example is an illustration of how an important genetic risk factor for a complex disease can evade detection by systematic analysis: it only came onto the radar because of astute clinical observations.
If I understand correctly, an algorithm that looks for a certain type of variant would bin each variant into its own category, but no individual category found through this means matches the disease phenotype on its own, so the matching is not explanatory. In this case, while every variant is nearly unique, the larger set of all the variants (the group of categories) could be matched globally to the disease phenotype. The disease probably has several forms (different phenotypes) because the variants act differently in the diverse biochemical networks [2]. And then there is the curious case of autism, or the sparse set of signaling pathways for cancer [6].
The second misunderstood assumption is explained in The Effects of Connection Reconstruction Method on the Interregional Connectivity of Brain Networks via Diffusion Tractography by Longchuan Li, James K. Rilling, Todd M. Preuss, Matthew F. Glasser, and Xiaoping Hu. The abstract reads:
Estimating the interregional structural connections of the brain via diffusion tractography is a complex procedure and the parameters chosen can affect the outcome of the connectivity matrix. Here, we investigated the influence of different connection reconstruction methods on brain connectivity networks. Specifically, we applied three connection reconstruction methods to the same set of diffusion MRI data, initiating tracking from deep white matter (method #1, M1), from the gray matter/white matter interface (M2), and from the gray/white matter interface with thresholded tract volume rather than the connection probability as the connectivity index (M3). Small-world properties, hub identification, and hemispheric asymmetry in connectivity patterns were then calculated and compared across methods. Despite moderate to high correlations in the graph-theoretic measures across different methods, significant differences were observed in small-world indices, identified hubs, and hemispheric asymmetries, highlighting the importance of reconstruction method on network parameters. Consistent with the prior reports, the left precuneus was identified as a hub region in all three methods, suggesting it has a prominent role in brain networks.
As Matt says, "And, there are certainly unknown unknowns beyond that."
[1] Gaucher and Parkinson diseases: Unexpectedly related, Ekaterina Rogaeva, John Hardy
[2] Sunday Morning Insight: The extreme paucity of tools for blind deconvolution of biochemical networks
[3] Optimization with multiple non-standard regularizers
[4] Multiple regularizers, Robust NMF and a simple question
[5] Universal MAP Estimation in Compressed Sensing
| 5,676
| 2,589
| 2.192352
|
warc
|
201704
|
Soy versus whey protein bars: Effects on exercise training impact on lean body mass and antioxidant status 3:22 DOI: 10.1186/1475-2891-3-22
© Brown et al; licensee BioMed Central Ltd. 2004
Received: 26 August 2004 | Accepted: 08 December 2004 | Published: 08 December 2004

Abstract

Background
Although soy protein may have many health benefits derived from its associated antioxidants, many male exercisers avoid soy protein. This is due partly to a popular, but untested notion that in males, soy is inferior to whey in promoting muscle weight gain. This study provided a direct comparison between a soy product and a whey product.
Methods
Lean body mass gain was examined in males from a university weight training class given daily servings of micronutrient-fortified protein bars containing soy or whey protein (33 g protein/day, 9 weeks, n = 9 for each protein treatment group). Training used workouts with fairly low repetition numbers per set. A control group from the class (n = 9) did the training, but did not consume either type of protein bar.
Results
Both the soy and whey treatment groups showed a gain in lean body mass, but the training-only group did not. The whey and training-only groups, but not the soy group, showed a potentially deleterious post-training effect on two antioxidant-related parameters.
Conclusions
Soy and whey protein bar products both promoted exercise training-induced lean body mass gain, but the soy had the added benefit of preserving two aspects of antioxidant function.
Background
Many male exercisers avoid soy protein because there is a perception that it is inferior to proteins like whey for supporting lean body mass gain. This perception persists even though there are no studies comparing whey and soy for effects on lean body mass gain. Soy may actually help promote lean body mass gain through the antioxidants associated with soy protein. Antioxidants are agents, either consumed in the diet or made by the body, which work against molecular damage due to oxidant reactions caused by free radicals, which are reactive molecules with an unpaired electron [1]. Soy protein isolate contains a mixture of antioxidants including isoflavones, saponins, and copper, a component of a number of antioxidant enzymes [2]. Body free radical production seems to be particularly high during exercise, and the resulting oxidant stress appears to contribute to muscle damage and fatigue [3]. This damage and fatigue could conceivably limit progress in exercise training by slowing muscle recovery between exercise workouts. This could limit lean body mass gain during an exercise program.
If soy protein can promote lean body mass gain at least as well as whey, there may be one advantage to consuming soy protein. Soy protein contains antioxidants which may not only help with lean body mass gain, but which can also promote other aspects of health. Antioxidant actions are thought to work against the onset and severity of many diseases and health problems [1]. This may be particularly important during exercise training, which in some cases depletes antioxidant capacities and/or increases oxidant stress [i.e. [4, 5]]. This may explain why high degrees of chronic exercise can be detrimental. For example, some athletes show increases in histochemical muscle lesions as well as high cancer mortality, which have been linked to prolonged periods of exercise [6, 7]. However, this area has been controversial since some studies suggest that long-term exercise training produces body adaptations which increase antioxidant defenses [i.e. [8, 9]]. Either way, soy protein antioxidants could conceivably exert beneficial effects during exercise training, either by restricting antioxidant depletion or by enhancing antioxidant capacity increases.
The present study compared a soy protein product to a whey protein product in subjects undergoing a 9 week weight training program. Subjects were evaluated for lean body mass gain and changes in antioxidant status. The latter was done using one measurement of a component of antioxidant capacity and one for a component of oxidant stress. The former was based on an assay called plasma antioxidant status which assesses the ability to scavenge certain chemically generated radicals. The oxidant stress parameter was plasma myeloperoxidase, a measure of neutrophil activation, which is associated with increased secretion of superoxide radical [1].
Methods

Subjects
This study was approved by the Human Subjects Review Committee for Biomedical Sciences at The Ohio State University. All subjects signed an informed consent form. Male subjects, aged 19–25, were recruited from the Sport, Fitness and Health Program courses at The Ohio State University to participate in the present 9-week study. All subjects were considered experienced weightlifters with at least 1 year or more experience in strength training, which was confirmed by a questionnaire. Subjects were reported to be non-smokers, non-vegetarians, not currently taking supplements of any kind, and having no major health problems (i.e., diabetes, cardiovascular disease, etc.). All subjects had a body mass index (BMI) of less than 30.
Strength Training Program
At the start of the study, each subject was put on a common strength training program to strictly follow for the duration of the 9 week study. Subjects were given either workout 1 or workout 2. The two workouts were identical with the exception of exercise order and were designed to prevent subjects in the strength training classes from having to perform the same exercises at the same time. Midway through the program, subjects with workout 1 were given workout 2 and vice versa in order to maintain consistency.
The strength training protocol was 3 sets of 4–6 repetitions for 14 exercises so that strength was the variable being maximized. The following exercises were performed to work all major muscle groups: 1) chest press; 2) chest fly; 3) incline press; 4) lat pull-down; 5) seated row; 6) military press; 7) lateral raise; 8) preacher curl; 9) bicep curl; 10) supine tricep extension; 11) seated tricep extension; 12) leg press; 13) calf raise; and 14) abdominal crunches.
Protein Treatments
Subjects were randomly assigned in a double-blind manner to either a soy, whey, or control group. The controls did the exercise program but did not consume a protein product (n = 9/each group). The soy protein product was DrSoy® Bars, which contained 11 grams of protein and an assortment of micronutrients. The whey bars were made using the same recipe as the DrSoy® Bars except that whey protein was substituted for soy protein. The products were supplied to study personnel in plain wrappers with different colors for each product. The color code was unknown to the subjects and study personnel who were in contact with the subjects. Each subject was instructed to consume 3 bars per day for the 9-week training period. This was in addition to the subjects' self-selected diet. Subjects were instructed not to change eating patterns during the course of the study. The time of the day when the bars were consumed was recorded daily in the subject's fitness log so that compliance could be monitored.

Measurements
Lean body mass was analyzed by hydrostatic weighing. Each subject performed at least 3 efforts and an average reading was taken. Blood was drawn into heparin tubes before and after the 9 week treatment period on a day when the subjects did not exercise. Blood was spun at 3000 × g and the plasma was stored at -70°C until analysis. Unfortunately, a problem during blood processing made some plasma samples unavailable for analysis. Plasma was analyzed for free radical scavenging capacity using the Total Antioxidant Status Assay Kit from Calbiochem-Novachem Corp. (San Diego, CA). Plasma myeloperoxidase was analyzed using an ELISA kit from Calbiochem-Novachem.
Statistical analysis
Statistical analysis was done by the Jump 3.1 program (SAS Institute, Cary, NC), with significance at p < 0.05. For each parameter and treatment group, values prior to the 9 week treatment were compared to values after treatment by paired, 2-tailed Student's t-test. In addition, for lean body mass, the changes in values for soy treatment were compared to the change in values for the other two groups by Tukey test.
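A minimal sketch of the paired pre/post comparison described above (the numbers below are placeholders, not the study's data, and SciPy's ttest_rel stands in for the statistics package the authors used):

    import numpy as np
    from scipy import stats

    # Hypothetical pre/post lean body mass (kg) for one 9-subject group
    pre  = np.array([66.1, 64.8, 67.3, 65.0, 68.2, 66.9, 63.7, 67.8, 66.4])
    post = np.array([67.0, 65.9, 68.1, 66.2, 69.0, 67.5, 64.1, 68.9, 67.3])

    t_stat, p_val = stats.ttest_rel(post, pre)   # paired, two-tailed
    print(f"t = {t_stat:.2f}, p = {p_val:.4f}")  # significant if p < 0.05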
Results
Subject characteristics:

                 WHEY           SOY            CONTROL (Training Alone)
    AGE          20.36 ± 0.34   21.67 ± 0.24   20.44 ± 0.63
    HEIGHT (cm)  180 ± 1.55     179 ± 1.30     178 ± 1.81
    WEIGHT (kg)  81 ± 2.81      79 ± 2.49      79 ± 0.48
    LBM (kg)     67 ± 1.96      66 ± 2.30      67 ± 1.65
Discussion
In this study, soy and whey were both effective at increasing lean body mass with exercise training, but the soy had the added advantage of inhibiting two negative effects of training on antioxidant status. The percent change in the radical scavenging capacity (total antioxidant status) seen with training alone and training plus whey was substantial compared to the differences typically seen for these types of measurements [11–13].
The lean body mass data seen here contradicts the common, but unconfirmed notion that soy is inferior to whey for promoting lean body mass gain. It should be noted, however, that the general trend for this study may or may not be duplicated for other study designs. For example, the time frame used here, 9 weeks, is not overly long for seeing lean body mass gain, which may explain why the training alone did not produce an effect on lean body mass gain. Thus, the effects of soy or whey on lean body mass gain versus training alone may be more pronounced than in longer studies. It should also be noted that the training program used here emphasized low exercise repetitions in subjects not used to this type of training. In addition, this study included only subjects that were still relatively early in their training experience, and placed no restriction on Calorie intake. These design considerations were geared toward gaining bulk and power. The effects of whey or soy on lean body mass might be different in a design that emphasizes higher repetitions or Calorie restriction in other types of subjects. In addition, it can be noted that the current study diet intervention used bars which included added micronutrients. Thus, this study did not determine if the effects of the soy or whey protein required co-administration of micronutrients.
It is not known whether the negative effects of training on antioxidant status seen here in the whey and training-alone groups would continue upon longer training. The current state of knowledge concerning exercise training effects on antioxidant defenses does not present a clear pattern [i.e. [4, 5, 8, 9]], possibly because of the highly variable circumstances involved in different studies, such as training intensity, types of exercise done, types of antioxidant measures used, fitness level of the subjects, length of training, and dietary patterns of the subjects. These variables may help explain why some studies find training-induced declines in antioxidant defense while others find no change or even an increase. Nonetheless, the present study suggests that soy protein intake can promote antioxidant function during training, which could be helpful whatever the effects of training by itself.
Another unresolved issue is whether the effects on lean body mass seen here for the two proteins were due to increased total protein intake or other factors. In regard to the former, the data regarding the amount and type of protein intake necessary to produce optimal strength training gains is conflicting. While a diet meeting the current RDA for protein intake (0.8 g/kg body mass) may be sufficient for the sedentary individual, recent studies suggest dietary protein exceeding that of the RDA is needed for muscle hypertrophy [14, 15]. One of the difficulties in deriving an exact protein recommendation for exercisers is that total energy intake has not been consistent in the studies. In some studies, total energy intake was low, which can cause an abnormally high percentage of energy output to be derived from protein [15, 16]. In the present study, a 3 day diet record gave no indication that Calorie intake was low (data not shown).
If soy and whey promotion of lean body mass gain was not due to increased total protein intake, which remains uncertain, then other factors were responsible. In the case of soy protein, there are associated antioxidants [2]. As presented in the Introduction, this could conceivably help indirectly with lean body mass gain. In the case of whey, the content of essential amino acids, especially those with sulfur, may be conducive to promoting lean body mass gain [i.e. [17, 18]].
In summary, soy and whey protein bars both supported lean body mass gain in conjunction with a short term power-based weight training program, but only the soy bar prevented a training-induced drop in antioxidant capacities.
Declarations Acknowledgements
The authors thank Joshua Selsby and Kristi Seifker for excellent technical assistance.
References

1. Kehrer J: Free radicals as mediators of tissue injury and disease. Crit Rev Toxicol. 1993, 23: 21-48.
2. DiSilvestro RA: Antioxidant actions of soya. Food Indust J. 2001, 4: 210-220.
3. Clarkson PM: Antioxidants and physical performance. Crit Rev Food Sci Nutr. 1995, 35: 131-141.
4. Schippinger G, Wonisch W, Abuja PM, Fankhauser F, Winklhofer-Roob BM, Halwachs G: Lipid peroxidation and antioxidant status in professional American football players during competition. Eur J Clin Invest. 2002, 32: 686-692. 10.1046/j.1365-2362.2002.01021.x
5. Bergholm R, Makimattila S, Valkonen M, Liu ML, Lahdenpera S, Taskinen MR, Sovijarvi A, Malmberg P, Yki-Jarvinen H: Intense physical training decreases circulating antioxidants and endothelium-dependent vasodilatation in vivo. Atherosclerosis. 1999, 145: 341-349. 10.1016/S0021-9150(99)00089-1
6. Karlsson J: Antioxidants and Exercise. 1997, Champaign IL: Human Kinetics.
7. Polednak AP: College athletes, body size, and cancer mortality. Cancer. 1976, 38: 382-387.
8. Selamoglu S, Turgay F, Kayatekin BM, Gonenc S, Yslegen C: Aerobic and anaerobic training effects on the antioxidant enzymes of the blood. Acta Physiol Hung. 2000, 87: 267-273.
9. Robertson JD, Maughan RJ, Duthie GG, Morrice PC: Increased blood antioxidant systems of runners in response to training load. Clin Sci. 1991, 80: 611-618.
10. Grisham MB, Jones HP: Superoxide and inflammation. In: Cellular Antioxidant Defense Mechanisms. Edited by: Chow CC. 1988, Boca Raton: CRC Press, 3: 123-142.
11. DiSilvestro RA, Blostein-Fujii A, Watts B: Low phytonutrient, semipurified liquid diets depress plasma total antioxidant status in renal dialysis patients. Nutr Res. 1999, 19: 1173-1177. 10.1016/S0271-5317(99)00078-0
12. Rossi AL, Blostein-Fujii A, DiSilvestro RA: Soy beverage consumption by young men: increased plasma total antioxidant status and decreased acute, exercise-induced muscle damage. J Nutraceuticals Funct Med Foods. 2000, 3: 33-44. 10.1300/J133v03n01_03
13. Dasgupta A: Decreased total antioxidant capacity and elevated lipid hydroperoxide concentrations in sera of epileptic patients receiving phenytoin. Life Sci. 1997, 61: 437-443. 10.1016/S0024-3205(97)00401-3
14. Lemon PWR: Is increased dietary protein necessary or beneficial for individuals with a physically active lifestyle? Nutr Rev. 1996, 54: S169-S175.
15. Lemon PWR, Proctor DN: Protein intake and athletic performance. Sports Med. 1991, 12: 313-325.
16. Welle S, Matthews DE, Campbell RG, Nair KS: Stimulation of protein turnover by carbohydrate overfeeding in men. Am J Physiol. 1989, 257: E413-E417.
17. Walzem RL, Dillard CJ, German JB: Whey components: millennia of evolution create functionalities for mammalian nutrition: what we know and what we may be overlooking. Crit Rev Food Sci Nutr. 2002, 42: 353-375. 10.1080/10408690290825574
18. Lands LC, Grey VL, Smountas AA: Effect of supplementation with a cysteine donor on muscular performance. J Appl Physiol. 1999, 87: 1381-1385.

Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
| 16,999
| 7,309
| 2.325763
|
warc
|
201704
|
Two weeks ago, this column addressed the state of the New York City multifamily market, and most of the statistics discussed pertained to the entire city-wide market. I received many e-mails and a few calls asking for the data to be broken down on a submarket-by-submarket basis, so here it is.
It is particularly illustrative to look at the way the market has performed during the past nine quarters. From 2009 through the first quarter of 2011, the trends are very apparent.
It is clear, although no trends are without exception, that cap rates expanded in 2010 from 2009 levels as value hit its low point toward the end of last year. The 1Q11 results show that cap rate compression has returned to the multifamily marketplace and that an upward swing in value is occurring in nearly all submarkets.
Let’s see how each has performed.
Manhattan
The dollar volume of sales in the walk-up sector in the Manhattan submarket was nearly $90 million in 1Q11, putting the submarket on pace to be well ahead of the approximately $250 million totals in 2009 and 2010.
There were 22 buildings sold in the first quarter, containing 276 units total. Both of these numbers, if annualized, would be well ahead of the pace of the prior two years.
More importantly, cap rates and gross rent multiples have stayed approximately where they were in 2010, but the average price per unit has increased to well over $400,000 and the average price per square foot has risen to $592 from last year's $525, a 13 percent increase. With caps and GRMs remaining relatively flat while price per square foot rose this much, rent levels must be rising.
Moving to the elevator sector, in 1Q11 the total dollar volume of sales was approximately $105 million, running at about half the pace of last year’s $832 million. In this submarket, there were only five buildings sold in 1Q11, containing a total of 230 units. Notably, the cap rate dropped from 2010’s 4.84 percent to 4.22 percent and the GRM average increased from 12.74 to 14.40.
The average price per unit increased from last year’s $419,000, to over $542,000. Additionally, the average price per square foot increased by 18 percent, from $438 last year to $518 in 1Q11.
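A quick sanity check on what cap rate compression alone does to value (the income figure is illustrative, not data from the survey):

    noi = 1_000_000                     # illustrative net operating income, $/yr
    value_2010 = noi / 0.0484           # priced at last year's 4.84% average cap rate
    value_1q11 = noi / 0.0422           # priced at this quarter's 4.22% average
    print(value_1q11 / value_2010 - 1)  # ~0.147: roughly 15% more value on flat income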
Northern Manhattan
In 1Q11, in the walk-up sector, there were only $11.2 million in total sales. This pace, if annualized, is about one-third of last year’s total of about $130 million. Also in the walk-up sector, there were eight buildings sold in 1Q11, with a total of 114 units.
The average capitalization rate dropped from 7.48 percent in 2010 to 6.62 percent this year. The average price per unit increased from approximately $104,000 to over $110,000, and the average price per square foot increased by 20 percent, from $164 to $196.
In the elevator sector, there was only one property sold, at a price of $2.8 million. The property contained 31 units. It is difficult to ascribe statistical significance to this one transaction as, clearly, the pace is significantly below the $319 million of total activity last year in this sector.
| 3,135
| 1,407
| 2.228145
|
warc
|
201704
|
The 26th, and perhaps most anticipated, iteration of the Asia-Pacific Economic Cooperation (APEC) summit concluded Tuesday, kicking off a busy week for several of the world’s most powerful leaders. The 2014 edition was set against a backdrop of tempered global economic growth, Russia’s declining relationship with the West, continued Chinese and Japanese dispute, and of course low oil prices. While the summit was short on physical confrontation, Russia supplied the drama as President Putin put his chivalry on display and launched, in earnest, his country’s Asian pivot.
The 21-member nation forum – headlined by China, Russia, and the United States – accounts for nearly 40 percent of the world’s population, 55 percent of world GDP, and approximately 60 percent of world energy consumption.
Of the member economies, perhaps none was more eager to get the ball rolling than Russia. Last week, Russia's central bank slashed economic growth forecasts for 2015 and predicted record capital outflows. Moreover, the bank anticipates Western sanctions, which have limited the country's development of its vast energy reserves, will last until at least the end of 2017. Dependence on oil revenue remains dangerously high, but the ill effects of declining prices have actually been stemmed by the more rapid collapse of the ruble – down nearly 30 percent on the year.
Source: QZ
With his sights firmly set on the East, Putin acted quickly and inked a second big gas deal with his Chinese counterpart Xi Jinping. The accord, signed Nov. 9th, follows the 30-year, $400 billion deal signed in May, which will move up to 38 billion cubic meters (bcm) per year through the 2,500 mile Power of Siberia pipeline currently under construction. The two nations’ most recent cooperation centers on the long-discussed Altai pipeline – a 1,700 mile route from Russia’s productive Western Siberian fields to China’s restive Xinjiang region. Under the new deal, China will purchase an additional 30 bcm of gas for a period of 30 years. Once both pipelines are complete, China will become Russia’s largest gas customer, surpassing Germany.
It's been a busy year for Putin and Xi, who last month opened a currency-swap line. The yuan-ruble swap line, worth approximately $25 billion, will allow both countries easier access to the other's currency, facilitating greater trade and investment, especially in the financial and energy sectors. As both countries look to decrease their economic dependence on the West, China is buying into assets across Russia. Heavily indebted state-owned Rosneft just sold 10 percent of its highly productive Vankor project to China National Petroleum Corporation, and Yamal LNG – a sanctions-hit megaproject in Siberia – may soon see more than $10 billion in additional Chinese investment.
Source: Gazprom
The cooperation is significant and, according to President Putin, "utterly important to keep the world within the limits of international law." Still, China's behavior appears to be more predatory than friendly. In 2013, Russia exported 196 bcm of natural gas, of which roughly 161 bcm, or 82 percent, was bound for Europe. With European customers looking for a way out, Russia becomes more a beggar than a chooser.
When details of the May gas deal first emerged, China – unafraid to make the most of its leverage – was on the books for $350 per thousand cubic meters, a great deal less than the average price of $380 paid by European customers. More recent reports suggest the two sides have yet to come to an agreement regarding a $25 billion "prepayment," which may actually be a loan depending on who you talk to. The pricing disputes are likely far from over, as China is confident in its current supply portfolio and will look to exploit Russia's weakened position.
While Russia has placed a majority of its eggs in China's basket, Putin has been exploring ties with no less energy-hungry Japan and India. Tokyo is reportedly interested in a pipeline between the two nations in the neighborhood of 20 bcm per year, but the two countries' political history suggests this idea will remain undeveloped for some time. Gas exports to India, the world's third largest importer of energy, seem to make more sense for Russia – the two nations already have robust military cooperation. However, a record $40 billion pipeline would have to cross the Himalayas, or less stable Afghanistan and Pakistan, en route to India.
Moscow is quick to proclaim the – still pending – arrangements mark the birth of a new geopolitical center. Politics have certainly turned Russia away from Europe, but gas will ensure it remains.
By Colin Chilcoat of Oilprice.com
| 5,061
| 2,488
| 2.034164
|
warc
|
201704
|
Abstract
Objectives: To describe clinical outcome after percutaneous coronary intervention (PCI) for acute coronary syndrome (ACS) due to graft failure. Background: Limited data are available on outcome after PCI for graft failure-induced ACS in the drug-eluting stent (DES) era. Methods: Patients were identified who underwent PCI with either DES or bare metal stents (BMS) for ACS due to graft failure between January 2003 and December 2008. Follow-up was performed at 1 year and in April 2011. The primary endpoint was the composite of death, myocardial infarction (MI), or target vessel revascularization (TVR). Kaplan–Meier estimates were calculated at 1- and 5-year follow-up. Predictors were identified by backward selection in Cox proportional hazards models. Results: A total of 92 patients underwent PCI, of whom 77 were treated with BMS and 15 with DES. Patient and procedural characteristics were similar in both groups. Mean follow-up was 3.2 years. The five-year composite event rate was 65.9% after BMS vs. 43.4% after DES implantation (P = 0.17). Individual endpoints were comparable in both groups. Recurrence of angina, hospitalization, and repeat interventions were similar. After multivariable adjustment, the use of DES was not associated with a significant reduction in the primary endpoint (HR = 0.44, 0.18–1.04, P = 0.06). Conclusion: In patients presenting with ACS due to acute graft failure, long-term outcomes remain poor. In a nonrandomized comparison with BMS, DES use was not associated with significantly improved long-term clinical outcomes. © 2012 Wiley Periodicals, Inc.
| 1,616
| 907
| 1.781698
|
warc
|
201704
|
Abstract
An understanding of molecular periodicity that has a basis in quantum states is highly desirable. The article by Ray Hefferlin, Jonathan Sackett, and Jeremy Tatum on page 2078 (DOI: 10.1002/qua.24469) explores this possibility through specific studies of diatomic systems that successively approach the formation of a rare-gas molecule. Molecules echo atomic periodicity because properties for series of molecules follow a periodic behavior defined by molecular magic numbers. In the case of many band systems, it is the ratio of the force constants of the upper and lower states that determines the periodic behavior.
| 631
| 395
| 1.597468
|
warc
|
201704
|
Abstract

Problem: To determine if the stage of oestrous cycle at the time of immunization affects the magnitude of mucosal and systemic immunity.

Method of study: Female BALB/c mice were immunized with tetanus toxoid (TT) and cholera toxin by the oral, intranasal and transcutaneous routes. Groups of mice were immunized at proestrus, oestrus, postestrus and diestrus. Antibodies in serum and mucosal secretions were determined by ELISA, and T cell responses by lymphocyte proliferation assay.

Results: Oral immunization at the oestradiol-dominant stages of the cycle (oestrus and proestrus) significantly enhanced TT-specific IgG and IgA levels in female reproductive tract (FRT) secretions and TT-specific IgA levels in faecal extracts. Transcutaneous immunization at diestrus enhanced TT-specific IgG in faecal extracts. TT-specific T cell proliferation was greatest following intranasal immunization at proestrus and transcutaneous immunization at diestrus, particularly in the caudal and lumbar lymph nodes draining the FRT and colon.

Conclusions: Reproductive cycle-associated changes in the endogenous sex hormones oestradiol and progesterone influence the levels of vaccine-induced immunity in the FRT and distal colon following oral and transcutaneous immunization.
| 1,266
| 615
| 2.058537
|
warc
|
201704
|
"As damaging as the earthquake and its aftershocks were, the fires that burned out of control afterward were even more destructive. It has been estimated that up to 90% of the total destruction was the result of the subsequent fires."

Likewise, for the 1923 Great Kantō earthquake:

"Because the earthquake struck at lunchtime when many people were using fire to cook food, the damage and the number of fatalities were augmented due to fires which broke out in numerous locations. The fires spread rapidly due to high winds from a nearby typhoon off the coast of Noto Peninsula in Northern Japan and some developed into firestorms which swept across cities. This caused many to die when their feet got stuck in melting tarmac; however, the single greatest loss of life occurred when approximately 38,000 people packed into an open space at the Rikugun Honjo Hifukusho (Former Army Clothing Depot) in downtown Tokyo were incinerated by a firestorm-induced fire whirl. As the earthquake had caused water mains to break, putting out the fires took nearly two full days until late in the morning of September 3. The fires were the biggest causes of death."

So, yes, it does seem obvious that the shaking is the dangerous part, but there can also be secondary consequences that make a disaster even greater.
The point of this post is to discuss the 7.2 quake near the Mexico-US border Sunday afternoon; the geoblogosphere, as always, has been great about discussing and clarifying this event. Keeping with the light-hearted tone for the moment, at least three people posted or shared this xkcd comic. There's actually quite a bit of useful information in that comic. We're accustomed to thinking of sound waves as traveling a mile in 5 seconds (count the seconds between the lightning flash and the thunder, divide by five), but earthquake tremors, some of which are "sound," travel much, much faster: 3-5 km (1.8 to 3 miles) per second, or 9-15 times faster than sound in air. Very powerful, very disruptive and very fast.
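The arithmetic behind that "9-15 times" figure, for anyone who wants to check it (using the round-number speeds quoted above):

    v_sound_air = 0.343                        # km/s in air at roughly 20 °C
    for v_seismic in (3.0, 5.0):               # typical seismic wave speeds, km/s
        print(round(v_seismic / v_sound_air))  # prints 9 and 15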
I've read accounts of quake witnesses describing dust being raised, but I don't think I've ever seen a picture as striking as this one, from NBC Local on Tumblr. The caption reads:
"Brothers traveling in Mexico during Sunday's deadly earthquake photographed this surreal sight: The power of the quake lifting a layer of dust off a mountain range."

CNN also has a video clip from the area. The dramatic photographs were shot by Roberto and Adrian Marquez Marquez just after the 3:40 p.m. magnitude 7.2 quake. The pictures show the area around La Rumorosa, the highest point in Tecate.
The Berkeley Seismological Laboratory's Seismo Blog presented a nice tectonic diagram showing the nature of the faults in the area. The bold red segments with approximately N-S to NE-SW orientation are actually little segments of oceanic ridge, where hot rock rising from within the earth partially melts to create basalt and new oceanic crust. The lighter red segments with NW-SE orientations are right-lateral strike slip faults, like the San Andreas Fault. The "SAF" in the upper left corner marks the southern-most end of that fault.

There is a common misunderstanding among non-geologists that major tectonic boundaries like this are simple lines- single faults- that are sharp and distinct. "Everyone knows" the San Andreas Fault is the boundary between the North American Plate and the Pacific Plate; the truth, as is so often the case in geology, is much messier.
Moving in a little closer, Callan had a nice post earlier today of the Colorado River Delta at Pathological Geomorphology, with an excellent description of the setting: "The Baja California peninsula is essentially a freshly minted continental terrane, ripped off the west coast of Mexico by the relative motion of the Pacific Plate with respect to the North American Plate." And on the same blog, I posted the picture below... I was looking for dunes, and found a fairly substantial volcanic field I hadn't been aware of before. I suppose if you'd asked me, I might have guessed there were volcanoes in the area, but I hadn't known there were:
Here's roughly the same location in Flash Earth. If you go explore the area, you can see that the cone in the lower right of the above picture is a northern outlier of a field that stretches quite a ways; my estimate is about 35-40 miles.
And from the ground, the Christian Science Monitor has this photo.
It's not clear to me whether this is a natural de-watering structure or a broken main, and the caption doesn't really help...
"On Monday, a man walks on cracked mud caused by underground water that leaked to the surface during Sunday's 7.2-magnitude earthquake in Mexicali, Mexico."

It does appear to be running down the middle of a street, so I'm leaning toward broken water line. Still, I have read about sand blows in the area- not surprising in a delta area- so I can't be certain. The CSM also has a gallery of some of the damage in the area. Some of the damage looks pretty bad, but it doesn't look devastated. And I have been relieved to read that, at least so far, there are only two fatalities associated with the quake; one of those was apparently a man who panicked and ran in front of a car.
So this wasn't "the big one," and hopefully it will serve as yet another reminder that we live on a planet that, for all its appearance of docile stability, can turn mercilessly violent with very little warning.
| 5,454
| 2,738
| 1.991965
|
warc
|
201704
|
The opinion of the court was delivered by: Terrence F. McVerry United States District Court Judge
MEMORANDUM OPINION AND ORDER
Before the Court for consideration is DEFENDANT'S MOTION TO DISMISS OR STAY THIS ACTION (Document No. 10). Defendant Etkin & Company, Inc. ("ECI") has filed a brief in support and Plaintiff James E. Winner, Jr. ("Winner") has filed a response and brief in opposition (Document Nos. 12, 13). The motion is ripe for decision.
When considering a motion to dismiss, the court accepts as true all well-pleaded allegations of fact. See Albright v. Oliver, 510 U.S. 266, 267 (1994). Federal Rule of Civil Procedure 8(a)(2) provides that a complaint need only offer "a short and plain statement of the claim showing that the pleader is entitled to relief" enough to "give the defendant fair notice of what the plaintiff's claim is and the grounds upon which it rests." See Fed.R.Civ.P. 8(a)(2). This is a minimum notice pleading standard "which relies on liberal discovery rules and summary judgment motions to ... dispose of unmeritorious claims." Swierkiewicz v. Sorema N.A., 534 U.S. 506, 513-14 (2002). Claims lacking merit may be dealt with through summary judgment pursuant to Rule 56. Id. If a defendant feels that a pleading fails to provide sufficient notice, he or she may move for a more definite statement pursuant to Rule 12(e) before fashioning a response. Id.
However, in Bell Atlantic Corp. v. Twombly, 127 S.Ct. 1955 (2007), the United States Supreme Court recently issued a decision which may represent a sweeping change in the pleading standard applicable to complaints filed in federal court. At a minimum, as all nine justices agreed, the oft-quoted standard that a complaint may not be dismissed "unless it appears beyond doubt that the plaintiff can prove no set of facts in support of his claim which would entitle him to relief" has been retired and "is best forgotten." Id. at 1968. The Supreme Court explained that a complaint must allege enough "facts" to show that a claim is "plausible" and not merely conceivable. Indeed, the Twombly Court made a distinction between facts that are merely "consistent" with wrongful conduct and facts that would be "suggestive" enough to render the alleged conduct plausible. The Supreme Court also emphasized the need for district courts to prevent unjustified litigation expenses resulting from claims that are "just shy of a plausible entitlement." Id. at 1967, 1975.
ECI is a small investment firm. This case arose out of ECI's efforts to secure a buyer for Winner Steel, Inc., which is allegedly controlled by Winner. On May 17, 2005, ECI and the "Company" entered into an agreement that provided for a Success Fee if ECI found a buyer. James Winner signed that agreement in his capacity as Chairman of the Company. The agreement contains an arbitration clause. ECI filed a claim in arbitration against the Company and James Winner in his individual capacity. Winner filed the instant suit for declaratory relief to enjoin or stay the arbitration. At issue is whether James Winner, in his individual capacity, is bound by the arbitration clause contained in the agreement.
Defendant asserts three grounds for dismissal of the complaint: (1) improper venue; (2) that arbitrability should be decided by the arbitration panel; and (3) that James Winner is a proper party to the arbitration. For the reasons set forth in Plaintiff's brief, the motion to dismiss is without merit. The Court will briefly address Defendant's arguments seriatim.
Defendant's argument regarding venue puts the cart before the horse by implicitly assuming that the agreement governs Winner's choice of venue. Defendant does not challenge the appropriateness of venue in this Court pursuant to 28 U.S.C. § 1391. Rather, ECI contends that the locale of the arbitration proceeding filed by ECI should supersede Winner's ability to file suit in a different judicial district. In the alternative, ECI argues that this case should be stayed at least until the location of the arbitration proceeding is finalized. Winner contends that he is not a party to the contract and therefore is not bound by it. The Court agrees with Winner. In Cortez Byrd Chips, Inc. v. Bill Harbert Construction Company, 529 U.S. 193, 195 (2000), the Supreme Court determined that the FAA venue provisions are permissive, and do not supersede jurisdiction under the general venue statute. Indeed, because arbitration is a matter of contract, it would be fundamentally unfair to restrict the rights of any person that was not a party to that contract. Thus, this lawsuit should not be dismissed or stayed as a result of the arbitration.
Defendant's arbitrability argument suffers from a similar flaw. Before any dispute can be submitted to an arbitration panel, Defendant must show that Winner agreed to submit to that panel's authority. There is a significant, and dispositive, distinction between signing a document in one's capacity as a corporate officer, on one hand, and agreeing to be bound personally, on the other. In Lumax Indus. v. Aultman, 669 A.2d 893, 895 (Pa. 1995), the Pennsylvania Supreme Court noted the "strong presumption in Pennsylvania against piercing the corporate veil" and explained that a corporation "is to be regarded as an independent entity" even if owned by one person. There is no indication in the agreement that Winner was a party in his individual capacity. Indeed, the agreement explicitly stated that the contracting party was "Winner Steel, Inc. (the "Company")." The Court further agrees with Plaintiff that Kaplan v. First Options of Chicago, Inc., 19 F.3d 1503, 1512-14 (3d Cir. 1994), is on-point and controlling. In sum, Defendant falls far short of the applicable "clear and unmistakable evidence" standard.
Defendant's final argument, that Winner is, in fact, a proper party to the arbitration, is premature. The veil-piercing doctrine requires a multi-factor, factually-intensive inquiry and is not to be presumed lightly. See Lumax. As explained above, in resolving a motion to dismiss, the Court must construe the complaint in the light most favorable to Plaintiff.
An appropriate order follows.
AND NOW, this 6th day of September, 2007, in accordance with the foregoing Memorandum Opinion it is hereby ORDERED, ADJUDGED AND DECREED that the DEFENDANT'S MOTION TO DISMISS OR STAY THIS ACTION (Document No. 10) is DENIED. Defendant shall file ...
| 6,480
| 3,018
| 2.147117
|
warc
|
201704
|
I review the recent work performed on computing the geometric discord in non-inertial frames. We consider the well-known case of an initially maximally entangled state shared by inertial Alice and non-inertial Robb. It is found that for high accelerations the geometric discord decays to a negligible amount; this is in stark contrast to the entropic definition of quantum discord, which asymptotes to a finite value in the same limit. Such a result has two different implications: the first being that usable quantum correlations are more limited in this regime than previously thought, and the second being that geometric discord may not be a sufficient measure of quantum correlations. I will discuss both of these perspectives.
| 733
| 413
| 1.774818
|
warc
|
201704
|
All the comments so far seem to forget that blind review goes in two directions. A senior faculty member in my department likes to point out that she very often knows who wrote the articles she reviews, but the important thing is that the authors don't know she's the one reviewing them -- which means she can be fully honest in her assessments of their work. That said, given what we know about unconscious bias, it is important for reviewers at least to be self-reflective about this. Which I understand some are not very inclined to do.
It seems to me that this faculty member has it all wrong. (I'm not sure exactly how much of this the anonymous commenter believes, and how much s/he is merely attributing to the faculty member.) Blind review goes in two directions, and this means that the referee is not supposed to know whose paper it is. Because, as the anonymous commenter notes, there are loads of unconscious biases, and blind review is supposed to control for them. But it is not enough to be "self-reflective" about this. If the biases are unconscious, it is literally not possible to correct them via self-reflection. The way to correct them is to eliminate the bits of knowledge they operate on. And so the effective way to be self-reflective about latent biases is to acknowledge that they are there, and to realize that blind review procedures are the only way to protect against them, and to observe those procedures.
Or am I missing something?
--Mr. Zero
P.S. The faculty member is right about how the author shouldn't know who the referee is.
| 1,571
| 751
| 2.091877
|
warc
|
201704
|
It ain’t what you don’t know that gets you into trouble.
It’s what you know for sure that just ain’t so. – Josh Billings
Much well-known business advice is sadly obsolete but can still be found in articles, business books and, not least, in daily use in the workplace. It seems that some companies are still guided by thinking that is hopelessly out of date – if it was ever true to begin with.

The worst of these old maxims are not only wrong, they're bad for people and bad for business. Businesses that use them are making their employees unhappy and are harming the bottom line.
I recently wrote a post about the Top 5 Business Maxims That Need To Go, listing 5 horrendous examples. I also asked people to contribute the maxims they would like to get rid of, and got some great suggestions, so here are 5 more pieces of bad business advice that are making people unhappy at work and harming the bottom line.
Old maxim #1: People only work if you constantly kick their butt
Meaning: People are inherently lazy and only work when properly spurred on and controlled by managers.
– Submitted by JACH
This is of course just plain wrong. The interesting thing is though, that managers who take this approach often end up with people who behave this way for two reasons:
1. Treating employees in this way makes them demotivated and resentful, so they start doing as little as they can get away with
2. Motivated, skilled employees refuse to put up with this treatment and leave
Instead, treating people like responsible adults who actually want to do great work makes people want to live up to this. People have an amazing ability to live up (or down) to our expectations.
New maxim: Treat people great and they do great work

Old maxim #2: The only way to get ahead is to put in the hours
Meaning: Success requires more than 40 hours/week. If you won’t put in the hours, somebody else who will is going to come along and take your place.
Some results can be achieved through working more. If you can dig one hole in an hour you can dig two holes in two hours.

But some results don't scale that way: If a programmer can write 100 lines of code in an 8-hour work day, it doesn't follow that she can code 200 in a 16-hour day. In fact, the output of 16 hours of work may be significantly lower than what you get in 8.
You might even get more work done in 6 hours a day than you do in 8. That’s what one company discovered, to their great surprise, when financial problems forced them to reduce working hours.
Instead of mindlessly putting in the hours, ask yourself: how does the work you do scale? How long is your optimal work day or work week?
New maxim: Maximize your results, not your hours

Old maxim #3: Sales fixes everything
I’ll let Guy Kawasaki explain the meaning of this one: As long as you have sales, cash will flow, and as long as cash flows, (a) you will have the time to fix your team, your technology, and your marketing; (b) the press won’t be able to say much because customers are pouring money into your coffers; and (c) your investors will leave you alone.
I adore sales. Cash is absolutely delightful. But sales and cash do not solve every problem.
Let’s say your entire team is stressed and overworked. Will sales fix this? Let’s say nobody’s communicating properly, because half the people on your team hate the other half. Let’s say two of your best employees are about to quit because they’re being bullied by their manager. It would be pointless to try to solve these kinds of problems by increasing sales.
In fact, more sales can make a bad situation worse because:
1. The company will focus more on the customers than on its own people
2. More sales means more work and potentially more stress for an unhappy organization
So while sales are wonderful, there are a whole set of common issues in a workplace that are not solved through more sales. I would in fact suggest that making your people happy is much more likely to result in higher sales, than higher sales are to result in happy people.
New maxim: Happy people fix everything

Old maxim #4: Leave your personal life at home
Meaning: We come to work to work. Who you are in your free time does not matter.
– Submitted by Scott Nutter
This is just ridiculous. As if you’re one person at home and a different person at work. As if your personality, private interests and opinions were somehow going to contaminate the workplace and ruin everyone’s professionalism.
Henry Ford is said to have complained "Why do workers come with a brain, when all I need is a pair of hands?" Well, today businesses can't settle for hands. We can't even settle for brains alone, we also need people's energy, creativity, ideas, opinions and motivation. We need the whole person to come to work every day.
New maxim: Be yourself at home and at work

Old maxim #5: The business of business is business
Meaning: Companies must focus on their business and nothing else. Also often used to mean that the only goal of a business is the bottom line.
Well if this is true, then why do successful companies like Southwest Airlines, Patagonia, Semco, Kjaer Group, Great Harvest and many others spend time and money on charities, in their communities and on environmental issues?
I’ll tell you why:
1. It feels good to do good, and it makes employees happy and proud to work for these companies
2. It's good for the bottom line
Also, Jim Collins showed in his book Built To Last that companies who only focus on the bottom line perform significantly worse than companies who maintain a broader scope and also focus on other issues.
New maxim: There's more to business than just business

Wrap-up
The scariest thing about these old maxims is that they tend to be accepted unquestioningly because they are repeated so often – a little like nursery rhymes used to educate children. That means it's not enough to oust the old maxims; we need to replace them with new ones that are likely to bring better results for people and for the bottom line.
So here they are at a glance, the tired maxims and the suggested replacements:
Tired old maxim → Shiny new maxim

To get ahead you must work long hours → Maximize your results, not your hours
People only work if you're constantly kicking their butt → Treat people great and they do great work
Leave your personal life at home → Be yourself at home and at work
Sales fixes everything → Happy people fix everything
The business of business is business → There's more to business than just business
Know any more bad business advice, mantra, maxim, truism that needs to go? Write a comment!
If you liked this post, I’m pretty sure you’ll also enjoy these:
| 6,823
| 2,995
| 2.27813
|
warc
|
201704
|
The price of sending a parcel in Germany is set to increase as the four largest carriers in the country look to combat rising operational costs, according to Die Welt.
DPD, GLS, Hermes and DHL could raise their prices as a result of environmental regulations, higher wages and rising energy costs, executives from each respective company told the German newspaper. Furthermore, the operators are coming under pressure on pricing from large online customers, and a growing home delivery failure rate is proving expensive, because “the modern person is rarely at home”.
As a result of this, Die Welt said that both business and private customers will have to adapt to a higher postage rate in the future.
Rico Back, CEO of GLS, told the newspaper that he expects to raise prices by 3-5% this year, stating that rising operational costs will be passed on to the consumer. Last year's price increases hampered growth at GLS, with the company recording only a 2% increase in shipments. In comparison, Hermes reported a double-digit increase and DPD saw 8% growth, similar to DHL. However, this pricing discipline has seen GLS report a profit margin before tax of 8% – ahead of industry figures.
DHL is not planning a price hike for residential customers, but regularly adjusts the postage costs for businesses. DHL’s Andrej Busch explained that price increases are averaging 4%, “mainly driven by higher transport and energy costs”. Whilst DPD CEO Arnold Schroven also told Die Welt that rising costs will be passed on to customers.
Hermes CEO Hanjo Schneider told the newspaper: “The prices will keep rising. This is triggered by higher logistics costs, and in turn, environmental concerns. Truck tolls and diesel fuel are only two [of the challenges we face]; more stringent Euro standards for truck fleets and rising wages are [among the others].”
Die Welt reported that both DPD and GLS, the B2B specialists, do not want to primarily compete with Hermes and DHL for deliveries through Amazon, eBay and QVC. Back said: “Amazon and eBay are not our primary target group.”
Source: Die Welt / Post&Parcel
| 2,163
| 1,121
| 1.929527
|
warc
|
201704
|
By Claude Goguen, P.E., LEED AP
Chances are that even if you haven't worked on a LEED project recently, you will soon. Since its unveiling in 2000, Leadership in Energy and Environmental Design (LEED) has become a part of building construction vernacular in North America and around the world.
From its humble beginnings, LEED has grown to the point that it is now certifying 1.5 million sq ft of building space each day in 135 countries. Today, more than 54,000 projects are participating in the current version, LEED 2009, comprising more than 10.1 billion sq ft of construction space. Some owners and specifiers have embraced it while others find flaws in this rating system, but love it or hate it, it continues to grow.
As a part of that growth, the U.S. Green Building Council (USGBC) is currently launching LEED v4. After a couple of years of deadline extensions and six public comment periods, 86% of USGBC members voted to approve this latest version.
The USGBC is taking a phased approach to LEED v4. This means that rather than requiring all projects to use it right away, it is giving the marketplace time to become familiar with the concepts and theories that it’s based on. Project teams can register their projects under LEED 2009 until June 1, 2015.
What’s changed?
Practitioners familiar with previous versions of LEED will recognize the same fundamental structure. There are still prerequisites and credits, 100 base points, regional priority credits and pilot credits. v4 places more emphasis on USGBC's goal of reducing carbon emissions, and this means increased energy efficiencies across the board. Consequently, v4 has adopted ASHRAE standards.
LEED v4 is technically more rigorous than its predecessor. This version also expands the market sectors able to use LEED (now 21), including data centers, warehouses and distribution centers, hospitality, existing schools, existing retail, and LEED for Homes Mid-Rise.
Credit weightings have also been revised. Point distribution will more closely tie the rating system requirements to the priorities articulated by the USGBC community.
There are new prerequisites and credits across the LEED credit categories and rating systems. Point values have also changed. Each rating system has gone through a weighting process and has LEED points associated with each credit and option of the rating system.
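To give a feel for how those per-credit points roll up into an award level, here is a minimal tally sketch. The certification bands (Certified 40-49, Silver 50-59, Gold 60-79, Platinum 80 and above) follow the published LEED thresholds, but every credit name and point value in the scorecard below is hypothetical, not taken from the actual v4 scorecards.

```python
# Toy tally of LEED credit points into a certification level. The bands
# follow the published LEED thresholds; the scorecard below is invented.

CERTIFICATION_BANDS = [
    (80, "Platinum"),
    (60, "Gold"),
    (50, "Silver"),
    (40, "Certified"),
]

def certification_level(points_earned: int) -> str:
    """Map a total point score to a LEED certification level."""
    for threshold, level in CERTIFICATION_BANDS:
        if points_earned >= threshold:
            return level
    return "Not certified"

# Hypothetical project scorecard: credit name -> points awarded.
scorecard = {
    "Rainwater Management": 3,
    "Heat Island Reduction": 2,
    "Sourcing of Raw Materials": 2,
    "Construction Waste Management": 2,
    "Optimize Energy Performance": 16,
    "Thermal Comfort": 1,
    "All other credits (combined)": 20,
}

total = sum(scorecard.values())          # 46 points in this example
print(f"{total} points -> {certification_level(total)}")  # -> Certified
```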
How does LEED v4 affect the use of precast concrete?
Some of the changes affecting the use of precast concrete include:
- Site Development – Protect or Restore Habitat (formerly SS 5.1): The requirement is to preserve and protect from all development and construction activity 40% of the greenfield area on the site (if such areas exist). Precast will still contribute in this category because it’s made to order, reduces storage space on site and minimizes site disturbance.
- Rainwater Management (combining former 6.1, “Stormwater Design – Quality Control,” and 6.2, “Stormwater Design – Quantity Control”): Precast will still contribute through the use of stormwater products to manage the runoff.
- Heat Island Reduction: Precast concrete has a higher solar reflectance than many other materials, which is beneficial in reducing the heat island effect.
- Building Product Disclosure and Optimization – Environmental Product Declarations: Multi-Attribute Optimization – This credit rewards the use of products that comply with one of a few criteria, including products sourced (extracted, manufactured, purchased) within 100 miles of the project site. Precast concrete manufacturers are often located within short distances of the project.
- Building Product Disclosure and Optimization – Sourcing of Raw Materials: Leadership Extraction Practices – This credit awards points based on the use of products that meet at least one of six responsible extraction criteria for at least 25%, by cost, of the total value of permanently installed building products in the project, including recycled content. Precast concrete includes pre- and post-consumer recycled content, mostly through the use of supplementary cementitious materials and reinforcing.
- Construction and Demolition Waste Management: Reduction of Total Waste Material – Do not generate more than 2.5 lbs/sq ft of construction waste on the building’s floor area (a quick arithmetic check of this cap appears after this list). The use of precast concrete significantly reduces construction waste because it arrives on site ready to be installed.
- Regional Materials: The “regional” definition will no longer be 500 miles. It is now based on the “Regional Core Based Statistical Area,” updated Dec. 1, 2009, by the U.S. Office of Management and Budget.
- Thermal Comfort (renamed from “Controllability of Systems – Thermal Comfort,” combined with “Thermal Comfort – Design Requirements for Achievement”): Design of heating, ventilation and air conditioning (HVAC) systems and the building envelope will need to meet the requirements of ASHRAE Standard 55-2010, “Thermal Comfort Conditions for Human Occupancy.”
- Precast Enclosures: Precast enclosures will contribute due to concrete’s thermal mass properties.
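As noted in the waste-management item above, the 2.5 lbs/sq ft cap is a simple intensity check: total construction waste divided by gross floor area. A minimal sketch, with hypothetical project figures:

```python
# Intensity check for the LEED v4 construction-waste credit:
# total waste / floor area must not exceed 2.5 lbs per sq ft.
# The project figures below are hypothetical.

WASTE_CAP_LBS_PER_SQFT = 2.5

def waste_credit_met(total_waste_lbs: float, floor_area_sqft: float) -> bool:
    """True if waste intensity stays at or under the cap."""
    return total_waste_lbs / floor_area_sqft <= WASTE_CAP_LBS_PER_SQFT

floor_area = 50_000        # sq ft (hypothetical building)
waste = 110_000            # lbs of construction waste (hypothetical)

print(f"Intensity: {waste / floor_area:.2f} lbs/sq ft; "
      f"credit earned: {waste_credit_met(waste, floor_area)}")
# Intensity: 2.20 lbs/sq ft; credit earned: True
```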
Environmental Product Declarations
LEED v4 also awards credits for the use of Environmental Product Declarations (EPDs) for products and Life Cycle Assessments (LCAs) for whole buildings as a way to demonstrate transparency and superior environmental performance. Similar to a food nutrition label, an EPD reports environmental impacts such as carbon footprint, acidification or ozone depletion potential. EPDs list quantified life-cycle product data and are owned by the product or brand producer. In essence, they are eco-labels, and many believe they will be required for all building products in the future.
Product Category Rules (PCRs) govern how LCAs and EPDs are written. The PCR is developed for a broad product type such as vinyl siding, asphalt roof shingles and precast concrete. NPCA is working with other industry partners to create a North American PCR for precast concrete.
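To make the nutrition-label analogy concrete, an EPD can be pictured as a structured record of quantified impacts per declared unit. The sketch below is illustrative only; the field names and figures are invented, and real EPDs follow the format prescribed by the governing PCR.

```python
# Illustrative data model for an Environmental Product Declaration.
# Field names and values are hypothetical; real EPDs follow the format
# prescribed by the governing Product Category Rule (PCR).
from dataclasses import dataclass

@dataclass
class EPD:
    product: str
    declared_unit: str            # e.g. one cubic meter of product
    gwp_kg_co2e: float            # global warming potential (carbon footprint)
    acidification_kg_so2e: float  # acidification potential
    odp_kg_cfc11e: float          # ozone depletion potential

precast_epd = EPD(
    product="Precast concrete panel (hypothetical)",
    declared_unit="1 m^3",
    gwp_kg_co2e=350.0,
    acidification_kg_so2e=0.9,
    odp_kg_cfc11e=1.2e-5,
)
print(precast_epd)
```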
LEED could be an opportunity
The green building industry is continuing to grow, and LEED has been a big part of that growth. Expand your market by educating yourself on the LEED program. Request information from your suppliers in regard to recycled content and any other documentation that may assist your customers in pursuing LEED credits under the 2009 version or the new v4 version.
For more help with understanding LEED and what you need to supply to your customers, visit NPCA’s website at precast.org/sustainability.
For questions about this article, please contact Claude Goguen, NPCA’s director of Sustainability and Technical Education, at (317) 571-9500 or [email protected].
Navi Mumbai, India (PressExposure) January 15, 2010 -- Cyberwarfare Market 2010-2020. Our market study examines the leading cyber nations and analyses the range of factors that are driving strong global sales growth. Our analysis has concluded that worldwide spending on cyberwarfare by governments and armed forces in 2009 totalled $8.12bn. ( [http://www.bharatbook.com/detail.asp?id=129774&rt=Cyberwarfare-Market-2010-2020.html] )
Recent events have demonstrated the potential of cyberwarfare and this is driving strong growth in cyber security. Estonia came under cyber attack in 2007 at the time of a political dispute with Russia. The internet sites of Estonian banks, companies, government ministries, newspapers and political parties were targeted by distributed denial-of-service (DDoS) attacks. A year later, Georgian web pages were attacked by civilians as Russia carried out real-world military strikes during the South Ossetia War. During 2009, serious cyber attacks continued to occur, with attacks on the institutions of countries including South Korea and the US. Cyberwarfare Market 2010-2020 examines the global market for cyber-defence measures and offensive cyber capabilities from an impartial standpoint. We offer a review of significant cyberwarfare contracting activity based on our analysis of information obtained from multiple sources.
The report draws on a rich combination of primary research, interviews, official corporate and governmental announcements, media reports, policy documents, industry statements and an extensive gathering of expert opinion. Cyberwarfare Market 2010-2020 provides detailed sales forecasts for the global market and 12 leading national markets; a strengths, weaknesses, opportunities and threats (SWOT) analysis; discussions of commercial and technological trends; and assessments of market drivers and restraints.
Why you should buy Cyberwarfare Market 2010-2020:
The main benefits you can derive from purchasing this report are:
- You will come to understand the current state of the global cyberwarfare market and form a clear vision of how it will develop, based on our market forecasts for 2010 to 2020.
- You will be able to examine our detailed global sales forecasts, as well as national sales forecasts for the 12 leading national cyberwarfare markets.
- You will gain an insight into the cyberwarfare market's potential for further growth by examining the major commercial drivers and restraints.
- You will learn how the world's armed forces are not only researching defensive measures but also developing offensive cyberwarfare capabilities.
- You will find out how the leading players in the cyberwarfare market are performing, with details of recent contract awards.
- You will be able to appreciate the wide range of factors affecting market growth with our SWOT analysis of strengths, weaknesses, opportunities and threats.
Companies featured in this report include:
- Boeing Integrated Defense Systems (IDS)
- Co-operative Cyber Defence Centre of Excellence
- F-Secure Corporation
- Kaspersky Lab
- McAfee Inc
- Science Applications International Corporation (SAIC)
- Spirent Communications
The global cyberwarfare market is expected to continue to see significant sales growth in 2010 and beyond. We forecast that armed forces and governments are set to increase spending on securing their critical networks. More governments are anticipated to accelerate plans to develop defensive and offensive cyberwarfare capabilities, as well as to establish centres for co-ordinating cyber responses. We believe the cyber-security boom offers a lucrative range of business opportunities for defence companies and software developers. To learn more or to buy a copy of this report, visit: [http://www.bharatbook.com/detail.asp?id=129774&rt=Cyberwarfare-Market-2010-2020.html]
Or contact us at:
Bharat Book Bureau Tel: +91 22 27578668 Fax: +91 22 27579131 Email: info@bharatbook.com Website: http://www.bharatbook.com Blog: [http://bharatbookresearch.blogspot.com] Follow us on twitter: http://twitter.com/3bbharatbook
The breaking point for parents of Chicago's schoolchildren finally came in the fall of 1987: For the ninth time in less than two decades public school teachers were out on strike. The years of financial game-playing, continued crisis, and, most of all, educational decline had taken their toll.
Reformers had for several years chronicled the failures of the city's schools and the follies of the system's central bureaucracy, which had grown dramatically even as school enrollment, teaching staff, and real teaching time declined. Business leaders had long grumbled that the graduates of the public schools were grossly unqualified. Blacks had long denounced both the system's deliberate segregation and white politicians' attempt to retain control of the system, even though only 12 percent of students were white. But many black parents discovered in the 1980s that the system was still rotten, even with black superintendents and a predominantly black staff.
The public anger and frustration, as well as scholarly research, all pointed toward the central bureaucracy as the heart of the problem. During the year following the strike, an unusual coalition of educational reformers, the business establishment, and white, black, and Hispanic community organizations successfully pushed for state legislation. The new law radically decentralized power to the local school level, giving parents and community representatives primary responsibility to hire and fire principals, set budgets, and approve school plans. Coming at the close of a decade of reports on the failings of American education, Chicago's school reform was one of the most dramatic and ambitious responses.
The stakes are high for the nation's third-largest public school system and for national education policy. President Bush has focused on a free-market model of parental school "choice," an ambiguously defined option with increasing political appeal. By contrast, Chicago's reform emphasizes democratic "voice" as the route to effective schools. It hypothesizes that local control will create more effective, responsive, and innovative schools.
The Chicago plan, however, does not preclude parental choice. The city continues to offer its pre-reform array of magnet schools and optional enrollment programs, which, according to critics, help only a privileged few. The new state law mandates the eventual creation of more programs involving parental choice, but Chicago education reformers reject the free-market model and hope to place new options for choice within a framework of popular democratic institutions.
There are strong political and educational reasons for favoring the popular democracy framework over the marketplace, but if Chicago's bold experiment fails for lack of support, the drumbeat for the free-market model will surely grow louder. Unfortunately, these new laboratories of local educational democracy have been hobbled by inadequate external political and financial support from the outset.
Starting Small
For two years, Chicago's local school councils have been grappling with their new responsibilities, overseeing the education of roughly 400,000 students in 610 schools. They have created programs to restore discipline, fight gangs, and generate school spirit -- ranging from enforcing tough ultimatums on gang members and drop-outs to requiring school uniforms. They have fixed up school buildings, improved libraries, added computers, beefed up arts programs, and initiated new curricula from environmental studies to African-American history.
Many schools added teacher aides or teachers, and a few have either decided to expand their facilities, start a new school, or relieve overcrowding by shifting to a year-round schedule. In some instances, there are new channels of communication and cooperation among neighborhood schools or among teachers and sub-units within schools that had been isolated before, reporting only to superiors. New principals were hired in 38 percent of the schools, and a few dozen of the system's schools have contracted with distinguished outside experts to develop curricula or to revamp the entire school.
Despite an auspicious start, however, Chicago's school reform is far from a clear-cut success. The roles of parents, principals, teachers, and bureaucrats are still unclear. Many factions with disparate agendas are waiting for reform to fail, and no strong leadership figures in politics or the schools are pushing for its success. State and local politicians have been unwilling to commit the money needed to put flesh on the bones of reform. The central bureaucracy continues to fight to retain control and to waste scarce dollars; by design or inertia, its actions sabotage reform.
Few local school councils (LSCs) have so far initiated any far-reaching educational reforms. There has been little effort to involve teachers or give them the necessary retraining, and the teachers union remains an ambivalent participant in reform. If these obstacles persist, the tentative blossoming of hope and enthusiasm among parents and the reform-minded minority within the system may soon wither into cynicism and withdrawal.
Yet the new law has begun to effect change. Chicago reform is predicated on the idea that there is no one best system for all schools. And the responses so far are diverse:
At Dumas Elementary School in a poor black neighborhood, the LSC has supported principal Sylvia Peters' ambition to begin offering high school courses for students who want to continue with the Dumas program's focus on the arts, African-American culture, and development of character and moral values. But Dumas parents remain divided over the LSC decision to encourage school uniforms, and teachers remain divided over the best educational strategy for teaching reading.

At Amundsen High School, a United Nations of ethnicities with a big gang problem, the LSC picked a new principal, banned all gang symbols, and set new standards: roughly one-fourth of the students who had been failing were told that they could return only if they earned summer school credit, then came with their parents in the fall and signed a performance contract. The school also cracked down on truancy.

At Whitney Young High School, a respected magnet school, the LSC decided to focus attention on black and Hispanic males in danger of "dropping out spiritually," providing special counselors who would link the kids with individual mentors.

In the Mexican-American Little Village neighborhood, seven of the ten local schools have begun working as a "cluster" to provide special programs and educate parents on weekends. There were some divisive battles, including the charge that a white principal was replaced by a Hispanic for ethnic reasons. Also, neighborhood residents hailed one school's decision to relieve overcrowding by physically expanding, but the decision also reflected the unwillingness of Hispanics to send their children to underused, largely black schools not far away.

At the Michele Clark Middle School in a west-side African-American neighborhood, the LSC bought computers for a new writing program it initiated (at the suggestion of a teacher), established a new program for teaching algebra, encouraged more African-American emphasis in the curriculum, and tried to bring in Whittle Communications' controversial school-oriented television Channel One (but the superintendent vetoed it). Yet parental participation has largely dwindled to those on the LSC since the excitement of choosing a principal passed.
There also are signs that more substantial educational innovation may be underway soon. For example, a group of high schools has joined Brown University Professor Theodore Sizer's Coalition of Essential Schools, which encourages teacher collegiality and an individualized approach to students. James Comer, a professor of psychiatry at Yale University's School of Medicine and a faculty member of Yale's Child Study Center, will work with a half-dozen schools to create the supportive atmosphere and cooperative relations among teachers and parents that have transformed other schools. Former civil rights leader Bob Moses has already brought to several schools his Algebra Project, which teaches algebraic concepts through experiences of daily life.
This diversity of advice on innovation is new. There had been limited experiments in school autonomy and decision-making prior to the 1988 reform law, but central administrators nevertheless had dominated technical support.
The One Worst System?
Chicago's school crisis had been brewing for decades before then-Secretary of Education William Bennett, in an overstatement made in the aftermath of the 1987 school strike, declared the city's schools the "worst in America." Under the two-decade rule of the late Mayor Richard J. Daley, Chicago had deliberately maintained a highly segregated system. As whites fled to the suburbs and many remaining white families sent children to the large Catholic school system, citizen support for the public schools diminished. Daley tried to buy labor peace with the school system's unions through financial sleight-of-hand that, after his death, resulted in full-scale crisis in 1979.
Over the years, blacks fought for more control of a system that by 1990 had a student population that was 58 percent black (plus 27 percent Hispanic, 3 percent Asian and other nonwhites). For the past decade the superintendents, board chairs, and a slim majority of the staff have been black.
Yet, as several reform groups documented throughout the 1980s, the school system's performance was bad and getting worse. The central bureaucracy was not just a terrible drain on resources; perhaps even worse, to justify their existence, administrators attempted to regulate more closely nearly every aspect of teaching and school life. Principals and teachers alike were oriented more toward demands of their superiors, less to the needs of students, the wishes of the parents and community, or their own professional judgment. The bureaucracy imposed a disastrous reading skills program that subverted the joy of reading through mind-numbing regimentation designed to be "teacher-proof."
The system especially failed its poor, black, and Hispanic students. The central bureaucracy diverted more than one-third of state compensatory aid for schools with poor children to support its own operations. In a vicious circle of incompetence, Chicago schools overwhelmingly drew their teachers from two of the worst state universities, which enrolled mainly ill-prepared Chicago students.
The result was not surprising. From 43 to 57 percent of entering high school students failed to graduate, according to studies by the Chicago Panel on Public School Policy and Finances and by Designs for Change, another advocacy group. But a few schools performed fairly well. Those schools were mainly in white neighborhoods or were magnet schools designed to provide a modicum of integration and refuge for middle-class families.
Overwhelmingly, the best minority students were siphoned off to magnet schools, and white middle-class parents connived and lobbied to get these better schools to accept their children. The vast majority of the system was allowed -- perhaps even expected -- to fail. But as Designs for Change researchers Donald Moore and Suzanne Davenport argued, this "new, improved sorting machine" gave the appearance of greater fairness while perpetuating or worsening the traditional inequities.
At present, more than half of Chicago's high school students and a quarter of the elementary students attend schools outside their residential districts, under a wide variety of pre-reform programs including magnet schools. But when it comes to admission to the better magnet schools, the real choices are made by the principals, from a huge pool of applicants. For the remainder, choice is a grim illusion: one bad school or another. As a result, less than one-third of those students who do not drop out read at twelfth-grade level when they graduate.
In response to the crescendo of discontent, Mayor Harold Washington, the city's first black mayor, convened an "education summit" in 1986 to persuade businesses to guarantee jobs to public school graduates if they met performance standards.
According to Mary O'Connell's School Reform Chicago Style: How Citizens Organized to Change Public Policy, school Superintendent Manford Byrd rejected the proposal from the education summit, saying, "We've got an excellent system; if you give us New Trier [a wealthy suburban school district] students, we'll have good outcomes." Byrd's "blame the victim" mentality pervaded the system, legitimized by research such as the 1966 report Equality of Educational Opportunity by James Coleman and colleagues, which showed the primary determinant of school success was family socioeconomic status.
Yet in the wake of the Coleman report, educational researchers across the country had found or created schools that made a difference in poor communities. For example, Comer has helped create such effective schools in New Haven and other cities. These schools share certain distinguishing features: Principals are educational leaders; parents are involved; staff believe that students can learn; time is primarily spent on "interactive" learning, especially reading; and there are consistent efforts to maintain an orderly, attractive atmosphere and to discourage drop-outs and truancy. Comer, in particular, stresses development of a nurturing, supportive atmosphere that integrates the parents into school life.
Black Doubts
But Mayor Washington, bogged down in battles with his old Democratic machine enemies, was reluctant to devote much effort to a school system that he at best indirectly influenced. He was also reticent to disturb the status quo since the public school system provided the economic base of a large proportion of the city's black middle class, who had been his political supporters.
The major black community and civil rights groups were often actively hostile to what they called educational "deform." Many blacks also saw the reform effort as a threat to black control of a major urban institution. And middle-class black educators had doubts that poor parents of any sort were capable of running the schools.
Even black advocates of reform were suspicious about some of their allies such as white bankers and executives or Hispanic and white community groups that had clashed with blacks on other issues. And some blacks were wary of reformers who criticized the local teachers union, led by a black woman, for the union's resistance to parental involvement in running the schools.
Jesse Jackson lobbied forcefully against reform and continued afterwards to fight its implementation, battling, for example, to save the job of his old protégé, Manford Byrd. The black middle class, many of whom were school employees, was the political and financial base for Jackson's Operation PUSH, despite his vocal advocacy of the disenfranchised poor. When PUSH came to the shove of disgruntled black parents, Jackson's organization sided with the black administrators, above all, and the teachers.
The nineteen-day teachers strike in 1987 provoked such a grass-roots outpouring of anger and dissatisfaction that the mayor reactivated the education summit and brought in a new group of community and parent organizations. Few of the existing parent-teacher associations or local school improvement councils, weak bodies devised under previous reform legislation, were involved in the reform debate.
Washington had been strong enough politically to risk confronting an important group of supporters. His successor in the mayor's office, Eugene Sawyer, who took office upon Washington's death in 1987, was politically weak, more attuned to machine-style patronage politics, and more dependent on the black middle class. Sawyer saw the battle over reform as about "contracts and jobs," which he feared blacks would lose.
State Action
After cliffhanger votes and veto battles, the school reform law passed the state legislature in 1988, effective in the fall of 1989. The final bill vested power to hire and fire the principal, make local budgets, and design school improvement plans in the ten elected members of the local school council, including six parents elected by parents, two community residents elected by the community, and two teachers elected by the staff.
The principal received new authority to hire staff without regard to union seniority and to remove unqualified teachers more speedily. The principal in theory also gained power over administration of the local school (although both school engineers and lunchroom personnel continued to assert their independence). Teachers were expected to form a Professional Personnel Advisory Committee to advise the principal and LSC.
The reform plan decentralized responsibility much more drastically than had New York City's earlier, troubled experiment with district school boards, and it gave more power to parents, less to professionals, than school reform had done in other cities such as Rochester, New York. (Although a few school systems, such as those in Miami, Florida, and Hammond, Indiana, adopted "school based management" before Chicago, their reforms were too new to provide much conclusive evidence on what worked.)
The law also placed a cap on administrative expenditures by the central board and mandated that the so-called state Chapter I funds for low-income students be distributed to the schools, not appropriated by the central office. Thus the reform reallocated about $40 million of a $2.3 billion budget to local schools to spend as they saw fit in the first year of reform, with about $53 million additional in each of the four succeeding years. In the first year of reform the average elementary school had about $90,000 to budget as it wished. The School Finance Authority, which had been established in the 1979 fiscal crisis, assumed oversight authority, and the system was mandated to prepare improved school choice options.
When the state legislature passed the Chicago school reform law, it provided no new money to implement the reforms or any innovations that might flow from local school council initiatives. Despite its constitutional responsibility to finance the majority of school costs, the state share of education expenditures has actually dropped from 48 percent in 1976 to 40 percent today. Legislators temporarily raised income taxes in 1989 (then made the hike permanent in 1991), but that simply slowed the steady relative decline in state funding. From the outset, reform was a financial orphan.
The Limits of Change
While it provided for considerable decentralization of power in the Chicago school system, the 1988 reform law by no means overturned the status quo. Almost from the start it was apparent that the local school councils could exercise their new powers only within a very circumscribed universe. The central board and administration retained the power to negotiate contracts with the unions and to make many fundamental overall budgetary decisions. The mayor, with approval of the city council, retained the power to appoint the school board, which in turn hired the superintendent. The state legislature controlled nearly half the purse strings.
And the councils faced other impediments; the mechanics of implementing the reform plan itself were daunting. The decentralization meant conducting more than 600 local school council elections and providing a crash course in the intricacies of budgeting and evaluating the school's principal to the newly elected council members -- many of whom were poor and ill-educated, some of whom could not speak English (one successful council mixed poor blacks and non-English speaking Chinese). Many newly elected councils soon faced the difficult task of evaluating the school's principal, often seeking out and evaluating candidates for the job.
Nevertheless, at the most basic level, the system worked: Nearly all the schools met their deadlines for local budgets and school plans, which were often delivered with little notice and less help. There were bitter fights in a few schools and councils, especially over replacement of principals. Charges of racial discrimination in a few councils stirred waves of paranoia, but such problems were not widespread.
In a study of a dozen local school councils, the Chicago Panel on Public School Policy and Finances found neither a model of dynamic grass-roots revolution nor a disaster. Only three out of fourteen school councils queried in another Chicago Panel survey have made important changes that might affect education, such as team teaching or encouraging cooperative learning among students. Three others did very little, and the remainder made modest first steps.
But the overall conclusion from central administrators, local reformers, and outside observers is, according to Chicago Panel Director G. Alfred Hess, Jr., "that the new ideas aren't very creative in a lot of places." Joan Jeter Slay, a former school board member and a leader of Designs for Change, concluded that at best twenty-five schools -- less than 5 percent of the total -- have undertaken significant restructuring.
While Chicago's school reform law focused on school governance, there is a large leap from either citizen control or decentralization to making schools work better. "By itself local school management does not generate better schools," argues John Kotsakis, assistant to the president of the local teachers union. "Are we doing something that changes the way kids are engaged in the learning process? Clearly in most cases, we're not." Beyond restructuring governance, Kotsakis argues, the schools must now restructure school time and space, breaking down the traditional isolation of teachers and fragmentation of the day.
Yet parents often know more about what they want out of schools than how to get it. "Nobody looked at [reform] in terms of the vehicle that we need to increase student achievement," Slay said. "We knew councils were critical, but I don't think we understood how big an adult education program we were undertaking."
Teachers as well as parents needed training to carry out the reform's objectives, although no training programs were mandated by the reform legislation. Teachers now appear less fearful of reform and more involved than at the outset, but neither the Professional Personnel Advisory Committees nor the union has yet played an important role. The teachers union has argued for an outside academy that generates educational ideas, but reformers suspicious of the union blocked earlier teacher-training proposals under union control. Yet many reformers would agree with Kotsakis's argument that school reform can succeed only if teachers are better trained and work more collegially with principals who do not insist on being authoritarian.
"One of the key holes in reform is: What's the incentive for teachers to change what they're doing in the classroom?" argues Anthony Bryk, a professor of education at the University of Chicago. Many teachers, veterans of decades in the schools, have grown cynical with the twists and turns of policies and suspect that the central bureaucracy will ultimately regain control.
At its most successful, reform has given local school councils a chance to get rid of incompetent or unresponsive principals and to make all principals accountable to local school government, not the bureaucracy. At times councils initiate ideas, but more often they express concerns and offer support to principals who are trying to make changes. Some councils, however, have been bogged down in conflict, unable to forge a consensus.
Some reformers now think that the central office, apart from a few functions like administering the payroll, should become a service center, offering program support or even maintenance services in competition with independent suppliers. They are not likely to receive support toward that goal from the central administration, however.
Reform has diminished the size of the bureaucracy -- at least 550 positions have been cut out of a central bureaucracy of about 4,100 -- but top administrators have done everything they could to save themselves. Although district offices were cut heavily, the central office remained protected; cuts disproportionately targeted clerical and lunchroom workers rather than upper-level bureaucrats, whose jobs were often simply shuffled around.
The central bureaucracy has not only failed to encourage reform but frequently obstructed it. "Pershing Road" -- the site of the huge central office -- has imposed arbitrary and abrupt deadlines, shifted paperwork burdens to the local school councils, and provided inadequate and confusing information.
"One of the tragedies of school reform is the utter lack of leadership at the center," Hess argues. Richard M. Daley (son of the late Richard J. Daley) made support for the new school reform a central plank in his successful 1989 mayoral campaign, but his record has been mixed since he took office. His interim school board picked a superintendent, Ted Kimbrough, who demonstrated no enthusiasm for reform and has tried to recentralize power. The board also negotiated teacher contracts that were far beyond projected school revenues, but did guarantee labor peace for Daley's 1991 reelection. Shortly after the election, the board announced a deficit of $315 million out of a $2.3 billion budget for the next fiscal year.
Another Crisis
The latest budget crisis in the summer and fall of 1991 has had both economic and political dimensions. In comparison with other big cities as well as many suburban districts, Chicago's school funding is below average; the state contributes less than it should, and local property-tax rates for schools are the third-lowest in the metropolitan area, according to Hess.
The school system's financial problem is only partly waste and misallocation of money; the schools are also simply and seriously underfunded. Although many studies show no statistical connection between spending more on education and getting improved results, that does not mean spending is never justified. Paying teachers more may be necessary just to maintain the teacher corps. Also, group merit bonuses for teachers in schools that make significant progress or dramatic reductions in class size could bring significant improvement.
Reformers, for their part, hoped to use the 1991 fiscal crisis as a lever to increase the amount of time teachers actually spend with students each day and to reduce the bureaucracy further. For example, the school board had the option of saving $40 million if it cut another 800 positions from the central office, Hess argued. But, while some of the board's initiatives were blocked or deflected, the reformers were unable to mount a sufficiently forceful campaign to shape the budget.
The future of the reform effort has already been threatened by the budget crisis. Daley used the fiscal squeeze as the occasion to raise the threat of education vouchers, which would be appealing to his loyal constituents with children in parochial schools. The mayor resisted finding new local property or state tax money for the schools. And the state legislature, in its 1991 budget, effectively left Chicago schools in a slowly tightening fiscal noose. In his Fiscal Year 1992 budget, Superintendent of Schools Kimbrough trimmed back the central office by only slightly more than 200. But more than 1,200 teachers were cut, school supplies and equipment were slashed by 90 percent, and many special programs for children who are poor or at risk of dropping out were eliminated. The superintendent closed thirteen schools, including several that were deemed successful, even though the move saved less than $2 million.
Although schools opened in the fall of 1991, many were in chaos for weeks. In November, a teachers contract was finally ratified. The final deal gives teachers a small pay increase this year and will result in immediate school closings. Reformers, who wanted cuts in the bureaucracy rather than school closings, warned that the financial crisis will return with greater force next fall.
The fiscal crisis and cuts have undermined both the spirit and substance of reform. "We've had a couple of years to create a sense of efficacy [on the councils]," Bryk argued. "Now there's tremendous centralization of decision-making that has pervasive effects. You're trying to convince people they have power and then telling them they have no power. That could be terribly destructive to reform."
During the second round of local school council elections in the fall of 1991, both the number of candidates and the voter turnout dropped by nearly half from the initial elections, but there were reasonably full and contested slates nearly everywhere. The fall-off may reflect reduced local controversy and the early routinization of local school politics as much as any disillusionment with reform, but the failure of Kimbrough or Daley to push participation also hurt.
The success of reform will depend as much on the creation of a constructive local political culture as on the mechanics established in reform legislation. Historically, locally controlled schools have ranged from models of democracy to models of parochialism and patronage. To succeed, Chicago will have to buck its own traditions of patronage politics.
There are some checks against abuse. The School Finance Authority has oversight powers, but more important, school reform and community groups have helped keep reform on course. Not only have most councils remained independent of their aldermen and other political forces, but some school council leaders have become new political challengers. Education has become a more openly debated political issue and a much larger constituency is learning about the issues at stake.
Paradoxes of Decentralization
In a still limited way, school reform is a social movement. While reform now has far greater black popular support than when it was proposed, the movement still relies most heavily on the groups and individuals that fought for it initially. No one has organized the members of the local school councils into a coherent, potent voice in defense of their own interests.
The centralized bureaucracy clearly contributed mightily to the deterioration of Chicago schools. But decentralization alone will not be the answer. Local control would work best, ironically, if there were central leaders who were strong champions of local school councils and guarantors of both financial and technical support. Decentralized institutions need nurturing and protection by such leaders, but few leaders are willing to help without gaining influence or control. Yet without outside support -- money, encouragement, professional assistance, and more -- the local school councils by themselves seem unlikely to lead to the dramatic educational innovation the schools need.
The formal governmental mechanisms of school reform may be necessary but not sufficient. To flourish, the schools require creation of a degree of consensus among teachers, parents, students, and principal. If the teachers union must become more flexible and as concerned about the quality of education as the contracts of its members, it is also true that reformers and councils must respect the union rights, professional responsibilities, and need for reasonable job security of the teachers. Chicago has reached temporary accommodation but no profound resolution of these issues.
In some cases, members of local school councils may bring ideas that improve their schools. But in most instances, school reform will work -- if it does at all -- because the councils hire strong, innovative principals who are accountable to the parents, community, and teachers. Under the old system, principals thrived or survived by pleasing their superiors in the central administration, which had little interest in giving them autonomy. But effective principals will also share power and responsibility with teachers and parents, just as an effective superintendent in this new system will encourage the autonomy of individual schools.
If all goes well in the next five to ten years, differences in quality among the schools may decline as the overall performance rises, while the differences among individual schools' educational programs and philosophies may grow. At that point, introducing more choice will be logical and necessary. Most Chicago reformers do not want a system where the principals exercise more control over school placement than do the parents. One alternative might be the use of a lottery to select among a pool of applicants for limited space in a popular school. "We will not get to choice as a vehicle for change," Hess argues. "We will get change, and that will result in greater choice." But politicians may opt for a voucher or free-market choice system if improvements do not come quickly.
Choice Possibilities
In response to failing institutions, social scientist Albert O. Hirschman has argued, people may choose to exit or to use their voices to bring change. Advocates of laissez-faire choice, such as Brookings Institution researchers John Chubb and Terry Moe, argue that the market offers the only alternative to stultifying bureaucracy. If parents do not like public schools, they say, they should be able to exit and go to private schools at public expense.
Providing education, however, is not like marketing pizza or laundry soap. Schooling is a central element in determining the character of society; consciously or not, it inculcates values in succeeding generations of citizens. If the failures of the schools lead to such a politics of distrust that government is abandoned, yet another mechanism for creating a sense of community and common social goals will be lost.
Ironically, despite our society's inequitable and inadequate support for education, education carries an especially heavy burden in American culture: It is the key to "equality of opportunity," the surrogate for social equality. It is also the glib solution offered for all social ills -- unemployment, drug abuse, crime, trade deficits, and so on -- many of which are at least exacerbated by the workings of the free market. It is asking too much of education to expect it to redress all these inequities and social woes, but education is an important potential counterweight to the market. Even marketplace choice advocates acknowledge that education is different: The public still foots all or much of the bill under their proposals.
The marketplace model is no more likely to produce effective innovation than the democratic reform strategy, and it is more likely to produce inequalities. Indeed, the limited marketplace choice in education through the housing market that differentiates suburban and big city schools already contributes heavily to existing inequalities. And there is no assurance that the free market will produce high-quality education: Does the performance of the market in producing children's toys or television programming (or the swindles in private trade and technical schools) justify turning education over to private enterprise?
Privatized, free-market education would not necessarily remain the domain of small educational entrepreneurs. Many corporations have central bureaucracies that are as stifling to innovation and worker initiative as any big school board. Would a General Education, Inc. assuredly be any different from General Motors?
The accountability offered through local school councils has some effects similar to a voucher system, signaling the principal and teachers what the parents want out of the school. Also, the variation that inevitably will develop with local control can lead to more meaningful choice as that option is expanded. In many cases, where the local community school proves to be satisfactory and responsive, it will also mean that such choice is unnecessary.
The most compelling argument for education choice is not the economic market analogy but rather the observation made by Deborah Meier, a pioneer advocate of alternative schools and choice in New York's East Harlem school district. Because children are very different in how they learn, even if they all can learn, and because there is no one correct way to become educated, Meier argues, choice of different types of schooling can and should be part of any public system.
If the system is sufficiently democratic, the public's voice can help shape the system as well as each school. There is a value, if we want a democratic society, in having educational institutions that recognize broader responsibilities than their own profit and loss. There is also value, both educationally and politically, in involving parents and communities as much as possible in the schooling of society's next generation.
In small communities with adequate resources, community school boards already often work well. They are rarely models of active participatory democracy, but they do provide for local accountability. With the centralization of power in large bureaucracies in the big cities, that accountability was lost.
Chicago school reformers are still betting that the new wave of democracy can topple the bureaucratic castle on Pershing Road and provide some of that lost accountability for local schools. As progress is made, politicians may be more willing to provide the schools the money they need, but such prospects now look bleak. Without political and financial support, including a champion of decentralized power at the helm of the school system, the local school councils will never get a fair test.
If a cooperative political and educational culture emerges around the schools, there should be more innovation and meaningful educational alternatives. The decentralized democratic strategy provides only a framework for change, an alternative to both the current bureaucracy and the free market. It offers many of the virtues claimed for the market plus the advantages of a more equitable, participatory, and responsive educational system and political culture. The system's final exam, however, will be based on the quality of educational institutions parents, citizens, teachers, principals, and students create within that framework. For now, the grade remains "incomplete."
New model makes diagnosing osteoporosis easier
As any expert will tell you, osteoporosis is complex and hard to predict. Most clinicians treat it only when they detect low bone density, viewing this as the definitive test. But machines to detect low bone density are expensive and far from universally available. Moreover, bone density measurements may not adequately predict osteoporosis. Therefore, given the paucity of diagnostic options, millions face unknown threats of debilitating fractures, while others may receive treatment they may not need.
Now, researchers at the World Health Organization's Collaborating Center for Metabolic Bone Diseases in Sheffield, UK hope to make osteoporosis prediction more accurate and accessible. A new model described at the IOF World Congress on Osteoporosis in Toronto, Canada, identifies susceptible people according to country-specific risk factors, including age, height, and weight, among several others (conference abstract PL2). By integrating these factors, the model predicts the likelihood of hip and other osteoporotic fractures over ten years.
Osteoporosis fracture risk varies worldwide by as much as ten-fold, said presenter John Kanis, the Center's director, who suggests higher risks in wealthier countries may reflect more sedentary lifestyles. "The model will be calibrated to specific countries and individuals according to their specific risk profiles," he said. "Our goal is to identify people who genuinely face a high risk of fracture in addition to those who don't, so that treatment can be more optimally directed."
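The article does not publish the Sheffield model's actual algorithm or coefficients, but a risk tool of this kind typically combines clinical factors on a log-odds scale with country-specific calibration. The sketch below shows only that general shape; every coefficient and country offset in it is invented for illustration.

```python
# Illustrative shape of a 10-year fracture-risk calculator: a logistic
# combination of clinical risk factors with country-specific calibration.
# All coefficients and offsets are invented; they are NOT the
# Sheffield/WHO model's actual parameters.
import math

COUNTRY_OFFSET = {"UK": 0.0, "Sweden": 0.3, "China": -0.8}  # hypothetical

def ten_year_fracture_risk(age: float, height_m: float, weight_kg: float,
                           prior_fracture: bool, country: str) -> float:
    """Return a toy 10-year fracture probability in [0, 1]."""
    bmi = weight_kg / height_m ** 2
    log_odds = (-9.0                     # hypothetical baseline
                + 0.09 * age             # risk rises with age
                - 0.05 * bmi             # low body mass raises risk
                + 0.7 * prior_fracture   # prior fracture raises risk
                + COUNTRY_OFFSET[country])
    return 1.0 / (1.0 + math.exp(-log_odds))

# A 70-year-old, 1.62 m, 55 kg, prior fracture, Swedish calibration:
print(f"{ten_year_fracture_risk(70, 1.62, 55, True, 'Sweden'):.1%}")
```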
Obesity Harms Bones
In a different presentation, Dr. Hong-Wen Deng of the University of Missouri, in Kansas City, and colleagues from China showed that obesity can accelerate bone loss (conference abstract P152). The finding undermines prior assumptions that obesity--a risk factor for everything from diabetes to heart disease--made skeletons stronger and more resistant to fractures. Deng's research showed that the bone-strengthening benefits of a heavy body aren't due to fat, as some might have assumed, but to elevated muscle mass, which increases bone density. Higher fat content, ironically, was linked to weaker bones, which are more prone to fractures. "This is quite contrary to conventional wisdom that a heavier body per se helps reduce the risk of osteoporosis," Deng said. "We conclude that reducing obesity is good for osteoporosis care."
Deng and colleagues compared measures of fat, lean mass (i.e. muscle mass), and total body weight with measures of bone density in 1,988 Chinese and 4,489 Caucasian subjects. Lean mass was found to be positively associated with high bone density, reinforcing Deng's view that patients should build bone strength by building muscle, not by gaining weight through fat accumulation. Along similar lines, obese individuals could lessen their osteoporosis risk by losing fat, either with lifestyle changes, or in the future with drugs that block genetic factors leading to obesity. Thus, fat loss and osteoporosis prevention are, in fact, linked by shared goals to improve health. Calling the finding a "challenge to current dogma," Deng emphasized that additional research is needed to replicate the results in other populations.
Along these lines, a meta-analysis by John Kanis of the University of Sheffield, UK, included within the WHO's new Fracture Risk Assessment Guidelines finds thinness is a fracture risk, but obesity, on the other hand, is not protective.
Predicting Osteoporosis in Middle Age
Meanwhile, presenter Anna Holmberg, of Malmö University Hospital, Sweden, showed that fracture risk in both men and women can be predicted during middle age. Osteoporosis interventions usually focus on the elderly, who are by far the most vulnerable among the population. Holmberg's findings suggest prevention opportunities can start with much younger people. "If we can identify middle-aged individuals who are at risk, then we can treat them before they get fractures," she said.
The researchers reviewed data for 22,444 men and 10,902 women recruited by the Malmö Preventive Project, a prospective study of cardiac health that was initiated in Sweden in 1974. At the time of recruitment, the subjects' ages ranged from 44 to 50. Now, more than 30 years later, baseline characteristics could be matched with fracture incidence over time.
Holmberg and her colleagues found that fracture incidence began climbing steadily at age 60. The most reliable predictors included diabetes, physical frailty, mental illness, and excessive alcohol consumption.
Diabetes, in particular, was strongly linked to elevated fracture risk, even more so than among the elderly. "The link with diabetes was much stronger than we anticipated," Holmberg said, adding that vertebrae, ankle, and hip bones are especially vulnerable. "Diabetes can impair vision, making individuals less steady on their feet," she explained. "It also disturbs intestinal calcium uptake and vitamin D metabolism."
Among men, prior hospitalization for mental illness was strongly linked to all types of fractures. Data limitations prevented Holmberg from determining why the mentally ill appear to be at elevated risk. However, she suggests they might, on average, lead more dangerous or violent lives with more fracture opportunities.
SSRIs May Increase Fracture Risk
An additional study found that drugs used to treat depression and other mental illnesses may heighten the risk of fractures. Dr. Brent Richards from McGill University, working with researchers in the Canadian Multicentre Osteoporosis Study (CaMos), presented the findings. According to the study's results, daily use of selective serotonin reuptake inhibitors (SSRIs)--which rank among the most widely prescribed drugs in the world with combined annual sales of US $8.3 billion--was associated with an elevated risk of X-ray confirmed fragility fractures among subjects aged 50 years and above. "The take home message is that SSRI use, depression and fractures are common in the elderly," Richards said. "So, given these high prevalences, the effect of SSRI use on fractures may have important public health implications."
Prior studies by other researchers dating back to 1998 also linked fractures and SSRI use. However, these earlier studies failed to adequately address the role of potential confounders, Richards said. Although care must be taken in the interpretation of any observational study, to help to isolate SSRI effects, the current study controlled for age, sex, bone mineral density (BMD), falls, cigarette smoking, nutrition, and many other factors that can also exacerbate fracture risk. SSRIs in particular are known to be associated with low BMD and increased risk of falling. But even after accounting for falls and low BMD, SSRIs produced a two-fold elevation in fracture risk, leading the CaMos researchers to suspect that other, unknown mechanisms are at play. "It's important to note the study was done in older subjects," Richards said. "For younger individuals, the risk of fractures associated with SSRIs is likely to be smaller. But if you're over 50 years of age and at an increased risk of fracture, then SSRIs may increase this risk further. On the other hand, one must also consider that the treatment of depression may confer substantial benefit to patients with this condition."
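The confounder adjustment Richards describes is commonly implemented as a regression that includes the exposure alongside the suspected confounders. Below is a minimal sketch on simulated data (not the CaMos cohort), assuming the statsmodels package is available; with an SSRI effect built in at a log-odds of 0.7, the adjusted odds ratio the model recovers comes out near the two-fold elevation reported in the study.

```python
# Sketch of confounder-adjusted risk estimation on simulated data:
# logistic regression of fracture on SSRI use, adjusting for age,
# bone mineral density (BMD), and falls. Data are synthetic, not CaMos.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(50, 90, n)
bmd = rng.normal(0.9, 0.12, n)       # g/cm^2, synthetic
falls = rng.binomial(1, 0.2, n)
ssri = rng.binomial(1, 0.1, n)

# Simulate fractures with a built-in SSRI effect (log-odds 0.7 ~ OR 2).
log_odds = -4.0 + 0.07 * age - 3.0 * bmd + 0.8 * falls + 0.7 * ssri
fracture = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

df = pd.DataFrame({"age": age, "bmd": bmd, "falls": falls,
                   "ssri": ssri, "fracture": fracture})
fit = smf.logit("fracture ~ ssri + age + bmd + falls", data=df).fit(disp=0)
print(f"Adjusted odds ratio for SSRI use: {np.exp(fit.params['ssri']):.2f}")
```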
Osteoporosis, in which the bones become porous and break easily, is one of the world's most common and debilitating diseases. The result: pain, loss of movement, inability to perform daily chores, and in many cases, death. One out of three women over 50 will experience osteoporotic fractures, as will one out of five men [1, 2, 3]. Unfortunately, screening for people at risk is far from being a standard practice. Osteoporosis can, to a certain extent, be prevented; it can be easily diagnosed, and effective treatments are available.
The International Osteoporosis Foundation (IOF) is the only worldwide organization dedicated to the fight against osteoporosis. It brings together scientists, physicians, patient societies and corporate partners. Working with its 170 member societies in 84 locations, and other healthcare-related organizations around the world, IOF encourages awareness and prevention, early detection and improved treatment of osteoporosis.
1. Melton LJ, Chrischilles EA, Cooper C et al. How many women have osteoporosis? Journal of Bone and Mineral Research, 1992; 7:1005-10
2. Kanis JA et al. Long-term risk of osteoporotic fracture in Malmo. Osteoporosis International, 2000; 11:669-674
3. Melton LJ et al. Bone density and fracture risk in men. Journal of Bone and Mineral Research, 1998; 13(12):1915
IOF World Congress on Osteoporosis, held every two years, is the only global congress dedicated specifically to all aspects of osteoporosis. Besides the opportunity to learn about the latest science and developments in diagnosis, treatment and the most recent socio-economic studies, participants have the chance to meet and exchange ideas with other physicians from around the world. All aspects of osteoporosis will be covered during the Congress which will comprise lectures by invited speakers presenting cutting edge research in the field, and 35 oral presentations and more than 680 poster presentations selected from 720 submitted abstracts. More than 70 Meet the Expert Sessions covering many practical aspects of diagnosis and management of osteoporosis are also on the program.
For more information on osteoporosis and IOF please visit: www.osteofound.org
| 9,683
| 4,393
| 2.204188
|
warc
|
201704
|
Researchers report initial success in promising approach to prevent tooth decay
Preventing cavities could one day involve the dental equivalent of a military surgical strike. A team of researchers supported by the National Institute of Dental and Craniofacial Research report they have created a new smart anti-microbial treatment that can be chemically programmed in the laboratory to seek out and kill a specific cavity-causing species of bacteria, leaving the good bacteria untouched.
The experimental treatment, reported online in the journal Antimicrobial Agents and Chemotherapy, is called a STAMP. The acronym stands for "specifically targeted antimicrobial peptides" and, like its postal namesake, STAMPs have a two-sided structure. The first is the short homing sequence of a pheromone, a signaling chemical that can be as unique as a fingerprint to a bacterium and assures the STAMP will find its target. The second is a small anti-microbial bomb that is chemically linked to the homing sequence and kills the bacterium upon delivery.
While scientists have succeeded in the past in targeting specific bacteria in the laboratory, this report is unique because of the STAMPs themselves. They generally consist of less than 25 amino acids, a relative pipsqueak compared to the bulky, bacteria-seeking antibodies that have fascinated scientists for years. Because of their streamlined design, STAMPs also can be efficiently and rapidly produced on automated solid-phase chemistry machines designed to synthesize small molecules under 100 amino acids, called peptides.
The first-generation STAMPs also proved extremely effective in the initial laboratory work. As reported in this month's paper, the scientists found they could eliminate the cavity-associated oral bacterium Streptococcus mutans within 30 seconds from an oral biofilm without any collateral damage to related but nonpathogenic species attached nearby. Biofilms are complex, multi-layered microbial communities that routinely form on our teeth and organs throughout the body. According to one estimate, biofilms may be involved to varying degrees in up to 80 percent of human infections.
"We've already moved the S. mutans STAMP into human studies, where it can be applied as part of a paste or mouthrinse," said Dr. Wenyuan Shi, senior author on the paper and a scientist at the University of California at Los Angeles School of Dentistry. "We're also developing other dental STAMPs that target the specific oral microbes involved in periodontal disease and possibly even halitosis. Thereafter, we hope to pursue possible medical applications of this technology."
Shi said his group's work on a targeted dental therapy began about eight years ago with the recognition that everyday dental care had reached a crossroads. "The standard way to combat bacterial infections is through vaccination, antibiotics, and/or hygienic care," said Shi. "They represent three of the greatest public-health discoveries of the 20th century, but each has its limitations in the mouth. Take vaccination. We can generate antibodies in the blood against S. mutans. But in the mouth, where S. mutans lives and our innate immunity is much weaker, generating a strong immune response has been challenging."
According to Shi, a major limitation of antibiotics and standard dental hygiene is their lack of selectivity. "At least 700 bacterial species are now known to inhabit the mouth," said Shi. "The good bacteria are mixed in with the bad ones, and our current treatments simply clear everything away. That can be a problem because we have data to show that the pathogens grow back first. They're extremely competitive, and that's what makes them pathogenic."
To illustrate this point, Shi offered an analogy. "Think of a lawn infested with dandelions," he said. "If you use a general herbicide and kill everything there, the dandelions will come back first. But if you use a dandelion-specific killer and let the grass fill in the lawn, the dandelions won't come back."
Hoping to solve the selectivity issue, Shi and his colleagues began attaching toxins to the homing region of antibodies. They borrowed the concept from immunotherapy, an area of cancer research in which toxin-toting antibodies are programmed to kill tumor cells and leave the nearby normal cells alone.
Despite some success in killing specific bacteria in the oral biofilm, Shi said his group soon encountered the same technical difficulty that cancer researchers initially ran into with immunotherapy. Their targeting antibodies were large and bulky, making them unstable, therapeutically inefficient, and expensive to produce. "That's when we decided to get higher tech," said Dr. Randal Eckert, a UCLA scientist and lead author on the study.
Or, as Eckert noted, that's when they turned to the "power of genomics," or the comparative study of DNA among species. Eckert and colleagues clicked onto an online database that contains the complete DNA sequence of S. mutans. They identified a 21-amino-acid pheromone called "competence stimulating peptide," or CSP, that was specific to the bacterium. From there, they typed instructions into an automated solid-phase chemistry machine to synthesize at once the full-length CSP and a 16-amino-acid anti-microbial sequence, and out came their first batch of STAMPs.
After some trial and error, Eckert said he and his colleagues decided "to get even shorter." They ultimately generated a STAMP with the same anti-microbial agent but with a signature eight-amino-acid CSP sequence to target S. mutans. "We pooled saliva from five people and created an oral biofilm in the laboratory that included a couple hundred species of bacteria," said Eckert. "We applied the STAMP, and it took only about 30 seconds to eliminate the S. mutans in the mixture, while leaving the other bacteria intact."
As dentists sometimes wonder, what would happen if S. mutans were eliminated from the oral biofilm? Would another equally or more destructive species fill its void, creating a new set of oral problems? Shi said nature already provides a good answer. "About 10 to 15 percent of people don't have S. mutans in their biofilms, and they do just fine without it," he said. "Besides, S. mutans is not a dominant species in the biofilm. It only becomes a problem when we eat a lot of carbohydrates."
Looking to the future, Shi said new STAMPs that seek out other potentially harmful bacterial species could be generated in a matter of days. He said all that is needed is the full DNA sequence of a microbe, a unique homing sequence from a pheromone, and an appropriate anti-microbial peptide. "We have a collection of anti-microbial peptides that we usually screen the bacterium through first in the laboratory," said Shi. "We can employ the anti-microbial equivalent of either a 2,000-ton bomb or a 200-pound bomb. Our choice is usually somewhere in the middle. If the anti-microbial peptide is too strong, it will also kill the surrounding bacteria, so we have to be very careful."
This research also was supported by a University of California Discovery Grant, Delta Dental of Washington, Delta Dental of Wisconsin, and C3 Jian Corporation. The National Institute of Dental and Craniofacial Research is the nation's leading funder of research on oral, dental, and craniofacial health.
| 7,489
| 3,344
| 2.239533
|
warc
|
201704
|
Description
The report is based on desk-based research into the literature concerning VFEL and a survey of 18 economies (13 of which are members of APEC). The aim was to identify components of VFEL, and best practice within each component. These findings were then used to evaluate existing VFEL programs in order to highlight areas in which individual programs met or fell short of best practice.
| 400
| 265
| 1.509434
|
warc
|
201704
|
You can save lots of cash if you know how to use coupons. A lot of people just do not think about the sheer amount of cash that coupons can save, and so they spend a lot more money at the store. The tips you are about to read have been proven effective. Read further and learn more about coupon savings.
Use coupons when things are on sale to save the most money possible. Often, you will need to hang on to your coupon for a while before the item it is for goes on sale. You might also have to break up your shopping trips into two or three trips, but the savings will make the inconvenience worth your while.

For the online stores where you buy things from, search for coupons and discounts using a search engine. In many instances, you will find a code offering a price break on purchases.

Try to find the best possible coupon combination for the best deal. While the coupon you have may be a decent deal, it is often still better to shop for the off-brand equivalent. Do not assume that the coupon gives you the best deal.

When you go shopping, bring along all of your coupons, even the ones you don't plan on using. You never know, you might need that coupon and it would be great if you have it with you.

Many online coupon forums post deals. Lots of online resources exist that post deals and coupons capable of generating substantial savings. These sites allow you to print coupons and also interact with others to gain knowledge of the best buys and offers.

Some newspapers will offer a couponer's discount; it's worth asking about. Many offer papers for $1.00 each if you subscribe to the Sunday edition and order at least 5 copies per week.

Devote one morning or afternoon per week to exclusively search, clip, and print coupons. This can help you make things more efficient. You are always able to clip things when you find them, but you need to really buckle down once a week to go over all of your options for the coming weeks.

Look for coupons before making online purchases. This can be done by searching for the word "coupon" along with the name of your retailer. Any current deals will show up as codes you can use at checkout. Stores may provide free shipping options or some percentage knocked off an order if you use the current coupon code when you place an order.

Do not allow couponing to rule your life. It can be easy to make reading circulars and clipping coupons into your permanent vocation. Dividing your average weekly savings by the number of hours you spend clipping coupons will let you know if the endeavor is worthwhile for you.

Some stores double or triple coupon values. Look around to find which stores offer these types of deals. Take the time to check around with anyone you might know who could show you where these ideal stores can be found.

Using coupons takes some getting used to, and you need to learn how to use them effectively. After you've started using coupons, you'll wonder why you didn't use them before! Apply all the ideas in this article to chip away at your household expenses.
| 3,170
| 1,508
| 2.102122
|
warc
|
201704
|
As it currently operates, the commercial real estate construction industry is a disaster full of built-in waste. Seventy percent of all projects end over budget and late. The buildingSMART Alliance estimates that up to fifty percent of the process is consumed in waste. Almost every project includes massive hidden taxes in the form of delays, cost overruns, poor quality, and work that has to be redone. Building new structures is a fragmented, adversarial process that commonly results in dissatisfied customers and frequently ends in disappointment, bitterness, and even litigation. The industry must change, for its own good and that of its customers. But while the industry has tried to reform itself, it can't do it alone. Real change can only come from business owners and executives who refuse to continue paying for a dysfunctional system and demand a new way of doing business.

The Commercial Real Estate Revolution is a bold manifesto for change from the Mindshift consortium -- a group of top commercial real estate industry leaders who are fed up with a system that simply doesn't work. The book explains how business leaders can implement nine principles for any project that will dramatically cut costs, end delays, create better buildings, and force the industry into real reform.

The Commercial Real Estate Revolution offers a radically new way of doing business -- a beginning-to-end, trust-based methodology that transforms the building process from top to bottom. Based on unifying principles and a common framework that meets the needs of all stakeholders, this new system can reform and remake commercial construction into an industry we're proud to be a part of.

If you're one of the millions of hardcore cynics who work in commercial construction, you probably think this sounds like pie in the sky. But this is no magic bullet; it's a call for real reform. If you're an industry professional who's sick of letting down clients, or an owner who's sick of cost overruns and endless delays, The Commercial Real Estate Revolution offers a blueprint for fixing a broken industry.
| 2,078
| 986
| 2.107505
|
warc
|
201704
|
Interpreting Figurative Meaning critically evaluates the recent empirical work from psycholinguistics and neuroscience examining the successes and difficulties associated with interpreting figurative language. There is now a huge, often contradictory literature on how people understand figures of speech. Gibbs and Colston argue that there may not be a single theory or model that adequately explains both the processes and products of figurative meaning experience. Experimental research may ultimately be unable to simply adjudicate between current models in psychology, linguistics and philosophy of how figurative meaning is interpreted. Alternatively, the authors advance a broad theoretical framework, motivated by ideas from 'dynamical systems theory', that describes the multiple, interacting influences which shape people's experiences of figurative meaning in discourse. This book details past research and theory, offers a critical assessment of this work and sets the stage for a new vision of figurative experience in human life.
| 1,045
| 568
| 1.839789
|
warc
|
201704
|
Date of Original Version: 8-1998
Type: Article
Abstract or Description:
One way of perceptually organizing a complex visual scene is to attend selectively to information in a particular physical location. Another way of reducing the complexity in the input is to attend selectively to an individual object in the scene and to process its elements preferentially. This latter, object-based attention process was examined, and the predicted superiority for reporting features from 1 relative to 2 objects was replicated in a series of experiments. This object-based process was robust even under conditions of occlusion, although there were some boundary conditions on its operation. Finally, an account of the data is provided via simulations of the findings in a computational model. The claim is that object-based attention arises from a mechanism that groups together those features based on internal representations developed over perceptual experience and then preferentially gates these features for later, selective processing.
| 1,043
| 555
| 1.879279
|
warc
|
201704
|
I’d like to share a story with you. You might even call it your story.
Once upon a time there was a planet. A planet inhabited by humans. The planet was called Earth. Sound familiar?
Most of the humans were just like you.
Living their busy lives
Doing the friends and family thing
Planning big adventures

In fact, most of these humans desired to do really big things. To have greater impact on the planet. To be part of something greater than themselves. To live happy, successful lives.

Sadly, three big obstacles kept holding them back.
#1 – The humans often felt they weren’t good enough
#2 – They were afraid of being different from everyone else
#3 – They often hid in the closets of their lives

See, they were all doing lots of things just to fit in:
Dating and mating in heteronormative ways
Raising families the way Mom and Dad had done
Climbing career ladders, one position at a time
Juggling overpacked schedules because that’s what you do
Making and saving money for SOMEDAY like everyone else
Heck, some were even struggling to build businesses based on the latest, biggest BIZ GURU’s big, big ideas!
Many of these humans spent a lot of time, energy, and brainpower doing things they didn’t want to do, simply to keep their lives moving forward…or so they thought. Yep, they buckled down and did it. That’s what you’re supposed to do. Right?
Fall in line, and do what everyone else does, that’s what was expected. Don’t rock the boat by being a dreamer…by being unique…flying outside the lines!
Between “you can’t, you shouldn’t” and “that will never work,” so many of the humans on planet Earth began to lose themselves.
Losing their confidence. Hiding their uniqueness. Sacrificing themselves, their values, their beliefs.
Life suddenly became challenging. Often these humans lost sleep, put on fake smiles, and hid their most valuable assets, just to fit in.
Not only was conforming challenging, but leading a double-life became exhausting.
Never being able to share their true self and talents with the world made it impossible to fully enjoy life.
And the worst part? Everyone around them was doing the same thing. Hiding in their lives…not fully expressing themselves!
Schools were teaching it.
Companies expected it. Friends were only friends if you did it. Families in many cases only tolerated you if you did it. Society as a whole flat-out expected it…from everyone!
It being – pretending to be someone you’re NOT just to make the world go round a little easier!
The whole planet was doing it
Playing the game. Living unfulfilled lives.
Then, things began to change.
Every so often a few of the humans would start to do things differently. Summoning up their confidence. Bravely owning their uniqueness in the world.
Brave and courageous humans began to come out of their closets…living their truth.
More and more humans began to experience success as they defined IT for themselves.
Other humans began to build unconventional lives and experience real happiness, living in their uniqueness.
Business profits and revenues exploded as people over profit became the new normal.
Creativity, progressive thought, and “let’s try something completely different” became the standard instead of strange or weird ideas.
The humans who were afraid to be different, finally decided to take flight!
They asked a few of the humans who’d stepped away from being like everyone else, “How did you do it?”
Their answer?
“We mustered up our confidence. We embraced our uniqueness. We unleashed our superpowers on the world!”

They came out of their closets. Quit hiding their secrets. Started living their own brand of uniquely me! So can you!
Now don’t worry, I’m not going to come into your life and start busting things up, or forcing you out of your closets.
As a gay man, I know it isn’t cool to force anyone out of the closet. That’s all a personal decision.
Instead, I’m gonna jump alongside you in your life, or your organization, and start snooping around. With your permission of course!
I’ll be hunting for lots of stuff in those closets, but three things in particular:
#1 – Curiosity Slayers
#2 – Confidence Busters
#3 – Commitment Destroyers

These three culprits are notoriously known for hiding out in the dark corners of our closets, creating unnecessary havoc in life, for no darn good reason.
But, that’s why I’m here. To sniff them out – in a safe, supportive way.
To help get them out of your closet, out of your life – after all, we’re all closeted about something.
Once they’re out of the way, you’ll suddenly feel energized and invincible enough to break through your unique closet door to…
Declare your sexual orientation without shame or guilt
Ask that hot guy or gal out on a date, no more procrastinating
Go for the promotion you didn’t think was yours to be had
Take a stand, advocating for a cause, regardless of what others think
Jump into your entrepreneurial cape, escaping from cubicle nation
Say “Bye-bye” to following, and say “Hello Beautiful” to leading
Together, we’re going to pull out some of the greatest ASSETS OF YOU that you didn’t even know you had. NO MORE HIDING!
You’ll discover that being confidently, uniquely you is an awesome idea. So much better than pretending to be someone you’re not!
Then you and I will tap my closet busting brain to put together a deep dive action plan that gets to your soul, your truth. To the you that’s been waiting to be uniquely you in the world. I help you discover you. The confident, unique you!
I’m all about uncovering you. The real you that represents you in life, love, and business.
It’s not about having a life for someday based on everyone else’s expectations.
It’s about being freakin’ in love with you and your life right now – no more hiding, no more closets!
It’s also about sharing your love and passion for what you do, how you live your life — in a way that makes a bigger impact in the world.
You have amazing gifts and assets you’re hiding that need to be shared with the world. But if you’re hiding in the closet, you’re missing the chance to lead a life well lived.
So. . . Sound like what you’ve been looking for?
Whether you’re an individual or a team looking to stop hiding, or to break out of the closets of your life, there’s no better time than the present. Check out my writing, speaking, and individual coaching programs.
| 6,764
| 3,085
| 2.192545
|
warc
|
201704
|
Date of Award: 2008
Document Type: Thesis
Degree Name: Master of Arts (MA)
Department: History
Abstract
Historians of US foreign relations have argued that, after the Civil War and prior to the professionalization movements of the 1920s, the State Department was staffed with failed politicians, adventurous lawyers, and bored businessmen through a system of political spoils. An examination of Ebenezer Jolls Ormsbee’s experience as an envoy of the State Department on the Samoan Land Commission from 1891 to 1893, however, demonstrates that the department operated through an effective patronage system. Patrons, with experiential, social, and professional connections to appointees, sought out the best candidates they knew. By examining Mr. Ormsbee’s childhood, Civil War experience, and political career with the Republican Party in Vermont, his various relationships with prominent individuals such as Redfield Proctor, Frank C. Partridge, and Henry C. Ide become evident. Through these relationships, Mr. Ormsbee gained his appointment to the Samoan Land Commission based upon his peers’ belief that he was the best qualified candidate available. Mr. Ormsbee’s position as a member of the provincial grand bourgeoisie not only determined how he was appointed to the Samoan Land Commission, but also his relationship with and viewpoint of the native and the Euro-American communities in Samoa. For Mr. Ormsbee and his wife, Frances Ormsbee, the natives were often viewed with greater approval because of their perceived authentic barbarity, while the Euro-Americans were often found to have failed to maintain the Ormsbees’ notion of civilization. The Ormsbees’ social and political relationships in Samoa demonstrate the racial and class complexities of the late nineteenth century, especially when those are viewed from such microhistorical subjects as Mr. and Mrs. Ormsbee.
Recommended Citation
Gardner, Zackary, "Far from Home the Sojourns of E. J. Ormsbee in the Samoan Islands" (2008).
Graduate College Dissertations and Theses. 87. http://scholarworks.uvm.edu/graddis/87
| 2,111
| 1,100
| 1.919091
|
warc
|
201704
|
MCAT Biology Review
Chapter 8: The Immune System
8.3 The Adaptive Immune System
The adaptive immune system can identify specific invaders and mount an attack against that pathogen. The response is variable and depends on the identity of the pathogen. The adaptive immune system can be divided into two divisions: humoral immunity and cell-mediated (cytotoxic) immunity. Each involves the identification of the specific pathogen and organization of an appropriate immune response.
CELLS OF THE ADAPTIVE IMMUNE SYSTEM
The adaptive immune system consists mainly of two types of lymphocytes, B-cells and T-cells. B-cells govern the humoral response, while T-cells mount the cell-mediated response. All cells of the immune system are created in the bone marrow, but B- and T-cells mature in different locations. B-cells mature in the bone marrow (although the B in their name originally stood for the bursa of Fabricius, an organ found in birds), and T-cells mature in the thymus. When we are exposed to a pathogen, it may take a few days for the physical symptoms to be relieved. This occurs because the adaptive immune response takes time to form specific defenses against the pathogen.
KEY CONCEPT

B-cells mature in the bone marrow. T-cells mature in the thymus.

Humoral Immunity

Humoral immunity, which involves the production of antibodies, may take as long as a week to become fully effective after initial infection. These antibodies are specific to the antigens of the invading microbe. Antibodies are produced by B-cells, which are lymphocytes that originate and mature in the bone marrow and are activated in the spleen and lymph nodes.
Antibodies (also called immunoglobulins, Ig) can carry out many different jobs in the body. Just as antigens can be displayed on the surface of cells or can float freely in blood, chyle (lymphatic fluid), or air, so too can antibodies be present on the surface of a cell or secreted into body fluids. When an antibody binds to an antigen, the response will depend on the location. For antibodies secreted into body fluids, there are three main possibilities: first, once bound to a specific antigen, antibodies may attract other leukocytes to phagocytize those antigens immediately. This is called opsonization, as described earlier. Second, antibodies may cause pathogens to clump together, or agglutinate, forming large insoluble complexes that can be phagocytized. Third, antibodies can block the ability of a pathogen to invade tissues, essentially neutralizing it. For cell-surface antibodies, the binding of antigen to a B-cell causes activation of that cell, resulting in its proliferation and formation of plasma and memory cells, as described later in this chapter. In contrast, when antigen binds to antibodies on the surface of a mast cell, it causes degranulation (exocytosis of granule contents), allowing the release of histamine and causing an inflammatory allergic reaction.
Antibodies are Y-shaped molecules that are made up of two identical heavy chains and two identical light chains, as shown in Figure 8.7. Disulfide linkages and noncovalent interactions hold the heavy and light chains together. Each antibody has an antigen-binding region at the end of what is called the variable region (V domain), at the tips of the Y. Within this region, there are specific polypeptide sequences that will bind one, and only one, specific antigenic sequence. Part of the reason it takes so long to initiate the antibody response is that each B-cell undergoes hypermutation of its antigen-binding region, trying to find the best match for the antigen. Only those B-cells that can bind the antigen with high affinity survive, providing a mechanism for generating specificity called clonal selection. The remaining part of the antibody molecule is known as the constant region (C domain). It is this region that cells such as natural killer cells, macrophages, monocytes, and eosinophils have receptors for, and that can initiate the complement cascade. Each B-cell makes only one type of antibody, but we have many B-cells, so our immune system can recognize many antigens. Further, antibodies come in five different isotypes (IgM, IgD, IgG, IgE, and IgA). While the specific purpose of each antibody isotype is outside the scope of the MCAT, you should know that the different types can be used at different times during the adaptive immune response, for different types of pathogens, or in different locations in the body. Cells can change which isotype of antibody they produce when stimulated by specific cytokines in a process called isotype switching.

Figure 8.7. Structure of an Antibody Molecule
Not all B-cells that are generated actively or constantly produce antibodies. Antibody production is an energetically expensive process, and there is no reason to expend energy producing antibodies that are not needed. Instead, naïve B-cells (those that have not yet been exposed to an antigen) wait in the lymph nodes for their particular antigen to come along. Upon exposure to the correct antigen, a B-cell will proliferate and produce two types of daughter cells. Plasma cells produce large amounts of antibodies, whereas memory B-cells stay in the lymph node, awaiting reexposure to the same antigen. This initial activation takes approximately seven to ten days and is known as the primary response. The plasma cells will eventually die, but the memory cells may last the lifetime of the organism. If the same microbe is ever encountered again, the memory cells jump into action and produce the antibodies specific to that pathogen. This immune response, called the secondary response, will be more rapid and robust. The development of these lasting memory cells is the basis of the efficacy of vaccinations.

Cytotoxic Immunity
Whereas humoral immunity is based on the activity of B-cells, cell-mediated immunity involves the T-cells. T-cells mature in the thymus, where they undergo both positive and negative selection. Positive selection refers to maturing only those cells that can respond to the presentation of antigen on MHC (cells that cannot respond to MHC undergo apoptosis because they will not be able to respond in the periphery). Negative selection refers to causing apoptosis in cells that are self-reactive (activated by proteins produced by the organism itself). The maturation of T-cells is facilitated by thymosin, a peptide hormone secreted by thymic cells. Once the T-cell has left the thymus, it is mature but naïve. Upon exposure to antigen, T-cells will also undergo clonal selection so that only those with the highest affinity for a given antigen proliferate.
There are three major types of T-cells: helper T-cells, suppressor T-cells, and killer (cytotoxic) T-cells. Helper T-cells (Th), also called CD4+ T-cells, coordinate the immune response by secreting chemicals known as lymphokines. These molecules are capable of recruiting other immune cells (such as plasma cells, cytotoxic T-cells, and macrophages) and increasing their activity. The loss of these cells, as occurs in human immunodeficiency virus (HIV) infection, prevents the immune system from mounting an adequate response to infection; in advanced HIV infection, also called acquired immunodeficiency syndrome (AIDS), even weak pathogens can cause devastating consequences as opportunistic infections. CD4+ T-cells respond to antigens presented on MHC-II molecules. Because MHC-II presents exogenous antigens, CD4+ T-cells are most effective against bacterial, fungal, and parasitic infections.

REAL WORLD

"CD" in immunology stands for cluster of differentiation and includes cell-surface markers that can be detected by the lab technique called flow cytometry; these markers give an indication of the types of leukocytes under investigation, how many are present, and in what state of maturity they are.

Cytotoxic T-cells (Tc or CTL, for cytotoxic T-lymphocytes), also called CD8+ T-cells, are capable of directly killing virally infected cells by injecting toxic chemicals that promote apoptosis into the infected cell. CD8+ T-cells respond to antigens presented on MHC-I molecules. Because MHC-I presents endogenous antigens, CD8+ T-cells are most effective against viral (and intracellular bacterial or fungal) infections.

KEY CONCEPT
CD4+ T-cells are better at fighting extracellular infections, while CD8+ T-cells are better at targeting intracellular infections.

Suppressor or regulatory T-cells (Treg) also express CD4, but can be differentiated from helper T-cells because they also express a protein called Foxp3. These cells help to tone down the immune response once infection has been adequately contained. These cells also turn off self-reactive lymphocytes to prevent autoimmune diseases; this is termed self-tolerance.

MNEMONIC
· CD × MHC = 8
· CD4+ cells respond to MHC-II (4 × 2 = 8)
· CD8+ cells respond to MHC-I (8 × 1 = 8)
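As a study aid only (not part of the original text), the pairing rule behind the mnemonic can be written as a tiny function; the arithmetic 8 = CD marker × MHC class does all the work.

```python
# Study aid: the mnemonic says CD marker x MHC class = 8.
def mhc_partner(cd_marker: int) -> str:
    """Return the MHC class a T-cell with the given CD marker recognizes."""
    mhc_class = 8 // cd_marker        # 4 -> 2 (MHC-II), 8 -> 1 (MHC-I)
    return "MHC-" + "I" * mhc_class   # Roman numeral by repetition

assert mhc_partner(4) == "MHC-II"     # CD4+ helper T-cells
assert mhc_partner(8) == "MHC-I"      # CD8+ cytotoxic T-cells
```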
Finally, memory T-cells can be generated. Similar to memory B-cells, these cells lie in wait until the next exposure to the same antigen. When activated, they result in a more robust and rapid response.

REAL WORLD
Many suppressor T-cells were formerly self-reactive T-cells that have been turned off. When a suppressor T-cell inactivates another lymphocyte, it can either target it for destruction or promote its conversion into another suppressor T-cell.
A summary of the different types of lymphocytes in adaptive (specific) immunity is shown in Figure 8.8.
Figure 8.8. Lymphocytes of Specific Immunity. This diagram shows the differentiation of lymphocyte precursors and the cell types involved in specific immunity.
ACTIVATION OF THE ADAPTIVE IMMUNE SYSTEM
When the human body encounters an antigen, the immune system must be able to respond. It is important to note that the innate and adaptive immune systems are not really disparate entities that function separately. The proper functioning of the entire immune system depends on the interactions between these two systems. There are five types of infectious pathogens: bacteria, viruses, fungi, parasites (including protozoa, worms, and insects), and prions (for which there are no immune defenses). While the immune system's response depends on the specific identity of the pathogen, we present two classic examples: a bacterial (extracellular pathogen) infection and a viral (intracellular pathogen) infection. Keep in mind that this categorization is imperfect; for example, some bacteria, like Mycobacterium tuberculosis and Listeria monocytogenes, actually live intracellularly.

Bacterial (Extracellular Pathogen) Infections
Macrophages are like the sentinels of the human body, always on the lookout for potential invaders. Let's say a person suffers a laceration and bacteria are introduced into the body via this laceration. First, macrophages (and other antigen-presenting cells) engulf the bacteria and subsequently release inflammatory mediators. These cells also digest the bacteria and present antigens from the pathogen on their surfaces in conjunction with MHC-II. The cytokines attract inflammatory cells, including neutrophils and additional macrophages. Mast cells are activated by the inflammation and degranulate, resulting in histamine release and increased leakiness of the capillaries. This allows immune cells to leave the bloodstream and travel to the affected tissue. A dendritic cell (another antigen-presenting cell) then leaves the affected tissue and travels to the nearest lymph node, where it presents the antigen to B-cells. B-cells that produce the correct antibody proliferate through clonal selection to create plasma cells and memory cells. Antibodies then travel through the bloodstream to the affected tissue, where they tag the bacteria for destruction.
At the same time, dendritic cells are also presenting the antigen to T-cells, activating a T-cell response. In particular, CD4+ T-cells are activated. These cells come in two types, called Th1 and Th2. Th1 cells release interferon gamma (IFN-γ), which activates macrophages and increases their ability to kill bacteria. Th2 cells help activate B-cells.
After the pathogen has been eliminated, plasma cells die, but memory B- and T-cells remain. These memory cells allow for a much faster secondary response upon exposure to the pathogen at a later time.
Viral (Intracellular Pathogen) Infections
In a viral infection, the virally infected cell will begin to produce interferons, which reduce the permeability of nearby cells (decreasing the ability of the virus to infect these cells), reduce the rate of transcription and translation in these cells (decreasing the ability of the virus to multiply), and cause systemic symptoms (malaise, muscle aching, fever, and so on). These infected cells also present intracellular proteins on their surface in conjunction with MHC-I; in a virally infected cell, at least some of these intracellular proteins will be viral proteins.
CD8+ T-cells will recognize the MHC-I and antigen complex as foreign and will inject toxins into the cell to promote apoptosis. In this way, the infection can be shut down before it is able to spread to nearby cells. In the event that the virus downregulates the production and presentation of MHC-I molecules, natural killer cells will recognize the absence of MHC-I and will accordingly cause apoptosis of this cell.
Again, once the pathogen has been cleared, memory T-cells will be generated that can allow a much faster response to be mounted upon a second exposure.
RECOGNITION OF SELF AND NONSELF
Self-antigens are the proteins and carbohydrates present on the surface of every cell of the body. Under normal circumstances, these self-antigens signal to immune cells that the cell is not threatening and should not be attacked. However, when the immune system fails to make the distinction between self and foreign, it may attack cells expressing particular self-antigens, a condition known as autoimmunity. Note that autoimmunity is only one potential problem with immune functioning: another problem arises when the immune system misidentifies a foreign antigen as dangerous when, in fact, it is not. Pet dander, pollen, and peanuts are not inherently threatening to human life, yet some people’s immune systems are hypersensitive to these antigens and become overactivated when these antigens are encountered in what is called an allergic reaction. Allergies and autoimmunity are part of a family of immune reactions classified as hypersensitivity reactions.
The human body strives to prevent autoimmune reactions very early in the T-cell and B-cell maturation processes. T-cells are educated in the thymus. Part of this education involves the elimination of T-cells that respond to self-antigens, called negative selection. Immature B-cells that respond to self-antigens are eliminated before they leave the bone marrow. However, this process is not perfect, and occasionally a cell that responds to self-antigens is allowed to survive. Most autoimmune diseases can be treated with a number of therapies; one common example is administration of glucocorticoids (modified versions of cortisol), which have potent immunosuppressive qualities.

REAL WORLD
Autoimmune diseases can result in destruction of tissues, causing various deficiencies. Type I diabetes mellitus results from autoimmune destruction of the β-cells of the pancreas. This results in an inability to produce insulin, characterized by high blood sugars and excessive utilization of fats and proteins for energy. Other examples of autoimmune diseases include multiple sclerosis, myasthenia gravis, psoriasis, systemic lupus erythematosus, rheumatoid arthritis, Graves' disease, and Guillain–Barré syndrome.
IMMUNIZATION
Often, diseases can have significant, long-term consequences. Infection with the poliovirus, for example, can leave a person disabled for the remainder of his or her life. Polio used to be a widespread illness; however, today we hardly hear about it outside of the Indian subcontinent because of a highly effective vaccination program, which led to the eradication of polio from the Western hemisphere.
Immunization can be achieved in an active or passive fashion. In active immunity, the immune system is stimulated to produce antibodies against a specific pathogen. The means by which we are exposed to this pathogen may be either natural or artificial. Through natural exposure, antibodies are generated by B-cells once an individual becomes infected. Artificial exposure (through vaccines) also results in the production of antibodies; however, the individual never experiences true infection. Instead, he or she receives an injection or intranasal spray containing an antigen that will activate B-cells to produce antibodies to fight the specific infection. The antigen may be a weakened or killed form of the microbe, or it may be a part of the microbe's protein structure.
Immunization may also be achieved passively. Passive immunity results from the transfer of antibodies to an individual. The immunity is transient because only the antibodies, and not the plasma cells that produce them, are given to the individual. Natural examples are the transfer of antibodies across the placenta during pregnancy to protect the fetus and the transfer of antibodies from a mother to her nursing infant through breast milk. In some cases of exposure, such as to the rabies virus or tetanus, intravenous immunoglobulin may be given to prevent the pathogen from spreading.

REAL WORLD
In 1998, a paper published in The Lancet claimed to have found a link between vaccines and autism. This paper has since been withdrawn from The Lancet after it was demonstrated to be fraudulent and scientifically inaccurate. In fact, no well-designed scientific study has yet shown this link to exist. However, the sensationalist reporting of this connection in the lay population has led many parents to avoid immunizing their children. Since 1998, outbreaks of measles and mumps in the United States and other industrialized nations have raised concerns about the resurgence of illnesses that were previously almost eradicated. Vaccines do carry risks, including rare cases of encephalitis (brain inflammation) and Guillain–Barré syndrome (an autoimmune disease in which the myelin of peripheral nerves is attacked), but so too do the pathogens these vaccines protect against.

MCAT Concept Check 8.3
Before you move on, assess your understanding of the material with these questions.
1. For each of the lymphocytes listed below, what are its main functions?
· Plasma cell:
· Memory B-cell:
· Helper T-cell:
· Cytotoxic T-cell:
· Suppressor (regulatory) T-cell:
· Memory T-cell:
2. What are the three main effects circulating antibodies can have on a pathogen?
·
·
·
3. How do antibodies become specific for a given antigen?
4. What is meant by positive and negative selection?
· Positive selection:
· Negative selection:
5. Which cells account for the fact that the secondary response to a pathogen is much more rapid and robust than the primary response?
6. What is the difference between active and passive immunity?
· Active immunity:
· Passive immunity:
| 19,388
| 7,687
| 2.52218
|
warc
|
201704
|
The molecular motor protein called kinesin is a cellular mover and shaker, stirring to action everything from cilia to dividing chromosomes. Hoping to unravel how kinesin uses the energy molecule ATP (adenosine triphosphate) to crawl along microtubules, researchers are scrutinizing the protein from many angles—they've even tacked a single kinesin molecule to a tiny glass rod to measure its strength (about 5 piconewtons, or roughly the force a laser pointer's beam exerts on a screen). For the latest dispatches from this hot field, visit the Kinesin Home Page.
Part tutorial, part database, the site began in 1996 with a review paper by Duke molecular geneticist Sharyn Endow. Colleagues contributed more articles, and bioinformatics experts at the Fred Hutchinson Cancer Research Center in Seattle added outside links, creating an info cache that's frequently updated. The site lists the family tree for the dozens of known versions of kinesin; you can jump to sequences in protein databanks, or peruse crystallographic structures. Other links point you toward kinesin lab Web pages and the latest PubMed articles. You need not be an expert to enjoy the site's many images: Check out fluorescently labeled kinesin proteins in dividing cells, for example, and weird movies of fruit fly larvae, with defective kinesin genes, thrashing about.
| 1,335
| 769
| 1.736021
|
warc
|
201704
|
David Bernstein and Randy Barnett have interesting posts up at Volokh about the growing split among conservative originalists. Barnett's post came first, and he notes that when conservatives today invoke the idea of "judicial restraint" in opposition to judges "legislating from the bench", they are in fact buying in to a New Deal era concept that spawned the idea of a presumption of constitutionality. He quotes from an endorsement of Alito's nomination in the Weekly Standard pointing out that Alito is not a Thomas-style originalist but a pragmatist who defers to government greatly:
More importantly, Judge Alito’s Casey opinion shows him to be faithful to the judicial duty not to “legislate from the bench,” an overused phrase which means simply that judges should go the long mile before substituting their views for those of the people’s elected representatives.
This view of the role of judges was perhaps the New Deal’s most bipartisan achievement. The departures from it during the heyday of the Warren Court produced friction among the liberal Justices appointed by FDR (notably between Douglas and Frankfurter), as well as controversy with a new generation of conservatives who saw the New Deal-type of rational basis test as key to preserving the democratic accountability of public decision-making. Conservatives felt odd, and still do, defending a New Deal doctrine (and being attacked for it from the left). But this particular New Deal doctrine is an established tradition with bipartisan support, and Judge Alito’s Casey dissent show him standing squarely within it. Nothing could be more mainstream.
He also goes on to note that the split between Scalia and Thomas in Raich, where Scalia upheld the power of the Federal government in a clearly non-originalist opinion while Thomas stuck to his originalist principles and dissented, may bring conservative originalism to a crossroads. David Bernstein picks up on that idea and goes further, saying that conservative originalism is in fact in crisis:
Randy’s post reminded me that I’ve been wanting to note that conservative judicial originalism is currently in a state of crisis, precisely because of Justice Scalia’s “fainthearted” originalism. If Justice Scalia, originalism’s supposed great champion, is unwilling to overturn or even go out of his way to distinguish as anti-originalist opinion as Wickard v. Filburn (holding that growing grain on one’s own land for consumption on one’s own farm can be regulated under Congress’ power to regulate “interstate commerce”), then what is left of originalism?
One could say that it’s simply “too late” to reconsider sixty-two year old precedents like Wickard. But why sixty-two year-old precedents, and not thirty-two year old precedents (i.e., Roe v. Wade)? Scalia’s fainthearted originalism begins to look a lot like, “I got into this business to overturn Warren Court decisions, and I’ll use originalism as tool to that end, but I’m not especially interested in reconsidering New Deal precedents.”…
I expect that Scalia’s problem is that to be a true originalist, many New Deal precedents would have to go out the window, and this is neither politically, nor, in many instances, practically feasible (In Raich, Randy certainly provided Scalia with some easy ways to distinguish Wickard, but I suspect Scalia felt that Wickard should either be interpreted rather broadly, or overturned entirely, and he opted for the former). But to be a sincere originalist, one has to grapple with how to resolve this quandary, not simply refuse to apply originalist reasoning out of “faintheartedness.”…But simply pulling a Scalia, and begging off from the tough issues as distractions from what I believe he sees as the real task of preventing the liberal elite from enacting its agenda through the judiciary just won’t do. Originalism becomes a weapon to be pulled out when convenient, not a consistent theory of interpretation. That’s culture war politics, not originalism, and Scalia’s failure to identify any theory of originalism that justified his opinion in Raich dramatically lowered my estimation of him as a jurist.
I would argue that the problem goes even deeper than that. The Scalia/Bork version of originalism will always find itself in this quandary because, applied consistently, it reaches some truly disturbing results. So they have to put it away sometimes and pretend that they’re not putting it away. If they left room for an appeal to the broad principles of natural rights, as the Thomas version of originalism does (or better, as the Barnett/Gerber version of liberal originalism does), they would find a way out of that conundrum. As long as they refuse to allow the Declaration to be used as a lens through which to view the Constitution, this sort of contradiction is inevitable.
| 5,024
| 2,284
| 2.19965
|
warc
|
201704
|
Research data center scientists are the cowboys of the computing world, but the research computing IT architecture has carry-over potential into enterprises grappling with how to process big data.
"Research computing was big data before big data," said Richard Villar, vice president of data center and cloud at IDC, a research firm based in Framingham, Mass.
Most enterprise data centers were designed to support systems of record -- availability and reliability mattered for the entire infrastructure, Villar said. But now IT supports systems of engagement and insight.
Systems of engagement are inherently dispersive and customer-facing. Systems of insight focus on analytics and thrive on quickly allocated and reallocated compute resources. Availability and reliability for these applications refer to the entire fabric of the systems, not individual servers.
Enterprise IT shops also deal with new types of data coming into the business and how to improve customer service through data analysis. The Massachusetts Green High-Performance Computing Center (MGHPCC) shows how distributed compute addresses this problem.
The original big data architecture
The MGHPCC, a cooperative research computing data center located in Holyoke, Mass., operates on a "hard shell and soft core IT architecture," according to James Cuff, researcher at Harvard University.
Harvard, MIT, the University of Massachusetts, Boston University and Northeastern University share the MGHPCC. Researchers must obey different data regulations, and often bring different mixes of hardware into the facility. But these tenants frequently work together on projects that demand cross-connects on the network and, increasingly, share resources for the cost benefit of higher utilization.
The MGHPCC is analogous to a modern corporation -- some servers are isolated for compliance and security reasons, while others host workloads from multiple departments or scale down to idle when demand is low. Only 20% of the IT equipment runs on uninterruptible power supplies, and the remaining 80% can fail -- or power down -- without much issue. It's a design tailored to research computing, where algorithms churn away toward an end result, producing high volumes of expendable data along the way. It could also teach enterprises how to process big data.
"In research computing, we're the cowboys on the frontier taking chances," Cuff said. "But we're responsible cowboys."
Load balancing is the best way to deal with the pressure of new services and rapid scaling, when you use it for the right use cases, Villar said.
"All data centers are under pressure to get the maximum work for a minimal cost of IT assets," he said.
A batch scheduling system orchestrates all MGHPCC workloads. For example, a scientist loads 6,000 pieces of work that each run for two hours. The computing software's architecture tolerates bad nodes or missing compute. While the code runs, it writes out periodic stop points. If an individual piece of work lands on a failed node, the program picks up from that point on a good one. Google and Netflix rely on this style of computing, and so do astrophysicists.
"It wouldn't work for financial trades," admitted Cuff. "For six nines data centers, this is heresy."
For the majority of business IT, however, careful consideration of facility design will avoid unnecessary costs without abandoning the systems of record and workloads that need protection. Major Web properties build their data centers with only enough power to shut down gracefully in the event of a failure, but they don't treat mission-critical data the same as batch data analytics programs. Batch analytics and non-real-time big data can pick up where they left off and don't need to stay up 100% of the time, Villar said.
"We used to build a lot of high availability and mainframe bomb-proof power," Cuff said. "You can do the same thing with distributed computing, but it's a lot harder because there's a business function that's hard to change."
The core storage and networking resources at the center are its "crown jewels" and must be protected from downtime. But the hundreds of racks of compute are more flexible.
In case of a major failure, there's very little value in shoring up all of the systems with backup power because workloads can shift to other facilities instead, Villar said.
MGHPCC's researchers build out data systems for large projects that can support data processing across distributed resources, Cuff said, with 20 Gbps backlinks to replicate between data centers on different campuses and the MGHPCC. For projects that require rigorous backup or transfer ridiculously large data streams, MGHPCC spreads the load over the storage closest to the compute. And for international projects, such as data analysis from the Large Hadron Collider at CERN, reliable access is paramount.
"We treat [the various data centers] all as one big Layer 2 network with fast switches and large chunks of the infrastructure as one machine," Cuff said.
Thanks in part to server efficiency increases with every product generation -- and the researchers' willingness to share assets for higher utilization -- the MGHPCC data center still has a great deal of room left for growth.
Next Steps
The hardware you need to take on big data
Three experts on big data's big changes
Why build? Big data on AWS
| 5,754
| 2,719
| 2.116219
|
warc
|
201704
|
EQT (NYSE:EQT) announced today that it increased its estimated ultimate recovery rates (EURs) in the Marcellus. It also increased drilling inventory through effective down-spacing (drilling wells closer together) and improved its well type curves (implying higher rates of return from drilling wells).
Obviously this is positive for EQT, as it means their Marcellus drilling program is more economic than had been previously thought. It bumps EURs up closer to the level that high-valued Range Resources (NYSE:RRC) and Cabot (NYSE:COG) are generating in the core of the North Eastern Pennsylvania portion of the Marcellus play. As can be seen in the chart below, Cabot stock has outperformed EQT stock over the past year on the back of Cabot's excellent well results, so improving results for EQT could help EQT stock "catch up".
The EUR increases are particularly impactful in Northwest West Virginia, which had not been considered as delineated or as prospective as SW Pennsylvania or NE Pennsylvania. EQT is now booking reserves at the same level in NW WV as in SW PA, which is positive for their prospects in NW WV and also reads through positively for other operators in the area.
Oil and gas companies active in Northwest West Virginia include Chesapeake Energy (NYSE:CHK), Magnum Hunter (MHR), Stone Energy (NYSE:SGY) and Gastar Exploration (NYSEMKT:GST). Gastar has the most leverage to that area compared to the other operators and has posted some of the strongest results.
Despite these strong results, Gastar stock has underperformed its peer group of Marcellus stocks and still trades at a discounted multiple versus that group. Gastar had been booking 6.3 BCFe EURs and was projecting 40% IRRs.
If it moved to the higher 9.8 BCFe EUR level EQT is booking, that would have a substantial impact on its booked reserves. Gastar management has stated repeatedly that the company is being conservative in its bookings of EURs, so it is conceivable that such a move up could happen.
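For a rough sense of what that uplift would mean, here is a minimal sketch; only the 6.3 and 9.8 BCFe EUR figures come from the article, and the well count is hypothetical.

```python
# Rough illustration of the reserve-bookings sensitivity described above.
old_eur, new_eur = 6.3, 9.8      # BCFe per well, from the article
wells = 100                      # assumed undeveloped well count (hypothetical)

uplift = new_eur / old_eur - 1
print(f"Per-well uplift: {uplift:.0%}")                                    # ~56%
print(f"Booked reserves: {wells * old_eur:,.0f} -> {wells * new_eur:,.0f} BCFe")
```

A roughly 56% increase per well compounds across the whole undeveloped inventory, which is why the booking change matters so much for a company of Gastar's size.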
Another company with leverage to Northwest West Virginia is Magnum Hunter. Magnum Hunter is led by Gary Evans, who successfully sold the first Magnum Hunter for a "38% average shareholder return". Magnum Hunter has been plagued by accounting issues recently. While not the focus of the article, it is worth mentioning that the sale of the first Magnum Hunter likely left Gary Evans independently wealthy and demonstrated his ability to create value, meaning he is an unlikely accounting fraud perpetrator. However, Magnum Hunter is already booking EURs similar to EQT's new EURs, so while this announcement by EQT helps validate that positive view, it may not have as much impact on Magnum Hunter as on Gastar.
Disclosure: I am long GST. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
| 2,953
| 1,463
| 2.018455
|
warc
|
201704
|
Asset allocation strategies based on intermediate trends should be positioned for a Bull Trap - not a Bull Market. Allocations to domestic equities and fixed income remain in place, but investments levered to global growth should have been exited, including emerging market equities, commodities (and commodity-oriented sectors), and precious metals. International exposure should also have been curtailed.
While the month of May decimated the global growth story, markets have rebounded in June, largely on the expectation that Europe will (again) resolve its financial issues and that China has the capability to reinvigorate growth. As I wrote previously (Getting Whipsawed), the market's pessimism may have been overdone in May and could prove to be temporary, so a defensive move to cash could be untimely. However, we would only know in hindsight whether that insurance was worth the performance penalty.
Bull Trap?
Since writing my last article on June 6th, market trends have indeed rebounded and the shorter-term technicals of the market indicate a positive re-entry. For example, the broad-based US equity market (represented here by Vanguard's Total Market ETF (NYSEARCA:VTI)) dipped below its 200-day moving average on June 1st and moved back above it on June 6th. The ETF's short-term trend, which had been negative since May 7th, also turned positive on June 18th. This should be considered a reasonable entry point for VTI based on its technical picture.
Figure 1
However, outside of domestic equities - the only asset that has turned positive relative to its 200-day moving average (since the end of May) is international real estate (represented here by the Dow Jones International Real Estate Index SPDR (NYSEARCA:RWX)).
Figure 2
Emerging market equities (represented by the MSCI Emerging Market Index (NYSEARCA:EEM)) turned positive on the shorter trend system on June 19th but remain below their 200-day moving average and are thus still unattractive as an investment (again, based on the intermediate trend).
Figure 3
Lastly, commodities (represented by PowerShares DB Commodity Index Tracking ETF (NYSEARCA:DBC)) remains in a meaningful decline.
Figure 4
Fundamentals
While market prices indicate a rebound for many indexes, the challenge is balancing the market price action and the underlying fundamentals that support those prices. Fundamentals in my opinion are flashing caution at best.
JP Morgan's Global All-Industry Output Index declined slightly in May to 52.1. Based on flash PMI readings in China and Europe and weak domestic regional manufacturing data (the Philadelphia Federal Reserve's Business Outlook Survey, for example), one could expect continued declines. Global growth is decelerating. On Wednesday, the Federal Reserve lowered its economic growth expectation for 2012 by 50 basis points to a range of 1.9% to 2.4%.
Lastly, forward earnings expectations are coming down as public companies begin to discuss outlooks for the second quarter. Several large multi-national corporations have already lowered forward guidance, including PepsiCo, Inc., Procter & Gamble Co., and FedEx Corporation, to name a few.
Volatility
My personal wall of worry is centered on volatility. We have yet to see any meaningful spikes in volatility despite what I would consider significant uncertainty in the market and the global financial system. Even with Thursday's market drubbing, most indexes are just reaching their long-term volatility averages. A current look at the S&P 500 (NYSEARCA:SPY) shows that volatility levels, albeit rising, remain just below long-term averages. The current reading for the 50-day annualized volatility for the S&P 500 is 15.8% versus an average of 17.6% since 2003.
For reference, volatility on the S&P 500 peaked most recently in October of last year at ~38%. Yesterday's 2.2% decline in the S&P 500 was the second worst day this year, but last summer's volatility spike included several days with 4% moves and one whopping 6.7% move. While I have seen references to markets discounting a Lehman Brothers-style bankruptcy event, the volatility of the S&P 500 on September 15, 2008 was 24.5% and heading towards 80%! We are nowhere near that today.
Figure 5
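For readers who want to reproduce the measure, here is a minimal sketch of a trailing 50-day annualized volatility calculation. The CSV file name and "close" column are placeholders, and the sqrt(252) annualization is the standard trading-day convention assumed here.

```python
# A minimal sketch of the volatility measure used above: trailing 50-day
# standard deviation of daily returns, annualized with sqrt(252).
import numpy as np
import pandas as pd

prices = pd.read_csv("spy_close.csv", index_col=0, parse_dates=True)["close"]
daily_returns = prices.pct_change().dropna()
vol_50d = daily_returns.rolling(50).std() * np.sqrt(252)  # annualized

print(f"Latest 50-day annualized volatility: {vol_50d.iloc[-1]:.1%}")
```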
Conclusion
Short-term trend indicators show that most market indexes have turned positive since the month long decline beginning in the first week of May and ending the first week of June. Bias continues to be for equities that are domestically focused and fixed income in general based on the intermediate trend (10-month or 200-day moving averages).
Despite the positive near-term price action, I continue to see risk of a continued correction and view the near-term run-up as a bull trap for the more optimistically inclined.
Rather, I expect underlying market fundamentals to begin to weigh on prices if the economic data deteriorate further, which is likely given slowing global growth, lowered domestic economic growth expectations, and an increasing number of companies lowering forward guidance.
Therefore a conservative portfolio with a fair amount of dry powder is warranted in my opinion. If we do see a bout of volatility, then 1) conservative portfolios will avoid the swings and 2) more aggressive investors will find attractive entry points after volatility peaks and begins to decline.
Disclosure: I am long SPY.
| 5,431
| 2,531
| 2.145792
|
warc
|
201704
|
Exxon Mobil (NYSE:XOM) has been taking on debt as it leads the oil majors in the investment shift from the oil sands to the shale gas sector. With its eye on the lucrative liquefied natural gas (LNG) market, it could win big as an early mover. But like the oil sands business, gas shale is a capital-intensive business. As it picks up shale assets, Exxon Mobil's operating margins are as tight as an oil tanker turning around in a bathtub. Nonetheless, Exxon Mobil's transition from an oil major into unconventional resources is on a solid track.
Both the shale gas and oil sands sectors are betting on an energy export gateway to Asia. And both have promised economical extraction of previously untapped reserves. While oil sands projects have been plagued with cost overruns, shale projects are ratcheting down costs. This high margin natural gas play will take shale gas and convert it into liquefied natural gas for export to Asia. Exxon is positioning itself at the entry of the gateway by buying up shale gas reserves throughout North America and investing in LNG projects on the west coast to convert the cheap gas into LNG for export to Asia.
Securing a profitable foothold in the LNG market requires substantial gas reserves.
Exxon has a lot of competition for gas reserves from other deep-pocketed energy majors. As a result, premiums are rising. Premiums on natural gas assets have been bid up to as high as 97 percent, according to Bloomberg. Notably, Malaysia's Petronas (PTG, Bursa Malaysia) is offering $22 per share, or $5.16 billion, for Calgary's Progress Energy Resources (OTC:PRQNF), a 97 percent premium on the stock price. Exxon has just swept up Alberta's Duvernay and Montney shale assets through its $3.1 billion acquisition of Celtic Exploration (OTC:CEXJF).
Exxon Mobil's profit margins are shrinking as it incurs record capital expenditures, but its bet on LNG is a sound one. The margins between Canadian gas and Asian LNG are very attractive, even in a volatile price environment for natural gas. In six months, Canadian gas prices have more than doubled, from under $1.50 per gigajoule in April to over $3 in October. A colder than expected winter could send prices up to $4 or $5 per gigajoule. These prices still offer attractive margins to Asia, which will pay $16 per gigajoule for LNG.
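To make the spread concrete, here is an illustrative netback sketch. The $3 and $16 per-gigajoule prices come from the article; the liquefaction and shipping costs are purely assumed for illustration.

```python
# Illustrative netback on the Canada-to-Asia LNG spread discussed above.
asian_lng_price = 16.0   # $/GJ, article figure
canadian_gas = 3.0       # $/GJ, article figure
liquefaction = 4.0       # $/GJ, assumed
shipping = 2.0           # $/GJ, assumed

margin = asian_lng_price - canadian_gas - liquefaction - shipping
print(f"Indicative margin: ${margin:.2f}/GJ")  # $7.00/GJ under these assumptions
```

Even with generous cost assumptions, the spread stays wide, which is the economic logic behind the west coast terminal build-out.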
Worldwide capital expenditures on LNG projects will reach $169 billion over the next five years, predicts Douglas-Westwood in its most recent five-year forecast, but more LNG will also go into production in this period as LNG producers begin to realize a return on investment. In a lower energy price environment, especially in an over supplied North American energy market, acquisitions and project start ups in low-cost fuel sources such as shale will be important future revenue streams. Exxon's recent buying spree in unconventional resources has been taking place in Canada. U.S. investments include Bakken and Marcellus.
Exxon is counting on its U.S. subsidiary XTO Energy (XTO), an oil and gas producer with expertise in extracting attractive margins from tight rocks in shale gas projects. Exxon's $35 billion purchase of XTO could be one of its most profitable. Exxon is already benefiting from its shale assets. In addition to LNG, natural gas-fueled petrochemical plants are a huge market. The lower shale gas prices are causing oil-fueled plants to shut down. Exxon will benefit further by building its own ethane-fed plant.
Exxon has a more capital-intensive period ahead developing LNG plants. In North America, the Asian gateway is British Columbia's northwest coast. Exxon is one of a handful of energy concerns to announce LNG projects there. To be economical, LNG makers require a lot of low-cost gas assets. To be sure, Exxon will be making more gas shale buys to feed its LNG terminals.
Exxon's strategy to shore up its stock price by shoring up its LNG assets is working. LNG players are enjoying a premium in the market.
ConocoPhillips (NYSE:COP) has received a boost from its interest in 17 shale gas blocks put up for auction in China. The development of an LNG market in Asia could tighten LNG margins, but if the current demand scenario prevails it is not likely to put too much pressure on the wide margins in Asian LNG plays. The return on investment on shale gas assets has a lot of room to improve. EOG Resources (NYSE:EOG) has just received a windfall in discovering that its Eagle Ford shale project may have twice the recoverable reserves previously estimated.
Exxon's LNG play is a smart strategy, but a tight squeeze on profits. The LNG terminals will not be generating revenue for a few years. Shorter term, with energy prices predicted to continue to hover in lower territory, Exxon will continue to feel margin pressure in its upstream oil business. Significantly, crude from unconventional resources is already helping margins in the downstream oil business.
Exxon is receiving some earnings pickup in its downstream business. Lower natural gas prices are helping refining margins. Indeed, in the third quarter, high refining margins offset any decline in oil and gas output. The refining margins benefited from cheap crude from oil sands and shale basins. Earnings on global refining doubled to $3.2 billion. Exxon has announced expansion plans for petrochemical and refinery plants.
The extra earnings zip from the downstream business will be required. Increased oil production in the United States and Canada continues to put downward pressure on oil prices. West Texas Intermediate (WTI) is trading at a discount to other crudes and oil pipelines are full. Interestingly, as Exxon Mobil turns itself into an LNG player, Russia's Rosneft (OTC:RNFTF) has usurped its place as the largest oil major with its $55 billion purchase of TNK-BP, but Exxon is still extracting more value from its investments in exploration. Exxon's slowdown in finding oil reserves can be overlooked when one considers the huge margins in producing LNG on British Columbia's west coast and shipping it to Asia.
With the help of shale gas and oil sands, the oil and gas boom in North America will continue to put pressure on prices. The competition will stiffen to scoop up assets in unconventional resources as more margin expanding opportunities such as shale gas emerge. Exxon's early mover advantage secures it a firm foothold in the global LNG market.
Disclosure: I have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
| 6,742
| 3,029
| 2.225817
|
warc
|
201704
|
Innovative commissioning for integrated out-of-hospital care: emerging approaches
Bob Ricketts, Director of Commissioning Support Services Strategy
Community Health Services Forum, 20 February 2014

Topics:
• Context
• Commissioning for better outcomes & value: capitation-based; 'accountable lead provider' v. 'alliance'; value-based
• Currencies & payment mechanisms
• TCS contract expiry?

1. Context: The NHS is facing unprecedented challenges to its sustainability – Call to Action:
• Demographic pressures – an ageing population
• Demand – incidence of LTCs (diabetes, dementia)
• Rising expectations – patients, public, politicians
• Quality – failures & gross variation
• Outcomes – still often poor comparatively
• Failure to deliver integrated care at-scale
• Resource constraints – £30bn gap opening up
• Outdated & over-stretched delivery systems – including primary care & 'community services'
= clear 'burning platform' for transformation

1. Policy context: The new commissioning architecture provides unprecedented opportunities for innovative commissioning & provision:
• Clinically-led commissioning
• Strengthened partnerships with local government
• Renewed focus on integration (Better Care Fund = 3% of total health & social care £, plus wider pooled funds)
• Opportunity to re-design primary care
• Growing support for 'innovative commissioning & contracting' – outcome-based contracts for populations, 'lead provider' models, risk-sharing, much longer contract durations to support investment & disinvestment to transform, review & alignment of incentives …

1. Context: Community services key to a sustainable NHS:
• Scale: 100m contacts pa; £9.7bn, 10.6% of NHS expenditure
• Vehicle for at-scale service transformation & major shifts in care settings (if alternative services are available)
• Offer wide range of opportunities for prevention, early intervention & co-production
• Ability to engage patients, carers, communities & other agencies
• Unmet potential – Transforming Community Services

1. Context: Community Services: How they can transform care (Nigel Edwards, King's Fund, Feb. 2014)
• Long-standing ambition to move care closer to home: some reduction in hospital LoS, but much more to be done; patchy adoption of service models & limited progress to integration
• Transforming Community Services (2008-), but "mostly concerned with structural change rather than how services could be changed. It is now time to correct this."
• Develop a simple pattern of services based around primary care & natural geographies, offering 24/7 services as standard. MDTs need to work differently with specialist services, offering patients a more complete & integrated service.
• New models should include both health (and mental health) & social care, managing the health & social care budgets for their patients
• Services must be capable of very rapid response, to sustain independence & speed up discharges from hospital
• "New ways to contract & pay for these services are needed. This will also require changes in primary care & hospital contractual arrangements and in the infrastructure to support the model"
• "Eliminating obstacles in contractual and payment arrangements": block contracts; poor specifications; replicating historic commissioning patterns

2. Commissioning for better outcomes & value: the case
Our ambition is to deliver great outcomes, and reduce inequalities. But the current shape of the health and care delivery system is not sustainable in the medium-term given the challenges it faces.
• Service transformation at scale and pace will be essential to secure a successful, sustainable NHS.
• We still have a big gap in delivering the best outcomes – internationally & within England
We need to support & develop the NHS commissioning sector to lead the transformation of services:
• Transformation is a key leadership role for CCGs & direct commissioners
• Outcome-based population commissioning is a key vehicle to drive transformation & secure better outcomes and value

2. Commissioning for better outcomes & value: OBC & VBC
• Outcome-based population commissioning: a key vehicle to drive transformation & secure better outcomes and value for specific populations or groups (e.g. frail older people with multiple, complex problems; EoLC), or re-balance incentives by paying for outcomes
• Value-based commissioning: an emerging approach from the U.S. Potentially useful for: assessing priorities; comparing disparate service offers; re-directing/re-focusing incentives to drive up value within services commissioned on Tariff

2. Commissioning for better outcomes & value: OBC
Key components of fully-developed OBC:
• Population-based (frail older people, multiple complex problems; EoLC) or major pathway(s) (MSK)
• Outcome-focused capitation payment
• 'Lead provider'
• Provider co-ordinates care planning & delivery
• Provider takes on much of the demand risk
Still emerging, but examples: Bedfordshire (MSK), Cambridgeshire (older people services), Staffordshire (cancer & EoLC for 1m+), Oxfordshire & Milton Keynes (sexual health; substance abuse), Oxfordshire (adult mental health, maternity & older people – on hold)

To be transformational, OBC should …
• be genuinely patient-centred & outcome-led; aim high
• focus on local priorities for improving outcomes & quality more widely AND reducing inequalities
• build on sound analysis & prioritisation – RightCare & STAR
• address prevention, not just treatment & care
• span primary, community & secondary health care – see King's Fund Top 10 Priorities for Commissioners
• consider & involve other relevant services – social care but also other agencies influencing outcomes

Staffordshire – at the leading edge …
• Collaborative: 5 CCGs + Macmillan Cancer Support (strategic partner) + NHS England + CSU
• Outcome-focused & integrated services
• At scale: key services for 1m people across the footprints of 3 acute provider trusts. Will be the biggest contracts yet tendered for integrated NHS care
• Transformational: patient-centred re-design; joined-up care
• Innovative contracting: lead provider; 10-year duration

OBC upside:
• Potential to deliver sustainable whole-system service transformation
• Better care co-ordination & planning > more 'joined-up' care, better outcomes & value
• Strong synergy with integration
• Can catalyse & incentivise providers to work differently
'Urban myths':
• Doesn't preclude personalisation or choice – embed in requirement for 'lead provider'
• Shouldn't freeze out SME & SE participation – enable through subcontracting

OBC downside:
• Resource-intensive
• Long lead times
• Clarity re desired outcomes & behaviours crucial
• Requires commissioner collaboration at-scale
• Effective user engagement from the outset crucial
• May require substantial (and challenging) market development – will be difficult if existing relationships are immature/tense
• For most commissioners, probably one OBC project at a time
Is it the right approach for the problem? Value-based?

2. Commissioning for better outcomes & value: Value Based Commissioning
[Diagram: value-based commissioning draws together Patient Value, Public Value, Allocation Value and Economic Value]
Assessing priorities:
1. Patient Value – value from the perspective of an individual patient
2. Public Value – value from the perspective of the public considering health care as a whole
3. Allocation Value – economic benefits within a fixed annual commissioning allocation
4. Economic Value – economic benefit across the whole of the health and social care system
[2x2 matrix: low/high patient value against high savings/high cost] > Select service proposals

3. Currencies & payment mechanisms:
• Still very difficult for commissioners to compare providers, performance & value
• Information systems & measurement = key barriers
• Limited progress from block contracts
• Compounded by often unsophisticated approaches to commissioning & prioritisation
But …
• Increasing support for commissioners to prioritise & assess value systematically – RightCare & STAR
• CFTTN work on indicators: Indicators > Currencies > Fairer Payment Systems
• Wheelchair tariff?

3. Currencies & payment mechanisms: Indicators
• Foundations laid in initial work led by the CFTN to develop indicators of performance & value
• Indicators based around 3 domains: performance; quality; social value, equity & inclusion
• Signalled support from Monitor, NHS England, CQC, NHS TDA, HSCIC & Commissioning Assembly
• Long lead time (2 years for indicators?), but a great start
• Should enable value-based commissioning for those services not included in capitation OBC

3. Currencies & payment mechanisms: Deferred payment – Social Impact Bonds?
• Need for upfront investment prior to social impact & financial return
• Applications? Frail older people – admission avoidance & promoting independence; reducing use of anti-psychotic drugs in residential care; challenged families
• Examples? GLA & St. Mungo's – homelessness; Essex County Council & Action for Children – children at the edge of care; Sandwell & West Midlands CCG with Marie Curie – EoLC; Age UK in Cornwall – admission avoidance (under development)

3. Currencies & payment mechanisms: SIBs
[Diagram: a SOCIAL INVESTOR funds a SPECIAL PURPOSE VEHICLE under an investment contract for financial return; the COMMISSIONER holds an OBC contract with the vehicle for cashable savings & better outcomes; the vehicle sub-contracts for activity to SERVICE PROVIDERS. Acknowledgement to Bevan Brittan.]

4. TCS contract expiry? Poses real dilemmas for commissioners & regulators …
• PCT divestment of community services under 'TCS' 2011
• Contracts of 2, 3 or 5 years
• Uncontested contracts to social enterprise spin-outs, on condition of open competition on expiry
• Decisions subject to procurement law, public law (Gloucs. TCS judicial review) & s.75 regulations – caveat emptor!
• We now have a diverse non-NHS market (SEs & corporates)

4. TCS contract expiry? What to do?
• Roll over for another full term (but not for TCS Social Enterprises)
• Extend pending disaggregation and/or OBC
• Re-procure for service transformation and/or better value (Bath & NE Somerset CCG; Hambleton, Richmondshire & Whitby – terminating contract with York Teaching FT & re-procuring)
| 11,540
| 4,689
| 2.461079
|
warc
|
201704
|
1. Yesterday, today, tomorrow. Would I do it again??? Janet Balch, Ontario Police College. NENA Conference Durham 2012.
4. Why did I take this job? Did I know what the job was? Fast paced/exciting/fun; good pay; wanted to help others; it was the first job offered; benefits; looking for a partner.
5. Top 10 Job Expectations. Kind of work: best use of one's abilities and gives a feeling of accomplishment. Security: provides steady employment. Company: has a good reputation that one can be proud of working for. Advancement: ability to progress in career, having the chance to advance. Coworkers: who are competent and congenial.
6. Pay: enough to meet one's needs, and being paid fairly. Supervision: an immediate supervisor who is competent, considerate, and fair. Hours: that allow one enough time with family and/or to pursue other interests and live one's preferred lifestyle. Benefits: that meet one's needs. Working Conditions: that are safe, comfortable and not..... stressful.
7. Still enthusiastic? What has changed? Geographic locations; bigger/combined ComCentres; less personal interaction; technology; workload; co-workers (Comms, office, on road). Could it be me?
8. Geographic location? Traveling further to work and increased traffic mean a longer commute. Bigger ComCentres / loss of personal interaction? More rules; more co-workers but fewer relationships. We knew our co-workers more intimately and heard the whole call from beginning to end.
9. Technology? What hasn't changed? CAD systems, radio systems... Records Management Systems..... occurrence reporting. Workload? A higher population means more calls; the belief that technology would save jobs; taking on calls that are not police matters.
10. Co-workers (in Comms, our offices, and on the road). Belief that the new generation.... want more time off; don't make decisions based on a 35-year career - they want immediate satisfaction; want to stay in touch with friends; don't want to do their 'time' in certain positions; have less work ethic.
11. My newest co-workers? (millennials) Research on the post-technology era: an on-line survey of adults in 19 countries including Canada. The findings……… dissatisfaction with the direction society is moving (social, economic, political, environmental), so……. the idea of the future doesn't make us dream anymore; a strong level of distrust and unease about what is to come (moral decline); technology impairing humans' ability to think deeply.
12. Social media/data collection are chiseling away at our privacy. Ironically, with all the technology we are feeling less connected, and less satisfied with our own lives. We want order and structure… too much of everything is "casual". **Within the survey it was found that millennials feel the same way.
13. Is it me? One common research finding is.... job satisfaction is correlated with life satisfaction. It's reciprocal: those satisfied with life tend to be satisfied with their job, and those satisfied with their job tend to be satisfied with life. Unhappiness breeds discontent with everything/everyone/every situation. Am I stressed? Why?
14. Could it be geographic locations? Bigger/combined ComCentres? Less personal interaction? Technology? Workload? Co-workers in Comms, the office and on the road? Or could it be my perception??.....
15. Finish these lines: The longer I do this job…..… The more bosses I've seen…….. The more times I've gone through change…… The more I've been/stay here…... I find myself less tolerant with……….. The closer I get to retirement…….
16. Top 5 regrets said on deathbeds: 1. I wish I'd had the courage to live a life true to myself, not the life others expected of me. 2. That I hadn't worked so hard. 3. That I'd had the courage to express my feelings. 4. That I had stayed in touch with my friends. 5. That I had let myself be happier.
17. Didn't work so hard: Missed their children growing up, their youth. Missed their partner's companionship. Put off going back to school. Had to pay the big mortgage. Simplify your lifestyle and make conscious choices along the way.
18. Courage to live a life true to myself, not others' expectations of me: the most common regret. Dreams gone unfulfilled... like travel.... living somewhere else.... trying a new career, hobby etc.... due to choices made, or not made. A freedom very few cherish, until they no longer have it (or the health to enjoy it).
19. Courage to express my feelings: Suppressed feelings to keep peace with others. Settled for a mediocre existence and never became who they were capable of being. Allowed others to make their decisions. You can't control the reactions of others, but speaking honestly raises the relationship to a healthier level.
20. Stayed in touch with my friends: So caught up in our own lives that we let friendships slip by over the years. Deep regrets about not giving time to friends; some feel too ashamed to contact them now. There is a reason you chose them as friends. Social networks, e.g. Facebook, allow us to keep up with friends - then make the time to connect.
21. Let myself be happier: Stayed stuck in old patterns and habits (complaining, negative talk). Fear of change allows us to pretend to others, and ourselves, that we are content. Deep inside, we want to laugh and have silliness in our lives again. Can anyone else make you happy?? Happiness is a choice. Make the choice!!
22. Still: job satisfaction is correlated with life satisfaction, so find a way to get back!!
| 5,648
| 2,663
| 2.120916
|
warc
|
201704
|
Documentation

A README file in the source is not sufficient for the end-user. Good quality documentation must explain how to use all the key features in a coherent manner. For example, a command line tool should at least ship with a man page, and this must cover all the options in the tool and explain clearly how each option works.
Documentation must be kept up to date with the features in the software. If a feature is added to a program and the documentation has not been updated then the developer has been careless and this is a bug. Out of date documentation that provides wrong and useless information make a user lose faith in the tool.
Try to include worked examples whenever possible to provide clarity. A user may look at the man page, see a gazillion options and get lost in how to use a tool for the simple use-case. So provide an example on how to get a user up and running with the simple use-case and also expand on this for the more complex use cases. Don't leave the user guessing.
If there are known bugs, document them. At least be honest with the user and let them know that there are issues that are being worked on rather than let them discover it by chance.
Expect quality when developing

Don't ship code that is fragile and easily breaks. That puts users off and they will never come back to it. There really is no excuse for shipping poor quality code when there are so many useful tools that can be used to improve quality with little or no extra cost.
For example, develop new applications with pedantic options enabled, such as gcc's -Wall -Wextra options. And check the code for memory leaks with tools such as valgrind and gcc's mudflap build flags.
Use static code analysis with tools such as smatch or Coverity Scan. Ensure code is reviewed frequently and pedantically. Humans make mistakes, so the more eyes that can review code the better. Good reviewers impart their wisdom and make the process a valuable learning process.
Be vigilant with all buffer allocations and inputs. Use gcc's -fstack-protector to check for buffer overflows and tools such as ElectricFence. Just set the bar high and don't cut corners.
Write tests. Test early and often. Fix bugs before adding more functionality. Fixing bugs can be harder work than adding new shiny features, which is why one needs to resist that temptation and focus on putting fixes before features.
Ensure any input data can't break the application. In a previous role I analysed an MPEG2 decoder and manually examined every possible input pattern in the bitstream parsing to see if I could break the parser with bad data. It took time, it required considerable effort, but it ended up being a solid piece of code. Unsanitised input can break code, so be careful and meticulous.
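The audit described above was manual, but the same discipline can be partly automated. The sketch below (Python for brevity; `parse_header` is a hypothetical stand-in, not the MPEG2 decoder mentioned) hammers a parser with randomized malformed inputs and treats anything other than a clean rejection as a bug.

```python
# Fuzz-style check: feed random byte blobs to a parser and assert it only
# ever fails with its documented error, never a crash or stray exception.
import random

def parse_header(data: bytes) -> dict:
    """Toy parser: expects a 2-byte magic number followed by a length byte."""
    if len(data) < 3 or data[:2] != b"\x47\x40":
        raise ValueError("bad header")
    length = data[2]
    if len(data) < 3 + length:
        raise ValueError("truncated payload")
    return {"length": length, "payload": data[3:3 + length]}

random.seed(0)
for _ in range(100_000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(16)))
    try:
        parse_header(blob)
    except ValueError:
        pass  # rejecting bad input cleanly is the correct behaviour
    # Any other exception propagates and fails the run -- that's a bug found.
print("parser survived 100,000 malformed inputs")
```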
Sane Defaults

Programs have many options, and the default modus operandi of any tool should be sane and reasonable. If the user is having to specify lots of extra command line options to make it do the simplest of tasks then they will get frustrated and give up.
I've used tools where one has to set a whole bunch of obtuse environment variables just to make it do the simplest thing. Avoid this. If one has to do this, then please document it early on and not buried at the end of the documentation.
Don't break backward compatibility

Adding new features is a great way to continually improve a tool but don't break backward compatibility. A command line option that changes meaning or disappears between releases is really not helpful. Sometimes code can no longer be supported and a re-write is required. However, avoid rolling out replacement code that drops useful and well used features that the user expects.
The list could go on, but I believe that if one strives for quality one will achieve it. Let's strive to make software not just great, but excellent!
| 3,849
| 1,834
| 2.098691
|
warc
|
201704
|
Employee training and development is a capital investment that can significantly increase a business’s prospects for long-term success. Establishing a learning culture allows the business to learn as its employees learn and take advantage of improvements to current skills and benefit from the new skills employees develop. However, while training is vital for long-term business growth, it can also be a major budgetary expense. Cost-control measures include establishing a training budget that includes a needs assessment, a review of training options and sub-components that ensure the business is making the most efficient use of limited monetary resources.
Needs Assessment
A training needs assessment is the first training budget component. This first component starts by determining which essential skills employees have, which they need to more fully develop and what new skills will help employees meet long-term business objectives. A needs assessment also helps the business establish a training timeline to ensure the right training is offered at the right time. Training assessments are typically completed via a checklist-type survey or by observing employees as they go about completing daily work tasks.
Explore Training Options
The second component of a training budget focuses on an assessment of external and internal training options. External options include, for example, classes at a local university or community college or a third-party Internet training site. Internal options range from contracting with a third-party training provider to conduct on-site instructor-led training, to developing a custom instructor-led or computer-based training program. Some businesses also explore alternative training options such as setting up a mentorship or on-the-job training program.
“What-If” Cost Comparisons
A preliminary cost estimate is a major budget component. Because most businesses employ a combination of training options rather than a single training method, cost estimates allow the business to find the combination that affords the best possible training for the lowest possible cost to the business. “What-if” scenarios that include different training options and cost combinations allow the business to compare and contrast training options according to the quality of training versus what each training package will cost.
Creating the Training Budget
The final component in a training budget focuses on the creation of a breakdown detail sheet. Most training budgets cover a period of one year, broken into quarters. Fund allocation can be a set amount per quarter or vary according to the training expected to occur during the quarter. Whichever is used, the breakdown sheet separates costs into specific categories, such as course-ware development, instructional materials, instructor fees and hardware or software costs. Quarterly totals are then added, modified if necessary in the case of a budget overrun and entered on the last line of the budget as the final grand total.
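A minimal sketch of such a breakdown sheet, assuming the category names above and placeholder dollar figures, might look like this:

```python
# Quarterly training budget breakdown with per-quarter and grand totals.
# Category names mirror the article; every dollar figure is a placeholder.
budget = {
    "Q1": {"courseware development": 12_000, "instructional materials": 3_000,
           "instructor fees": 8_000, "hardware/software": 5_000},
    "Q2": {"courseware development": 4_000, "instructional materials": 2_500,
           "instructor fees": 9_000, "hardware/software": 1_000},
}

grand_total = 0
for quarter, categories in budget.items():
    q_total = sum(categories.values())
    grand_total += q_total
    print(f"{quarter} total: ${q_total:,}")
print(f"Grand total: ${grand_total:,}")
```

Keeping the sheet in a simple structure like this makes it easy to revise a single quarter after a budget overrun and recompute the grand total on the last line.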
| 3,104
| 1,406
| 2.207681
|
warc
|
201704
|
Governor Larry Hogan Issues Statewide Proclamations Stressing the Importance of the Manufacturing and Cybersecurity Industries to Maryland's Economy
Governor Larry Hogan has issued two statewide proclamations announcing the month of October as Manufacturing and Cybersecurity Awareness Month in Maryland. Maryland is home to more than 3,600 manufacturing companies and 1,200 private-sector cybersecurity companies, in addition to world-class facilities including the U.S. Cyber Command, the National Security Agency, and the National Cybersecurity Center of Excellence. The state’s manufacturing industry employs more than 100,000 workers, while the rapidly growing cybersecurity industry employs more than 42,000 Marylanders.
“Maryland has a storied history in manufacturing, and we are proud that this vital industry continues to thrive in our great state. We are also excited that Maryland is leading in creating the jobs of the future as the nation’s epicenter of cybersecurity,” said Governor Hogan. “Both of these industries provide access to high quality, high paying jobs in our communities, making our economy stronger and providing a better quality of life for all Marylanders.”
Governor Hogan has pursued creative and common-sense initiatives that would grow Maryland’s manufacturing industry. During the 2016 legislative session, the Hogan administration introduced the Manufacturing Jobs Initiative to address chronic unemployment while attracting more manufacturing jobs to the state. This innovative legislation would eliminate the state corporate income tax for new manufacturers who commit to bringing jobs where unemployment is the highest — areas such as Baltimore City, Western Maryland, and the lower Eastern Shore.
Maryland is also home to an unparalleled cybersecurity community, ranking first in the nation in the concentration of information technology workers, intensity of academic research and development, high-tech share of all businesses, and STEM job creation. Maryland was also the first state to establish a dedicated commission—the Maryland Commission on Cybersecurity Innovation and Excellence—which develops strategies to protect against cyber-attacks and promote cyber innovation and job creation. Today, the state contains 74 federal laboratories—more than twice as many as any other state—and over 60 federal agencies, and Maryland receives nearly $17 billion in federal research funding, eclipsing all other states in both dollar amount and on a per capita basis.
Since entering office, Governor Hogan has gone on economic development trade missions to South Korea and Israel, where he touted Maryland’s strong manufacturing and cybersecurity industries to encourage investment in the state. These efforts have met with numerous successes, including major defense company ELTA Systems Ltd. announcing it was tripling its footprint in Maryland and adding up to 50 new manufacturing and cybersecurity jobs during the governor’s Israel trade mission in September.
| 3,058
| 1,411
| 2.167257
|
warc
|
201704
|
Alright, so over the years tons of different kinds of lock pick sets have evolved, most out of necessity as new lock technology came into being.
Here's a quick overview of the different types of lock pick set to get you familiar with the terms and what they look like. Bear in mind some are manual and some are not, so when you are purchasing your lock pick set, be sure you know which you are getting.
Tension Wrench
This thing is also called a Torque Wrench, and it is used to apply, well, torque to the plug portion of the lock. This way it holds the pins that you pick in place. Once all of them are picked, the wrench turns the barrel and plug and opens the entire lock. It looks like an "L", sort of similar to an Allen key.
These are also called “feather touch” wrenches. Sometimes, depending on the lock, they can look like tweezers as well. There are also digital ones now that can show you the exact amount of pressure you are applying to help figure out when a pin is set, since the tension will suddenly drop as the pin sets, then ramp up again as you get to the next pin.
You almost never see this thing in the movies or video games (except Thief!), and it's undoubtedly the most important piece of the entire lock pick set.
Half-Diamond Pick
Ahh, the workhorse of the lock picking world. This guy is really great for picking single pins, but doesn't work so well past that. You can, however, use it to rake over wafer and disk locks.
The business end of the half-diamond pick is half to 1 inch long. Each end has a triangular head that can either be steep or shallow and at an angle depending on the lock and design. You want this guy to work without affecting adjacent pins.
A traditional lock pick set comes with 3 of these, a few tension wrenches, and a double ended half diamond pick. All of these picks will have varying angles and sharpness.
Hook Pick
Take the Half-Diamond, but make the tip hook shaped as opposed to a triangle. You use this as a finger of sorts, and don’t rake with it. Its also pretty basic, and most pros will only need this if the lock is going to be picked traditionally, versus using a rake or a pick gun.
Ball Pick
Looks like a half-diamond (doesn’t everything?) but the end is circular and is primarily used for opening wafer locks.
Rake Pick
Time to explain a term: Raking. To “rake” you quickly slide the pick past ALL of the pins in the lock, back and forth (yeah I know) until you are able to bounce the pins individually. They will eventually hit the shear line and lock into place.
This is a great way for getting into cheap cruddy locks and is super easy. This is the easiest way for beginners to get into it, and I recommend trying it with this technique.
Take this technique and your tension tool and you should be able to set the pins pretty easily. This is the tool that looks like the lock pick you see in movies: the squiggly line or “c” snake/rake tool.
Slagle Pick
This one is only really used for electronic locks, which you will seldom come across.
Decoder Pick
The decoder pick is a standard key that has had its notch heights changed. You can use this guy as a template for getting a replacement key cut.
Bump Key
The ultimate beginner's tool. Insert a special key into the lock whose peaks have all been cut to an equal height, down to the deepest groove possible on the key.
Then you hit the key head with a hammer or something to apply force while twisting it to get constant torque.
The force of the hit will make the top pins of the lock jump to above the shearline while it leaves the pins at the bottom in place. This is so easy in fact that people have been designing locks now with specific anti-bumping technology like false pins and foam inside that absorbs the force of a blow.
Warded Pick
Your grand-dad’s skeleton key. Use this tool for opening warded locks, obviously. It is made to look similar to the actual key to open the desired lock. You can “rip” a lock with this, push it all the way in and pull it out quick. You should be able to bump the pins up to the correct shearing level.
Pick Guns
Despite most movies and shows being inaccurate, these automatic guns DO exist. They aren't cheap. You push a button after inserting it into the lock and it starts to vibrate, bouncing the pins while you apply torque and tension with a wrench.
How to Pick Locks
The big secret of lock picking is that it's simple. Now you can learn how to pick locks yourself.
The idea behind lock picking and a lock pick set is to exploit the many subtle mechanical defects in a lock.
Getting started
Here are several basic concepts and terms to get you started, but the majority of the meat on this topic includes tricks for opening locks with specific defects and characteristics. There is no reliable, quick or easy way to learn lock picking without practice.
I’ll be going over exercises that will help learn the skills and techniques of lock picking. I’ll finish with a directory of sorts that catalogs the mechanical traits and defects seen in locks today as well as with techniques accustomed to recognize and exploit them.
I’ll also have links to the different legal problems that may arise from owning and using lock pick sets. I’ll primarily focus on lock pick set use in the USA and various states.
The training is beyond important. The best way to learn how to recognize and capitalize on the defects with a lock is always to constantly practice.
Like any technical skill, there will be many failures so don’t get discouraged! Your lock pick set will work, you just have to train yourself and your brain to understand the concepts, terms, and applications of what I can teach you. This means practicing often on a few locks until you get the core concepts.
Now you may discover different ways to open office desk and filing cabinet locks, but the ability to open most padlocks and any real lock within a few seconds is really a skill that will require a vast amount of practice.
Before we delve into the specifics of locks and picking, it is worth pointing out that lock picking is simply one way to bypass a lock.
Finesse and delicate hands cause less damage than brute force. Don't force anything; take it slow and listen to what the lock pick set, your training, and your ears are telling you.
In fact, it might be quicker and easier to bypass the bolt than to bypass the lock, but we’ll get into that later. Similarly, it can be much simpler to bypass various areas of the threshold or perhaps avoid the door entirely.
Remember: Often there is one other way, usually a better one. Picking locks should be a last resort if another avenue isn’t available. Only use your lock pick set if there isn’t another unlocked door, window, or some way to operate the mechanism of the lock without relying on lock picks themselves.
Like other hobbies, some people find locksmithing and lock picking to be cathartic and relaxing. Others like the challenge it presents as well as the mystery and thrill of using their lock pick sets to get through to ‘forbidden’ or restricted areas.
Different types of locks
Know your enemy! Here's a brief overview of some of the more common locks you'll encounter when learning how to use your lock pick set. There are almost infinite ways that these can be set up and installed: deadbolts, padlocks, door locks, window locks, etc. But there are really only a few core components with pretty handles tacked on.
Pin Tumbler Locks
A Pin Tumbler is the most common and widespread lock in the entire world, bar none. The main mechanism revolves around a cylinder with an outer casing and an inner casing.
The outer casing contains a 'plug' which must rotate in order for the lock to open. The key adjusts the pins to the correct height, called the shear line.
Once all the pins are at the correct height, the inner plug is allowed to turn and the lock can be opened.
| 8,217
| 3,676
| 2.23531
|
warc
|
201704
|
This PowerPoint slide by the U.S. Army is making the rounds on the Internet to ridicule ineffective presentations that stifle creativity and decision making.
The article in the NYT does not actually discuss this busy slide specifically; it attacks the use of bullet points and the fact that the majority of time spent by staff in corporate/army headquarters is wasted on producing PowerPoint slides. Seth Godin is repeating today once more why bullet points are bad for you.
The spaghetti slide itself is not that bad, at least that is my opinion.
It makes the point that things are complex, that issues are related, all contributing to a highly unpredictable cause and effect sequence. Almost like the myth of chaos theory, and the butterfly in China that can cause a hurricane on the other side of the planet. Pretty good slide to visualize that.
I guess the source of the slide must have been some management consulting report that applied the technique of Business Dynamics to a complex problem (I recognize the many loops, having used the tool in my previous life as a McKinsey consultant).
What is Business Dynamics? Business Dynamics tries to apply the physics of systems theory (electronic circuits, weather, ocean waves, etc.) to business. Complex problems consist of a number of forces. Forces influence each other. Forces can be good and bad, some cancel each other out, some reinforce each other. Everything is related to everything.
In some cases it is possible to model all these forces in a computer program, and then you get your hands on a very powerful tool: software can simulate what happens if you give the system a shock by studying the 2nd-, 3rd-, 4th-, even 7th-order effects of your action.
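As a toy illustration of the idea (not the Army's actual model), two coupled "forces" can be simulated in a few lines of code; all coefficients below are invented for the example.

```python
# Minimal system-dynamics sketch: two coupled forces where each influences
# the other's rate of change, integrated with simple Euler steps.

def simulate(shock: float, steps: int = 50, dt: float = 0.1):
    security, recruitment = 1.0 + shock, 1.0   # apply a one-off shock
    for _ in range(steps):
        d_security = 0.2 * recruitment - 0.1 * security       # reinforcing loop
        d_recruitment = -0.3 * security + 0.05 * recruitment  # balancing loop
        security += d_security * dt
        recruitment += d_recruitment * dt
    return security, recruitment

baseline = simulate(shock=0.0)
shocked = simulate(shock=0.5)
print("baseline:", [round(x, 3) for x in baseline])
print("shocked: ", [round(x, 3) for x in shocked])
```

Comparing the shocked run against the baseline is exactly the "what happens if" question such tools answer, just with hundreds of loops instead of two.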
My guess is that's what the U.S. Army was trying to do, and the chart cited here is simply a screen dump of the output pages of these Business Dynamics tools. In itself, a sensible approach to the problem. Not sure whether it delivered the solution though.
A better way to present it could have been to start with the overwhelming complexity of the overall problem (serve the spaghetti), after which you pick one counter-intuitive loop and show how a positive action can actually do serious damage to the objective of your mission.
| 2,256
| 1,166
| 1.93482
|
warc
|
201704
|
(“Asian food” here is defined as foods from the many different countries throughout Asia that I’ve tried. My feelings about Asian food is NOT reflective of my feelings about Asian culture or Asian people – so don’t you dare suggest it!)
Max is exploring Japanese food. He went with Philip last week and had some raw fish he really loved and a cucumber salad he loved and he even liked a tuna sushi roll he tried. I wanted to go to Japanese food with them mostly because I wanted to see Max enjoying food I never imagined he’d like. So we went to Haku sushi just down the street from us.
He tried different things than last time and the only thing he liked was the shrimp tempura this time. The cucumber salad we ordered wasn’t what he got last time and he didn’t like this one. The raw fish was different too. Then he ordered a crazy roll and didn’t like that. But the main thing is that he’s trying lots of new things these days.
I was reminded that I don’t like Japanese food and it does not get along with me at all and never has. The only thing I can eat is the miso soup, the cucumber salad, the dressed lettuce, the plain rice, and tempura. But tempura has always made me feel queasy and gross after eating it no matter how much I enjoy the flavor. The miso soup always has bits of seaweed in it and though a fairly mild kind I only just tolerate it. Things that taste remotely like the sea make me gag. Literally gag. Haku’s tempura was very good, as far as tempura goes, but I burped for hours afterwards. Not my favorite way to remember a meal.
But before I even ate the tempura I made the mistake of eating a bite of some bright green stem things that were served with the cucumbers that turned out to be some kind of sea weed that tasted STRONGLY of FISH. I would have spit it out but I didn’t want to be impolite. After fighting my gag reflex to the death I managed to swallow the nasty stuff and within minutes I was burping up fish flavor.
I have come to the realization that not only does Japanese food not agree with me, no Asian food agrees with me. I am using "Asian" in a generic way to include food traditions from Thailand, Burma, China, Korea, Japan, Vietnam. I'm keenly aware that the food traditions in each of those countries are unique from each other in many and distinct ways – I don't mean to lump them together for any other purpose than that they all happen not to agree with me. Fish being a huge part of all of those food traditions as well as meat and sauces using fish and shrimp and then there are the radishes (I burp them up) and water chestnuts (I burp them up) and the Asian style of fermenting (I burp it up) – get the theme here?
Then there are curries. My one favorite thing to make that is based on a Thai dish is Winter squash curry coconut soup. It’s amazing and for some reason that particular dish does not give me any problems. I’ve never been a huge curry fan but these days it isn’t just a matter of preference, my body doesn’t like them either. So let’s add Indian food to the list because now when I have Indian food (which I do love) it generally doesn’t agree with me either.
I do love some Chinese food but I can’t lie – I usually don’t feel that great after eating it. Never have. I have always eaten it anyway.
Among my peers it feels like a point of shame not to LOVE Thai food and Japanese food. If you don’t love Asian food you’re just not cool and may as well be an ignorant white-bred bitch from the fifties. (Interestingly, most of my peers do not like Chinese food except for my Chinese friends and me.) The most uber-cool people love Korean and Vietnamese food because Japanese and Thai food are so common now that it may as well be spaghetti.
I don’t even like rice that much.
I like a lot of components of Asian food traditions such as tofu and soba noodles and miso and simple stir-fries and edamame and satay sauce but it’s a real inauthentic pick and choose kind of thing.
What food do I like? I like Mediterranean and Middle Eastern food. But even with Middle Eastern food I can’t eat things that are heavy on the cumin which always repeats on me like old-man armpits in my esophagus.
Mediterranean food (Italy, Greece, France, northern Africa, and Israel etc.) is what my body likes the best. It’s what I crave. It’s what I feel my best eating. It’s easy to make Mediterranean food minus the meat and fish. I also love Mexican food which generally agrees with me well.
So there it is. My big confession of shame. I’m not a cool eater. I’m not adventuresome. Even if I loved adventuresome flavors my body wouldn’t let me explore comfortably.
But if Max’s body and tastes lead him to Japanese food and maybe eventually some other Asian food traditions – I will be thrilled! He likes raw fish. He loves seafood. I’m excited for him to find new food traditions that he’s actually wanting to explore. He’s tasting things he never would have tasted two years ago. He’s trying things.
Yes, he still mostly loves fried foods and mostly sees produce as a necessary evil that his mother forces him to keep trying. But what I see is a good food palate forming. It's just the beginning. I see a future in which I wean him off of chips and crackers and french fries and he eats more raw fish and vinegared cucumbers and some veggie burgers. I don't know, I just think it's awesome that he's exploring. Any mother of an extreme picky eater knows how huge this voluntary exploration is.
I will not go out to Japanese food again but this is something that Philip and Max can do together and now I need to go apply to some more jobs so we can support a sushi-eating habit.
| 5,891
| 2,639
| 2.232285
|
warc
|
201704
|
Improving Academic Achievement
Preventing and Treating Stress, Anxiety, Depression and Learning Disabilities
Creating a Peaceful School that Prevents School Violence and Produces Peace in Society
30 years of scientific research and classroom experience with the Transcendental Meditation® program
For educators, government leaders, health professionals, and foundations concerned about the problem of stress and its impact in our schools, here is a practical solution that promises to improve the quality of life - health, happiness and academic achievement - of every child.

School of Thought
A documentary film by Tony Perri
A small midwestern school joins forces with a legendary Hollywood director in an ambitious mission to eliminate violence and life-threatening stress in schools across the planet.
The documentary, SCHOOL OF THOUGHT, is a mysterious and amazing adventure into the Maharishi School of the Age of Enlightenment with Academy Award-nominee, David Lynch (Blue Velvet, Twin Peaks and The Elephant Man).
David brings along a few friends to Fairfield, including Beatles collaborator, Donovan (Mellow Yellow, Hurdy Gurdy Man and Sunshine Superman) and the world-renowned quantum physicist, John Hagelin (What The Bleep Do We Know!?)
Together, these three peacemakers are touring the world with their mind-blowing solution to creating harmonious schools and world peace. Their answer: Have every student in every school practice Transcendental Meditation twice a day and watch them quickly become happier, healthier and more focused, thereby dissolving the life-threatening stress which typically manifests itself in young people through drug abuse and violence, often in fatal ways.
Clarence Cormier, President, Canadian Association for Stress-Free Schools; Former Education Minister of New Brunswick
When I served as Minister of Education for the province of New Brunswick, I investigated every program available to help our students to gain the most from their education. Today, I can say without hesitation that the Transcendental Meditation program is the only program that I know of that has been proven to be able to help every student irrespective of their level of ability.
We should seize the opportunity right away to introduce Consciousness-Based* Education to give the opportunity to every student to develop their total creative potential in a stress-free environment. I urge you not to deprive yourself or your students of the enormous benefits of this program.
We look forward to assisting you in implementing this remarkable educational program that will help you to unfold the inner genius of every student and develop their full creative intelligence so that they grow up to be healthy, happy, well-adjusted, successful adults.
| 2,782 | 1,460 | 1.905479 |
warc | 201704 |
Silent Brain Infarcts: A Review of MRI Diagnostic Criteria

Abstract

Background and Purpose—Silent brain infarcts (SBIs) have been recognized as common lesions in elderly subjects and their diagnosis relies on brain imaging. In this study, we aimed to evaluate the different MRI parameters and criteria used for their evaluation in the literature to better understand the variation across studies and related limitations.

Method—Original MRI studies of SBI performed in human populations and reported in the English literature were reviewed. Analyses were restricted to population-based studies or studies in which at least 50 subjects with SBI were detected. The MRI parameters as well as the MRI criteria of SBI (size, signal characteristics, and criteria for differentiation of dilated Virchow-Robin spaces) were described and analyzed.

Result—Magnetic field strength, slice thickness, and gap between slices greatly varied among the 45 articles included in this review. The MRI definition of SBI was inconsistent across studies. In half of them, SBI was defined as hypointense on T1 and hyperintense on T2-weighted images. Exclusion criteria for dilated Virchow-Robin spaces were used in only 7 studies.

Conclusions—The variation in MRI characteristics and diagnostic criteria for SBI represents a major limitation for interpretation and comparison of data between studies. Efforts are needed to reach unified imaging criteria for SBI.

Received August 17, 2010. Revision received December 10, 2010. Accepted December 16, 2010. © 2011 American Heart Association, Inc.
| 1,594 | 821 | 1.941535 |
warc | 201704 |
by pedlerw
Front Back
Criminal Liability
Conduct that unjustifiably and inexcusably inflicts or threatens substantial harm to individual or public interests.
Torts
Private wrongs for which you can sue the party who wronged you and recover money.
Classifying Crimes
1) Felony and Misdemeanor
2) Inherently evil and Legally wrong
3) General and Special parts of criminal law
Felony
Classification by penalty
Punishable by death or confinement in the state's prison for one year to life without parole
Misdemeanor
Classification by penalty
Punishable by fine and/or confinement in the local jail for up to one year.
Inherently evil
Classification by the moral character of the crime
(malum in se) "Immoral in its nature, and injurious in its consequences."
Legally wrong
Classification by the moral character of the crime
(malum prohibitum) "Only wrong because a statute says it's a crime."
2 parts of Criminal Law
General and Special
General
Classification by general principles of criminal law
Consists of principles that apply to more than one crime.
Example: All crimes have to include a voluntary act.
Special
Classification by subject matter of kinds of crimes
Defines specific crimes and arranges them into groups according to subject matter.
For example: Crimes against persons, against property, against public order/morals, and against the state.
Liability
Is the technical legal term for responsibility
Criminal Punishment
1) Has to inflict pain or other unpleasant consequences
2) Has to prescribe a punishment in the same law that defines the crime
3) Has to be administered intentionally
4) Has to be administered by the state
Retribution
Looks back to past crimes and punishes individuals for committing them, because it's right to hurt them.
Culpability
Blameworthiness... makes the criminal liable
Ex: as culpable, responsible individuals, they have to suffer the consequences of their irresponsible behavior.
Accidents don't qualify for retribution
Qualities of Retribution
1) Assumes free will, or individual autonomy
2) Seems to accord with human nature
3) Requires culpability
4) Justice is the only proper measure of punishment
Prevention
Looks forward and inflicts pain to prevent future crimes.
Types of Prevention
1) General deterrence
2) Special deterrence
3) Incapacitation
4) Rehabilitation
General Deterrence
Aims, by the threat of punishment, to prevent the general population who haven't committed crimes from doing so.
Special deterrence
Aims, by punishing already convicted offenders, to prevent convicted criminals from committing crimes in the future.
Incapacitation
Prevents convicted criminals from committing future crimes by locking them up, altering them surgically, or executing them.
Rehabilitation
Changes offenders so they'll want to play by the rules and won't commit any more crimes in the future.
Principle of legality
1) Fairness
2) Liberty
3) Democracy
4) Equality
Fairness
Unfair to charge individuals with criminal liability when they reasonably believed their actions weren't criminal when they acted.
Liberty
Criminal law interferes with liberty if individuals can't know its content well enough to take into account the possibility of criminal liability when they planned their actions.
Democracy
Democratic decision making demands that elected legislatures, not unelected courts, create crimes.
Equality
Legislature and courts should treat alike individuals who are "in all morally relevant respects" equal.
Retroactive Criminal Lawmaking
No crime without law; no punishment without law
Legislative Retroactive Criminal Lawmaking
Legislatures can't pass retroactive criminal statutes. The Constitution bans them from doing so.
Judicial Retroactive Criminal Lawmaking
Judges use discretionary decision making within boundaries that are drawn by the US and state constitutions; others are set by statutes.
Ambiguity
Statutory definitions of crimes and punishments can have more than one meaning, leading to ambiguity.
Rules of Judicial Interpretation of statutes
e.g., the Rule of Lenity
When a statute defining a crime or punishment is susceptible of 2 reasonable interpretations, the appellate court should ordinarily adopt the interpretation more favorable to the defendant.
Stare Decisis
It is sometimes more important that a case be settled than that it be settled right.
The Sources of Criminal Law
1) Common-Law Crimes
2) State Criminal Codes
3) The Model Penal Code (MPC)
4) Municipal Ordinances
5) Administrative Agency Crimes
Common-Law Crimes
Judges' court opinions were the original source of criminal law.
By 1600, judges had defined the only crimes known to our law.
State Criminal Codes
Reformers have called for the abolishment of the common-law crimes and their replacement with "Criminal codes" created and defined by elected legislatures.
The Model Penal Code (MPC)
After WW2, the ALI committed to replacing the common law with codification.
The ALI published its final draft in 1962.
The MPC has since influenced lawmaking in all 50 states.
Municipal Ordinances
City & Town Governments have broad powers to create criminal laws.
They often overlap state criminal code provisions
Administrative Agency Crimes
Federal and State legislatures frequently grant admin. agencies the authority to make rules.
e.g., IRS (federal), Highway Patrol (state)
Criminal Law in a Federal System
52 criminal codes:
50 for the states
1 for D.C.
1 federal code overlaying the previous 51
Noncriminal wrongs for which the injured party can sue and recover damages are known as:
torts
In most states today, the most serious grade of crime in common law is some type of:
felony
An offense which is punishable by one year or more in a state prison is called a:
felony
Crimes that involve inherently evil conduct are classified as malum:
in se.
An act that can be categorized as a malum prohibitum crime is:
leaving the scene of an accident.
To obtain a conviction, the prosecution must prove every element of the offense:
beyond a reasonable doubt.
Which of the following theories or justifications for punishment is retrospective (looks back at the crime)?
retribution
Retributionists assume that:
people are culpable for their crimes because they freely chose to commit them.
Since the mid-1980s, the two rationales that have dominated penal policy are:
retribution and incapacitation.
Legislative retroactive lawmaking:
is banned by the principle of legality.
Ambiguity in criminal statutes:
occurs because words are not perfect and legislators can't possibly anticipate every situation that needs to be addressed in a statute.
| 6,672 | 2,943 | 2.267074 |
warc | 201704 |
Technology has taken over most areas of our lives, including schools. Specifically, the arrival of the internet changed the way that language is taught, thanks to the possibility of communicating throughout the world and the many sources available online. This essay will deal with different aspects of computers and the internet in the process of teaching and learning English as a foreign language (EFL). First, I will give examples of different technological tools that can be used to teach a language. Second, I will present the advantages of using technology for learning different aspects of a language. Then, I will present the disadvantages of technology. Finally, I will provide recommendations for teachers regarding the use of technology in the EFL classroom. I chose this subject since I have worked in the Hi-Tech industry for over a decade and I am interested in applying Hi-Tech resources to help my students in the best way I can. I believe that nowadays technology is the most essential element in our lives, and is a great tool to use in the EFL classroom.
Technology includes computers, software programs, internet, video players, overhead projectors and data show projectors, as well as multimedia (texts, films, video, audio, animation, and graphics). Since students need to experience the language in every aspect possible, technology can be an effective tool to teach languages, including EFL. This is important because in the early 80s, Howard Gardner proposed the theory of multiple intelligences. He stated that each student is unique and learns in different ways. Today, various tools and applications of technology can be used in multiple ways in the EFL classroom regardless of the students' level or the subject taught. It can provide opportunities that address individual student learning and meet the different learning styles. Language learning and teaching is enabled by the many sources of materials that students can use.
First, the internet enables teachers to bring different cultures into the classroom, which is an important element in language learning and teaching. For some tasks, such as listening exercises, computers have advantages over traditional approaches because they provide sound as well as visual input that helps students with contextual clues. Second, a variety of software programs allows students to practice vocabulary, reading, listening comprehension, grammar, and speaking skills. An environment rich with language: “allows the students to interact with each other so that learning through communication can occur” (Liaw, 1997, in Patel, 2013). The computer with its different learning strategies and games constitutes an attractive kind of teaching.
Third, Case and Truscott (1999, in Green, 2003) report that the independence the students acquire while working on a computer pushes them to read from simple to complex texts since it addresses their personal needs. Most students find it easier to approach a writing task on the computer because it is more enjoyable. One reason is the use of graphics and dictionaries available online that help them write more confidently. Moreover, the use of e-mail is a way to encourage students to write in a new language. According to Trenchs (1996, in Green, 2003), students use e-mail willingly because they don't feel forced to do so and because they know they are not being graded. The lack of pressure encourages students to share information while doing so in a foreign language. In addition, the use of e-mailing is an excellent tool to improve writing skills as well as vocabulary acquisition.
The computer and internet have become the main means for communication on a worldwide level and since communication is considered an excellent tool for promoting language learning, technology is welcomed in the EFL classroom. The students have the opportunity to communicate freely without a time limit or social concerns. Even the weaker students are able to take an active part in class and communicate confidently with their teachers through e-mails.
In spite of the importance and effectiveness of technology where teaching and learning EFL is concerned, there are some disadvantages. First, the computer can be an overwhelming and imposing instrument for both students and teachers. The complexity of the computer's possibilities may cause frustration, as students and teachers spend a lot of time struggling and trying to understand how to use it. The search for suitable materials is also time consuming and can be very exhausting. In addition, Colaric and Jonassen (2003, in Morgan, 2008) warn teachers about the "vast library".
In other words, the search for information and hyper-linking may be a distraction from the learning process. An additional disadvantage is the teachers’ lack of computer skills and technical knowledge. This usually leads to a complete waste of time in class and adds to both teachers and students’ confusion. Finally, the cost of maintenance is another disadvantage of using technology, especially in schools. So, unless the teacher is well trained in technology, and can solve problems that might occur, a technician will be needed. Schools usually find it difficult to support the purchased technology, and that makes them useless.
Hence, in order to turn technology into a teaching strategy, teachers need to be encouraged to acquire the necessary skills in using technology to help improve their teaching. In addition teachers need to be aware of the importance of technology in enhancing learning in the EFL classroom. Once teachers and students learn how to make responsible use of computers, and master the skill of selecting and editing the large range of information for their purposes, language acquisition will become easier.
To conclude, having given examples of different technological tools that can be used to teach a language, presented the advantages and disadvantages of using technology in the EFL classroom, and provided recommendations for teachers regarding the use of technology, it can be said that technology, specifically computers and the internet, has many benefits for language learning when it is used correctly. Teachers can use it to improve the learning environment and the students' vocabulary as well as reading, listening, and speaking skills.
Also, technology in the EFL classroom offers students a range of information, motivation to learn and an enhanced quality of class work. This essay has also shown the disadvantages of using technology when teachers and students do not know how to handle it moderately and wisely. It is extremely important to remember that technology is a great tool in the EFL classroom, but it cannot replace the teacher.
References
1. Case, C. & Truscott, D. (1999). Using Technology to Help ESL/EFL Students Develop Language Skills. In: Green (2003).
2. Green, T. (2003). Using Technology to Help ESL/EFL Students Develop Language Skills. http://iteslj.org/Articles/Ybarra-Technology.html (accessed 26.4.2013).
3. Liaw, M.L. (1997). Computer Network Technology - A Facilitator in English Language Teaching & Learning. In: Patel, S.D. (2013).
4. Morgan, M. (2008). More Productive Use of Technology in the ESL/EFL Classroom. http://iteslj.org/Articles/Morgan-Technology.html (accessed 26.4.2013).
5. Patel, S.D. (2013). Computer Network Technology - A Facilitator in English Language Teaching & Learning. https://sites.google.com/site/journaloftechnologyforelt/archive/3-2-april-2013/2-computer-network-technology-a-facilitator-in-english-language-teaching-learning (accessed 26.4.2013).
6. Trenchs, M. (1996). Using Technology to Help ESL/EFL Students Develop Language Skills. In: Green (2003).
| 7,878 | 3,212 | 2.452677 |
warc | 201704 |
A divided three-judge panel on the D.C. Circuit Court of Appeals ruled that the text of the Affordable Care Act restricts the provision of premium tax credits to state-run exchanges. The two Republican appointees on the panel ruled against Obamacare while the one Democratic appointee ruled for the law.
"We conclude that appellants have the better of the argument: a federal Exchange is not an 'Exchange established by the State,' and section 36B does not authorize the IRS to provide tax credits for insurance purchased on federal Exchanges," Judge Thomas B. Griffith wrote for the court in
Halbig v. Burwell.
His ruling was joined in a concurring opinion by George H. W. Bush-appointed Judge A. Raymond Randolph, who said it would be a "distortion" to let the federal exchange provide subsidies. "Only further legislation could accomplish the expansion the government seeks," he wrote.
Carter-appointed Judge Harry T. Edwards voted to uphold the subsidies.
"This case is about Appellants’ not-so-veiled attempt to gut the Patient Protection and Affordable Care Act," Edwards wrote in his dissenting opinion. He called said the majority's reading of the statute amounts to "a poison pill to the insurance markets in the States that did not elect to create their own Exchanges. This surely is not what Congress intended."
The ruling is very troubling for the Obama administration because the subsidies are critical to the success of Obamacare. The law encourages states to build their own exchange, but if they don't, the federal government operates one on their behalf. The subsidies, or premium tax credits, exist to help Americans between 133 percent and 400 percent of the poverty line buy insurance. Stripping them from the federal exchanges imperils the practicality of the individual mandate to get covered and the market regulations that protect sick people.
"We reach this conclusion, frankly, with reluctance," Griffith wrote for the court. "At least until states that wish to can set up Exchanges, our ruling will likely have significant consequences both for the millions of individuals receiving tax credits through federal Exchanges and for health insurance markets more broadly. But, high as those stakes are, the principle of legislative supremacy that guides us is higher still."
White House spokesman Josh Earnest said the Obama administration will "ask for a ruling from the full DC Circuit" which could potentially reverse the result. He stressed that while the case is pending on appeal, the federal exchange will continue to provide subsidies.
The appeal to the full bench, an en banc vote, would be cast by the three judges who heard the case as well as 10 other judges on the active bench, according to the DC Circuit's rules. Such a vote may be friendlier to Obamacare as it would feature 8 Democratic appointees and 5 Republican appointees. Four of the judges on the court were appointed by President Barack Obama, three of them after Senate Democrats eliminated the 60-vote threshold for most nominations in November to overcome Republican obstruction.
| 3,165 | 1,562 | 2.026248 |
warc | 201704 |
It turns out 2012 was a big year for the payments market in China. Banks rolled out new payments offerings, telecom operators tapped into the mobile payments sector, and e-commerce services built their own payment solutions. Most of the contributions and innovations, however, came from independent payments services, as a result of: 1) conventional enterprises and organizations coming to need third-party payments services for e-commerce transactions and other needs; 2) mobile payments applications enlarging the market; 3) new payments solutions created for insurance, mutual funds, education, international payments, and so on; and 4) new payments tools such as Square-like devices and QR codes. Below are some examples:
Companies created new services to fulfill the emerging needs of conventional enterprises. 99bill built a platform for enterprises to manage cash flows. IPS and Yeepay are exploring the education sector, offering payments services to colleges. Alipay released a QR code-based payments service in December 2012; any user can receive a payment with a QR code generated via Alipay. Tenpay, the payments service under Tencent, announced a QR code-based Weixin payment solution which is expected to create a mobile-commerce ecosystem within the mobile messaging app. Most well-known payments services, including Alipay, 99bill, La Ka La, IPS, Yeepay, iBoxpay and UnionPay, launched Square clones. Some designed devices like mini POS terminals, such as QPOS, for small businesses. Tenpay, partnering with American Express, Cybersource and Asiapay, is tapping into international payments. 99bill serves enterprises with international payments services.
The entry bar is also lower: obtaining a license isn't as difficult as in some other sectors in China. The Chinese central bank, the People's Bank of China, issued a total of 223 payments licenses to private non-financial companies from May 2011 to 2012. The allowed services cover national and local payments services, pre-paid cards, digital TV payment solutions and payments for mutual funds. It is perceived that the authorities are open to the payments market. Some companies think the market is better organized thanks to regulations like licensing.
Price wars intensified in this sector after more players joined in, as there isn't much difference in the services offered and licenses aren't a barrier anymore. Profits were driven down as commission rates were reduced.
Mergers and investments happened from time to time in the past year, and it is expected that consolidation will continue. Zhuo Dongwei, the vice GM of IPS, expected that one-third of the existing payments companies will die in the next two or three years. The competitive edge of early entrants lies in consumption and payments data, and in small- to mid-sized business customers.
Authorities finally settled technical standards for mobile payments in 2012 — previously UnionPay, the bank association in China, and China Mobile adopted different RFID frequencies. It is expected that more mobile solutions will come out in 2013.
| 3,126 | 1,495 | 2.09097 |
warc | 201704 |
THE housing problem in the country may not be as critical or as urgent as the energy problem or the transportation and traffic problem, but it is of utmost importance to the poorer sectors of the Philippine population, many of whom continue to exist today as squatters in their own land.
As early as 1992, plans for socialized housing projects were drawn up in the Urban Development and Housing Act, RA 7279. It called for the building of low-cost homes for underprivileged and homeless citizens by the government and the private sector.
A Philippine Housing Industry Plan drawn up in 2012 by the private sector put the housing backlog at 3.9 million units, estimated to reach 6.5 million units by 2030. The projected need was for 1.4 million government-subsidized housing units, 1.5 million for socialized housing, 2.5 million for economic housing, and 605,692 for low-cost housing.
It is in the subsidized housing program for the poor where Vice President Leni Robredo will be involved as the new chair of the Housing and Urban Development Coordinating Council (HUDCC). President Duterte called her last Thursday to offer her the post and she immediately accepted.
Her appointment to the cabinet was widely welcomed not only as it harnesses her ability as an executive, but also stands out as a sign of the strengthening unity in government following the dissension and conflict of the election.
As Sen. Panfilo Lacson – one of those who urged President Duterte to make use of her talent and motivation to serve – pointed out a week ago, she could help him carry out needed changes to improve the lives of the people.
Robredo will be the housing czar in the new administration, the same position occupied by Vice President Jejomar Binay in the Aquino administration. Binay and Aquino belonged to different political parties, but that did not keep them from working together on housing as well as on the concerns of overseas Filipino workers.
We join in welcoming her appointment, confident that housing for the poor in our country will be boosted by her work in the HUDCC. And we welcome President Duterte’s reaching out to her as the leading figure of the political opposition today, so that together they can do so much more for our people.
| 2,333 | 1,172 | 1.990614 |
warc | 201704 |
Authentic Happiness: Using Positive Psychology

Authentic Happiness: Using the New Positive Psychology to Realize Your Potential for Lasting Fulfillment, Martin Seligman.
Authentic Happiness is the name of Professor Seligman’s book in which he explains and champions the positive psychology movement. His attitude is best summed up in these stirring and thought-provoking words:
“I realised that my profession was half-baked. It wasn’t enough for us to nullify disabling conditions and get to zero. We needed to ask, what are the enabling conditions which make humans flourish? How do we get from zero to plus five?”
Dr Seligman, director of the University of Pennsylvania Positive Psychology Center, is a thought leader in the field of happiness research.
A pioneer in the development of this relatively new branch of psychology, Seligman’s work proves that positive thoughts and actions can help us lead happier lives. Positive psychology, according to his website, “focuses on the empirical study of such things as positive emotions, strengths-based character, and healthy institutions.” Seligman defines authentic happiness as combining:
The pleasant life (pleasures and enjoyment); The good life (engagement and productivity); The meaningful life (significance).
His book, Authentic Happiness: Using the New Positive Psychology to Realise Your Potential for Lasting Fulfilment, is an essential read for any student of happiness.
As a key aim of the Happy Manager is to help managers make the workplace a happier place, we think this is a great place to begin.
“Seligman provides the tools you need in order to ascertain your most positive traits or strengths.
Then he explains how, by frequently calling upon these “signature strengths” in all the crucial realms of life — health, relationships, career — you will not only develop natural buffers against misfortune and negative emotion, but also achieve new and sustainable levels of authentic contentment, gratification, and meaning.”(Amazon book review)
Some Suggestions to Get the Most From This Book
We have no hesitation in recommending Authentic Happiness, just as we’ve been happy to refer to it in several places on our site. Take a look at some Happy Manager articles which have used the wisdom of Seligman’s excellent book:
| 2,384 | 1,174 | 2.030664 |
warc | 201704 |
On Thursday, the Nasdaq Composite plummeted 3.1 per cent, the biggest slide in three years. The S&P 500 index, which last week had twice scored record highs, fell by 2.09 per cent to 1883.08.
The Dow Jones Industrial average also slid by 1.62 per cent to 16,170.22 by the end of trading as stock values in the biotechnology and Internet sectors dropped.
In Asia on Friday, South Korea’s Kospi fell 0.9 per cent to 1,989.88, while Hong Kong’s Hang Seng dropped 0.7 per cent to 23,033.68.
Meanwhile, China’s Shanghai Composite Index fell 0.6 per cent to 2,121.71. Stocks in Australia, Taiwan and other regional markets also dropped.
The drastic sell-off of tech and biotech stocks comes on fears that China's economic dynamo may be slowing down while billions of dollars in emerging markets are slowly being returned to the major economies, such as those of the G7.
On Thursday, China’s General Administration of Customs (GAC) said that exports had slumped 6.6 per cent to $170.11 billion in March.
Imports were down 11.3 per cent to $162.41 billion and total foreign trade volume declined 9 per cent to $332.52 billion.
The trade balance returned to a surplus of $7.71 billion in March after a deficit of $22.98 billion the previous month, authorities said.
Despite a US Labor Department report Thursday which indicated that the number of people seeking unemployment benefits had dropped to the lowest level since 2007 (before the sub-prime mortgage crisis which triggered the global recession), investors are still wary that the Federal Reserve may be tapering off its stimulus programme faster than markets can deal with.
In the meantime, investors will be looking to next week's report on China's first-quarter economic growth to see if the central bank will take action to boost growth, and if the Chinese government will show greater flexibility in relaxing private investment restrictions.
Investors are also anticipating some kind of Chinese stimulus programme that will ease lending and support infrastructure.
According to a statement from the China Securities Regulatory Commission, companies on the Shanghai Stock Exchange 50 A-Share index were allowed as of March 21 the option to sell preferred stock in a bid to raise financing.
Source: Agencies
| 2,319 | 1,245 | 1.862651 |
warc | 201704 |
When governments are under siege they tend to feel that even when they know what they are saying is right, they still don’t sound convincing. As a result, self doubt starts to dominate their thinking – just as is happening at the moment with India’s ruling United Progressive Alliance.
This doubt was especially clear to me after interacting with senior ministers the other day. Speaking at a press conference, Union Telecom and Human Resources and Development Minister Kapil Sibal tried to clarify that he hadn't shown favouritism to Reliance Communications by imposing lesser penalties than usual for shutting down its rural telephony service.
Sibal said the fine imposed on the company was in accordance with the obligations the firm faces as a universal service provider, and added that claims made by an NGO that he had imposed a smaller penalty than usual were ‘malicious, motivated and defamatory.’
The press briefing came just a day after the resignation of Textile Minister Dayanidhi Maran over his questionable conduct in the allocation of 2G spectrum when he was heading the Telecoms Ministry in the UPA’s first term.
The problem for the government is that in this siege atmosphere that has engulfed it, following a series of corruption claims, even wild allegations can look credible, and each and every decision a minister makes comes to be questioned.
You could see on Sibal’s face how desperate he was to change the subject from the claims of graft that have been flying around. It’s not that the government isn’t trying to do anything to tamp down the ongoing firestorm. It’s just that it’s working too slowly for the public’s liking. The overall silence of the ruling party’s leadership also isn’t helping.
Some NGOs and opposition parties, meanwhile, are sensing an opportunity to claw their way back to power, and so are trying to capitalize on the crisis. The barrage of criticism being launched against the government, combined with the UPA leadership’s own failure to tackle the problems it faces in a mature way, appears to have brought the government to a standstill. Indeed, it appears that just halfway through its five year term, the UPA has run out of initiative.
Conversely, the main opposition Bharatiya Janata Party, which has been in disarray since it lost power in 2004, seems suddenly to be showing signs of life. If the ruling party can’t break out of its current funk, it may soon find the BJP breathing down its neck.
It shouldn’t be like this for the government – there are numerous credible claims of corruption against the BJP, but the Congress Party seems unable to push back effectively. The recent Cabinet reshuffle was supposed to breathe new life into the government, but it was widely seen as a disappointment. And anyway, some fresh blood isn’t going to be enough to dispel public anger over the slow response of the Congress leadership in tackling corruption.
Manmohan Singh is going to have to demonstrate genuine political courage if he wants to ensure that opposition attacks don’t stick. Now more than ever, Singh and his beleaguered government are in desperate need of some bold thinking.
| 3,345 | 1,674 | 1.998208 |
warc | 201704 |
Of the many misconceptions that outsiders hold about China, there is one that is incredibly easy to disprove: there are no protests in China.
In fact there are a lot of protests or "mass incidents" in China every day – the Wall Street Journal placed the figure at a whopping 180,000 of them in 2010. And while most of them are small, every now and then there is a very big one that draws thousands of people to the streets and the attention of the nation.
Earlier this month locals from Shifang in the southwestern province of Sichuan clashed with riot police over local government plans to build a copper molybdenum processing plant. There was anger not only over the perceived environmental hazards and health dangers of such a plant, but also over the lack of information or any consultation with the community by the local government, which had already approved the project. After several days of large and occasionally violent demonstrations, officials quickly caved in to the protesters' demands, cancelling the construction.
Shifang follows a string of recent NIMBY (not in my back yard) protests in China, some of which are highlighted in Emily Calvert's excellent analysis at China Elections and Governance of what this growing dissent over environmental issues means for the country. In it she examines the role of social media in the protests, which not only assisted in the building and organization of the demonstration within Shifang, but also pushed the story onto a national stage.
Environmentalism in China is currently a speck of a scene largely occupied by "the elite" – white collar professionals, intellectuals and students in tier one cities such as Beijing and Shanghai. But these demonstrations represent a new grassroots force made possible by social media tools such as Weibo (China's Twitter), the messenger service QQ and online forums. These protests can be characterized by how swiftly they are organized and the way they happen outside more formal structures such as unions, NGOs or political parties.
The protesters in Shifang were quick to present themselves as nothing but concerned citizens. Yet their awareness of just how to achieve this seems to indicate an encouraging shift from past NIMBY protests. For example, in covering the protests Reuters quoted Zeng Susen, who runs a small guest house and restaurant: "We don't oppose the government, but they must explain the risks involved in a project like this, and they didn't."
"In Shifang and other recent environmental protests we're not only simply seeing demands that a project close down or move away, but calls for openness, transparency and participation," says Greenpeace East Asia's Head of Toxics campaigner Ma Tianjie. By opening up the dialogue, Ma believes that governments and citizens can move away from a zero-sum game where you either build the project, or not. This sophistication seems to indicate that China's children are growing up and banging on the door so that they can be brought to the decision-making table.
"In other countries you can expect a detailed environmental impact report to be released well ahead of construction commencing. There might also be numerous hearings, with the community involved, and ideally given the power to veto. This is totally absent in China. By law only an abridged version of the impact assessment is required, and with so little information it's virtually irrelevant," says Ma.
And herein lies a vital problem. While the era of social media-assisted environmental protests may be highly effective at bringing together large numbers of people for a swift campaign with one clear demand, how will it manage to force China to make the kind of complicated, structural change that these protesters are quickly becoming savvy enough to realize is necessary?
Monica Tan is a writer and Beijing-based web editor for Greenpeace East Asia. The views expressed in this article reflect those of Greenpeace.
| 4,060 | 2,006 | 2.023928 |
warc | 201704 |
Jewel Peterson takes eating healthy very seriously.
“Kale is my favorite food,” Peterson says.
Peterson's love of kale would make any parent proud. But the 11-year-old never knew her healthy diet would be the reason she would one day find herself sitting right next to kale's best-known advocate — Michelle Obama.
She was speechless.
“When she first walked in, I almost fainted because it was so surprising to see the First Lady,” Peterson told
theGrio.com in an interview. “It’s just amazing. It was crazy, though, but it’s something that I will remember for the rest of my life.”
It all happened on a chilly February day last year at Watkins Elementary School in Washington D.C.
The school offers a program titled FRESHFARM FoodPrints. It’s an educational project in D.C. schools that combines gardening, cooking, nutrition and education. Watkins Elementary started the program in 2012 as a way to get involved with the Let’s Move campaign launched by Michelle Obama.
“Every year, students plant and harvest and then use the fruits and the vegetables that we grow in the garden to bring into the classroom and actually cook and learn about how they can make healthy meals,” said Elena Bell, Watkins’ principal.
The First Lady was so impressed with the program she wanted to see it first-hand, but instead of notifying the school of the pending visit, the White House pretended Home and Garden Television was coming to film the gardens — not even Principal Bell knew Michelle Obama was coming.
“We were caught by complete surprise,” Bell said. ” So for me, I’m extremely proud to be able to be a principal who had a surprise visit from Mrs. Obama and that she walked in our hallways and she visited our classroom and that she then in turn invited students to the White House so that they could see the garden and experience her work.”
First Ladies have historically developed their own unique projects and contributions to each presidency.
Jacqueline Kennedy was best known for her stunning fashion sense and her role as a “respected ambassador of good will” for the country. Nancy Reagan launched the “Just Say No” campaign in an effort to keep kids off drugs, and Eleanor Roosevelt fiercely advocated for equal rights.
Obama's Let's Move! campaign has arguably already made a significant impact — focusing on raising a healthier generation of kids by encouraging healthy food choices, nutrition and physical activity.
Since Let's Move! began in 2010, it has updated school meal nutrition standards for the first time in 15 years.
It allows American public schools to offer healthier school meals and snacks for more than 50 million kids. More than three million kids now have a salad bar in their school, and more than 12 million kids now attend schools that ensure 60 minutes of physical activity each day — and that's just some of the initiative's accomplishments.
As a part of the initiative, along with partnering with schools across the country, the First Lady opened the White House to hundreds of children from diverse backgrounds to experience the gardens on the South Lawn.
It's something the Washington Youth Garden, which partners with the Let's Move! campaign, says has changed the 'face' of gardening.
“[Gardening] is something that [is seen] as more for the white privilege class I feel like,” said Nadia Mercer, Washington Youth Garden’s program director. “But [Michelle Obama] was saying no we’re taking back this gardening experience we’re going to grow food for our community and it’s accessible for everybody.”
Registered Dietitian Rebecca Mohning tells theGrio.com that Michelle Obama has really laid out a blueprint for what households should consider when it comes to food and nutrition.
“I would look at it [the White House garden] almost as if she’s planted some type of tree, right? And they’re not going to tear it down the moment she leaves,” Mohning said. “I think that it will sustain. I don’t know to what extent and how much; I could see this as definitely being something [the Trump administration] wants to continue.”
According to the Centers for Disease Control and Prevention, childhood obesity has more than doubled in children and quadrupled in adolescents in the past 30 years. By 2012, more than a third of children and adolescents were overweight.
At the dedication of the White House Kitchen Garden back in October, the First Lady made clear that her healthy eating initiative was a generational challenge:
So let’s be very clear, this isn’t just a trend. It’s not a passing fad. This healthy eating stuff, it’s here to stay, and we now have everything we need to seize the opportunity and give all our kids the healthy futures they so richly deserve.
It isn't clear how Let's Move! will be treated by the incoming Trump administration. The First Lady said she intends to continue to work to "solve the problem" of childhood obesity because there's "still a long way to go."

Ashantai Hathaway is a reporter at theGrio. Keep up with her on Twitter @ashantaih83.
| 5,333 | 2,489 | 2.142628 |
warc | 201704 |
— On August 15, 100 days after being elected in the top job, Councillor Ric Metcalfe, Leader of the City of Lincoln Council, will be accepting your questions all day live on The Lincolnite. To submit your question, visit the Leaders Live page.
Following local elections on May 5, 2011, the political control of the City Council changed, resulting in significant changes to the priorities of the council.
The new council has identified five new priorities, these are:
Reducing poverty and disadvantage
Increasing the supply of affordable housing
Improving the council's performance as a housing landlord
Reducing the city's carbon footprint
Making the council a more fit-for-purpose organisation
These priorities reflect a powerful commitment to fairness and social justice that we have as an administration.
Lincoln is a relatively poor place. In some areas of the city, more than fifty per cent of children are living in poverty. Low-income families suffer higher rates of health disadvantage, their children are likely to under attain at school and enjoy less job and other opportunities in life.
Pensioner poverty in Lincoln is significant. There are large numbers of pensioners living in fuel poverty, spending at least ten per cent of their income on fuel. Some pensioner deaths relate at least in part to inadequately heated homes; this cannot be acceptable.
This is why the new council is mounting an intensive campaign during the autumn to get pensioners to claim the benefits to which they are entitled, but are not claiming for a variety of reasons.
Few people can now afford to buy their own house, renting in the private sector is becoming prohibitively expensive, and the council's housing list gets longer.
Good quality, affordable housing is fundamental to people's health and well-being, and we are determined to improve the current situation. This year we will begin building council houses again for the first time in twenty years.
The council is landlord to nearly 8,000 homes in the city. Our tenants, who meet the full cost of running council housing, deserve the very best service we can provide for them. We want council housing to be a choice people make because it offers high quality affordable homes in neighbourhoods that people can be proud of living in.
We believe the council must lead the way in making Lincoln's contribution to the global challenge of climate change. The contribution of man-made carbon emissions to the problem is now widely accepted. For the sake of our children, and for economic and social justice reasons, we must play our part in this crucial agenda.
We will be looking to reduce the council’s own carbon footprint and to get all other public, private and third sector partners to do the same.
The world is changing rapidly and the council cannot stand still. It needs to understand the changing needs of the people it exists to serve and be ready to change and adapt so it remains responsive to people’s needs.
We need to spend people’s money wisely and to find ways of empowering local communities and strengthening the accountability of ourselves and other public bodies to local people.
The City Council does not provide all of the services which people need, but it should speak up for the people of Lincoln on any of the things which affect them, and be a community leader in fighting for what is fair and just for the city.
| 3,437 | 1,615 | 2.128173 |
warc | 201704 |
One of the advantages of being pro-life is that I get to be upset about things like this.
Social media is on fire this week with the story of an Idaho police officer who shot and killed a man’s service dog — during his son’s birthday party, no less. Apparently, the cop showed up at the house after neighbors complained about unleashed dogs roaming about.
Officer Hassani, by my count, made the decision to execute the pup within 35 seconds of arriving on the scene. He claims the dog “lunged” at him, but no such lunging can be seen on the dashcam video.
What we do know is this:
Officer Friendly made no attempt to subdue the dog using non-lethal methods. He just kicked it and then, a few seconds later, put it down. Apologists will quickly note that the owner is at fault for failing to restrain his animals. They'll also make the rather unnecessary observation that "we weren't there," so we can't "really know what happened."
Fine. But cops seem to be gunning down dogs on a whim these days. A few months ago a wolf dog was shot while safely contained in his owner’s fenced backyard. A police officer happened to run through the property in pursuit of an unrelated suspect. The dog reacted like dogs often react when you trespass into its territory, and the police officer responded by firing at it.
Again, no attempt to use anything less than fatal firepower against the helpless, unfortunate thing.
These sorts of incidents leave people wondering whether it’s really necessary, proper, or just to give police carte blanche to euthanize our household pets.
Postal workers tend to encounter their fair share of hostile animals, yet they aren’t authorized to go all Terminator just because Sparky growls at them.
In any case, the popular outrage over cop-on-dog violence has reached a fevered pitch, which brings us to the point. Wherever you stand on these acts of alleged or actual “animal cruelty,” one thing is for certain:
You cannot be upset about the killing of animals if you are not firmly disgusted by the murder of innocent human life.
Well, you can, but not with any sense of reason or coherence. You cannot, as a sane adult, find animal killing to be morally offensive but abortion to be morally neutral.
Sure, many of the folks infuriated by the dog shootings (I’m one of them) might also be firmly against the extermination of unborn humans. But, statistically, a good portion of the anti-animal abuse crusaders are likely not — when it comes to homo sapiens — pro-life.
That's probably why, in any particular 24-hour span, you're more likely to see media reports about tragic canine killings than the tragic homicide of the over 100 thousand babies that were aborted worldwide — that day.
That's right: in the last 30 years, well over a billion babies have been slaughtered across the globe. A billion.
There is something deeply, deeply confused and disordered about a society that gets more worked up about a dead mutt than a billion murdered kids.
This is a symptom of a culture that has lost both its soul and its mind.
In the days of slavery, a horse was granted a higher legal status than an African slave. Abortion has returned us to a similar dynamic, only we haven’t dehumanized a race or ethnicity — we’ve dehumanized an entire stage of life.
So, rather than shake my head over this sorry state of affairs, I'm going to attempt to explain why one cannot reasonably take a position of pro-animal rights AND pro-"choice." It's really very simple. The whole issue comes down to a question of intrinsic value.
Herein lies the disconnect. “Pro-choicers” will argue their case by tossing out a parade of “what ifs” and “what about whens.” They bring up rape and incest. They talk about extraordinarily rare “life of the mother” situations. They betray one of the soundest logical rules of all time: you don’t argue principles based on hard cases. But they do this because they don’t understand — or are unwilling to understand — the actual argument that the other side is making.
As to that argument, we here on the other side have taken the position that human life — at every stage of development, no matter how vulnerable or small or hidden from view — possesses an intrinsic value. That is to say, human life bears a certain significance that, by definition, cannot be hinged on circumstance. If human life has an intrinsic value, then it must possess that value in all situations and through all stages, otherwise the value is not intrinsic — it is earned, acquired, and conditional.
We pro-lifers do not believe that the value of human life rests on its condition or its external setting. We believe this BECAUSE we believe it to be intrinsically valuable. This is not just essential to the abortion question; it might be said that all of Western moral and legal thought hinges on this very notion.
Intrinsic: belonging to a thing by its very nature.
This is why we oppose abortion. It destroys innocent human life.
Simple. Logical. Consistent.
“But what about when…”
But nothing. These buts do not negate the value of the life in question.
You can throw out the rarest, most tragic, most gut wrenching scenarios you like and it will not change the answer because it does not change the question.
Again, the question: does human life have intrinsic value?
Our answer is “yes,” and so it must always be yes.
In the face of this, the “pro-choicers” have only a few arguments available to them:
**Note: mindless phrases such as "I don't want the government in my womb!" and "Don't like abortion? Don't get one" do not constitute "arguments." They are assertions; clichéd, overused, absurd assertions at that.**

- They can concede that abortion is morally wrong, but argue that it ought to be legal anyway. But then we must ask them why they think it's wrong. If the unborn human is not human, or if it is human yet has no value, then there is nothing at all wrong with terminating it. So if they are calling it "wrong," then they must be agreeing that the unborn human is human and it has value. But if a human has value — value enough that destroying it at an early stage would be "wrong" — then the value must be intrinsic, which means this human has the same value as any other human, which means abortion is murder in the same way that it would be murder for me to come to your home and shoot you in the doorway. Therefore, the pro-choicer who calls abortion wrong yet argues for its legalization has knowingly argued for the legalization of murder. Therefore, he is either a radical anarchist or a hypocrite, and cannot be taken seriously in a conversation of this sort.

- They can argue that the unborn child is not human. But if it is not human then it must either be: 1) nothing or 2) inanimate matter or 3) an extension of the mother's body or 4) some other species. Now, we know that it is something, otherwise we wouldn't be having this discussion. We know that it is not inanimate matter, as the rapid (or gradual) transformation of non-life to life is a scientific impossibility. We know that it is not an extension of the mother's body, as we are all humans, not mythological beasts, so we do not possess the capability to sprout limbs which have their own DNA and genetic makeup. We also know that it is not some other species, because that's just insane. Therefore, the "pro-choicers" in this argument have posited something that is provably, demonstrably, violently, loudly, obnoxiously false.

- They can argue that the child is human but it does not possess the same value as born humans. But this carries with it the horrible implication that the dignity and value of human life is acquired, developed, and conditional. Now they have turned human beings into stock market commodities. Our worth fluctuates with market demands. And, if our life is tied to our development, then what about humans that are born underdeveloped? What about humans with birth defects, genetic abnormalities, and brain damage? The "pro-choicer" may wish to hide from the obvious and unavoidable consequences of her own ideology, but that does not change the fact that disabled and "defected" humans ARE less valuable IF our value hinges on our physical development. And at what point in the acquisition of value do we reach our peak? 18? 27? 32? And, because we've turned human value into a subjective and conditional matter, who are we to argue against the despots and tyrants of history who've slaughtered millions using the logic that their victims are "less human" than the favored class? Further, if our value suffers in proportion to our reliance on another human (our mother) for survival, then it stands to reason that newborns and the elderly are just as, or at least almost as, expendable as unborn humans. Therefore, the "pro-choicer" in this category either doesn't understand what they are saying, or they have explicitly aligned themselves with the insidious philosophy that has fueled every genocide and man-caused mass travesty since the beginning of time. Arguing morals with them is a fool's errand, as they possess the moral compass of lunatics and mass murderers.

- So, if the "pro-choicer" is not confused, or a hypocrite, or an anarchist, or a sympathizer of tyrants, or a semi-illiterate with zero understanding of basic scientific laws, then only one argument is left for him: he can argue that human life has no objective value at all, at any stage. But if human life — the highest form of life in the known universe — has no value, then life in general must have no value.
Therefore, there is nothing fundamentally wrong with the murder of dogs, even at their fuzzy puppy stage.
(Leaving open the possibility that the value of life is tied to cuteness and cuddliness, but this would make babies the most valuable humans of all, so the pro-aborts still lose. It would also mean open season on poodles.)
There you have it, “pro-choicers.” You are either for abortion or against puppy killing. You cannot be both.
********
Find me on Facebook.
Twitter: @MattWalshRadio
| 10,527 | 4,651 | 2.263384 |
warc | 201704 |
Arkansas Ravaged By Tornadoes
The state of Arkansas, in the south central portion of the United States, was struck last night by a number of storms that spawned deadly and destructive tornadoes. One of these tornadoes was a half mile wide at its base and reportedly stayed on the ground for eighty miles. The towns of Mayflower and Vilonia were particularly hard hit and the death toll currently stands at 16. Emergency officials and rescue crews are still searching for survivors. In the face of such a natural calamity people ask questions such as “could we have been more prepared” and “how can we help the victims.” Equally predictable in these times, a number of green pinheads have implied that this natural disaster was caused by global warming, and that we only have ourselves to blame. This is simply not true.
The Arkansas Department of Emergency Management confirmed on Monday that at least 14 people died near Little Rock, Ark., when a twister carved an 80-mile path of destruction through suburbs north of the state capital. Arkansas is at the edge of the traditional “tornado alley” but is no stranger to this severe weather phenomenon. Sadly, the death toll of this tragic event was abnormally high. According to officials, ten of the deaths occurred in Faulkner County, where Mayflower and Vilonia are located. Three more occurred in Pulaski County, and one occurred in White County.
“Just looking at the damage, this may be one of the strongest that we've seen,” Arkansas Gov. Mike Beebe said Monday. “And preliminarily -- we haven't done any records checking -- but it looks like this is the largest loss of life that we've seen in one tornado incident since I've been governor.”
Authorities had to ask volunteers to stay away from some of the cleanup sites, as Arkansans rushed to help rescue their neighbors from the storm's aftermath. It should also be said that the local news channels did a superb job of tracking the storms and issuing warnings to residents to take shelter as the tornado chewed its way across the heart of the state. This writer watched in horror as the twister struck town after town, places where friends and colleagues live, passing roughly ten miles south and east of my own home. Thank you FOX 16 KLRT and NBC 4 KARK for doing such great work before, during, and after the storm—local meteorologists had been warning for days that conditions would be right for a major outbreak of tornadoes on Sunday.
As can be seen from the aerial video above, shot right after the tornado moved through just south of Mayflower, Arkansas, on Sunday evening, the devastation was widespread (video credit: Brian Emfinger). As tragic as this event was, and as selfless and heroic as the efforts of Arkansans to help their fellow citizens have been, there are still a number of lowlifes who cannot help but use this disaster to further their own agenda. I am, of course, referring to the human scum who waited less than a day to proclaim global warming as the cause of this tragedy.
Let me set the record straight:
Tornadoes have not increased in frequency, intensity or normalized damage since 1950, and there is some evidence to suggest that they have actually declined. That statement is taken from testimony of Dr. Roger Pielke, Jr., before the Committee on Environment and Public Works of the U.S. Senate. It should be noted that Pielke has been studying extreme weather and climate since 1993 at the National Center for Atmospheric Research in Boulder, CO. Over the past 20 years he has published dozens of peer-reviewed papers on hurricanes, floods, tornadoes, Australian bushfires, earthquakes and other subjects related to extreme events. Since 2001, he has been a professor of environmental studies at the University of Colorado. He is not a climate change denier, he is a self-proclaimed lukewarmer.
Moreover, he takes his data from the U.S. government, specifically the National Oceanographic and Atmospheric Administration's (NOAA) National Climate Data Center (NCDC). Those data indicate that the number of tornadoes occurring each year has not increased. This is shown in the figure below.
Not only have the yearly counts not risen, the number of strong storms (EF3 and above) has not increased either. This shows that the intensity of the tornadoes is not increasing over time, so both of the points made by eco-scaremongers are incorrect.
Another statistic often used by warmists is to quote the ever rising monetary cost of storms, be they hurricanes or tornadoes. This is also a red herring, intended to mislead the unwary. Such figures do not take into account the fact that there are more people and more things for storms to damage nowadays. Nor do the raw dollar amounts take into account inflation. Using a methodology developed by K. M. Simmons, D. Sutter and R. Pielke, published in Environmental Hazards, the normalized cost of yearly damages can be calculated and plotted.
As the authors stated in that paper: “We normalize for changes in inflation and wealth at the national level and changes in population, income and housing units at the county level. Under several methods, there has been a sharp decline in tornado damage. This decline corresponds with a decline in the reported frequency of the most intense (and thus most damaging) tornadoes since 1950.”
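To make the normalization idea concrete, here is a minimal sketch of the arithmetic involved, assuming three simple adjustment factors. The published Simmons, Sutter and Pielke methodology uses a more detailed set of national and county-level factors (inflation, wealth, population, income, housing units), so the function and example numbers below are illustrative placeholders rather than the authors' actual procedure or data.

```python
# Minimal sketch of damage normalization: restate a historical nominal loss
# in base-year terms by scaling for inflation, wealth, and population growth.
# Illustrative only -- the published methodology uses more detailed factors.

def normalize_damage(nominal_loss, inflation_ratio, wealth_ratio, population_ratio):
    """Scale an event-year loss to base-year conditions.

    inflation_ratio  -- base-year price level / event-year price level
    wealth_ratio     -- base-year real wealth per capita / event-year value
    population_ratio -- base-year population of the affected area / event-year value
    """
    return nominal_loss * inflation_ratio * wealth_ratio * population_ratio

# Example: a $10M loss in 1960, with prices roughly 8x higher, real wealth per
# capita roughly 3x higher, and local population roughly 2x larger today:
print(normalize_damage(10e6, 8.0, 3.0, 2.0))  # -> 480000000.0, i.e. $480M
```

The point of the adjustment is immediately visible: a loss that looks small in raw dollars can be enormous once restated against today's exposure, which is why raw damage totals rise over time even when the underlying hazard does not.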
A number of factors contribute to a perceived rise in tornadoes and the damage they do. The Weather Underground website lists the following reasons for this fictitious rise in tornado activity:
- Population growth has resulted in more tornadoes being reported.
- Advances in weather radar, particularly the deployment of about 100 Doppler radars across the U.S. in the mid-1990s, have resulted in a much higher tornado detection rate.
- Tornado damage surveys have grown more sophisticated over the years. For example, we now commonly classify multiple tornadoes along a damage path that might have been attributed to just one twister in the past.
They go on to explain:
Given these uncertainties in the tornado data base, it is unknown how the frequency of tornadoes might be changing over time. The “official word” on climate science, the 2007 United Nations IPCC report, stated it thusly: “There is insufficient evidence to determine whether trends exist in small scale phenomena such as tornadoes, hail, lightning, and dust storms.”
The science here is conclusively inconclusive—there is no discernible trend in tornado activity. This will come as no surprise to those who actually study severe weather and the damage it can cause. Even the IPCC has concluded: “There is low confidence in observed trends in small spatial-scale phenomena such as tornadoes and hail.” The sad truth is that natural disasters have always afflicted humanity and will continue to do so in the future, but at no greater a rate, and with no increase in force, than in the past.
In today's victim culture it is required that every calamity have a source, someone on whom the misfortune can be blamed. In the ultimate blame game promoted by environmental fanatics and climate alarmists we are all at fault. This is because we are causing global warming and everything bad stems from that. But this is a pernicious lie. Arkansas' tornado outbreak was simply a random act of nature, and nature is both cruel and capricious.
So, to the heartless ideologues who seek to use human suffering to promote their erroneous and unscientific claims, slink back under the rocks you emerged from. The good people of Arkansas will not be pawns in your deceitful game. Pray for us. Help if you can. But otherwise, have the common decency to leave us alone while we mourn our dead and rebuild our lives.
Be safe, enjoy the interglacial and stay skeptical.
[ If you wish to help the victims of this disaster, The Red Cross is active with immediate relief at shelters and in the areas of devastation. You can donate online directly to the Disaster Relief fund or by calling 1-800-RED-CROSS (1-800-733-2767). You can also make a $10 donation by mobile phone when you text REDCROSS to 90999. The Salvation Army has launched a specific tornado relief fund. You can also donate $10 by texting STORM to 80888, or by phone at 1-800-SAL-ARMY (1-800-725-2769). Please, give if you can. Your prayers will be appreciated. ]
| 8,402
| 3,960
| 2.121717
|
warc
|
201704
|
This I Believe
In our world today there are a lot of people admitting to making mistakes or complete failure. It could be failure in a relationship or maybe even in business. If you think about your failure and dwell on it, you start wondering if you should have done something different. If you don’t ask yourself, maybe someone else asks you if you would go back to change what you have already done. I feel that it would be a mistake to dwell on, or even to want to change, what you have already done.
I have messed a few things up in life, one being working on one of my lawn mowers. I run a yard care business when the season is right in North Dakota. I have always liked to work on things, mostly mechanical devices, and I always do small things with my mowers. I had never had to replace the rings on the piston, so it was new for me. I had the tools, but not quite all of the knowledge to do it yet. I went ahead and tried to change the piston rings, and failed. I didn’t get the timing quite right, so the motor did not fire at all. I didn’t give up; I just tried it again.
I have learned a lot of things from my failures. I feel as if I actually learn best from failure. I once had a girlfriend who was the coolest girl I knew, but she stopped talking to me, and I didn’t try to talk to her. I regretted it for a while, but soon enough I found another girl, and I had learned: I talk to her about everything. From that past experience, I have learned how to prevent something of that nature from happening to me again.
Failure has taught me many things; it was my choice to learn from it and not regret it. I believe in second chances to succeed. I do not believe in second chances to go back in time and fix a previous failure, which should instead be a great learning experience for me. I believe you have to make a positive effort to move forward; if you stay at a standstill, dwelling on the past, there is no will to try new things, or to attempt something new and fail. Failure is a great teacher, if you appreciate the things you learn from it.
The way I see it, life is too short to go on about my own mistakes and my own regrets. I can be out helping my community with the problems bigger than mine. I have learned from my mistakes and I can now help other people learn from theirs. I do not want to be sitting around thinking about what I could have done, or what I should have done. I am going to go do it, and I am going to do it right. No regrets.
| 2,617
| 1,210
| 2.16281
|
warc
|
201704
|
Find a large number of used garden tractors for sale on TradeMachines.com. Bid now on garden tractors for sale to win low-cost, quality used garden tractors from Europe, the UK and the USA.
Garden Tractors for Sale Used and New
Also known as a lawn tractor, a garden tractor is ideally suited for larger gardens or other agricultural uses. They help save time and effort when cutting grass and maintaining a property. These tractors provide excellent performance in addition to power in order to maintain a lawn efficiently. They are also more comfortable for operators, reducing the strain of using a sit-on mower to cut grass over a large area.
Considerations When Buying a Garden Tractor
Always look at a selection of garden tractors for sale to find the right one for your needs. This type of equipment might be one of the most expensive pieces of equipment you buy to maintain your lawn. When looking at used garden tractors for sale, it is important to understand what choices are out there before investing in a new or used garden tractor. The first consideration is whether your lawn is large enough. A tractor might be a good idea for any lawn that is over 2,000 square metres or half an acre. Another consideration is whether a cutting deck is needed. Typically, wide deck sizes on a garden tractor will allow for quicker operation. If you have a smaller lawn, a tractor deck size of up to 42 inches or 116 centimetres should meet your needs. For larger lawns, consider buying a tractor with a cutting deck that is larger. Tractors also have a range of additional features, including aerators to move debris and hooks that allow you to tow carts with equipment and supplies. For low emission options that are gentler on the environment, overhead valve engines are also available.
Manual versus Automatic Transmission
A new or used garden tractor for sale is available with manual and automatic transmission options. Automatic transmission garden tractors are easier to operate, allowing you to almost put the machine on cruise control to mow the lawn quickly and virtually effortlessly. For greater control, consider a manual transmission garden tractor. These are well suited for awkwardly shaped gardens or properties with different elevations. Manual transmission models with a clutch and gear shift also give operators more control over the engine’s range of power. There are also Shift-On-The-Go transmission garden tractors where operators can change forward speeds without stopping, although a clutch is not used.
Major Garden Tractor Manufacturers
Some of the leading manufacturers of garden tractors for sale include Toro, Kubota, Countax, Honda, Westwood, MTD, and AL-KO. John Deere is a market leader in garden machinery, including high performance garden and lawn tractors. Mountfield also produces quality ride-on tractors for lawns and gardens.
Operating a Garden Tractor Safely
When operating new or used garden tractors, you need to respect that these are powerful machines, and precautions must be taken. Garden tractors should be driven slowly, especially when moving uphill. Always be cautious and aware of the surroundings, including clearing any large debris from the garden or yard beforehand. Operators should also wear protective gear such as ear plugs when using these machines for extended periods of time. New and used garden tractors for sale are also available with a range of safety features that are worth considering, especially when looking at tractor auctions, including seat belts and blades that stop automatically when the operator stands up.
| 3,594
| 1,598
| 2.249061
|
warc
|
201704
|
How have women’s childbearing experiences changed over the past decade?: A Listening to Mothers III Data Brief
Childbirth Connection has reported results from national Listening to Mothers℠ surveys of women’s childbearing experiences relating to births that occurred over a decade – in 2000-02, 2005, and 2011-12. Follow-up surveys explored additional questions with participants in the second and third surveys. Over time, the surveys have explored many new and timely questions. In addition, core questions used in two or all three time periods provide an opportunity to examine trends in women’s childbearing experiences during what has been in many respects a time of flux for the U.S. maternity and health care systems. The surveys have polled women 18-45 who had given birth to a single baby and could participate in English. Harris Interactive conducted all of the surveys, and the W.K. Kellogg Foundation funded the most recent two-stage survey.

SUMMARY OF KEY FINDINGS

Women’s readiness for pregnancy appears to be improving. The proportion of women who had a preconception visit increased sharply between the second and third surveys, from 28% to 52%. In the same period, there has been a decrease in unintended pregnancies, from 42% to 35%, and in obesity at the time of conception, from 25% to 20%.

The use of prenatal ultrasound has increased, including a steep increase in use for an indication that is not supported by evidence. Between the second and third surveys, the proportion of women who had two or fewer ultrasounds decreased from 41% to 30%, while the proportion who had five or more ultrasounds increased from 23% to 34%. In the most recent survey, 68% of women reported that their caregiver used ultrasound near the end of pregnancy to estimate fetal weight, compared with 51% in the second survey.

Many women report experiencing pressure from a care provider to have a cesarean, labor induction, or an epidural. The percentage of women who experienced pressure to have a cesarean rose from 9% to 13% between the second and third surveys, while pressure to accept an epidural increased from 7% to 15% and pressure to induce labor increased from 11% to 15%. The proportion of women who attempted to self-induce labor increased from 22% to 29% during the same period, which may be related to pressure to accept medical induction and desire to avoid such intervention. (In Listening to Mothers II, one-third of women who attempted self-induction did so to avoid a medical induction.)

Women’s interest in and access to VBAC is shifting. The results on vaginal birth after cesarean (VBAC) suggest a small increase between the second and third surveys in the proportion of women with a prior cesarean who were interested in the option of a VBAC, from 45% to 48%. The proportion of women with a prior cesarean who reported a lack of access to VBAC grew to 56% in the current survey from 42% a decade earlier. For those who did not have the option of a VBAC, the proportion reporting that their care provider or their hospital was unwilling declined appreciably between the last two surveys; however, the proportion of mothers denied access to a VBAC for a medical reason unrelated to their prior pregnancy more than doubled (20% to 45%) from the second to the third survey.

Rates of labor induction and episiotomy are on the decline, while an initial increase in cesarean section has stabilized. The proportion of labors brought on by medical induction decreased slightly over the three surveys, from 36% to 34% to 30%, while episiotomy (among vaginal births) decreased more dramatically, falling by half during the decade, from 35% to 25% to 17%. The cesarean rate increased sharply between the first and second surveys, from 24% to 32%, but remained essentially stable at 31% in the current survey. Rates of continuous electronic fetal monitoring (among women who experienced labor) have fluctuated over the past decade, in the range of 60% to 76%.

Other labor practices varied. Use of epidural or spinal analgesia in labor remained high over the past decade (63% to 76%) while use of narcotics for pain relief declined (from 30% to 22% to 16%). The proportion of women who reported not using any pain medications did not exceed one in five (from 20% to 14% to 17%). Few women used a labor doula for support during labor (5%, 3%, 6%), and those stating they had received support during labor from a spouse or partner declined notably (from 92% to 82% to 77%). The proportion of women who had a “spontaneous” vaginal birth without vacuum extraction or forceps steadily declined (from 64% to 61% to 59%).

Hospital support for exclusive breastfeeding is improving, although women’s intentions to and experiences with exclusive breastfeeding appear to be declining. Among women intending to exclusively breastfeed, there has been a marked decrease in the percentage of women who received free formula samples or offers at hospital discharge (from 80% to 66% to 49%) and whose babies received formula or water supplementation during the hospital stay (from 47% to 38% to 29%). Across the two most recent surveys there was an increase in newborns being primarily in their mothers’ arms in the first hour after birth, a practice that facilitates breastfeeding, from 34% to 47%. However, the percentage of women nearing the end of pregnancy who hoped to breastfeed decreased over the three surveys, from 67% to 61% to 54%, as did the proportion exclusively breastfeeding at one week (falling from 58% to 51% to 50%). In the postpartum period, an important measure of breastfeeding duration also declined. From the first to second follow-up survey, exclusive breastfeeding at six months fell from 20% to 17%. Despite the breastfeeding drop-off, mothers’ satisfaction with the duration of breastfeeding grew (46% to 49%).

Mothers are reporting increased levels of health and wellness in the postpartum period. A third of women in the most recent follow-up survey were doing “very well” or “extremely well” getting enough exercise and eating a healthy diet, up from 16% and 21%, respectively, in the first follow-up survey. Over this period, fewer identified weight control as a “major” new-onset problem (23% to 16%). Data available from all three survey time frames show a decrease in major new problems such as physical exhaustion (24% for the first two surveys, then down to 16% in the most recent survey) and in lack of sexual desire (24% to 19% to 13%). The decline in these items could be related to the increased levels of support women with a spouse or partner report receiving from them. While the previous follow-up survey found that only a quarter of women shared the daily care of their babies with their partners, 35% report doing so in the most recent follow-up survey. The proportion of women who receive emotional support from their partners “all the time” also increased (from 30% to 41%).

Employed mothers face new challenges. Both follow-up surveys asked women about the issues they faced with employment. Although more women are now receiving paid maternity leave (63%, up from 40%), fewer of those are receiving more than 90% of their salary during leave (31%, down from 50%). Trends in childcare are mixed as well. Having a child cared for by someone other than the parents for 33 hours a week or more fell to 26% of employed mothers from 46%. However, 28% of working mothers cited child-care arrangements as a major challenge in the transition to work, up from 16%.
| 7,602
| 3,071
| 2.475415
|
warc
|
201704
|
To judge by some of the reviews, Stephen Hawking has evidently given up his quest for the Theory of Everything.
A scientist is in a situation like those SF scenarios where we come into possession of advanced technology. The challenge is whether we, with our primitive technological understanding, or unevolved brains, can figure out more advanced technology.

There are variants on this scenario. The technology may be more advanced because it comes from the future. Or it may be more advanced because it comes from a superior alien civilization. There are variants on that scenario as well. The alien technology may be more advanced because the alien civilization is further along the historical continuum, but someday we will catch up. Or it may be more advanced because the aliens are smarter than we are. If the aliens are smarter than we are, then their technology may defy our best efforts to understand it. We can never reason at their level. So the design will remain opaque to human understanding. Or perhaps we can learn a few things from studying the advanced technology–which will jump-start our own technology–but other things forever elude our grasp.

In another variant, the technology may defeat our efforts to figure it out, not so much because the alien engineers are intellectually superior, but because their type of intelligence is simply incommensurable with ours. How they perceive the world, process information, &c.–is so different from ourselves that we have no common reference point. We can’t tell what problem they were trying to solve. We just don’t think like they do.

This scenario often takes the form of an alien cockpit. After scientists figure out how to get inside the spacecraft–or the craft obliges them–they poke around the cockpit, trying to figure out if they can operate the control panel. Is there something analogous to human experience, some common denominator, some Rosetta Stone, which will enable them to decrypt the system? Of course, this scene tends to be a bit of a letdown since the cockpit was designed, not by superior aliens, but human beings pretending to be aliens. The cockpit suffers from the limited imagination of the screenwriter and FX dept., as well as the demands of a satisfying plot.

To what extent is our universe comprehensible? Is God like a toymaker who comes down to the level of the child? Who designs a toy that we can take apart, put back together, or recombine–in ways we can fully master and exhaust? Or is God like a toymaker who designs a user-friendly toy that a child can play with, even though the underlying technology remains unintelligible to a child?
| 2,678
| 1,242
| 2.1562
|
warc
|
201704
|
...yes, I hate it. My mum still proudly tells the tale of how she put Marmite on my bread soldiers when I was little and I immediately threw them on the floor whilst pulling the most screwed up baby face you could ever imagine. NAH of course loves it, so I have to bear the sight and smell of this.

I find it surprising how such a lovely product as beer results in jars of yeast extract. Well, I suppose I shouldn't be really, because the yeast used for beer making has multiplied enough by the end of the brewing process to start another 5 batches of beer. Thus a home has to be found for the other four fifths, otherwise over time our breweries wouldn't have enough room to produce any more beer and would be awash with loads of creamy, browny looking foam instead.

With our breweries facing such a disaster, some clever people in Burton-on-Trent decided the excess could be used to make yeast extract and that it would also be rather jolly if they put it in a jar modelled on the shape of a french stockpot, aka marmite.

What's more, it's chock full of all those tricky B-vitamins that were quite hard to come by in the diets of yore, so it could be marketed to mums like mine as a nutritious tea-time (or breakfast or lunch) treat for their families. Bet they didn't imagine screaming toddlers throwing it on the floor though, just happy, smiley family faces instead. However, there must have been plenty of each scenario happening all over the land, otherwise how else could the phrase Marmite Moment have come about? [for a practical example of the use of this term, you need look no further than this post here - Ed]

So, Mr. McGregor's Daughter (and not forgetting Gail), you can feel relieved that no lovely furry creatures like marmosets were harmed in the making of this fare, just lots of budding yeast cells instead. And if you're reading this in countries like Australia or New Zealand, I'm afraid your Vegemite comes nowhere near to being as yukky as Marmite actually is. And no, it's nothing like its deliciously meaty cousin, Bovril, either, even if both brands are owned by the same company and they're made in the same town.

Unbelievably this post only scratches the surface as far as Marmite is concerned, so the You Ask, We Answer team have helpfully added this link [and this one, plus this one's rather fun - Ed] should you wish to know more.
| 2,359
| 1,269
| 1.858944
|
warc
|
201704
|
This post is brought to you by a Wholefully partner
My sister has been in town all this week. She lives on the complete other side of the country from us, so it’s always a real treat when her family makes it back to Indiana to cavort with us Midwesterners.
I’m not sure if this is true for every family, or just ours, but whenever we all get together, our events always center around food. I know people like to spew the line that food is just fuel and it shouldn’t be emotional, but I call bologna on that. Food is inherently emotional. It is so much more than just calories to keep your body going.
And if you don’t believe me, you should pop by one of our big family dinners where the wine is always flowing and the food is always impeccable. I am very lucky to be in a large family of incredible cooks who really value good food. So when we get together, we eat. And we eat goooood.
Lasagna is one of those dishes that I never would whip together on a random Tuesday evening when it’s just our little family of three (no matter how awesome lasagna leftovers are). Lasagna is meant for special occasions. Even though lasagna isn’t really that difficult to make, it just feels like it needs to be shared with family and friends, doesn’t it?
If you ever invite me over to a dinner party (Please! I’d love to hang out with you!), you can make me über happy with a cheesy lasagna, a big bowl of salad, and a bottle (or three) of bottom shelf wine. Oh, and breadsticks. Because I heart hot bread.
| 1,549
| 844
| 1.835308
|
warc
|
201704
|
Under the Ontario Employment Standards Act, an employer or employee may apply to the Ontario Labour Relations Board for a review of an Employment Standards Officer’s decision made pursuant to that Act in respect of three types of decisions:
- An Order (for example, an order to pay wages)
- The refusal to make an Order
- A Notice of Contravention

An Applicant for review of an Employment Standards decision must bear the following requirements in mind:

1. The Application for Review must be received by the Board within 30 days after service of the Order, the letter advising the employee of the Order, the letter advising of the refusal to issue an Order, or the Notice of Contravention, as the case may be.
2. The Application must consist of:
   a. a copy of Form A-103;
   b. all supporting documents (including the officer’s order or notice or letter refusing to issue an order);
   c. proof of payment into the Board of the disputed amount, if you are an employer facing an order to pay;
   d. a copy of Ontario Labour Relations Board Information Bulletin 24.
3. Before filing the Application with the Board, you must deliver it to the responding parties and any other party whom you identify as potentially impacted by the Application.

A Mediation Meeting, which requires parties to bring all documents and materials they want the Board to consider, usually follows. The purpose of mediation is to help the parties reach an agreement to settle the Application and therefore avoid the need for a hearing. Of note, this meeting is held on a without prejudice basis.

Failing the parties settling the Application at the Mediation stage, a hearing will be held which will determine the parties’ rights and obligations under the Employment Standards Act.

There is no fee associated with making this type of application under the Act.
| 1,833
| 861
| 2.12892
|
warc
|
201704
|
So often the terms management and leadership are used synonymously in casual business conversation, yet in reality the two practices are distinctly different from each other. The observable behavioural differences between the two can be subtle and difficult for a detached bystander to tell apart. As an experienced manager and leader, I can say with fervour and conviction that the titles, responsibilities, and expectations of each are not the same.

I have commented in the past that great managers may not always be great leaders, yet great leaders have likely been great managers. The proposition at play here is that the overall pool of observable behavioural characteristics attributed to managers would be smaller than the number attributed to leaders, since great leaders have often been very good managers too. No doubt this is a debatable position and would make a fitting devil's-advocate argument. My intention here is to offer insight into my thinking, based on my experiences and research. I also want to provide a sensible tool for clarifying the similarities and differences between a manager and a leader. There is a large amount of material highlighting empirically grounded studies of managerial and leadership behavioural attributes, should you want to immerse yourself in the literature. However, if you are interested in a brief but highly useful account of what separates managers from leaders, keep reading.
Throughout much of the 20th century, management approaches to running a business, and relationships with employees, were defined by a command and control structure. Militaristic in form and application, the command and control structure served commerce by providing clear lines of hierarchical authority, delineation of duties and responsibilities, and a top-down perspective. Recall, if you will, the casual term used to refer to industrial greats like Carnegie, Rockefeller, Astor and Morgan as "captains" of industry.
To achieve mass production efficiencies, fight two world wars, and grow a modern industrialised nation, the command and control structure of managing fit the bill nicely. Workers toiled on assembly lines accomplishing their repetitive tasks under the oversight of a foreman or supervisor who may or may not have taken up the scientific management work of Frederick Taylor. Clearly, management behaviours and attributes were necessary to accomplish the enormous task of industrializing a nation. However, these attributes were confined to only the top echelons of large organizations.

By the middle of the 20th century, the earlier rough-hewn command and control structures began to take on a more sophisticated mix of management art and science, with the practical adoption of theoretical models and management approaches from Maslow, Herzberg, Deming, Odiorne, and Drucker. However, as our nation matured and began moving toward a knowledge and information society, simpler command and control management strategies were beginning to change. The psychology and behaviour of employees began to take on varied scientific and academic explanations. Workers were beginning to be seen as complex and multidimensional. Women and minorities were adding to the diversity of the workforce in growing numbers.

In the latter half of the 20th century, as the industrial age gave way to the knowledge and information age, the traditional command and control structures began to evolve into structures that fostered innovation: intrapreneurship, skunk works, quality circles, and self-directed work teams. In addition, corporate ethics, citizenship, and responsibility forever linked corporations to the social expectations of our society. Business leaders had to cultivate the adaptive skills needed to lead businesses through these swirling winds of change. Concurrently, employees were becoming more independent, and interdependent. Workers wanted to manage themselves, and they rejected traditional forms of command and control supervision in favour of leadership-based models.
The attributes of a good manager in a command and control structure were shifting to attributes that reflect the contemporary leadership model of business organisation. Combine the enthusiasm for worker autonomy with initiatives to "flatten" organizations, and those we knew as managers suddenly must adapt, taking on leadership behavioural attributes in order to continue to be successful. Under this scenario, middle managers now have broader spans of control, and work is accomplished in self-directed teams through organizational matrices. Titles change from supervisor to team lead, from manager to team coordinator, reflecting a softer approach to traditional managing.

After a degree of research, I was able to build a side-by-side list of observable behavioural attributes common to those engaged in management activities and leadership activities. Two interesting revelations occurred as I created these lists. The first was the small number of search "hits" for "behavioral attributes of a manager" (27 hits) and "behavioral attributes of a leader" (6 hits). This unscientific result runs contrary to my earlier position that the number of behavioural characteristics attributed to managers would be smaller than the number attributed to leaders, since great leaders have often been remarkable managers too. The second interesting discovery was that many of the behavioural attributes needed to describe a manager were also needed to describe a leader.

As you view and consider the observable leadership behaviours above, certain names inevitably come to mind that we connect with the list of leadership attributes. Names such as Max DuPree, James Cash Penney, Mary Kay Ash, and Herb Kelleher come to mind. These people embodied leadership attributes in their approach to business, and several of the leadership behavioural attributes are consistently present in all of these leaders. In addition, these leaders exhibited intangible behaviours also associated with leaders, such as compassion, empathy, morality, honesty, and integrity. Although these behaviours are not as easily observed, they represent the internal beliefs that inform every decision and action these and other leaders take. Careful study of their businesses will uncover these behavioural attributes carefully woven into the philosophy and fabric of their businesses.

Notice that throughout the discussion above, there is no mention of either age or gender, because neither is a limiting factor or an enabling factor. There are successful managers who are older and younger women and men. There are successful leaders who are older and younger women and men. Notice also that no mention has been made of location/region, culture, or education. That is because demographics are neither a constraining factor nor an enabling factor. Successful leadership is not limited by regional, educational, or cultural factors. History has proved this to us repeatedly through the emergence of business leaders from all corners of the world.

Instinctively, each of us seems to be able to distinguish leaders from managers when the observable leadership behavioural characteristics are clearly evident in leaders and aspiring leaders. While we are not all MBAs, nor do we carry tidy checklists around in our pockets, we learn to recognize those who deftly apply true business leadership characteristics through their visible actions and consistent ability to lead.

For those in leadership positions looking for guidance and direction in the selection of managers, potential leaders and executive leaders, this piece provides a practical framework from which to do your research. For those in management positions looking to develop leadership from management stock, this piece begins the process of identifying potential emerging leaders. Whether you are hiring or developing internal leadership, remember that good managers may not always translate into good leaders, and that observable differences exist between the two.
| 10,203
| 4,894
| 2.084798
|
warc
|
201704
|
It has been noted that the one thing President Obama needs to make his arguments is a bad guy. He desperately needs a villain that he can rail against, so he can properly cast himself as the "good guy" fighting against the bad guys. It's pretty much lifted whole cloth from the conservative side, but at least with them you can be halfway certain that the bad guys are actual bad guys -- the Soviet Union, Saddam Hussein, Iran, and whatnot.
This dovetails quite nicely with the liberals' own preferred tactic of finding and protecting "victims." If there's anything that comes across as more noble and self-sacrificing and good than fighting the bad guys, it's defending the helpless. Any scholar or fan of the heroic myths (Joseph Campbell, comic books, classical mythology, etc.) can tell you this. It's hard-wired into the American psyche.
However, these worthies currently using these archetypes should know better -- if you're going to make it work, you better do your homework first. People are a lot more savvy now, and they will distinguish between actual representations of these archetypes, and weak-ass attempts to fake them.
Case in point: President Obama's latest poster girl for health care reform, Natoma Canfield. She's a woman who came down with leukemia when she didn't have health insurance, and now she's sick as hell.
Obama's name-dropping her all over the place, talking about how much better off she'd be if we'd had ObamaCare in place before she got ill. And she's remarkably photogenic for the role -- he'd wanted her to introduce him at his carefully-screened speech, but she was bedridden with her illness.
But a few people started looking at her circumstances (after she and Obama had invited us all to imagine ourselves in her shoes, it was only natural that a few might actually check what size shoe she wears), and the details painted quite a different picture.
It turns out that she wouldn't necessarily have been better off under ObamaCare. She's already being treated at one of the nation's top cancer clinics, who did NOT turn her away when it was discovered she had no insurance. They are not planning on putting a lien on her house if she can't pay the very high costs the clinic is incurring in treating her.
Under the current fatally flawed, cruel, heartless, unforgiving, greedy system, Ms. Canfield is getting some of the best care available for her condition, and while the hospital would like to recoup at least some of the money they're spending on her, they're going to do all they can to do so in as kind a way as possible: helping her get state aid, qualifying her for charity care, or other avenues of payment -- but they have already ruled out taking her house.
The oncologists are not running a credit check on her before her chemotherapy. The nurses are not treating her pain medications as COD. There is no running meter on her hospital bed. And no one at the hospital is chastising her for remaining essentially unemployed for 12 years.
Under ObamaCare, though, as Jim Hoft noted, her prior bout with cancer might not have been detected as quickly as it was -- the screenings that caught it would have been deferred until she was older, as she wasn't in a statistically significant risk group.
Ms. Canfield reminds me of Graeme Frost, the little boy who became the Democrats' pet poster boy for expanding the S-CHIP program. They trotted him down to DC to give their response to President Bush's weekly address. Little Graeme pluckily read the speech prepared by the Democrats talking about how S-CHIP had helped save his life and his family after a horrific car crash.
Details of that didn't quite ring true to some people, so those people started asking questions. Chief among them was Michelle Malkin, who discovered that the Frost family had considerable assets and resources -- but had chosen to invest them in areas other than health insurance.
In both cases here, the "victims" weren't victimized by "the system." They were "victimized" by their own choices, and "the system" that Obama so desperately wants to change actually worked quite well for them. The Frost family put other priorities ahead of health insurance for their family, and Ms. Canfield -- when she could no longer afford her private coverage -- did not avail herself of available state-provided charity care until she was terribly ill -- and then her hospital put its resources behind getting her state benefits while simultaneously giving her the care she needed -- NOT waiting for guarantees of payment first.
What these people want -- and what Obama is trying to secure for them -- is freedom from responsibility. Freedom from worrying. Freedom from anxiety.
Well, guess what? There's no guarantee of that in life. Life comes with exactly one guarantee -- that it will end. We all -- rich, poor, white, black, man, woman, powerful, powerless -- all get one permanent, lasting death. That's it. That's all we're promised. Everything else is catch as catch can.
Unless, of course, you're a liberal. Then you have a constitutional right to be treated "fairly" by life under every exigency. And if something "unfair" happens, then the government is obligated to come in and make it all better.
Just remember these words of wisdom: "The government big enough to give you everything you want is big enough to take away everything you have."
| 5,401
| 2,634
| 2.050494
|
warc
|
201704
|
I have a novel idea to mitigate this problem. We all know that careless and irresponsible waste disposal practices are now a punishable offence. Since it is an offence, we tend to commit it. We deem dumping our waste criminally and carelessly on someone else’s premises or ‘elsewhere’ as heroics. This is a common misbehaviour associated with any community. We cannot stop it unless people take the initiative and stop it themselves. How can we make people stop it? The only way out is “waste-watching”. This has nothing to do with watching whether someone is disposing of waste in the right manner. Waste-watch is a new concept I propose to shame the people concerned. Since we are so obsessed with our integrity and dignity, it is a good idea to exploit this weakness of the people. Waste disposal and waste-watch through community participation – Merinews.com
| 894
| 498
| 1.795181
|
warc
|
201704
|
A novel pink-pigmented facultative methylotroph, Methylobacterium thiocyanatum sp. nov., capable of growth on thiocyanate or cyanate as sole nitrogen sources
UNSPECIFIED. (1998) A novel pink-pigmented facultative methylotroph, Methylobacterium thiocyanatum sp. nov., capable of growth on thiocyanate or cyanate as sole nitrogen sources. ARCHIVES OF MICROBIOLOGY, 169 (2). pp. 148-158. ISSN 0302-8933

Abstract
The isolation and properties of a novel species of pink-pigmented methylotroph, Methylobacterium thiocyanatum, are described. This organism satisfied all the morphological, biochemical, and growth-substrate criteria to be placed in the genus Methylobacterium. Sequencing of the gene encoding its 16S rRNA confirmed its position in this genus, with its closest phylogenetic relatives being M. rhodesianum, M. zatmanii and M. extorquens, from which it differed in its ability to grow on several diagnostic substrates. Methanol-grown organisms contained high activities of hydroxypyruvate reductase [3 mu mol NADH oxidized min(-1) (mg crude extract protein)(-1)], showing that the serine pathway was used for methylotrophic growth. M. thiocyanatum was able to use thiocyanate or cyanate as the sole source of nitrogen for growth, and thiocyanate as the sole source of sulfur in the absence of other sulfur compounds. It tolerated high concentrations (at least 50 mM) of thiocyanate or cyanate when these were supplied as nitrogen sources. Growing cultures degraded thiocyanate to produce thiosulfate as a major sulfur end product, apparently with the intermediate formation of volatile sulfur compounds (probably hydrogen sulfide and carbonyl sulfide). Enzymatic hydrolysis of thiocyanate by cell-free extracts was not demonstrated. Cyanate was metabolized by means of a cyanase enzyme that was expressed at approximately sevenfold greater activity during growth on thiocyanate [V-max 633 +/- 24 nmol NH3 formed min(-1) (mg protein)(-1)] than on cyanate [89 +/- 9 nmol NH3 min(-1) (mg protein)(-1)], Kinetic study of the cyanase in cell-free extracts showed the enzyme (1) to exhibit high affinity for cyanate (K-m 0.07 mM), (2) to require bicarbonate for activity, (3) to be subject to substrate inhibition by cyanate and competitive inhibition by thiocyanate (K-i 0.65 mM), (4) to be unaffected by 1 mM ammonium chloride, (5) to be strongly inhibited by selenocyanate, and (6) to be slightly inhibited by 5 mM thiosulfate, but unaffected by 0.25 mM sulfide or 1 mM thiosulfate. Polypeptides that might be a cyanase subunit (mol.wt. 17.9 kDa), a cyanate (and/or thiocyanate) permease (mol.wt. 25.1 and 27.2 kDa), and a putative thiocyanate hydrolase (mol.wt. 39.3 kDa) were identified by SDS-PAGE. Correlation of the growth rate of cultures with this cyanate concentration (both stimulatory and inhibitory) and the kinetics of cyanase activity might indicate that growth on thiocyanate involved the intermediate formation of cyanate, hence requiring cyanase activity. The very high activity of cyanase observed during growth on thiocyanate could be in compensation for the inhibitory effect of thiocyanate on cyanase. Alternatively, thiocyanate may be a nonsubstrate inducer of cyanase, while thiocyanate degradation itself proceeds by a carbonyl sulfide pathway not involving cyanate. A formal description of the new species (DSM 11490) is given.
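For readers who want to see what the reported constants imply, the standard Michaelis–Menten rate law with a competitive inhibitor is written out below, using the values given in the abstract (Km = 0.07 mM for cyanate; Ki = 0.65 mM for thiocyanate). The abstract itself does not state a rate equation, so treat this as the textbook model against which such constants are normally read, not as a claim from the paper.

```latex
% Standard Michaelis--Menten kinetics with a competitive inhibitor
% (illustrative; constants taken from the abstract above)
v = \frac{V_{\max}\,[\mathrm{OCN}^{-}]}{K_m \left( 1 + \frac{[\mathrm{SCN}^{-}]}{K_i} \right) + [\mathrm{OCN}^{-}]},
\qquad K_m = 0.07~\mathrm{mM}, \quad K_i = 0.65~\mathrm{mM}
```

For example, at 0.1 mM cyanate in the presence of 0.1 mM thiocyanate, the effective Km rises to 0.07 × (1 + 0.1/0.65) ≈ 0.08 mM, a modest and surmountable slowing, consistent with the competitive inhibition the authors describe.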
Item Type: Journal Article
Subjects: Q Science > QR Microbiology
Journal or Publication Title: ARCHIVES OF MICROBIOLOGY
Publisher: SPRINGER VERLAG
ISSN: 0302-8933
Official Date: February 1998
Volume: 169
Number: 2
Number of Pages: 11
Page Range: pp. 148-158
Publication Status: Published
URI: http://wrap.warwick.ac.uk/id/eprint/15989
| 3,797
| 1,811
| 2.096632
|
warc
|
201704
|
For businesses, Facebook has grown to be more than just a social network for connecting people. With over 1.39 billion monthly users, Facebook has become the go-to place for driving traffic to websites and reaching customers for businesses within Nigeria.
- There are 3 million active advertisers on Facebook.
- 70% of those Facebook advertisers come from outside the U.S.
- The average ad CTR is 0.09%.
- 70% of brands promote their posts.
- The average Facebook ad CPC is $0.64.

These are the things you need to focus on when growing your revenue:

Interest Based Targeting
Facebook has a high level form of targeting that allows you to target ads based on location, age, gender, interest, relationship status, education and more.
When you target users based on interest, you’ll notice an increase in the number of email subscribers, Facebook fans, brand advocates and buyers. Overall, you’ll convert more users into customers.
It’s almost common sense: there is no point targeting a male demographic of users for an advert on female accessories. In the same way, if I’m a good soccer player, I’d rather click an ad detailing how to be a better soccer player than an ad on table tennis.
Remember to set up a lead nurturing system, if you want to maximize your leads, because the vast majority of them aren’t ready to buy your product, yet. You’ll have to educate and persuade them first and, to do that, you need a system in place to communicate with them regularly.
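Before moving on to lead magnets, it can help to sanity-check the benchmark figures quoted at the top of this article (0.09% average CTR, $0.64 average CPC) with some quick media math, as in the sketch below. The impression count is invented for illustration, and real campaign numbers vary widely by audience, placement and creative.

```python
# Back-of-envelope media math using the benchmark averages quoted above
# (0.09% CTR, $0.64 CPC). The impression count is a made-up example.

IMPRESSIONS = 1_000_000
CTR = 0.0009   # 0.09% average click-through rate
CPC = 0.64     # average cost per click, in dollars

clicks = IMPRESSIONS * CTR           # expected clicks from the impressions
spend = clicks * CPC                 # expected cost at the average CPC
cpm = spend / IMPRESSIONS * 1000     # implied cost per thousand impressions

print(f"clicks: {clicks:.0f}, spend: ${spend:,.2f}, implied CPM: ${cpm:.2f}")
# -> clicks: 900, spend: $576.00, implied CPM: $0.58
```

Numbers like these are why interest-based targeting matters: a better-matched audience lifts CTR, which lowers the effective cost of every visitor you send to your landing page.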
Creating an Amazing Lead Magnet
Lead generation is the process of generating leads. A lead is defined differently depending on your business model, but it generally consists of a piece of contact information from an interested prospect.
A lead magnet allows you to stay in contact with your prospect and keep the conversation going, with the sole aim of converting the prospect into a customer.
Most of the time, leads are generated using a tool called a lead magnet.
A lead magnet is an irresistible offer or bribe given to a prospect in exchange for his/her contact information.
Now, it is my firm belief that your lead magnet needs to be your best offer; you can’t be lazy with what you offer to your prospects.
Because in most cases this will be the first transaction the prospect will be having with your company, even though it’s free, you still have to treat it as a transaction.
So it’s important that you create the best first impression by providing a lead magnet that is invaluable to your prospects.
If you impress them with your initial offer, they will become attracted to your company and become interested in your conversations.
Landing Page Development
A landing page relies on one thing only: conversions. Your focus when developing a landing page should be conversion rates. Once you start generating traffic to your landing page, the next concern is how to ensure that traffic is converting into leads, customers and sales.
Your landing page should have elements – such as the headline, subtitle and call to action (CTA) – that can be tested and improved.
According to neilpatel.com, changing your CTA button color or position can have a significant impact on your conversion rate. But, it’s just the beginning for building a high-converting landing page. Your page itself needs to be focused on conversions, with persuasive copy that’s relevant to the readers.
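The paragraph above is about testing landing-page elements, so here is a minimal sketch of how such a test is commonly evaluated: a two-proportion z-test comparing the conversion rates of two variants (for example, two CTA button colors). The visitor and conversion counts are invented for illustration; this is a standard statistical procedure, not something taken from neilpatel.com.

```python
# Minimal two-proportion z-test for a landing-page A/B test.
# Counts below are invented for illustration.
from math import sqrt, erf

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z score, two-sided p-value) comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Variant A: 120 conversions from 4,000 visitors (3.0%).
# Variant B: 160 conversions from 4,000 visitors (4.0%).
z, p = ab_z_test(120, 4000, 160, 4000)
print(f"z = {z:.2f}, p = {p:.4f}")  # here p < 0.05, so the lift is significant
```

The practical takeaway is to let a test run until it reaches significance rather than declaring a winner off a handful of visitors; small conversion-rate gaps need thousands of sessions to separate from noise.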
Conclusion
This is one of the best strategies you can implement for your business to grow your leads and customers. In our next article on Facebook advertising, we will discuss building offers, conversion and nurturing strategies to get you more sales.
To find out more about any of WSI’s digital solutions,
| 3,778
| 1,781
| 2.12128
|